Humans "operate" using emotions and cognitive biases, but computers "operate" using logic. To implement the First Law, you would need a guarantee that, in every situation, there is some action an agent can take (or must refrain from taking) that "saves" humans. That guarantee almost never holds (hence moral dilemmas and disagreements).
Also, even if you revise the laws to remove the logical inconsistencies, you still have to translate their words into logic by strictly defining them, which is again impossible, since humans themselves disagree about what those words mean.
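To make the definitional problem concrete, here is a minimal sketch (all names hypothetical, not from any real system) showing that a "do not harm" rule gives opposite verdicts on the same action depending on which reasonable definition of "harm" you pick:

```python
# Sketch: the First Law's verdict depends entirely on a contested
# definition of "harm" -- two plausible definitions disagree.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    physical_injury: bool      # does the action physically injure a human?
    emotional_distress: bool   # does it cause emotional distress?

def harms_narrow(action: Action) -> bool:
    # Definition 1: only physical injury counts as harm.
    return action.physical_injury

def harms_broad(action: Action) -> bool:
    # Definition 2: emotional distress also counts as harm.
    return action.physical_injury or action.emotional_distress

def first_law_permits(action: Action, harms) -> bool:
    # "A robot may not injure a human being": permitted iff no harm.
    return not harms(action)

# The same action is permitted under one definition, forbidden under the other:
telling_hard_truth = Action("tell a painful truth",
                            physical_injury=False,
                            emotional_distress=True)

print(first_law_permits(telling_hard_truth, harms_narrow))  # True
print(first_law_permits(telling_hard_truth, harms_broad))   # False
```

Neither definition is "correct"; choosing between them is exactly the moral disagreement the law was supposed to settle.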