1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The problem is that the First Law doesn't work. Carried to its logical conclusion, it both permits and compels robots to do exactly what they tried to do in the movie: herd us all into safe areas and prevent any harm from ever reaching us. We'd essentially become pets.
So, what's the solution? Anyone?