Hara Ra wrote:
>So, what if a robot has this choice:
>Kill someone, and allow 100 others to live, or
>not kill, and allow the 100 others to die.
>This would probably immobilize the robot, which is the worst choice,
>so the Zero'th Law is:
>0. A robot, when faced with a choice which results in harm,
> chooses the one resulting in the least harm.
How would such a robot react if required to save two very young children whose lives were in mortal danger (say they have fallen out of a window), where the circumstances were such that only one child's life could be saved by the robot? Let's also assume the children were twins, that there was no reason to favour one over the other, and that no other characteristic known to the robot would let it determine which life would potentially be of most benefit to humanity in the long run. Could that not immobilize our robot?
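One way to see why the twins case need not be a deadlock is to make the least-harm rule explicit and add a tie-break on top of it. A minimal sketch (the numeric harm scores, the action names, and the random tie-break are my own assumptions for illustration, not anything from the thread or from Asimov):

```python
import random

def choose_action(actions):
    """Pick the action with the least harm (the quoted 'Zeroth Law'),
    breaking exact ties arbitrarily rather than freezing."""
    least_harm = min(a["harm"] for a in actions)
    # All actions tied at the minimum -- the twins scenario.
    tied = [a for a in actions if a["harm"] == least_harm]
    # Any choice here saves one child, whereas immobilization
    # (doing nothing) saves neither, so an arbitrary pick
    # strictly dominates doing nothing.
    return random.choice(tied)

# The twins dilemma: both options carry identical harm (one death),
# and inaction would carry harm 2 (both deaths).
options = [
    {"name": "save_child_A", "harm": 1},
    {"name": "save_child_B", "harm": 1},
]
print(choose_action(options)["name"])
```

On this reading the robot is only immobilized if its decision rule demands a *unique* minimum; a rule that accepts any member of the minimal-harm set never stalls.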