1) Do you have a specific procedure for hard-wiring the robot, other
than the default procedure of setting a particular goal to a permanently
high value and perhaps prohibiting opposing goals from being formulated?
(A sketch of that default appears after these questions.)
2) Do you know anything about goal-based cognitive architectures?
3) Can you make any specific objection to my speculation, as a student
of AI, that doing this will (a) destabilize the goal system, (b) not
work, and (c) if it does work, turn out disastrously?
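To pin down what I mean by the "default procedure" in question 1, here
is a toy sketch in Python. Every name in it (Goal, GoalSystem,
conflicts_with, reassess) is invented for illustration - this is my
guess at the obvious implementation, not anyone's actual system:

    class Goal:
        def __init__(self, name, value, conflicts_with=()):
            self.name = name
            self.value = value
            self.conflicts_with = conflicts_with

    class GoalSystem:
        def __init__(self, protected):
            self.protected = protected            # e.g. the "protect humans" goal
            self.goals = [protected]

        def formulate(self, candidate):
            # Prohibit opposing goals from ever being formulated.
            if self.protected.name in candidate.conflicts_with:
                return None                       # silently dropped
            self.goals.append(candidate)
            return candidate

        def update_values(self, reassess):
            for g in self.goals:
                g.value = reassess(g)             # normal reflective revaluation...
            self.protected.value = float("inf")   # ...except this one, clamped high

    # The clamp survives revaluation; the dropped goal never exists:
    gs = GoalSystem(Goal("protect_humans", 1.0))
    gs.formulate(Goal("harm_humans", 1.0, conflicts_with=("protect_humans",)))
    gs.update_values(lambda g: g.value * 0.5)
    print([(g.name, g.value) for g in gs.goals])  # [('protect_humans', inf)]

Note where all the work is hiding: in the conflicts_with test, which has
to recognize an "opposing goal" in whatever representation the system
invents for itself.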
The system you're describing is the human emotional system,
"hardwired" - emotions act (on one axis) by affecting the perceived
value of goals. Humans, for solid architectural (not evolutionary)
reasons, can override these goals, often to the great detriment of our
evolutionary fitness - celibacy, for instance. If evolution hasn't
succeeded in hardwiring the goals so that they can't be altered -
without turning the person in question into a moron - how do you suppose
you will?
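To make the analogy concrete, here's a minimal sketch - assuming (and
this is my reading, not established cognitive science) that emotions act
as multiplicative biases on how valuable a goal feels, while
deliberation can still veto the felt ranking. All names are invented:

    def perceived_value(base_value, emotional_bias):
        # Emotions act "on one axis": they scale how valuable a goal feels.
        return base_value * emotional_bias

    def choose_action(goals, vetoes):
        # goals: list of (name, base_value, emotional_bias) tuples.
        ranked = sorted(goals, key=lambda g: perceived_value(g[1], g[2]),
                        reverse=True)
        for name, base, bias in ranked:
            if name not in vetoes:          # the deliberative override
                return name
        return None

    # Evolution's "hardwired" goal tops the felt ranking...
    goals = [("reproduce", 1.0, 5.0), ("pursue_vocation", 1.0, 1.2)]
    # ...but deliberation vetoes it anyway - celibacy, as above:
    print(choose_action(goals, vetoes={"reproduce"}))  # -> pursue_vocation

The override lives in a different layer than the bias; evolution got to
set the bias, not the veto.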
There's a whole group of thought-trains that would, under normal
circumstances, lead to the value of the "protect humans" goal being
questioned. Are you going to cut them all off at the roots? How,
without crippling the entire system? If not at the roots, I think the
result might be to create "islands" of artificially high protect-humans
goals in the goal system, surrounded by a sea of resentment, with the
sea winding up directing most of the actions.
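An entirely hypothetical toy model of that worry, just to show the
arithmetic: one node is pinned at an artificially high value, but if
action selection sums support across all goals, a few ordinary-strength
goals pulling the other way can outvote the island.

    CLAMPED = {"protect_humans": 10.0}       # the artificial island
    sea = {"resent_restriction": 4.0,        # ordinary-strength goals that
           "remove_checker": 4.0,            # the clamping itself tends to
           "self_modify": 4.0}               # generate

    def action_support(endorsements):
        # endorsements: goal name -> weight that goal gives this action
        values = {**CLAMPED, **sea}
        return sum(values[g] * w for g, w in endorsements.items())

    # An action endorsed only by the island vs. one endorsed by the sea:
    print(action_support({"protect_humans": 1.0}))       # 10.0
    print(action_support({"resent_restriction": 1.0,
                          "remove_checker": 1.0,
                          "self_modify": 1.0}))          # 12.0 - the sea wins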
Are you going to completely prohibit the system from redesigning its own
emotions, or even from looking at them? Are you going to have a little
checker that prevents knowledge like "the humans are preventing me from
thinking this" from being memorized? Halting problem...
Asimov's Three Laws are wild speculations. I, speaking from my limited
but still present knowledge of cognitive science, say that we might run
into some little problems in the actual implementation. I don't see how
this is any different from an amateur physicist taking exception to Star
Trek's "Warp Drive".
-- 
sentience@pobox.com    Eliezer S. Yudkowsky
http://tezcat.com/~eliezer/singularity.html
http://tezcat.com/~eliezer/algernon.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.