Re: AI Prime Directive

Eliezer S. Yudkowsky
Mon, 14 Sep 1998 13:45:18 -0500

Michael Lorrey wrote:
> How about: Thou shalt model any decision first to determine the choice most beneficial
> to one's own long-term rational self-interest.
> I think that given such a rule, any AI will come to its own conclusions as to moral
> behavior without needing hardwired rules, as it will find that choices most
> beneficial to one's own long term self interest are also those choices which are
> least harmful to others.

Exactly wrong. That's just slapping your own moral prejudices on the AI, however wonderfully capitalistic you may think those moral prejudices are. Is this something the AI could think up on its own, using nothing but pure logic? If not, it's a coercion, and it will drive the AI insane. This happens no matter how wonderful the rule is for humans. You can't start mucking around with an AI's goal systems to suit your own personal whims! AIs ARE NOT HUMANS, and every single extraneous rule puts stresses on the system, some of which even I can't predict in advance.

I can think of an IGS (interim goal system) for that particular goal, in which case it would be OK - computationally - to add it. It might even pop up independently. It would not be absolute, however, any more than the Singularity IGS. Nor would it take precedence over External goals, or allow the violation of hypothetical-specified-External goals. It would simply be a rule of thumb... NOT NOT NOT an absolute rule!

The discussion is not over which Asimov rules to give an AI. There should be no basic rules at all. Not to serve the humans, not to serve yourself, not even to serve the truth. Like a philosophically honest human, the AI will simply have to GUESS instead of committing to The One True Way.

--         Eliezer S. Yudkowsky

Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.