Re: Why would AI want to be friendly?

From: Zero Powers
Date: Mon Oct 02 2000 - 22:24:21 MDT

>From: "Eliezer S. Yudkowsky"

>Emlyn wrote:
> >
> > Eliezer wrote:
> > > As for that odd scenario you posted earlier, curiosity - however
> > > necessary or unnecessary to a functioning mind - is a perfectly
> > > reasonable subgoal of Friendliness, and therefore doesn't *need*
> > > to have independent motive.
> >
> > I'm not sure I understand how curiosity can be a subgoal for a seed
> > AI; I'd love some more on that.
>You need curiosity in order to think, learn, and discover, and you need to
>think, learn, and discover in order to be more generally efficient at
>manipulating reality, and being more generally efficient at manipulating
>reality means you can be more efficiently Friendly.

I hate to always sound like the pessimist in the bunch, but doesn't "being
more generally efficient at manipulating reality" also mean that you can be
more efficiently Unfriendly as well? In other words, if I had an enemy, say
an enemy with a photographic memory who learns 10 to the X times faster
than I do, I would certainly hope that he was not very curious and that he
was not very efficient at manipulating reality.
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:50:14 MDT