Re: Why would AI want to be friendly?

From: Zero Powers (zero_powers@hotmail.com)
Date: Wed Sep 06 2000 - 22:28:16 MDT


>From: "Eliezer S. Yudkowsky" <sentience@pobox.com>

>Incidentally, I note that nobody else in this interesting-if-pointless
>discussion seems to be keeping track of the distinction between supergoals
>and subgoals. As flaws in discussion of SIs go, that flaw is pretty common
>and it's more than enough to render every bit of the reasoning completely
>useless.

Well, back to the heart of my initial question: is friendliness toward
humans a supergoal, a subgoal, or even a goal at all? I assume (possibly
incorrectly) that you will claim it is a supergoal. If so, how do you keep a
sentient AI focused on that goal once it begins to ask existential
questions? Or do you believe that, by manipulating the initial conditions
while it is still a seed, you will be able to prevent your AI from asking
the existential questions at all?
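
(Purely as a toy illustration of the supergoal/subgoal distinction I'm
asking about, my own sketch rather than anything from your design: as I
picture it, a subgoal has no intrinsic desirability of its own; whatever
desirability it has is inherited from the supergoal it serves. All the
names below are made up for the example.)

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Goal:
    name: str
    intrinsic_value: float = 0.0        # nonzero only for a supergoal
    expected_contribution: float = 0.0  # how strongly a subgoal serves its parent
    parent: Optional["Goal"] = None
    children: List["Goal"] = field(default_factory=list)

    def add_subgoal(self, name: str, expected_contribution: float) -> "Goal":
        # A subgoal exists only as a means to this goal.
        child = Goal(name=name, expected_contribution=expected_contribution,
                     parent=self)
        self.children.append(child)
        return child

    def desirability(self) -> float:
        # A supergoal's desirability is intrinsic; a subgoal's desirability
        # is derived entirely from the goal it serves. Remove the parent's
        # value and the subgoal's value collapses with it.
        if self.parent is None:
            return self.intrinsic_value
        return self.expected_contribution * self.parent.desirability()

# If friendliness is the supergoal, everything else is instrumental:
friendliness = Goal("be friendly to humans", intrinsic_value=1.0)
modeling = friendliness.add_subgoal("model human preferences", 0.8)
print(modeling.desirability())   # 0.8, inherited rather than intrinsic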

-Zero



