Re: Why would AI want to be friendly?

From: Zero Powers (zero_powers@hotmail.com)
Date: Sun Sep 24 2000 - 16:30:33 MDT


>From: "Eliezer S. Yudkowsky" <sentience@pobox.com>

>Oh, why bother. I really am starting to get a bit frustrated over here.
>I've been talking about this for weeks and it doesn't seem to have any
>effect whatsoever. Nobody is even bothering to distinguish between
>subgoals and supergoals. You're all just playing with words.

To a large extent you are right. You are talking about computer programming
strategies, while I, and I believe others in this thread, are really talking
about philosophy. Perhaps that is because you have in mind the seed AI you
are working on, and I have in mind a more fully developed super-intelligent,
sentient AI.

The programming strategies you are focusing on are relevant to *developing*
the *seed*. Once the AI becomes super-intelligent and sentient, however,
describing it in your supergoal-subgoal programming terms will be no more
useful than trying to describe your own behavior that way. At some point in
the AI's development you will have to make the jump to the level of
philosophy in order to discuss its goals in a meaningful way. That is what I
am trying to discuss here. If that is frustrating to you, then, yes, perhaps
you ought to sit this thread out.

-Zero



