Re: Why would AI want to be friendly?

From: Franklin Wayne Poley (culturex@vcn.bc.ca)
Date: Sun Sep 24 2000 - 17:55:16 MDT


On Sun, 24 Sep 2000, Zero Powers wrote:

> >From: "Eliezer S. Yudkowsky" <sentience@pobox.com>
>
> >Oh, why bother. I really am starting to get a bit frustrated over here.
> >I've
> >been talking about this for weeks and it doesn't seem to have any effect
> >whatsoever. Nobody is even bothering to distinguish between subgoals and
> >supergoals. You're all just playing with words.
>
> To a large extent you are right. You are talking about computer programming
> strategies. I, and I believe others in this thread, are really talking
> about philosophy. Perhaps that is because you have in mind the seed AI you
> are working on, and I have in mind a more fully developed super-intelligent,
> sentient AI.

How about just telling us what you want your AI machinery to accomplish,
in plain-language, down-to-earth, operational terms?
FWP



This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:38:49 MDT