Re: Why would AI want to be friendly?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Sep 05 2000 - 10:23:05 MDT


Zero Powers wrote:
>
> OK, I know why *we* would want AI to be friendly. But why would the AI want
> to be friendly to us?

> But most likely I’d
> probably feel that these far lesser intelligences were not worthy of my time
> and attention and, worse, I’d feel that they were trying to exploit me by
> asking me to devote my time and energy to their “puny, little problems” when
> I could be out instead exploring the fascinating great beyond.

Looks to me like you just answered your own question. That is how *you* would
feel. Why? Because you are an evolved human, with all manner of
sophisticated, evolved hardware that establishes observer-biased goals,
observer-biased beliefs, and social perceptions for detecting cheating and
avoiding exploitation... all of which was an advantage in your ancestral
environment.

You can't reason from your own intuitions about goals to the actions of a
superintelligence.

If, as seems to be the default scenario, all supergoals are ultimately
arbitrary, then a superintelligence has no objective reason to prefer one
course of action over another; it should do what we ask it to, for lack of
anything better to do.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


