Re: Why would AI want to be friendly?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Sep 24 2000 - 10:21:07 MDT


Samantha Atkins wrote:
>
> "Eliezer S. Yudkowsky" wrote:
>
> > Oh, why bother. I really am starting to get a bit frustrated over here. I've
> > been talking about this for weeks and it doesn't seem to have any effect
> > whatsoever. Nobody is even bothering to distinguish between subgoals and
> > supergoals. You're all just playing with words.
>
> Hey! Wait a second. If you are going to be a leader in the greatest
> project in human history (or in any project for that matter) you have to
> learn and learn damn fast to be able to motivate and enroll people.

No, actually I should expect that the seed AI project will have smaller total
expenditures, from start to finish, than a typical major corporation's Y2K
projects. I used to think in terms of the greatest project in human history,
but I no longer think that'll be necessary, and a damn good thing, as I don't
think we're gonna *get* the largest budget in human history.

> You need other human
> beings to understand enough to keep you from getting lynched or shut
> down for trying such a thing.

Yes. A finite and rather small number of human beings, most of whom will
almost certainly Get It on the first try. If, hypothetically, I were a
pessimistic and cynical person, then I would start saying things like: "And if
I spend time talking to anyone else, then that just increases the probability
that I'll get lynched or shut down. Foresight tried to play nicey-nice with
everyone, as a result of which they are now being screwed over by the National
Nanotechnology Research Initiative."

> You are so brilliant in so many ways but I think you have a lot to learn
> about reaching and working with people. The success of the project
> depends hugely on you learning that.

I wish I knew more about dealing with people, but I no longer give it as high
a priority as I once did.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:38:48 MDT