Re: Why would AI want to be friendly?

From: Samantha Atkins (samantha@objectent.com)
Date: Sun Sep 24 2000 - 18:47:28 MDT


"Eliezer S. Yudkowsky" wrote:
>
> Samantha Atkins wrote:
> >
> > "Eliezer S. Yudkowsky" wrote:
> >
> > > Oh, why bother. I really am starting to get a bit frustrated over here. I've
> > > been talking about this for weeks and it doesn't seem to have any effect
> > > whatsoever. Nobody is even bothering to distinguish between subgoals and
> > > supergoals. You're all just playing with words.
> >
> > Hey! Wait a second. If you are going to be a leader in the greatest
> > project in human history (or in any project for that matter) you have to
> > learn and learn damn fast to be able to motivate and enroll people.
>
> No, actually I should expect that the seed AI project will have smaller total
> expenditures, from start to finish, than a typical major corporation's Y2K
> projects. I used to think in terms of the greatest project in human history,
> but I no longer think that'll be necessary, and a damn good thing, as I don't
> think we're gonna *get* the largest budget in human history.

Huh? I wasn't using "greatest" in respect to budget or size of
development team (although I think both will be greater than you
think). I was using it in terms of the criticality of this project.
You continually tell us it is our only hope. It is difficult to
imagine a project much "greater" than that.

>
> > You need other human
> > beings to understand enough to keep you from getting lynched or shut
> > down for trying such a thing.
>
> Yes. A finite and rather small number of human beings, most of whom will
> almost certainly Get It on the first try. If, hypothetically, I were a
pessimistic and cynical person, then I would start saying things like: "And if
> I spend time talking to anyone else, then that just increases the probability
> that I'll get lynched or shut down. Foresight tried to play nicey-nice with
> everyone, as a result of which they are now being screwed over by the National
> Nanotechnology Research Initiative."
>

I think that is a highly questionable attitude. I do not believe that
even you "got it" on the first try. Nor do I believe the evolution of
the idea is so complete and finished that anyone could get it in one
try. Frankly, I think this part of your thinking is dangerously
immature.
  
> > You are so brilliant in so many ways but I think you have a lot to learn
> > about reaching and working with people. The success of the project
> > depends hugely on you learning that.
>
> I wish I knew more about dealing with people, but I no longer give it as high
> a priority as I once did.
>

How can that be anything but a mistake when you require people, and
their resources, to produce the Seed? They are the only intelligences
available for getting this thing off the ground. Do you really believe
that all of those you need will just automatically think enough like
you, or agree enough with your conclusions, that little or no effort on
your part is necessary to understand and deal with them? What kind of
model leads you to this conclusion?

- samantha


