Re: Why would AI want to be friendly?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Sep 24 2000 - 19:36:50 MDT


Samantha Atkins wrote:
>
> Huh? I wasn't using "greatest" in respect to budget or size of
> development team (although I think both will be greater than you
> think). I was using it in terms of the criticality of this project.
> You continuously tell us it is our only hope. It is difficult to
> imagine a project much "greater" than that.

Okay, so why would I necessarily need more than a finite and limited amount of
charisma to handle that? If I needed incredibly skilled and talented hackers
to act as janitors, then yes, I (or someone) would need a lot of charisma.
But attracting people to the most important job in the entire world? How much
leadership talent do you need for *that*?

Actually herding the cats once you've got them, now, that's another issue.
So's PR.

> I think that is a highly questionable attitude. I do not believe that
> even you yourself "got it" on the first try. Nor do I believe the evolution
> of the idea is complete and finished for anyone to get in one try.
> Frankly I think this part of your thinking is dangerously immature.

There's a difference between Getting It and getting everything right on the
first try (see below).

> > I wish I knew more about dealing with people, but I no longer give it as high
> > a priority as I once did.
>
> How can that be anything but a mistake when you require people, since
> they are the only intelligences to use in getting this thing off the
> ground, and their resources in order to produce the Seed?

My purpose, above all else, is to design the Seed. Other people can
persuade. I have to complete the design. If being charismatic requires
patterns of thought that interfere with my ability to complete the design, or
even if it starts taking up too much time, then forget charismatic. I'll stay
in the basement and someone else will be charismatic instead.

> Do you really
> believe that all of those you need will just automatically think enough
> like you or agree enough with your conclusions that little/no effort is
> necessary on your part to understand and deal with them further? What
> kind of model leads you to this conclusion?

Past experience, actually. The people I need seem to Get It on the first try,
generally speaking. I'm not saying that they don't argue with me, or that
they don't ask questions. Mitchell Porter has been right where I have been
wrong, on a major issue, on at least two separate occasions.

The difference is pretty hard to put into words. I am not the judge of who is
or isn't asking "intelligent questions", and that's not what I'm trying to
say. What I'm trying to say, rather, is that there is a pattern. Mitchell
Porter groks the pattern; if he says, "Eliezer, you're flat wrong about X",
then at least we're both arguing within the same pattern. People who Get It
may agree or disagree with me, but they understand the pattern. Rarely, if
ever, do I see someone who didn't get the pattern suddenly get it after long
and acrimonious argument; the only person I can ever recall seeing do that is
Eric Watt Forste, which still impresses me.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
