Re: Why would AI want to be friendly?

From: Samantha Atkins (samantha@objectent.com)
Date: Sun Sep 24 2000 - 22:42:35 MDT


"Eliezer S. Yudkowsky" wrote:
>

> Okay, so why would I necessarily need more than a finite and limited amount of
> charisma to handle that? If I needed incredibly skilled and talented hackers
> to act as janitors, then yes, I (or someone) would need a lot of charisma.
> But attracting people to the most important job in the entire world? How much
> leadership talent do you need for *that*?

Little would be needed if it were as obvious to everyone as it appears to be to you.
A great deal is needed to make it that obvious in actuality. There is
no reason to believe that everyone bright enough and motivated enough
to be useful to the work will "get it" the first time around.

>
> Actually herding the cats once you've got them, now, that's another issue.
> So's PR.
>

OK.
 
>
> > > I wish I knew more about dealing with people, but I no longer give it as high
> > > a priority as I once did.
> >
> > How can that be anything but a mistake when you require people, since
> > they are the only intelligences to use in getting this thing off the
> > ground, and their resources in order to produce the Seed?
>
> My purpose, above all else, is to design the Seed. Other people can
> persuade. I have to complete the design. If being charismatic requires
> patterns of thought that interfere with my ability to complete the design, or
> even if it starts taking up too much time, then forget charismatic. I'll stay
> in the basement and someone else will be charismatic instead.
>

That's fair enough. As long as you have some people to do the necessary
job of explanation and persuasion, all should be reasonably well on that
score.
 
> > Do you really
> > believe that all of those you need will just automatically think enough
> > like you or agree enough with your conclusions that little/no effort is
> > necessary on your part to understand and deal with them further? What
> > kind of model leads you to this conclusion?
>
> Past experience, actually. The people I need seem to Get It on the first try,
> generally speaking. I'm not saying that they don't argue with me, or that
> they don't ask questions. Mitchell Porter has been right where I have been
> wrong, on a major issue, on at least two separate occasions.
>

Or at least, all the people you know you can work with and/or who are on
board today got it quickly (or seemingly so, in hindsight).
 
> The difference is pretty hard to put into words. I am not the judge of who is
> or isn't asking "intelligent questions", and that's not what I'm trying to
> say. What I'm trying to say rather is that there is a pattern. Mitchell
> Porter groks the pattern; if he says, "Eliezer, you're flat wrong about X",
> then at least we're both arguing within the same pattern. People who Get It
> may agree or disagree with me, but they understand the pattern. Rarely, if
> ever, do I see someone who didn't get the pattern suddenly get it after long
> and acrimonious argument; the only person I can ever recall seeing do that is
> Eric Watt Forste, which still impresses me.

But it is certainly within the realm of possibility that the basic
pattern itself has some flaws, or at least reasonably questionable
spots. In that case, anyone who could or tried to point those out would
be excluded if you were overly attached to this past-experience-based
notion. I trust that you are not so attached, since that would be
counterproductive to your central goal, and I am certain that you are
dedicated to that goal and aware of the possible problem.

For myself, I think one reason I question this so much is that it is
something that strongly appeals to me for several reasons and, at the
same time, something that repels me. It is not an easy or easily
accepted thing to believe that you can, in large part, decide for the
entire world by building that which out-thinks the world. I, and I am
sure many of us, doubt that humanity, without at least massive
augmentation, can and will grow up fast enough for what is coming. Many
of us also doubt that such a powerful AI, even as a seed, can
successfully be built very soon, and that even if built, it would
actually be a viable solution or outcome.

- samantha



This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:38:51 MDT