Re: Why would AI want to be friendly?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Sep 24 2000 - 11:20:10 MDT


Darin Sunley wrote:
>
> It's so seldom I get a chance to contribute meaningfully to the conversation
> here, but let me jump in...
>
> You cannot win a negotiation against something that quite rightly views you
> as a deterministic process.

Exactly. Humans sometimes look down on people they consider easy to
predict, so I should emphasize that "looking down" is an attitude, not an
inevitable consequence of predictability - but yes, a mortal human is
probably pretty much a
deterministic and manipulable process to a transhuman, never mind a
superintelligence.

> I think the relevant image here was of an AI carrying on an optimal
> conversation with a human being by testing its responses against a trillion
> high-fidelity simulations of that human, during the time it took the human
> to draw a breath.

This is the usual image, but even that level of superintelligence shouldn't be
necessary. As I once said, Deep Blue and Kasparov were evenly matched (more
or less) despite the disparity in moves-per-second because Deep Blue was
playing chess, while Kasparov was playing the regularities in the game of
chess. Very different search trees!

Running a trillion high-fidelity simulations is the Deep Blue path to
irresistible persuasiveness, but that shouldn't be necessary. There are
regularities in the Game of Us. I cannot view a human being on that level,
but I can see enough to know that level exists. There's a limited number of
emotional tones and intuitions in the human mind. There are a finite number
of pieces and a finite number of moves, and the "pieces" and "moves" are
regularities far, far above the neural level.
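
To make that contrast concrete, here is a toy sketch in Python - every
name, number, and function in it is invented for illustration, not a claim
about how such a mind would actually work. The point is just the shape of
the two search trees: the brute-force path scores concrete continuations
one by one, while the regularity path searches a tiny abstract game of
tones and moves.

    # Toy contrast (all names and numbers invented for illustration):
    # the "Deep Blue path" versus searching the regularities directly.

    MOVES = ["reassure", "question", "concede", "appeal"]  # assumed "moves"

    def brute_force_reply(simulate, candidate_replies):
        """Deep Blue path: run a full simulation of the listener on
        every concrete candidate reply; keep the best-scoring one."""
        return max(candidate_replies, key=simulate)

    def regularity_search(tone, transition, value, depth=3):
        """Regularity path: depth-limited search over the small
        abstract game of emotional tones and conversational moves.
        'transition' predicts the listener's next tone from a move;
        'value' scores how favorable a tone is."""
        if depth == 0:
            return value(tone), None
        best_score, best_move = float("-inf"), None
        for move in MOVES:
            score, _ = regularity_search(transition(tone, move),
                                         transition, value, depth - 1)
            if score > best_score:
                best_score, best_move = score, move
        return best_score, best_move

The first tree's branching factor is the number of possible sentences;
the second's is four.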

A human being can't model another human being this way; first of all, nobody
knows what all the pieces are - have you ever seen a complete list of the
emotional tones? And even if someone did know it all, completely - a level of
knowledge considerably in excess of that needed to build an AI - the size of
the search tree would probably overflow the limits of human abstract thought.
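
Just to put a made-up number on that overflow:

    # Back-of-envelope arithmetic (both numbers invented): suppose 20
    # possible abstract "moves" per conversational turn, looking ten
    # turns ahead.
    branching, depth = 20, 10
    print(branching ** depth)  # 10,240,000,000,000 positions to track -
                               # hopeless for human abstract thought,
                               # routine for anything that prunes well.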

The knowledge and intelligence needed to see a human being as a manipulable
process lie considerably above the level of human intelligence. But the
task doesn't require superintelligence, just fairly mild transhumanity. You have
to know all the pieces and be able to keep track of the most likely
possibilities; that's about it. I'll never be able to do it, or even come
close, but I can see enough to know it can be done.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


