Re: Why would AI want to be friendly? (Was: Congratulations to Eli, Brian ...)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Sep 05 2000 - 15:55:19 MDT


"Michael S. Lorrey" wrote:
>
> I suppose if you start from the ground up fresh this would be an appropriate
> statement. However I predict that the first SI will be largely structured on
> many processes inherent in the human mind, since it is an example that we do
> know of that works...and programmers hate to have to do something from the
> ground up when existing code is already present....

Yes, I use a lot of stuff that I got from looking at the human mind. But I don't
use it blindly; the observer-biased stuff isn't exactly subtle.

> I am of course assuming that any SI would have a characteristic curiosity, like
> any being of higher intelligence (basing this on more than just humans, but
> dolphins, apes, etc).

You just cited a bunch of evolved organisms. Non sequitur.

> You are erroneously assuming that an SI would be allowed to develop hard
> capabilities in the physical world consistent with its capabilities in the
> virtual.

An SI *has* capabilities in the physical world, as long as even one binary bit
of information is allowed to escape the computer. This includes a single line
of text on a computer monitor in a locked and guarded research laboratory.
You don't have to give the SI full access to Zyvex for it to get loose. Do
you think that because none of the objects humans call "tools" are present,
the SI has no capabilities? To any sufficiently advanced intelligence,
the external Universe is a manipulable continuum. The parts of that continuum
called "humans" are not distinct from the rest of it. Do you think that
because you have placed a lot of objects you call "locks" and "guards" near
the SI, it is locked and guarded?

> Eli, one does not hand a three year old the controls to nuclear bombs.

Mikey, we don't have a choice.

> > One does not perform "research" in this area. One gets it right the first
> > time. One designs an AI that, because it is one's friend, can be trusted to
> > recover from any mistakes made by the programmers.
>
> What about programming the SI does to itself?

If it's smart enough, then it won't make mistakes. If it's not smart enough,
then it should be able to appreciate this fact, and help us add safeguards to
prevent itself from making mistakes that would interfere with its own goals.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


