Re: Why would AI want to be friendly? (Was: Congratulations to Eli,Brian ...)

From: Technotranscendence (neptune@mars.superlink.net)
Date: Tue Sep 05 2000 - 20:27:45 MDT


On Tuesday, September 05, 2000 2:55 PM, Eliezer S. Yudkowsky
<sentience@pobox.com> wrote:
> > > One does not perform "research" in this area. One gets it right the
> > > first time. One designs an AI that, because it is one's friend, can be
> > > trusted to recover from any mistakes made by the programmers.
> >
> > What about programming the SI does to itself?
>
> If it's smart enough, then it won't make mistakes. If it's not smart
> enough, then it should be able to appreciate this fact, and help us add
> safeguards to prevent itself from making mistakes that would interfere
> with its own goals.

This is almost like saying "If it's smart enough, there's no problem. If
it's not smart enough, there's still no problem."

Nevertheless, continue.

Cheers!

Daniel Ust
http://uweb.superlink.net/neptune/



This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:37:14 MDT