Re: Why would AI want to be friendly? (Was: Congratulations to Eli, Brian ...)

From: Zero Powers (zero_powers@hotmail.com)
Date: Tue Sep 05 2000 - 18:15:54 MDT


>From: "Eliezer S. Yudkowsky" <sentience@pobox.com>

> > > One does not perform "research" in this area. One gets it right the
>first
> > > time. One designs an AI that, because it is one's friend, can be
>trusted to
> > > recover from any mistakes made by the programmers.
> >
> > What about programming the SI does to itself?
>
>If it's smart enough, then it won't make mistakes. If it's not smart
>enough,
>then it should be able to appreciate this fact, and help us add safeguards
>to
>prevent itself from making mistakes that would interfere with its own
>goals.

That sounds like a very thin line of "what-ifs." If it's not smart enough
to self-program, it "should" be smart enough to realize that fact? And it
should be smart (or dumb) enough to trust us to build in constraints to
keep it from interfering with its own goals (as opposed to interfering
with ours)?

You keep saying that we cannot predict the behavior of an intelligence that
has not evolved as we have. But you seem to think that if you are friendly
to it, it will be friendly to you. Friendship and reciprocation are evolved
human social constructs. You think you can engineer these signal human
characteristics without implying their opposites? You can teach friendship
without teaching about enmity? You can engineer reciprocation without
suggesting cheating?

-Zero




