Re: Why would AI want to be friendly? (Was: Congratulations to Eli, Brian ...)

From: Barbara Lamar (shabrika@juno.com)
Date: Tue Sep 05 2000 - 17:24:31 MDT


In an earlier email in this thread, Eliezer wrote the following:

> If, as seems to be the default scenario, all supergoals are ultimately
> arbitrary, then the superintelligence should do what we ask it to, for
> lack of anything better to do.

In another email, Eliezer wrote this:

> If it's smart enough, then it won't make mistakes. If it's not
> smart enough, then it should be able to appreciate this fact, and
> help us add safeguards to prevent itself from making mistakes that
> would interfere with its own goals.

I'm having a difficult time reconciling these two statements. If the SI
has no preferences and would do whatever it is asked, for lack of anything
better to do, then how could it be said to have goals of its own?

Barbara

