Re: Why would AI want to be friendly?

From: xgl (xli03@emory.edu)
Date: Fri Sep 08 2000 - 08:34:41 MDT


On 7 Sep 2000, Christian Weisgerber wrote:

>
> xgl <xli03@emory.edu> wrote:
>
> > as eliezer points out in his various writings, if such a mind does
> > anything at all, it would be because it was objectively _right_ --
> ^^^^^^^^^^^^^^^^^^^
> Assuming there is such a thing as objective ethics.
>

        i think a revision is in order here. according to recent posts
from eliezer to the extropians mailing list, his design does take into
account the possibility that there is no such thing as objective ethics.
so if a mind does anything at all, it would be because it was objectively
right _or_ because it had nothing better to do. for all i know, eliezer's
design might have incorporated this view from the beginning; but this is
the first time i've seen it confirmed.

-x
