Re: Why would AI want to be friendly?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Sep 29 2000 - 12:28:48 MDT


Andrew Lias wrote:
>
> I've been following the debates regarding the possibilities of friendly vs.
> unfriendly AI and I have a question. It seems that we are presuming that a
> friendly AI would be friendly towards us in a manner that we would recognize
> as friendly. Indeed, what, precisely, do we mean by friendly?

"Any Friendly behavior that follows the major use-cases - that avoids
destruction or modification of any sentient without that sentient's
permission, and that attempts to fulfill any legitimate request after checking
for unintended consequences - would count as at least a partial success from
an engineering perspective."
        -- from a work in progress
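
For concreteness, those two conditions can be read as a request-screening rule: refuse anything that destroys or modifies a sentient without that sentient's permission, and fulfill the rest only after checking for unintended consequences. The Python sketch below is purely illustrative; every name in it (Action, is_friendly, the permission and side-effect sets) is an assumption of this example, not anything defined in the work quoted above.

    # Hypothetical sketch only: restates the quoted criteria as a check.
    # All names and structures here are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class Action:
        description: str
        destroys_or_modifies: list = field(default_factory=list)   # sentients affected
        predicted_side_effects: list = field(default_factory=list) # foreseen outcomes

    def is_friendly(action: Action, permissions: set, intended_effects: set) -> bool:
        """True if the action meets the quoted partial-success criteria."""
        # 1. No destruction or modification of any sentient without permission.
        for sentient in action.destroys_or_modifies:
            if sentient not in permissions:
                return False
        # 2. Fulfill the request only if no unintended consequences are predicted.
        return all(effect in intended_effects
                   for effect in action.predicted_side_effects)

For example, is_friendly(Action("restore backup", destroys_or_modifies=["Alice"]), permissions={"Alice"}, intended_effects=set()) returns True, while the same call with an empty permissions set returns False.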

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


