Re: Why would AI want to be friendly?

From: Ken Clements (Ken@Innovation-On-Demand.com)
Date: Fri Sep 29 2000 - 16:09:29 MDT


"Eliezer S. Yudkowsky" wrote:

>
> "Any Friendly behavior that follows the major use-cases - that avoids
> destruction or modification of any sentient without that sentient's
> permission, and that attempts to fulfill any legitimate request after checking
> for unintended consequences - would count as at least a partial success from
> an engineering perspective."
> -- from a work in progress
>

How can this work? If someone tells me something that I did not know, they have
modified me (assuming I remember what they told me). If an AI is required not to
modify me without my permission, it will have to refrain from telling me anything
I do not already know, because it cannot obtain my informed consent to be told a
thing without first telling me the thing.

What is a "legitimate request"?

How do you check for "unintended consequences" without running a simulation of the
entire Universe out to heat death? Even in the short run, how will the AI account
for the impact of its own future actions on the matter without first running a
simulation of itself?

-Ken
