Re: Why would AI want to be friendly?

From: Emlyn (emlyn@one.net.au)
Date: Sat Sep 30 2000 - 01:23:28 MDT


> "Eliezer S. Yudkowsky" wrote:
>
> >
> > "Any Friendly behavior that follows the major use-cases - that avoids
> > destruction or modification of any sentient without that sentient's
> > permission, and that attempts to fulfill any legitimate request after
> > checking for unintended consequences - would count as at least a partial
> > success from an engineering perspective."
> > -- from a work in progress
> >
>
> How can this work? If someone tells me something that I did not know,
> they have modified me (assuming I remember what they told me). If an AI
> is required not to modify me without my permission, it will have to
> refrain from telling me anything I do not already know, because it will
> not be able to get my informed consent to be told the thing without
> first telling me the thing.
>
> What is a "legitimate request"?
>
> How do you check for "unintended consequences" without running a
> simulation of the entire Universe out to heat death? Even in the short
> run, how will the AI account for the impact of its own future actions in
> the matter without first running a simulation of itself?
>
> -Ken

I guess the AI will have to make a judgement call. That's the truly
dangerous part.
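
To put that judgement call in slightly more concrete terms: rather than
simulating the Universe out to heat death, the AI presumably evaluates
consequences only out to some bounded horizon and confidence level, and
punts back to the requester when the estimate is too shaky. Here is a rough
Python sketch of that idea - every function name and threshold below is
invented purely for illustration, not anyone's actual design:

    # Purely illustrative sketch of "making the judgement call": instead of
    # simulating everything, evaluate a request out to a bounded horizon and
    # defer to the requester when the harm estimate is too uncertain.
    # All names and thresholds here are made up for illustration.

    from dataclasses import dataclass

    @dataclass
    class Assessment:
        expected_harm: float   # estimated harm to sentients, 0.0 = none
        uncertainty: float     # how unsure the model is about that estimate

    def assess(request: str, horizon: int) -> Assessment:
        # Stand-in for the AI's (necessarily approximate) consequence model.
        # A real system would do actual prediction; here it is just stubbed
        # out so the decision logic below can run.
        return Assessment(expected_harm=0.0, uncertainty=1.0 / horizon)

    def decide(request: str, horizon: int = 10,
               harm_limit: float = 0.01, doubt_limit: float = 0.2) -> str:
        a = assess(request, horizon)
        if a.expected_harm > harm_limit:
            return "refuse"           # clear case: predicted harm
        if a.uncertainty > doubt_limit:
            return "ask requester"    # the judgement call: punt it back
        return "fulfill"

    if __name__ == "__main__":
        print(decide("tell me something I don't know", horizon=3))   # ask requester
        print(decide("tell me something I don't know", horizon=50))  # fulfill

Of course, choosing the horizon and the thresholds is itself a judgement
call, which is exactly the dangerous part.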

Emlyn


