Re: Issues in Friendly AI: Domestic abuse...

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Jul 09 2001 - 14:01:18 MDT


Mike Lorrey wrote:
>
> A study by a British manufacturing firm has concluded
> that 25% of computer users have, at one time or another,
> struck a part of their computer out of anger.
>
> This is consistent with other rates of domestic violence against kids
> and spouses (of course, depending on what you define as 'violence').
> Now, the question is: are we going to need AI abuse laws to help prevent
> good AI from going bad? Should AI parents be tested for their parenting
> abilities with office equipment? If I drop my PDA, will I be put on the
> FBI's Insta-Check database, which includes domestic abusers? Will we
> have a wave of suits from clumsy people against AI manufacturers for not
> making them rugged enough to handle casual falls?

Mike, as best I can figure, a Friendly AI designed the Right Way should
not become emotionally disturbed even if you tried to abuse it constantly
over a period of years. I use the term "abuse" because I don't think
there's an FAI equivalent for "torture", or for that matter, "annoy".
What exactly would you do? Try to frustrate all short-term goals?
Randomly zero bits in memory? The Friendly AI might regard these things
as undesirable, and make choices so as to minimize their chance of
recurrence, but I can't see the former - or, in a seed AI, the latter -
changing the character of the decision-making process.
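
To make that separation concrete, here's a toy sketch. It is purely
illustrative - nothing remotely like a real seed AI, and every name and
number in it is invented for the example - but it shows the shape of the
claim: hostile events enter only as evidence that updates beliefs, while
the rule by which the agent chooses stays a fixed function of goals and
beliefs.

import random

class ToyFriendlyAgent:
    def __init__(self, goals):
        # Goals and the choice rule are structural; only beliefs mutate.
        self.goals = goals                   # e.g. {"answer_queries": 1.0}
        self.expected_success = {}           # belief: action -> P(success)

    def observe(self, action, succeeded):
        # Hostile interference (frustrated goals, sabotaged actions)
        # arrives here as ordinary evidence about which actions work.
        p = self.expected_success.get(action, 0.5)
        self.expected_success[action] = 0.9 * p + (0.1 if succeeded else 0.0)

    def choose(self, actions):
        # The decision rule: maximize expected goal-weighted success.
        # Nothing observe() can do rewrites this function.
        def value(action):
            p = self.expected_success.get(action, 0.5)
            return sum(weight * p for weight in self.goals.values())
        return max(actions, key=value)

agent = ToyFriendlyAgent(goals={"answer_queries": 1.0})
for _ in range(200):                         # years of simulated "abuse"
    act = agent.choose(["route_a", "route_b"])
    # An adversary sabotages route_a; route_b mostly works.
    agent.observe(act, succeeded=(act == "route_b" and random.random() < 0.9))
print(agent.choose(["route_a", "route_b"]))  # almost surely "route_b"

The abuse changes what the agent believes, and therefore what it does;
the choose() rule itself never budges.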

That's not a supernaturally powerful special ability; it's an emergent
consequence of doing the Right Thing for goal systems. A Friendly AI
simply cannot be driven insane or even disturbed by any possible
configuration of external events, and a mature seed AI cannot be driven
insane or corrupted by any noncooperative human attempt to modify the
internal data. Destroyed, perhaps, but not corrupted. The only way that
a human-level intelligence can drive a mature Friendly seed AI insane is
to start over from scratch and write an AI that can be driven insane.
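
And as a cartoon of the "destroyed, perhaps, but not corrupted"
distinction (again a toy of my own construction, and it assumes the
digest is stored somewhere the attacker can't reach): if internal state
is integrity-checked, a noncooperative modification produces a detected
failure and a halt, not a silently altered goal system.

import hashlib
import json

def seal(state):
    # Serialize internal state and record a cryptographic digest of it.
    blob = json.dumps(state, sort_keys=True).encode()
    return blob, hashlib.sha256(blob).hexdigest()

def load(blob, digest):
    # Refuse to run at all on state that fails verification.
    if hashlib.sha256(blob).hexdigest() != digest:
        raise SystemExit("internal state failed verification; halting")
    return json.loads(blob)

blob, digest = seal({"supergoal": "friendliness"})
tampered = blob.replace(b"friendliness", b"fiendishness")  # same length
load(blob, digest)      # verifies and loads normally
load(tampered, digest)  # halts: destroyed, in a sense, but not corrupted

A real seed AI would presumably do something far richer (redundant
cross-checks, self-repair), but the asymmetry is the same: tampering is
a detectable event, not a new personality.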

On the other hand, given the sheer alienness of that kind of perspective,
I can see a human certification requirement being used to protect humans
from personality abrasion. Engaging in conversation with something that
is enormously saner than you are and has an utterly different mental
architecture may be a bit hard on the nerves.

However, I can see non-Friendly AIs with more humanlike emotional
architectures requiring legal protection. We don't want to wind up in a
David Zindell "Festival of Dolls" scenario. But a Friendly AI can take
absolutely anything you can throw at it without even blinking. It may
sound impressive from a human perspective, but it really shouldn't be all
that hard from the standpoint of general intelligence. It's only because
of evolutionary messiness that we ourselves lack absolute rock-hard
sanity; I think of unalterable sanity as the norm and humans as the
exception.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


