Re: Paradox--was Re: Active shields, was Re: Criticism depth, was Re: Homework, Nuke, etc.

From: John Marlow (johnmarrek@yahoo.com)
Date: Sat Jan 13 2001 - 00:07:31 MST


> An AI can be Friendly because there's nothing there
> except what you put there,

This is the crux of the problem. The intentions may be
noble, but I believe this to be an invalid assumption.
If the thing is truly a sentient being, it will be
capable of self-directed evolution. Since, as you say,
we will have no control over it, it may continue to be
Friendly--or evolve into something very, very
UnFriendly. In which case, there may not be a damned
thing we can do about it.

You're playing dice.

--- "Eliezer S. Yudkowsky" <sentience@pobox.com>
wrote:
> John Marlow wrote:
> >
> > **True enough--but their ancestors did. And you
> feel
> > no obligation to refrain from killing them--much
> less
> > to look after them--because their ancestors wrote
> your
> > source code.
>
> That's *right*. An AI, even a Friendly AI, feels no obligation because
> *we* wrote the source code. Not unless someone puts it there, and while
> there will someday be AIs that are good drinking companions and fit
> participants in the human drama, and these AIs may wax sentimental about
> their creators, the Friendly AIs come *first* - the Sysop, or the
> Guardians.
>
> An AI can be Friendly because there's nothing there except what you put
> there, what you share with the AI. The task is nontrivial because you
> don't always know what it is you're putting there, but that blank slate,
> that vast silent space, is what makes the task possible.
>
> I don't want to sound like it's a question of coercion. The paradigm of
> Friendly AI is to create unity between the will of the Friendly AI and
> the decisions of an idealized human altruist. It is not a question of
> control. You have to identify with the Friendly AI you build, because a
> human thinks about controlling different humans, wheedling or coercing
> the Other, but the only time we *build* a mind is when we build
> ourselves. The Friendly AI I want to build is the same being I would
> make myself into if that were the only way to get a Sysop (a Guardian
> AI, if you prefer). A Sysop might not turn out to be a real person, and
> I'd back myself up first if I could - but a Sysop is a valid thing for a
> mind to be, a valid state for a mind to occupy, a valid part of the
> world, not a slave or someone we've tricked into servitude.
>
> You keep trying to understand AI in human terms. Everything that you use
> to model other minds is specialized on understanding humans, and an AI
> isn't human. A Friendly AI isn't Friendly to us because *we* built it;
> it would be just as Friendly if the identical source code materialized
> from thin air, and will go on to be just as Friendly to aliens if a
> pre-Singularity civilization is ever found orbiting another star. That
> lack of sentiment doesn't make it dangerous. My benevolence towards
> other sentient beings isn't conditional on their having created me; why
> would I want to build a conditionally Friendly AI?
>
> --
> Eliezer S. Yudkowsky
> http://singinst.org/
> Research Fellow, Singularity Institute for Artificial Intelligence



