Re: Paradox--was Re: Active shields, was Re: Criticism depth, was Re: Homework, Nuke, etc..

From: John Marlow (johnmarrek@yahoo.com)
Date: Sat Jan 13 2001 - 19:53:14 MST


**See below

--- "Eliezer S. Yudkowsky" <sentience@pobox.com>
wrote:
> John Marlow wrote:
> >
> > >
> > > An AI can be Friendly because there's nothing
> > > there except what you put there,
> >
> > This is the crux of the problem. The intentions may be
> > noble, but I believe this to be an invalid assumption.
> > If the thing is truly a sentient being, it will be
> > capable of self-directed evolution.
>
> Hence the phrase "seed AI". But don't say "evolution";
> "evolution" is a word with extremely specific
> connotations. Say "capable of recursive self-improvement".

**Naaah; that will never catch on. Besides--who says
it will be improvement (as viewed by us)?

>
> > Since, as you say,
> > we will have no control over it, it may continue to be
> > Friendly--or evolve into something very, very
> > UnFriendly.
>
> Again - not "evolve". Whatever modification it chooses
> to make to itself, it will make that choice as a Friendly AI.

**Ah. Obviously we have radically differing concepts
of the term "friendly." Or "Friendly."

>
> > In which case, there may not be a damned
> > thing we can do about it.
>
> Yep.

**GAAAK! And you proceed! Or are you simply that
convinced of our own incompetence? (But then, this
would actually argue AGAINST proceeding, if you think
about it...)

>
> > You're playing dice.
>
> No, I'm choosing the path of least risk in a world
> where risk cannot be eliminated. You seem to be thinking
> in terms of "see risk - don't take risk". A popular
> viewpoint. It could very easily get us all killed.

**No--I am, for example, strongly in favor of
developing nanotechnology, and that is by far the
biggest risk I can see.

**Building an AI brighter than we are might run a
close second. That I'm not for.

**The first offers nearly limitless advantages for the
risk; the second offers only certain disaster. It's a
risk-benefit analysis.

john marlow

>
> --              --              --              --              --
> Eliezer S. Yudkowsky                          http://singinst.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
