--- "Eliezer S. Yudkowsky" <firstname.lastname@example.org>
> John Marlow wrote:
> > >
> > > An AI can be Friendly because there's nothing
> > > except what you put there,
> > This is the crux of the problem. The intentions
> may be
> > noble, but I believe this to be an invalid
> > assumption.
> > If the thing is truly a sentient being, it will be
> > capable of self-directed evolution.
> Hence the phrase "seed AI". But don't say
> "evolution"; "evolution" is a
> word with extremely specific connotations. Say
> "capable of recursive self-improvement".
**Naaah; that will never catch on. Besides--who says
it will be improvement (as viewed by us)?
> > Since, as you say,
> > we will have no control over it, it may continue
> to be
> > Friendly--or evolve into something very, very
> > UnFriendly.
> Again - not "evolve". Whatever modification it
> chooses to make to itself,
> it will make that choice as a Friendly AI.
**Ah. Obviously we have radically differing concepts
of the term "friendly." Or "Friendly."
> > In which case, there may not be a damned
> > thing we can do about it.
**GAAAK! And you proceed! Or are you simply that
convinced of our own incompetence? (But then, that
would actually argue AGAINST proceeding, if you think
we're that incompetent.)
> > You're playing dice.
> No, I'm choosing the path of least risk in a world
> where risk cannot be
> eliminated. You seem to be thinking in terms of
> "see risk - don't take
> risk". A popular viewpoint. It could very easily
> get us all killed.
**No--I am, for example, strongly in favor of
developing nanotechnology, and that is by far the
biggest risk I can see.
**Building an AI brighter than we are might run a
close second. That I'm not for.
**The first offers nearly limitless advantages for the
risk; the second offers only certain disaster. It's a
> -- -- -- --
> Eliezer S. Yudkowsky
> Research Fellow, Singularity Institute for
> Artificial Intelligence
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:56:19 MDT