Re: Keeping AI at bay (was: How to help create a singularity)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Apr 30 2001 - 15:30:30 MDT


Eugene.Leitl@lrz.uni-muenchen.de wrote:
>
> I do not care what the AIs want, as long as they're something
> really big, fast, powerful, and growing exponentially, and have
> -- even unbiased -- agendas of their own. And they will be that, very
> soon, if they're subject to evolutionary constraints. Which seems
> about the only bit of prejudice I have.

http://singinst.org/CaTAI/friendly/info/indexfaq.html#q_2.1
http://singinst.org/CaTAI/friendly/anthro.html#observer
http://singinst.org/CaTAI/friendly/design/seed.html#directed

> What is it with you people, are you just generically unreasonable,
> or do you habitually let your kids play Russian roulette, or
> soccer on a minefield?
>
> I don't even have to be absolutely 100.1% positive about it; as
> long as I think there's a nonnegligible probability of Ragnar0k,
> I just think it's a monumentally bad idea to attempt. Bill Joy
> is right on the money on this one. Relinquishment, shmerlinquishment,
> just hold the horses until we're ready.

It is, I suppose, emotionally understandable that you should have an
allergy to planetary risks (what Nick Bostrom calls "existential" risks -
is that paper out yet, Nick?). But the simple fact is that it is not
possible to entirely eliminate existential risks, no matter how hard we
try. The question is how to minimize existential risk. For example, it
is better to develop Friendly AI before nanotechnology than vice versa,
because success in Friendly AI increases the chance of success in
nanotechnology more than success in nanotechnology increases the chance of
success in Friendly AI.
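
To put rough numbers on that ordering argument, here is a minimal
sketch in Python; every figure in it is a made-up placeholder of mine,
purely for illustration, not an estimate from the pages above:

    # Illustrative sketch only: all probabilities are hypothetical
    # placeholders, chosen just to show how the ordering argument works.

    p_fai = 0.5            # assumed chance of getting Friendly AI right on its own
    p_nano = 0.5           # assumed chance of surviving nanotechnology on its own
    boost_from_fai = 0.4   # assumed boost a Friendly AI gives to surviving nanotech
    boost_from_nano = 0.1  # assumed boost mature nanotech gives to Friendly AI

    # Friendly AI first, then nanotechnology:
    survive_fai_first = p_fai * min(1.0, p_nano + boost_from_fai)    # 0.45

    # Nanotechnology first, then Friendly AI:
    survive_nano_first = p_nano * min(1.0, p_fai + boost_from_nano)  # 0.30

    print(survive_fai_first, survive_nano_first)

With equal base chances, the Friendly-AI-first ordering wins whenever
boost_from_fai exceeds boost_from_nano; the particular numbers don't
matter.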

An emotional intolerance of existential risk leads to disaster; it means
that a 1% probability of a 0% risk and a 99% probability of a 60% risk
will be preferred to a 100% probability of a 20% risk. This is the basic
emotional driver behind Bill Joy's theory of relinquishment - as far as
he's concerned, there's no such thing as a necessary existential risk, so
even a 1% chance of avoiding all existential risks must be preferable to
accepting a single existential risk - even if the actions required to
create that 1% probability result in a 99% probability of a *much* *worse*
existential risk, such as AI or nanotechnology being created in secrecy by
rogue factions. This preference only falls out of the equation above if
you, like Bill Joy, treat the possibility of a 60% existential risk and
the possibility of a 20% existential risk as identical total disasters.
Which is what I mean when I say that an emotional intolerance of
existential risk leads to disaster.
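
Spelled out as arithmetic (using the illustrative percentages from the
paragraph above; the Python is just probability-weighted risk):

    # Expected existential risk under the two illustrative policies above.

    # Policy preferred by the risk-intolerant: 1% chance of a 0% risk,
    # 99% chance of a 60% risk.
    expected_a = 0.01 * 0.00 + 0.99 * 0.60   # = 0.594

    # Policy rejected by the risk-intolerant: a certain 20% risk.
    expected_b = 1.00 * 0.20                 # = 0.20

    print(expected_a, expected_b)            # 0.594 vs. 0.2

The "intolerant" choice carries almost three times the expected risk; it
looks attractive only because one of its branches contains a zero.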

> Is it too much too ask?

Unless you explain why slowing down decreases risk, in at least as much
detail as I've explained why slowing down increases risk, yes.

http://singinst.org/CaTAI/friendly/policy.html#comparative

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


