Re: Eugene's nuclear threat

From: hal@finney.org
Date: Mon Oct 02 2000 - 11:40:04 MDT


Eugene writes:
> What can be possibly more poisonous than the end of the world as we
> know it(tm) and death of all or most human beings?

You need to make several assumptions to get to this from the starting
point of someone trying to develop a "wild" self-enhancing AI through
evolutionary techniques:

 - that the process will work at all to get any kind of AI
 - that the AI won't hit a bottleneck somewhere on its self-improvement
   trajectory
 - that the super-intelligent AI will want to wipe out humans
 - that it will succeed

and also, I should add,

 - that this will be a bad thing.

Without belaboring this last point, I do think there is a difference
between seeing humanity wiped out by mindless gray goo and having the
human race supplanted by an expanding, curious, exploratory new
super-intelligent life form. If we step back a bit from our humanist
chauvinism (and forget that we ourselves might be targets of the
malicious AI), we generally do justify the destruction of less
intellectually advanced life forms in favor of more advanced ones.
People do this every day when they eat. From a sufficiently removed
perspective, replacing the human race with an intelligence vastly more
aware, more perceptive, more intelligent and more conscious may not be
entirely evil. I don't say it should happen, but it is something to
consider in evaluating the morality of this outcome.

As for the first points, I have argued before that the concept of
self-enhancement is far from certain as a path to super-intelligent AI.
To even get started we must reach near-human-level intelligence through
our own efforts, a goal that has resisted decades of effort. Then, once
this monumental achievement is reached, is an ordinary human being with
IQ 100 smart enough to contribute materially to an AI project that has
reached human-level intelligence? It's pretty questionable, considering
how complex the algorithms are likely to be.

More intelligent brains will be even more complex. It seems very
plausible to me that we could hit a point where the complexity grows
faster than the intelligence, so that the smarter the brain got, the
harder it would find it to see how to improve itself. At that point the
trajectory to super-AI would fizzle out.
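To make that concrete, here is a toy sketch in Python, with growth laws
I am just assuming for illustration: suppose each round of
self-improvement adds intelligence in proportion to intelligence divided
by design complexity, and complexity grows as a power of intelligence.
If that exponent is greater than one, the gains shrink each round and
the trajectory flattens instead of taking off.

# Toy model: does recursive self-improvement take off or fizzle?
# Assumption (illustrative only): per-round gain = intelligence / complexity,
# where complexity = intelligence ** k.

def trajectory(rounds=50, intelligence=1.0, k=2.0):
    """Return intelligence after each round under the assumed growth law."""
    history = [intelligence]
    for _ in range(rounds):
        complexity = intelligence ** k   # complexity outruns intelligence when k > 1
        intelligence += intelligence / complexity
        history.append(intelligence)
    return history

if __name__ == "__main__":
    # k > 1: per-round gains shrink and growth slows to a crawl (the fizzle).
    # k = 0: complexity stays fixed and intelligence doubles every round (the runaway).
    print([round(x, 2) for x in trajectory(k=2.0)[::10]])
    print([round(x, 2) for x in trajectory(k=0.0)[::10]])

The exponent is the whole question, of course; the sketch only shows how
sensitive the outcome is to it.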

Even if it does work, it is a leap to assume that an evolved AI would
want to wipe out the human race. Maybe the thing can be made "friendly"
in Eliezer's sense while still using an evolutionary paradigm. Even if
it is less amiable, the AI might find that humans are useful tools in
accomplishing its ends. See, for example, Greg Bear's society in Moving
Mars, where shadowy super-intelligences run Earth behind the scenes of
a seemingly diverse and pluralistic society of physically and mentally
enhanced humans.

Then you have to assume that the AI will be capable of carrying out
its threat despite the physical handicaps it faces. Yes, it has
the advantage of intelligence, but there is plenty of historical and
evolutionary evidence that this is not definitive. Super-intelligence is
not omnipotence.

You have to be awfully sure of yourself to contemplate taking the kinds
of final actions you have described in the face of a threat with so
many uncertainties. Our visions of the future can be so intense and
detailed that we forget how many other possibilities there are. We need
to remember our own limitations and weaknesses in foreseeing the future
before taking precipitate action based on distant extrapolations.

Hal


