Hal Finney pointed out,
> You need to make several assumptions to get to this from the starting
> point of someone trying to develop a "wild" self-enhancing AI through
> evolutionary techniques:
Even better than making assumptions would be to run some actual experiments to
see how wild self-enhancing AI might in fact evolve.
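To make that concrete, here is what the skeleton of such an experiment might
look like in Python (a toy sketch only; the fitness function, mutation rate,
and population size are all made-up stand-ins, not anyone's actual setup):

    # Toy evolutionary loop: candidate "genomes" are scored by a
    # stand-in fitness function, and mutation supplies the variation.
    # A real experiment would score problem-solving ability instead.
    import random

    def fitness(genome):
        return sum(genome)  # stand-in objective: count of 1-bits

    def mutate(genome, rate=0.01):
        return [bit ^ (random.random() < rate) for bit in genome]

    population = [[random.randint(0, 1) for _ in range(64)]
                  for _ in range(100)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        survivors = population[:20]                     # selection
        population = [mutate(random.choice(survivors))  # variation
                      for _ in range(100)]
    print("best fitness:", max(fitness(g) for g in population))

Trivial, yes, but the point stands: how such systems actually behave is an
empirical question, and toy runs like this are cheap to do.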
> Without belaboring this last point, I do think that there is a
> difference between seeing humanity wiped out by mindless gray goo,
> versus having the human race supplanted by an expanding, curious,
> exploratory new super-intelligent life form. If we step back a bit
> from our humanist chauvinism (and forgetting that we ourselves might be
> targets of the malicious AI),
Even worse, we might be targets of the Friendly AI.
(What's that say about us!)
> ...we generally do justify the destruction of
> less intellectually advanced life forms in favor of more advanced ones.
> People do this every day when they eat. From a sufficiently removed
> perspective, replacing the human race with an intelligence vastly more
> aware, more perceptive, more intelligent and more conscious may not be
> entirely evil. I don't say it should happen, but it is something to
> consider in evaluating the morality of this outcome.
The "morality" is this outcome will be determined (as it always is) by the
winners. The losers don't get to write history, the winners do (and in their own
language).
> As far as the first points, I have argued before that the concept of
> self enhancement is far from certain as a path to super-intelligent AI.
Nothing is "certain" except for the Uncertainty Principle. Nevertheless, without
self enhancement, we will have less chance of understanding the issues at stake.
> To even get started we must reach near-human level intelligence on our
> own efforts, a goal which has thwarted decades of effort.
It was *centuries* after Sci-Fi and da Vinci postulated flying machines that
humans went to the Moon. Thwarted effort seems to spur people on to overcome
the obstacles, for some reason. Perhaps it's the challenge of the thing.
Airplanes weren't good for much in 1910... but six decades later, men walked
on the Moon.
> Then, once
> this monumental achievement is reached, is an ordinary human being with
> IQ 100 smart enough to contribute materially to an AI project which has
> reached human-level intelligence? It's pretty questionable, considering
> how complex the algorithms are likely to be.
An ordinary human being with IQ 100 could contribute quite a lot to an AI
project. For starters, let's see if we could double this IQ via enhancement
technology. (Hey, I'd volunteer.)
> More intelligent brains will be even more complex. It seems very
> plausible to me that we could hit a point where the complexity grows
> faster than the intelligence, so that the smarter the brain got, the
> harder time it would have seeing how to improve itself. At that point
> the trajectory to super-AI would fizzle out.
That doesn't make sense, because if the brain gets smarter, then it has
amplified capability to see how to improve itself. Conversely, the dumber
brains get (uh-oh, here comes an anti-drug ad), the more difficult it would be
for them to see how to improve themselves. Right?
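Whether the trajectory fizzles is really a race between two growth rates, and
a toy model makes the disagreement concrete (both growth curves below are pure
assumptions, chosen only for illustration):

    # Toy model: a brain of intelligence I has complexity(I) parts,
    # and can comprehend at most comprehension(I) parts. It keeps
    # self-improving only while it can still understand itself.
    def complexity(i):
        return i ** 2    # assumption: complexity grows quadratically

    def comprehension(i):
        return 10 * i    # assumption: comprehension grows linearly

    i = 1.0
    while comprehension(i) >= complexity(i):
        i *= 1.1         # self-improvement step
    print("self-improvement stalls near I =", round(i, 1))

Under these assumed curves Hal is right and the climb stalls; swap the two
growth rates and the opposite conclusion falls out. Which pair of curves is
closer to reality is exactly the kind of thing experiments could help settle.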
>
> Even if it does work, it is a leap to assume that an evolved AI would
> want to wipe out the human race.
I don't know about that. Coincidentally, the smartest people also seem to have
the most problems relating to the human race at large. If an evolved AI
succinctly and compellingly explained its reasons for wiping out the human race,
perhaps we'd treat it like Unabomber Ted Kaczynski, who has his share of
endorsements among the Green Party.
> Even if
> it is less amiable, the AI might find that humans are useful tools in
> accomplishing its ends.
Interesting that many people think of AI in the singular. Imagine treating
natural intelligence the same way:
*The* biological intelligence (BI) or *the* native intelligence (NI) sounds
kind of cryptic, doesn't it? After all, "the" human intelligence resides in
billions of craniums. What reason do we have to suppose that AI might not
reside in millions of desktop or laboratory machines? I tend to think of them
as AIs, Mind Children, Robo sapiens, Spiritual Machines, and Artilects, rather
than as *the* AI.
> See, for example, Greg Bear's society in Moving
> Mars, where shadowy super-intelligences run Earth behind the scenes of
> a seemingly diverse and pluralistic society of physically and mentally
> enhanced humans.
Shades of _The Matrix_, and deja vu all over again.
> Then you have to assume that the AI will be capable of carrying out
> its threat despite the physical handicaps it faces. Yes, it has
> the advantage of intelligence, but there is plenty of historical and
> evolutionary evidence that this is not definitive. Super-intelligence is
> not omnipotence.
Right on. Stephen Hawking is only as menacing as his electric wheelchair can
make him.
--J. R.
"Sushi... how come they don't cook that stuff?"
--Alfredo Benzene