Thinking about the future...

Eric Watt Forste (
Tue, 3 Sep 1996 18:56:51 -0700

At 11:47 AM 9/3/96, wrote:
> But when transhumanists talk about >AI they hardly mean a
> moderate >AI -like a very brilliant human and then some. We
> speculate about a machine that would be a million times
> faster than any human brain, and with correspondingly great
> memory capacity. Could such a machine, given some time, not
> manipulate a human society by subtle suggestions that seem
> very reasonable but unnoticeably effects a general change in
> attitude and policy? And all the time it would look as if it
> were a perfectly decent machine, always concerned about our
> welfare...

I don't doubt at all that such a superintelligence would have the ability to
do this. I don't know whether it's possible, but likewise, I can't assert
that it's impossible. I've seen too much turmoil in the world caused by mere
books written by mehums. ;)

> How likely is it that a malicious >AI could bring disaster
> to a human society that were initially determined to take
> the necessary precautions?
> My contention is that with only one full-blown >AI in the
> world, if it were malicious, the odds would be on the side
> that it could annihilate humanity within decades.

What I do doubt, however, is that any superintelligence could develop a set
of values and understandings that would lead it to conclude that such a
course of action would be conducive to whatever *other* projects it was
trying to undertake. If the superintelligence actually had, as its primary
value and its primary project, the destruction of human civilization, then
yes, it could succeed. But this project would be directly incompatible with
the swift and efficient implementation of so many other possible projects
that the superintelligence might plausibly value that I think a "basic"
value for the destruction of human civilization would rapidly get damped
down by competition from other projects the SI might value, projects that
would benefit from cooperation with human civilization as a means. This is
just as a "basic" value of bloodlust in human beings *usually* gets damped
down by competition from the other basic values for sex, money,
power-by-consent, etc., which in the long run, and in the context of modern
civilization, are ill-served by a taste for nonconsensual violence. The
strength of this
damping process for humans will depend on the degree of violence that is
routine in the society in which the human finds emself, but I suspect that
"superintelligence" (whatever that turns out to mean) will strengthen this
process of damping down value-projects that are too incompatible with too
many other competing value-projects.

If the superintelligence is ambitious enough and powerful enough to be a
true value pluralist, finding it boring and trivial to undertake any course
of action other than the simultaneous execution of many different
orthogonal projects in realtime (and I think this does describe the
behavior we observe in many of the brightest human beings so far), then I
don't think we'll have too much to fear from it. Perhaps I'm being an
ostrich about this, but I'd love to hear some solid counterarguments to the
position I'm taking.

Eric Watt Forste <>