Re: Hyper-AI's vs Transhumans

Dan Clemmensen (
Sat, 09 May 1998 09:45:41 -0400

Anders Sandberg wrote:
[SNIPPED a cogent discussion from Anders, supporting his view that
the development of superintelligence will not be instantaneous.]

I have now stated the 1998 version of the "spike" (the "instantaneous"
scenario), and Anders has restated the "swell" (the
historically-rapid-but-not-instant scenario). Anders' preferred spot in
the scenario space is:
--transition initiation team size: a large subset of the research community,
or perhaps society as a whole.
--transition time: "historically rapid" (a decade?)
--post-transition SI population: large(?)

This is in contrast to the "spike":
--transition initiation team size: 1 human, or at most a few.
--transition time: days or weeks
--post-transition SI population: single (perhaps incorporating humans
and/or transhumans with apparent autonomy)

It may be useful to try to agree on the essence of the difference,
and perhaps on our points of agreement also. I think the essence
of the difference centers on the nature of intelligence
augmentation (IA).

The "spike" assumes that a human-computer collaboration can yield
an effective intelligence amplification without first solving
other hard problems in the many fields of knowledge related to
intelligence. The spike further assumes that the resulting augmented
intelligence can further improve itself by identifying and relieving
its constraints, perhaps by solving some of those hard problems.
In my paper on the subject, I focus on rapid augmentation of the
computer capacity available to the collaboration.

By contrast, the "swell" argues that the problems are in fact hard
and that many of them must be solved before IA can occur. Perhaps
solving a human-computer interface problem can yield a modest IA,
but this is not sufficient to rapidly solve the next problem: you'll
still need a lab full of human specialists for each problem. It will not
generally be possible for a group of (say) human interface specialists
to discover an IA technique based on a new human interface and then
apply the IA to discover another IA technique in another field (say
computer networking.) Instead, the human interface IA technique will
be disseminated, and a computer networking group will apply it.
Anders, please correct this if it's wrong.

Thus, IMO the central difference is that the "spike" assumes the
discovery of an IA technique that boosts intelligence enough so
that the resulting augmented intelligence can rapidly discover another
such technique. The "swell" assumes that no such event will occur, and that
each IA discovery is a non-singularity event: it is either incremental,
or it requires a large external resource input, or it is otherwise
self-limiting.

The "swell" hypothesis has history on its side: we have in fact
discovered IA techniques in the past (notably movable type, the
computer, and the GUI), and none of them resulted in a "spike". The
"spike" proponents believe that history cannot be a guide here:
the "spike" can only happen once.

Newcomers who are interested in this may wish to read Damien's
book "The Spike." You'll have to mail-order it from Australia,
but that's what the web is for, isn't it?

> Perhaps a better discussion would be how we *practically* immanentize
> the transcension? :-)

The best I can do is to try to keep up with new advances in relevant
technical fields. For me, the most relevant fields are data communications,
computer architecture, distributed computing, human/computer interfaces,
nanotechnology, information storage and retrieval, and software development
tools. I'm very aware that others of us use radically different lists,
but IMO that's good. You can be in my SI if I can be in yours :-)