Singularity: Vinge responds

Robin Hanson (hanson@econ.berkeley.edu)
Wed, 16 Sep 1998 09:38:11 -0700

Vinge asks me to forward the following to the list:


Notions of great change raise the vision of all sorts of things that might be called a singularity. In the past, discussions of what I've written have more often than not spread out to quite different things.

Early on in this discussion I got my point distilled down to:

1. The creation of superhuman intelligence appears to be a plausible
   eventuality if our technical progress proceeds for another few years.
2. The existence of superhuman intelligence would yield forms of progress
   that are qualitatively less understandable than advances of the past.

Given that, however, the form of the post-human environment is not at all specified or restricted! I like speculation about it, and I like to speculate about it (usually after acknowledging that I have no business doing so :-). The speculation often leads to conflicting scenarios; some I regard as more likely than others. But as long as they arise from the original point, I feel they are relevant.

For planning purposes, my vision of the high-level taxonomy is:

o (the null): The singularity doesn't happen (or is not recognizable).
o We have a hard takeoff.
o We have a soft takeoff.

There is a large variety of mechanisms for each of these. (Some, such as bio-tech advances, might be only indirectly connected with Moore's Law.)

Thus, I don't consider that I have written off Nick's upload scenario. (Actually, Robin may have a better handle on what I've said on this than I do, so maybe I have words to eat.) Uploading has a special virtue in that it sidesteps most people's impossibility arguments. Of course, in its most conservative form it gives only weak superhumanity (as the clock rate is increased for the uploads). If that were all we had, then the participants would not become much greater within themselves. Even the consequences of immortality would be limited to the compass of the original blueprint. But to the outside world, a form of superhumanity would exist.

In my 1993 essay, I cited several mechanisms:

o AI [PS: high possibility of a hard takeoff]
o IA [PS: I agree this might be slow in developing. Once it happened, it
  might be explosive. (Also, it can sneak up on us, out of research that
  might not seem relevant to the public.)]
o Growing out of the Internet
o Biological

Since then:

o The evolutionary path of fine-grained distributed systems has impressed
  me a lot, and I see some very interesting singularity scenarios arising
  from it.
o Greg Stock (and ~Marvin) have made the Metaman scenario much more
  plausible to me: probably a very "gentle" takeoff. (At AAAI-82 one of
  the people in the audience said he figured this version had happened
  centuries ago.)

Discussion of other scenarios is of interest, too.

If I were betting (really foolish now :-), as of Tue Sep 15 10:58:51 PDT 1998 I would rate likelihoods (from most to least probable):

  1. very hard takeoff with fine-grained distribution;
  2. no Singularity because we never figure out how to get beyond software engineering and we fail to manage complexity;
  3. IA (tied in likelihood with:)
  4. Metaman
  5. ...

PS: I like Doug Bailey's postscript:
>[Note: My apologies for the less-than-stellar organization but I wrote this in
>one pass. If I had waited and posted it after I had time to optimize it, it
>would have never seen the light of day.]

I find myself near-frozen by the compulsion to optimize :-)

Robin Hanson
hanson@econ.berkeley.edu              http://hanson.berkeley.edu/
RWJF Health Policy Scholar, Sch. of Public Health     510-643-1884
140 Warren Hall, UC Berkeley, CA 94720-7360      FAX: 510-643-8614