Vernor Vinge writes:
>Notions of great change raise the vision of all sorts of things that
>might be called a singularity. In the past, discussions of what I've
>written about have more often than not spread out to things quite
>Early on in this discussion I got my point distilled down to:
>1. The creation of superhuman intelligence appears to be a plausible
> eventuality if our technical progress proceeds for another few
>2. The existence of superhuman intelligence would yield forms of
> progress that are qualitatively less understandable than advances
> of the past.
>Given that, however, the form of the post-human environment is
>not at all specified or restricted! I like speculation about it,
>and I like to speculate about it (usually after acknowledging
>that I shouldn't have any business doing so :-). The speculation
>often leads to conflicting scenarios; some I regard as more
>likely than others. But if they arise from the original point,
>I feel they are relevant. ...
O.K. Uncle. It seems I was mistaken in my attempt to create a focused discussion of the singularity by concentrating on Vinge's concept and analysis. I incorrectly assumed that Vinge had in mind a concept and analysis of the singularity specific enough to hold discussants' attention. In fact, by "singularity" Vinge seems to mean simply that big changes will come when we get superhumans. And while Vinge has dramatic opinions about how soon this will happen and how fast those changes will come afterward, these opinions are not part of his concept of "singularity", and he is not willing to elaborate on or defend them.
This seems analogous to Eric Drexler, who has written extensively on nanotech and has privately expressed dramatic opinions about how soon nanotech will come and how fast change will then be, but who has not, to my knowledge, publicly defended these opinions.
email@example.com  http://hanson.berkeley.edu/
RWJF Health Policy Scholar, Sch. of Public Health  510-643-1884
140 Warren Hall, UC Berkeley, CA 94720-7360  FAX: 510-643-8614