> I have now stated the 1998 version of the "spike" (the
> "instantaneous" scenario) and Anders has restated the "swell"
> (historically-rapid-but-not-instant scenario). Anders' preferred
> spot in the scenario space is:
> --transition initiation team size: a large subset of the research community,
> or perhaps society as a whole.
> --transition time: "historically rapid" (a decade?)
> --post-transition SI population: large(?)
>
> This is in contrast to the "spike":
> --transition initiation team size: 1 human, or at most a few.
> --transition time: days or weeks
> --post-transition SI population: single (perhaps incorporating humans
> and/or transhumans with apparent autonomy)
A good summary. I indeed believe in a large SI population,
although I hold open the possibility of borganisms and functional
soups.
> It may be useful to try to agree on the essence of the difference,
> and perhaps on our points of agreement also. I think the essence
> of the difference centers on the nature of intelligence
> augmentation (IA).
>
> The "spike" assumes that a human-computer collaboration can yield
> an effective intelligence amplification without first solving
> other hard problems in the many fields of knowledge related to
> intelligence. The spike further assumes that the resulting augmented
> intelligence can then improve itself by identifying and relieving
> its constraints, perhaps by solving some of those hard problems.
> In my paper on the subject, I focus on rapid augmentation of the
> computer capacity available to the collaboration.
>
> By contrast, the "swell" argues that the problems are in fact hard
> and that many of them must be solved before IA can occur. Perhaps
> solving a human-computer interface problem can yield a modest IA,
> but this is not sufficient to rapidly solve the next problem: you'll
> still need a lab full of human specialists for each problem. It will not
> generally be possible for a group of (say) human interface specialists
> to discover an IA technique based on a new human interface and then
> apply the IA to discover another IA technique in another field (say
> computer networking). Instead, the human interface IA technique will
> be disseminated, and a computer networking group will apply it.
> Anders, please correct this if it's wrong.
Sounds about right again.
In some sense the spike scenario assumes that IA can improve general
intelligence and that the augmented mind can fairly quickly gain the
skills needed to apply it to new areas. The swell scenario assumes
that IA methods mainly amplify specific intelligence or skills rather
than general intelligence (even if a given method is applicable to
most areas).
> Thus, IMO the central difference is that the "spike" assumes the
> discovery of an IA technique that boosts intelligence enough so
> that the resulting augmented intelligence can rapidly discover another
> such technique. The "swell" assumes that no such event will occur, and
> each IA discovery is a non-singularity event: it is either incremental,
> or it requires a large external resource input, or it is otherwise
> rate-constrained.
Exactly. There can of course be cascades of discoveries, but in the
end they run into constraints. After all, how easy is it to learn a
whole new field (like nanotech) in order to get around a hardware
problem? It might be quicker to involve a team of already educated
nanotechnologists and give them access to IA (which hopefully is easy
to use; one problem might be that IA becomes so personalized over time
that one cannot simply hand one's interface to another person, who
instead needs to "grow into" it over time).
> The "swell" hypothesis has history on its side: we have in fact
> discovered IA techniques in the past (notably movable type, the
> computer, and the GUI), and none of them resulted in a "spike". The
> "spike" proponents believe that history cannot be a guide here:
> the "spike" can only happen once.
Yes. The spike is supported by the idea that IA techniques of the
capacity we now assume have not occurred in the past.
> > Perhaps a better discussion would be how we *practically* immanentize
> > the transcension? :-)
>
> The best I can do is to try to keep up with new advances in relevant
> technical fields. For me, the most relevant fields are data communications,
> computer architecture, distributed computing, human/computer interfaces,
> nanotechnology, information storage and retrieval, and software development
> tools. I'm very aware that others of us use radically different lists,
> but IMO that's good. You can be in my SI if I can be in yours :-)
It's a deal! :-)
My list includes yours, with cognitive psychology, neuroscience and
neural networks as additions. We all have our biases...
-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y