Eliezer S. Yudkowsky wrote:
> Billy Brown wrote:
> > Scenario 2 - Nanotech Doomsday
> > Assumptions:
> > Automated engineering is much easier than nanotech, and will thus be
> > implemented substantially sooner.
> > Scenario 3 - The Hard Takeoff to Singularity
> > Assumptions:
> > Automated engineering and nanotech are problems of similar
> difficulty, and
> > will develop together.
> I'd reverse the outcomes. Primitive nanotech results in destabilizing
> competition and wars fought with half-baked weapons (sc. 3). Instant
> omnipotent drextech lets the winner take over the world without much
> fuss and even evacuate the planet in case of emergency. Problem is, I
> think nanotech will start primitive.
Hmm. That depends on what you mean by 'primitive' and 'advanced'. I don't think instant mature nanotech is probable - the computers you need to design it can't be built without primitive nanotech or many decades of top-down evolution.
In general, however, I think you are correct. If we consider a range of possible innovation speeds, there is a distinct danger zone in which technology advances faster than our institutions can adapt, but not fast enough to allow the inventor to solve all problems himself. This environment is likely to panic governments and other powerful organizations, and has a high probability of leading to irrational, cataclysmic abuse of nanotech.
On the high side of the danger zone, the leading power advances so quickly that nothing anyone else does can threaten it. On the low side the rate of change is slow enough that the social order can adapt, or at least avoid being suicidally stupid.
The scenarios I listed would fall out like this:
   slow advance              danger zone              fast advance
<--------------------------|---------------------|-------------------->
      <----4---->              <----3---->         <--------2-------->
I suppose we could add a '1.5' for extremely fast advance, but it seems very unlikely - if automated engineering is that easy, a seed AI should also be feasible, and we're back to scenario 1.
Billy Brown, MCSE+I