RE: Major Technologies

Billy Brown (bbrown@conemsco.com)
Fri, 8 Jan 1999 10:07:41 -0600

A survey of plausible scenarios for the next century:

Scenario 1 - AI Transcendence
Assumptions:
Sentient AI can be implemented by humans. Open-ended IE (intelligence enhancement) is possible for such an AI. Such an AI will be implemented before human IE becomes a major factor.
Outcome:
The AI becomes a Power, and decides the fate of humanity.

Scenario 2 - Nanotech Doomsday
Assumptions:
Automated engineering is much easier than nanotech, and will thus be implemented substantially sooner.

Computers fast enough to design advanced nanotech will exist years before the first general-purpose assembler is built.
Result:
The first nanotech immediately overturns our entire social/political system. Chaos and nano-warfare probably follow. There may or may not be survivors.

Scenario 3 - The Hard Takeoff to Singularity
Assumptions:
Automated engineering and nanotech are problems of similar difficulty, and will develop together.
Computers fast enough to design advanced nanotech will exist years before the first general-purpose assembler is built. Open-ended IE is possible if you have a whole society working on it.
Result:
The first nanotech pitches almost all forms of technology into a fast improvement cycle similar to the modern computer industry. The first Powers arise within a decade - possibly much sooner.

Scenario 4 - The Soft Takeoff to Singularity
Assumptions:
Either automated engineering is harder than nanotech, or nanotech appears before we have fast computers, or both.
Open-ended IE is possible if you have a whole society working on it.
Result:
The first nanotech is very specialized, and it matures very slowly (no faster than the computer industry). It takes decades to go from the first assembler to mature nanotech, and we get to play with lots of neat toys for a while before we figure out how to become SIs (superintelligences).

Some implausible (IMO) alternatives:
If open-ended IE isn't possible at all, we get scenario 4 with no singularity, and advanced nanotech takes centuries or millennia to arrive.

If gray goo is really easy to make, scenario 1 is the only way to avoid getting eaten.

If sentient AI is really, really easy, we might get a variation on scenario 1 where a whole population of AIs grows into SIs as computer speeds improve.

For the record, my money goes on scenario 3, with a side bet on #1.

Does anyone want to add another option to the list?

Billy Brown, MCSE+I
bbrown@conemsco.com