Major Technologies

Eliezer S. Yudkowsky (sentience@pobox.com)
Tue, 05 Jan 1999 17:49:00 -0600

Bryan Moss wrote in "Re: Paths to Uploading":

> Since we're talking about plausible future scenarios it might be fun, being
> in the midst of millennium fever, to come up with some. No dates or
> predictions, just how you think the next few major technologies will
> pan-out. How about it? (And fifty years from now, when we're all six
> centimetres tall and living in habitat domes on the moon, we can have a good
> laugh at them.)

Unless somebody can do a "quick kill" in AI, I expect us to head into a major storm within the next 15 years. Call it the Horizon Storm, or, if you're being pessimistic, the Horizon Wars.

The next Big Thing, on the order of the computer, will be collaborative filtering. This will spark the transition to the next Speed Phase, with a doubling time of about two to five years, where things will probably remain until Singularity or extinction.
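
(For concreteness, here is a minimal user-based collaborative-filtering sketch in Python. The data, names, and similarity measure are hypothetical and purely illustrative of the basic technique; they say nothing about the scale or design of the filtering systems I have in mind.)

    from math import sqrt

    # ratings[user][item] = score; a small, made-up sparse dataset.
    ratings = {
        "alice": {"article1": 5, "article2": 3, "article3": 4},
        "bob":   {"article1": 4, "article2": 2, "article4": 5},
        "carol": {"article2": 5, "article3": 2, "article4": 1},
    }

    def cosine_similarity(a, b):
        # Similarity over the items two users have both rated.
        common = set(a) & set(b)
        if not common:
            return 0.0
        dot = sum(a[i] * b[i] for i in common)
        norm_a = sqrt(sum(a[i] ** 2 for i in common))
        norm_b = sqrt(sum(b[i] ** 2 for i in common))
        return dot / (norm_a * norm_b)

    def recommend(user, ratings, top_n=3):
        # Score unseen items by similarity-weighted ratings from other users.
        seen = ratings[user]
        scores, weights = {}, {}
        for other, their_ratings in ratings.items():
            if other == user:
                continue
            sim = cosine_similarity(seen, their_ratings)
            if sim <= 0:
                continue
            for item, score in their_ratings.items():
                if item in seen:
                    continue
                scores[item] = scores.get(item, 0.0) + sim * score
                weights[item] = weights.get(item, 0.0) + sim
        ranked = sorted(scores, key=lambda i: scores[i] / weights[i], reverse=True)
        return ranked[:top_n]

    print(recommend("alice", ratings))  # -> ['article4']

The point is simply that "people who agree with you about what's good will point you at more good things"; everything interesting happens when this is done at the scale of a whole society rather than three users and four articles.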

Around the same time collaborative filtering starts to exert substantial stresses on the social fabric, the Age of the Neurohackers will make its debut. The first major application will be (what I've named) Sunlight: systems for taking off the evolved shackles on mental energy. There will be a major fuss about neurohacking, extending into religious riots. The majority of governments, including the U.S., will ban it. The neurohackers will resist, in the first major confrontation with the government, using collaborative filtering to rally the populace. I don't know how this will turn out, but there is a substantial chance that the technical community will secede from the United States, or that major scientific centers will be razed by mobs, or both. It might (or might NOT) be a good idea to provoke an artificial confrontation first, on better ground.

Nanotechnology is a wild card that could stay unplayed or enter the game at any time. Nanotech's first applications will be entirely destructive. The researchers at Zyvex or Foresight will naively release the information and someone will reduce the Earth to grey goo. I don't see a good way to bring in enough researchers to do anything interesting without one of them trying to take over the world. If nanotech has useful applications, it will be more powerful than anyone has predicted and in different ways, just like electricity and computers. The instant nanotech can create the computer hardware to run a seed AI, a Singularity must be attempted immediately, before anything can go wrong. However, the existing seed AIs will not have enough sophistication to Transcend, and the ones designed to run on unlimited power will need months of debugging and twiddling, during which someone will reduce the Earth to grey goo.

At present, I expect nanotechnology to be developed before genuine intelligence enhancement (IE) via neurohacking can overcome the infant-to-adult lead times needed to create true Specialists. Humanity's primary hope of survival lies in a quick kill via AI, and the best way I see to do that is an Open Source effort on the scale of Linux, which I intend to oversee at some point. Some IE via neurohacking may be developed fast enough to be decisive, and the existing Specialists (such as myself) may be sufficient.

Nanotechnological destruction can best be prevented by strong security precautions at Zyvex: researchers required to work in triples (pairs aren't good enough), everyone required to submit to the best truth tests available, and (unlikely) the actual nanotech equipment kept on the Moon next to a nuclear weapon.

Major technologies: Collaborative filtering; neurohacking; nanotech.
Our best hope: A seed AI.
Most probable kill: Grey goo; nuclear war provoked by a nanotech threat.
One way or the other, it should all be over by: 2025 (2005 min, 2040 max).

-- 
        sentience@pobox.com         Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/sing_analysis.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.