Re: Human AI to superhuman

Eliezer S. Yudkowsky (sentience@pobox.com)
Thu, 17 Sep 1998 17:32:25 -0500

Robin Hanson wrote:
>
> Eliezer S. Yudkowsky seems to be the only person here willing to defend

                                                   ^^^^
I object to the implication that I'm fighting a lone cause. The key word is _here_. There are many others, not surprisingly including AIers (like Moravec), nanotechnologists (like Drexler), and others who actually deal in the technology of transcendence, who are also Strong Singularitarians. In fact, I can't think offhand of a major SI technologist who has heard of the Strong Singularity but prefers the Soft.

> "explosive growth," by which I mean sudden very rapid world economic growth.

I don't know or care about "very rapid world economic growth". I think that specific, concrete things will happen, i.e., the creation of superintelligence operating at a billion times a human's raw power, because that kind of power is easily achievable with technologies (quantum computing, nanotech) that can be envisioned right now, and to which the only theoretical barrier is our currently inadequate intelligence to solve certain problems (theoretical physics, molecular engineering, protein folding). I think the trigger will be transhumans (of neurotech or silicon) who can either solve the problems or rapidly design the next generation.
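For concreteness, a back-of-the-envelope check on the "billion times" figure. The numbers are rough working assumptions - a Moravec-style ~10^14 ops/sec estimate for the brain, and an assumed 10^23 ops/sec for a desktop-scale nanocomputer - not measurements:

    # Rough arithmetic sketch; both figures are illustrative assumptions,
    # not established measurements.
    human_brain_ops = 1e14     # Moravec-style estimate of brain-equivalent ops/sec
    nanocomputer_ops = 1e23    # assumed ops/sec for a desktop-scale nanocomputer
    ratio = nanocomputer_ops / human_brain_ops
    print(f"speedup over one human brain: {ratio:.0e}")  # ~1e+09, a billion-fold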

You can call that "explosive growth", if you want. You can describe this however you like, using any analogies you like, and it won't make a difference whether the "analogies" are to the rise of Cro-Magnons or the introduction of the printing press. The actual event will not be driven by analogies. It will be driven by the concrete facts of cognitive science and fast-infrastructure technology.

> So I'd like to see what his best argument is for this.

Arguments AGAINST a Singularity generally use the following assumptions:

(The phrase "20th-century humans" is used when the same would not apply to "19th-century humans".)

Arguments FOR a Singularity usually require some, but not all, of the following assumptions:

(Many of these arguments are technical, rather than philosophical, which is as it should be.)

The following arguments AGAINST a Strong Singularity have no visible flaws, and I have no arguments against them.

CONCLUSION: Assuming that none of the above are true, I believe that a Strong Singularity is the most likely result.

> So first, please make clear which (if any) other intelligence growth
> processes you will accept as relevant analogies.  These include the evolution of
> the biosphere, recent world economic growth, human and animal learning, the
> adaptation of corporations to new environments, the growth of cities, the
> domestication of animals, scientific progress, AI research progress, advances
> in computer hardware, or the experience of specific computer learning programs.

None, of course. I wrote "Coding a Transhuman AI"; I don't need to reason from analogies. There aren't many positive statements I can make on that basis, but I can find definite flaws in other people's simulations. My theory has advanced enough to disprove, but not to prove - and personally, I find nothing wrong with that. A scientist doesn't have a Grand Ultimate Theory of Everything, but he can easily disprove the deranged babblings of crackpots. It is only the philosophers who attempt the proofs before establishing disproofs, and come to think of it, it's mainly philosophers who argue by analogy, starting with the arch-traitor Plato. Among the things that get disproved by obvious flaws are all of the analogies listed above.

My theory does not disprove Moravecian leakage, SI suicide, nuclear war, or nanowar (or the SI obliteration of humanity), and therefore I frame no hypothesis concerning these events. Years ago I had "proofs" that some such unpleasant things were impossible, and some of my best theories rose from the methods I used to tear these treasured proofs to shreds. There are many unpleasant things for which I have no disproof, but the wonderfully pleasant vision of a slow, humanly-understandable, material-omnipotence, chicken-in-every-pot UnSingular scenario is both impossible and unimaginative, and that is that.

When was there ever a point when the large-scale future could be predicted by analogy with the past? Could Cro-Magnons have been predicted by extrapolating from Neanderthals? (And would the Neanderthals have sagely pointed out that no matter how intelligent a superneanderthal is, there are still only so many bananas to pick?) I am not saying that the rise of superintelligence is analogous to the rise of Cro-Magnons; I am saying that reasoning by analogy is worthless - the analogies that argue for the Singularity no less than the analogies that argue against it. The argument from Moore's Law is specious, unless you're a researcher in computing techniques who can describe exactly how a given level is achievable, in which case Moore's Law is the default assumption for time frames.
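For what it's worth, the time-frame arithmetic under that default assumption is trivial. A minimal sketch, assuming the conventional 18-month doubling time and an arbitrary target factor of a billion (both assumptions for illustration, not predictions):

    import math

    # How long Moore's-Law doubling takes to span a given factor of raw
    # computing power.  Doubling time and target factor are conventional
    # assumptions, not predictions.
    doubling_time_years = 1.5
    target_factor = 1e9
    doublings = math.log2(target_factor)    # about 29.9 doublings
    years = doublings * doubling_time_years
    print(f"{doublings:.1f} doublings = about {years:.0f} years")  # roughly 45 years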

> If no analogies are relevant, and so this is all theory driven, can the theory
> be stated concisely?  If not, where did you get the theory you use?  Does
> anyone else use it, and what can be its empirical support, if analogies are
> irrelevant?

The _hypothesis_ can be stated concisely: Transhuman intelligence will move through a very fast trajectory from O(5X) human intelligence to nanotechnology and billions of times human intelligence. The valid reasons for believing this to be the most probable hypothesis are accessible only through technical knowledge, as is usually the case.
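To show what "a very fast trajectory" looks like numerically, here is a toy iteration. The starting point, the per-generation gain, and the assumption that design time shrinks in proportion to intelligence are all illustrative parameters of a sketch, not claims about the actual mechanism:

    # Toy model of a fast trajectory: each generation designs its successor,
    # and design time shrinks in proportion to current intelligence.
    # All parameters are illustrative assumptions.
    intelligence = 5.0    # multiples of human-equivalent intelligence
    design_time = 2.0     # years for the first redesign (assumed)
    elapsed = 0.0
    while intelligence < 1e9:
        elapsed += design_time
        intelligence *= 10    # assumed per-generation improvement
        design_time /= 10     # a faster mind finishes the next design sooner
    print(f"1e9x human reached after about {elapsed:.2f} years in this toy model")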

I can't visualize nanotechnology, but I take Drexler's word that it allows very high computing power, and I am given to understand that solving a set of research problems is sufficient to use a standard (current-tech) DNA-to-protein synthesizer or STM (there's one accessible from the Internet) to produce the basic replicator. I can visualize cognitive science and AI, and I say that human intelligence can be improved by adding neurons to search processes, and that the transcend point of an AI is architectural design, and both require Manhattan projects but are still within reach.

-- 
        sentience@pobox.com         Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/sing_analysis.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.