Singularity: The Moore's Law Argument

Robin Hanson (hanson@econ.berkeley.edu)
Fri, 18 Sep 1998 11:04:44 -0700

The closest I've seen to a coherent argument for explosive economic growth is offered by Eliezer Yudkowsky, in http://www.tezcat.com/~eliezer/singularity.html

>Every couple of years, computer performance doubles. ... That is the
>proven rate of improvement as overseen by constant, unenhanced minds,
>progress according to mortals. Right now the amount of computing power
>on the planet is ... operations per second times the number of humans.
>The amount of artificial computing power is so small as to be
>irrelevant, ... At the old rate of progress, computers reach
>human-equivalence levels ... at around 2035. Once we have human-equivalent
>computers, the amount of computing power on the planet is equal to the
>number of humans plus the number of computers. The amount of
>intelligence available takes a huge jump. Ten years later, humans
>become a vanishing quantity in the equation. ... That is actually a
>very pessimistic projection. Computer speeds don't double due to some
>inexorable physical law, but because researchers and technicians find
>ways to make them faster. If some of the scientists and technicians
>are computers - well, a group of human-equivalent computers spends 2
>years to double computer speeds. Then they spend another 2 subjective
>years, or 1 year in human terms, to double it again. ... six months,
>to double it again. Six months later, the computing power goes to
>infinity. ... This function is known mathematically as a singularity.
>... a fairly pessimistic projection, ... because it assumes that only
>speed is enhanced. What if the quality of thought was enhanced? ...
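The arithmetic behind the quoted scenario is a geometric series: if the first doubling takes 2 subjective years and each later doubling takes half as long (2 years, 1 year, 6 months, ...), the total elapsed time converges to a finite date even as speed grows without bound. A minimal sketch, with purely illustrative numbers:

```python
# Sketch of the accelerating-doubling scenario quoted above: each
# generation of human-equivalent computers doubles hardware speed,
# and the wall-clock time per doubling halves each generation.
# The cumulative time converges to a finite limit -- the
# "singularity" date -- even as speed diverges.

def time_to_singularity(first_doubling_years=2.0, n_doublings=50):
    """Total elapsed time after n doublings, when each doubling
    takes half as long as the previous one."""
    elapsed, step = 0.0, first_doubling_years
    for _ in range(n_doublings):
        elapsed += step
        step /= 2.0
    return elapsed

# Partial sums of 2 + 1 + 0.5 + ... approach 2 / (1 - 1/2) = 4 years.
print(time_to_singularity())
```

The series sums to 4 years: under these assumptions the model "goes to infinity" at a definite finite date, which is what makes it a mathematical singularity.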

Eliezer's model seems to be that the rate at which computer hardware efficiency doubles is proportional to the computer operations per second devoted to R&D in computer hardware, or within all of computer-aided "humanity." (I'm not clear which Eliezer intends.)

The "all of humanity" model has the problem that computer hardware wasn't improving at all for a long time while humanity increased greatly in size. And while humanity has roughly tripled since 1930, computer hardware doubling times have not shortened in proportion to this increase. It is also not clear why animal and other biological computation is excluded from this model.

The "within computer hardware R&D" model has the problem that over the last half century doubling times have not shortened in proportion to the growth in the number of people doing computer hardware R&D. (Anyone have figures on this?) Furthermore, is it plausible that the near-zero rates of hardware improvement of ~1700 could have been much improved by putting lots of people into computer R&D then? And if we doubled the number of people doing computer R&D next year, I don't think we'd expect a halving of the hardware doubling time.
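To make the model under criticism concrete: one way to read "doubling rate proportional to R&D effort" is doubling_time = k / effort. The constant k and the effort units below are made-up assumptions for illustration; the point is just the prediction the model commits to, namely that doubling the R&D workforce halves the hardware doubling time:

```python
# Hypothetical rendering of the model under discussion: hardware
# doubling time is inversely proportional to R&D effort. The
# constant k is arbitrary, chosen so that effort 1.0 yields the
# familiar ~2-year doubling time; all numbers are made up.

def doubling_time_years(rd_effort, k=2.0):
    """Doubling time = k / rd_effort (effort in arbitrary units)."""
    return k / rd_effort

print(doubling_time_years(1.0))  # 2.0 -- baseline effort
print(doubling_time_years(2.0))  # 1.0 -- double the effort, half the time
```

It is exactly this halving-on-doubled-effort prediction that the historical record of fairly steady doubling times seems to contradict.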

It is not at all clear what Eliezer thinks "quality" improvements scale as, so there isn't much on this issue to discuss.

Robin Hanson
hanson@econ.berkeley.edu  http://hanson.berkeley.edu/
RWJF Health Policy Scholar, Sch. of Public Health  510-643-1884
140 Warren Hall, UC Berkeley, CA 94720-7360  FAX: 510-643-8614