Singularity: Human AI to superhuman

Robin Hanson (hanson@econ.berkeley.edu)
Thu, 10 Sep 1998 10:19:36 -0700

Eliezer S. Yudkowsky writes:
>> >You can't draw conclusions from one system to the other. ...
>> But where *do* you draw your conclusions from, if not by analogy with
>> other intelligence growth processes? ...
>
>Basically, "I designed the thing and this is how I think it will work and this
>is why." ... this is not a time for analogic reasoning. ... A seed AI
>trajectory consists of a series of sharp snaps and bottlenecks. ... either the
>going is easy or the going is very hard ... this is the behavior exhibited by
>all current AIs. ...
>The history of AI seems to me to consist of a few big wins in a vast wasteland
>of useless failures. HEARSAY II, Marr's 2.5D vision, neural nets, Copycat,
>EURISKO. Sometimes you have a slow improvement in a particular field when the
>principles are right but there just isn't enough computing power - voice
>recognition, for example. Otherwise: Breakthroughs and bottlenecks.

Sounds like you *are* making analogies, but with your impression of AI progress.

If most AI progress comes from half a dozen big-win insights, then you should be able to write them all down on one short page. Anyone who read and understood that page would be nearly as good an AI programmer as anyone else. This is very far from the case, which suggests the importance of lots of little insights that take years of reading and experience to accumulate.

You cite EURISKO as a big win, but ignore the big lesson its author, Lenat, drew from it: that we've found most of the big wins, and progress now depends on collecting lots and lots of small knowledge chunks. How can one read Lenat on CYC and get any other impression?

>> We humans have been improving ourselves in a great many ways for a long time.
>
>There aren't any self-enhancing intelligences in Nature, ... The [human]
>structure is still a lot different. What you have is humans being
>optimized by evolution. "A" being optimized by "B". This is a lot different
>than a seed AI, which is "C" being optimized by "C". Even if humans take
>control of genetics, "A" being optimized by "B" being optimized by "A" is
>still vastly different from "C" being optimized by "C", in terms of trajectory.

AIs *will* be subject to evolutionary selection, just as humans have been. They will have "genes," things that code for their structure, and the structures that reproduce more will induce more of the corresponding genes in future AIs. AI evolution may be Lamarckian, vs. Darwinian evolution for DNA, but evolution there will be.

>> [human] experience with intelligence growth seems highly relevant to me.
>> ... growth is mainly due to the accumulation of many small improvements. ...
>
>With respect to human genetic evolution, I agree fully, but only for the past
>50,000 years.

But there hasn't been much human genetic evolution over the last 50K years! There has been *cultural* evolution, but cultural evolution is Lamarckian. Cultures can choose to change what adults teach the young, and thereby change themselves. They are as "self-enhancing" as you like.

>On any larger scale, punctuated equilibrium seems to be the
>rule; slow stability for eons, then a sudden leap. The rise of the
>Cro-Magnons was a very sharp event. A fundamental breakthrough leads to a
>series of big wins, after _that_ it's slow optimization until the next big
>win opens up a new vista. A series of breakthroughs and bottlenecks. ...

There may have been breakthroughs within "short" times, though those times were only "short" compared to a million years. But if so, they seem to have been breakthroughs in the feasible growth rates, not in absolute levels. The total number of Cro-Magnons didn't increase by a factor of one hundred in a few years, for example.

>every technological aid to intelligence (such as writing or the printing press)
>produces sharp decreases in time-scale - well, what godforsaken reason is
>there to suppose that the trajectory will be slow and smooth? It goes against
>everything I know about complex systems.

It seems you now *are* accepting analogies to the history of human intelligence growth. And this history seems mainly to show sharp transitions in growth rates, not in levels. See: http://hanson.berkeley.edu/longgrow.html . History does suggest future increases in growth rates, but that is far from supporting the sudden "AI gods in a few weeks" scenarios proposed.
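To make the rate-versus-level distinction concrete, here is a minimal sketch in Python (the doubling times are made-up placeholders, not the estimates from the page above) of how a sequence of growth-mode transitions can sharply raise growth rates while the level itself stays continuous:

# Minimal sketch: growth-mode transitions change the *rate* of growth,
# not the *level*.  Doubling times are hypothetical placeholders,
# not the estimates from the page cited above.
import math

modes = [                      # (years the mode lasts, doubling time in years)
    (1_000_000, 200_000),      # hypothetical "hunting" mode
    (9_000,     1_000),        # hypothetical "farming" mode
    (200,       50),           # hypothetical "industry" mode
]

level = 1.0                    # population or output, arbitrary units
for duration, doubling_time in modes:
    rate = math.log(2) / doubling_time          # continuous growth rate
    before = level
    level = before * math.exp(rate * duration)  # level at end of the mode
    print(f"doubling time {doubling_time:>7} yr: "
          f"level {before:10.3g} -> {level:10.3g}")

# Each transition raises the growth rate sharply, yet the level is
# continuous across the transition -- no hundredfold jump in a few years.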

Robin Hanson
hanson@econ.berkeley.edu     http://hanson.berkeley.edu/
RWJF Health Policy Scholar, Sch. of Public Health     510-643-1884
140 Warren Hall, UC Berkeley, CA 94720-7360     FAX: 510-643-8614