Re: Singularity: Human AI to superhuman

Eliezer S. Yudkowsky (sentience@pobox.com)
Thu, 10 Sep 1998 20:01:29 -0500

Robin Hanson wrote:
>
> If most of AI progress comes from a half dozen big win insights, then you
> should be able to write them all down on one short page. Anyone who read and
> understood that page would be nearly as good an AI programmer as anyone else.
> This is very far from the case, suggesting the importance of lots of little
> insights which require years of reading and experience to accumulate.

I find it peculiarly coincidental that right after I published my statements on punctuated equilibrium, I found this paper on Eurekalert:

http://www.rochester.edu/pr/releases/bio/orr.htm

Orr challenges the theory that evolution consists of many tiny genetic mutations: "The distribution of mutations causing adaptation neatly fits an exponential curve: While few major mutations are needed, the number of more minor mutations rises exponentially with their genetic insignificance. Orr's theory is based on mathematical modeling and computer simulations, and assumes that a population is well-positioned to adapt to environmental pressures. He now plans to use a common laboratory technique called quantitative trait locus, or QTL, analysis -- capable of examining how species' genetic compositions differ -- to examine whether his theory holds up."
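
To make the shape of that claim concrete, here is a toy sketch of my own (not Orr's actual model, and the 2.0 cutoff is an arbitrary choice): if the fitness effects of adaptive mutations are drawn from an exponential distribution, a small minority of large-effect mutations carries a disproportionate share of the adaptation, while small-effect mutations vastly outnumber them.

    import random

    # Toy illustration (mine, not Orr's model): draw the fitness effects
    # of adaptive mutations from an exponential distribution and compare
    # "major" and "minor" mutations.  The 2.0 cutoff is arbitrary.
    random.seed(0)
    effects = [random.expovariate(1.0) for _ in range(10000)]

    major = [e for e in effects if e > 2.0]
    minor = [e for e in effects if e <= 2.0]

    print("major mutations:", len(major))
    print("minor mutations:", len(minor))
    print("share of adaptation carried by the major ones:",
          round(sum(major) / sum(effects), 2))

With these numbers roughly one mutation in seven counts as "major", yet those few carry around forty percent of the total adaptive change.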

You can write down the essential insight behind General Relativity in two sentences. Understanding it takes years. Ready? "Gravity isn't a force, it's a curvature of space-time: The 4-D trajectory of an object in free fall is a straight line." Now, you can say that's composed of a lot of little insights, but that's just not true. It's one very complex big insight, into a very complex structure called science. And that structure is made up of a lot of small insights, some medium insights, and a few large insights. The small insights are important, but so are the big insights. Genius counts for fully as much as experience, and gets more of the attention because it is rarer.
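
For the record, and only to pin down what "a straight line in 4-D" means, the standard textbook statement is that a freely falling object follows a geodesic of the space-time metric; in TeX notation,

    \frac{d^2 x^\mu}{d\tau^2}
        + \Gamma^\mu_{\alpha\beta} \frac{dx^\alpha}{d\tau} \frac{dx^\beta}{d\tau} = 0

where the Christoffel symbols Gamma carry the curvature; set them to zero and you recover an ordinary straight line.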

Oh, and the AIs?

HEARSAY II: "Don't use a hierarchy of procedures with local variables: Use lots of little procedures, invoked by global data and modifying global data."
Marr's 2.5D: "Vision proceeds by extracting low-level features from pixels, and high-level features from low-level features."
Neural nets: "Catch a pattern in a lot of small, trainable elements instead of programming it explicitly." (A toy sketch of this one follows the list.)
Copycat: "Analogies are composed of smaller features like low-level and high-level bonds and groups, with the overall coherence determined by bonds between high-level features like symbolic distinguishing descriptors."
EURISKO: "Heuristics which apply to any domain can apply to the domain of heuristics and improve themselves."
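
To make the neural-nets item concrete, here is a deliberately trivial sketch of my own (not code from any of the systems named): a single perceptron built from a few small trainable elements catches the OR pattern from examples instead of having the rule programmed in.

    # Toy sketch (mine): small trainable elements catch the OR pattern
    # from examples instead of being programmed with the rule explicitly.
    examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

    w = [0.0, 0.0]   # two weights
    b = 0.0          # one bias
    rate = 0.1

    for _ in range(20):                          # a few training passes
        for (x1, x2), target in examples:
            out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
            err = target - out
            w[0] += rate * err * x1              # nudge each small element
            w[1] += rate * err * x2
            b    += rate * err

    for (x1, x2), _ in examples:
        print((x1, x2), "->", 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0)

Nothing in the final weights looks like a statement of the OR rule; the pattern is caught, not written down.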

> You cite EURISKO as a big win, but ignore the big lesson its author, Lenat,
> drew from it: that we've found most of the big wins, and progress now depends
> on collecting lots and lots of small knowledge chunks. How can one read Lenat
> on CYC and get any other impression?

Lenat's wrong. You'll notice that Cyc has failed all its major goals and has not been making much progress. Lenat's "commonsense knowledge" may be important, but it obviously can't be implemented as a semantic net. Lenat produced a big win using a big insight, and then failed when trying to use a lot of smaller insights.

> >There aren't any self-enhancing intelligences in Nature, ... The [human]
> >structure is still a lot different. What you have is humans being
> >optimized by evolution. "A" being optimized by "B". This is a lot different
> >than a seed AI, which is "C" being optimized by "C". Even if humans take
> >control of genetics, "A" being optimized by "B" being optimized by "A" is
> >still vastly different from "C" being optimized by "C", in terms of trajectory.
>
> AIs *will* be subject to evolutionary selection, just as humans have been.
> They will have "genes," things that code for their structure, and some
> structures will reproduce more, inducing more similar genes in future AIs.
> AI evolution may be Lamarckian, vs. Darwinian evolution for DNA, but evolution
> there will be.

I disagree. To say that "Lamarckian" evolution applies to a single self-modifying entity is to stretch the term until it covers every form of improvement in existence. Lamarckian evolution does _not_ exhibit the properties of evolution as that technical term is understood in the field, and an AI which does not reproduce and does not suffer _random_ mutations is not undergoing evolution as we know it.

_Coding_ just had the section on "Pattern-catchers: Neural networks and evolutionary programming" added. Evolving processes would be used internally, either for intuitions, or as an architectural component of symbol abstraction. I did not propose using them to mutate the AI as a whole. I did say that evolving processes had to be small, to allow for a large breeding population. I do not think it plausible that we'll have the raw power to run a thousand evolving seed AIs before we have the raw power to build a self-enhancing one.
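
For concreteness, here is the kind of small internal evolving process I have in mind, as a generic evolutionary-programming sketch of my own (it is not code from _Coding_, and the toy fitness function and parameters are arbitrary): a large breeding population of tiny candidates, selection, and random mutation.

    import random

    # Generic evolutionary-programming sketch (mine, not from _Coding_):
    # a large breeding population of tiny candidates evolves toward an
    # arbitrary target value.
    random.seed(0)
    TARGET = 42.0

    def fitness(x):
        return -abs(x - TARGET)          # closer to the target is better

    population = [random.uniform(0, 100) for _ in range(1000)]

    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        survivors = population[:200]                   # keep the best fifth
        population = [x + random.gauss(0, 1.0)         # mutated copies
                      for x in survivors for _ in range(5)]

    print("best candidate:", round(max(population, key=fitness), 2))

The point is only the scale: each candidate is tiny, so a breeding population of a thousand is cheap inside a single AI, where a thousand whole seed AIs would not be.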

> >With respect to human genetic evolution, I agree fully, but only for the past
> >50,000 years.
>
> But there hasn't been much human genetic evolution over the last 50K years!
> There has been *cultural* evolution, but cultural evolution is Lamarckian.
> Cultures can choose to change what adults teach the young, and thereby change
> themselves. They are as "self-enhancing" as you like.

Exactly my point. Since evolution is punctuated, if you view it on a sufficiently small scale, you can conveniently miss the big wins. Thus there are no big evolutionary wins _since_ the rise of the Cro-Magnons, no big social changes _since_ the rise of agriculture and cities, no big informational wins _since_ the invention of the printing press, no big computer wins _since_ the invention of the transistor...

> >On any larger scale, punctuated equilibrium seems to be the
> >rule; slow stability for eons, then a sudden leap. The rise of the
> >Cro-Magnons was a very sharp event. A fundamental breakthrough leads to a
> >series of big wins, after _that_ it's slow optimization until the next big
> >win opens up a new vista. A series of breakthroughs and bottlenecks. ...
>
> There may have been breakthroughs within "short" times, though these times
> were only "short" when compared to a million years. But if so they seem to
> have been breakthroughs in the growth rates feasible, not in the absolute
> level. The total number of Cro-Magnons didn't increase by a factor of one
> hundred in a few years, for example.

I don't see what that has to do with anything. Evolution (E) caused a sharp rise in smartness (S). After that, there was no sharp increase in the growth rate of S. The growth rate of culture (C) or population (P), maybe, but not intelligence. E enhanced S, which enhanced C and P. S didn't enhance E and S didn't enhance S.

> >every technological aid to intelligence (such as writing or the printing press)
> >produces sharp decreases in time-scale - well, what godforsaken reason is
> >there to suppose that the trajectory will be slow and smooth? It goes against
> >everything I know about complex systems.
>
> It seems you now *are* accepting analogies to the history of human intelligence
> growth. And this history seems mainly to show only sharp transitions in growth
> rates, not in levels. See: http://hanson.berkeley.edu/longgrow.html History
> does suggest future increases in growth rates, but this is far from support for
> the sudden "AI gods in a few weeks" scenarios proposed.

Alas, the _Coding_ section about the difference between analogies and simulation is not yet available. But I am not proceeding by analogy; I am proceeding by simulating the behavior of a highly abstract system. The behavior of self-enhancing AI is not like the rise of human intelligence, or like the speed of human civilization given technological improvement, or like anything at all. The trajectory is different because the differential equation has a different form: the rate of progress is a function of the current level itself, y' = P(y), rather than of some external driver, y' = P(evolution) or y' = P(technology) or whatever.
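
To show the difference in form, and nothing more, here is a toy numerical sketch of my own using an arbitrary linear P, which the next paragraph disclaims as a real projection:

    # Toy sketch of the difference in form only (not a projection, and
    # the linear P is arbitrary): a rate that depends on the current
    # level runs away; a rate driven by a fixed external factor does not.
    dt = 0.01
    k = 1.0
    y_self, y_external = 1.0, 1.0

    for _ in range(int(10 / dt)):         # integrate from t = 0 to t = 10
        y_self     += dt * (k * y_self)   # y' = P(y): depends on the level
        y_external += dt * k              # y' = P(external): fixed driver

    print("self-enhancing   y(10) ~", round(y_self))
    print("external driver  y(10) ~", round(y_external))

The first trajectory is exponential and the second is linear: same k, same starting point, different form.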

I don't project the results from the equation; that's just an analogy showing the flaws of other analogies. In fact, I would bet all my worldly possessions that the AI would _not_ follow the above equation. I project the result from modeling the seed AI, and I project:

The major design gap in the AI is symbol formation. Given symbol formation, the major bottleneck will be architectural design. After breaking through, the AI will proceed to human intelligence. Human intelligence can be easily enhanced.

-- Begin guesses --

There is a sudden snap of optimization which ends at certain minimums of power in various cognitive abilities, all of which are very far above human. Given higher intelligence, fast infrastructures for computing power are possible.

At this point, everyone else's major projections use flawed assumptions, except of course for those which agree with my own. By the Principle of Mediocrity, my own assumptions are also flawed. So here I stop.

--
        sentience@pobox.com         Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/sing_analysis.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.