Re: AI big wins (was: Punctuated Equilibrium Theory)

Eliezer S. Yudkowsky (sentience@pobox.com)
Thu, 24 Sep 1998 14:17:31 -0500

Robin Hanson wrote:
>
> Eliezer S. Yudkowsky writes:
> >Well, my other reason for expecting a breakthrough/bottleneck architecture,
> >even if there are no big wins, is that there's positive feedback involved,
> >which generally turns even a smooth curve steep/flat. And I think my
> >expectation about a sharp jump upwards after architectural ability is
> >independent of whether my particular designs actually get there or not. In
> >common-sense terms, the positive feedback arrives after the AI has the ability
> >humans use to design programs.
>
> Let me repeat my call for you to clarify what appears to be a muddled argument.
> We've had "positive feedback", in the usual sense of the term, for a long time.
> We've also been able to modify and design AI architectures for a long time.
> Neither of these considerations obviously suggests a break with history.

Sigh. Okay, one more time: The total trajectory is determined by the relation among power (raw teraflops), optimization (the speed and size of code), and intelligence (the ability to do interesting things with code or to invent fast-infrastructure technologies).

Is intelligence well-defined? Of course not, because it consists of symbolic architecture and goals and causality and memories and analogies and similarities and concepts and simulations and twenty different modules of domain-specific intuitions and God knows what else. The best we can do is focus down on code-writing intelligence, which suggests specific metrics such as particular programming tasks, or fast-infrastructure intelligence, which focuses on domains such as molecular engineering and protein folding.

Is power well-defined? Pretty much. A teraflops from a thousand parallel gigaflops processors isn't the same as a linear teraflops, but all this means is that the type of power shades over into optimization.

Is optimization well-defined? Yes. If intelligence A can come up with the same solution in the same time using half the processing power or RAM of intelligence B, intelligence A is more optimized. If intelligence A writes code with similar speedups, intelligence A has better optimizing ability.
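
To make the comparison concrete, here's a toy sketch in Python (the resource figures and the single-number score are made up purely for illustration; this isn't a real benchmark, just the shape of the definition):

    # Same solution, same wall-clock time: "optimization" is just how little
    # processing power and memory each intelligence burned getting there.
    def optimization_score(flops_used, ram_used):
        # Higher score = same result from fewer resources.
        return 1.0 / (flops_used * ram_used)

    a = optimization_score(flops_used=0.5e12, ram_used=4e9)  # intelligence A
    b = optimization_score(flops_used=1.0e12, ram_used=4e9)  # intelligence B
    print(a > b)  # True: A did the same job on half the processing power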

Given constant power, the trajectory at time T is determined by whether the AI can optimize itself enough to get an intelligence boost which further increases its optimizing ability enough for another intelligence boost. Presumably the sum of this series converges to a finite amount. If the amount is small, we say the trajectory bottlenecks; if the amount is large, we say a breakthrough has occurred. The key question is whether the intelligence reached is able to build fast-infrastructure nanotechnology and the like, or to exhibit unambiguously better-than-human abilities in all domains.
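
If it helps, here's the same argument as a toy simulation (every number below is an assumption pulled out of thin air, just to show the shape of the series, not a claim about any actual architecture):

    # Each cycle, the current boost raises intelligence, and the quality of
    # that boost determines how big the next one is.  If successive boosts
    # shrink (ratio < 1), the series converges and the trajectory bottlenecks;
    # if they hold up or compound, it looks like a breakthrough.
    def trajectory(initial_boost, return_ratio, cycles=50):
        intelligence, boost = 1.0, initial_boost
        for _ in range(cycles):
            intelligence += boost
            boost *= return_ratio  # how much this boost improves the next one
        return intelligence

    print(trajectory(0.10, 0.5))  # shrinking returns: levels off near 1.2 (bottleneck)
    print(trajectory(0.10, 1.1))  # compounding returns: climbs past 100 (breakthrough)

Whether the real ratio of successive boosts sits above or below one, once the AI has the ability humans use to design programs, is exactly the question at issue.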

Now that I've basically repeated everything I said earlier, will you please ask specific questions about a specific term? I'm starting to feel like I can't win, or that your requests for clarification are only conversational tactics...

> >My understanding of the AI Stereotype is that the youngster only has a single
> >great paradigm, and is loath to abandon it. I've got whole toolboxes full ...
>
> I think you're mistaken - lots of those cocky youngsters have full toolboxes.
> ("Yup, mosta gunslingers get kilt before winter - but they mosta got only one
> gun, and looky how many guns I got!")

Heh. I must be moving in the wrong circles, because the "cocky youngsters" I encounter generally send me email filled with silly, overcomplex, badly-defined theories based on a single unworkable "insight" which they praise to the high heavens. While I plead guilty to the charges of not defining every damn term I use in an email discussion, I think that the original literature - my Web page - is fairly good about doing so. I don't gush about how wonderful the paradigms are, although I've been known to warn about the horrible consequences of ignoring them. And above all else, I don't attribute Moral Significance, the universal bane of youngsters. I may do so when discussing the ultimate point of building an AI, and in the very special case of an AI's ultimate goals, but I don't do so when I'm talking about the workhorse principles.

At this point, the key question for me is "How much of _Coding a Transhuman AI_ did you actually read?" You really can't judge my tendency to oversimplify from a 5K post about a 200K page. And can you tell me where I can find the writings of these eager youngsters to whom you keep referring?

-- 
        sentience@pobox.com         Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/sing_analysis.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.