Re: AI big wins

Robin Hanson
Fri, 25 Sep 1998 09:36:21 -0700

Eliezer S. Yudkowsky writes:
>> "I think a more realistic model is" is not a sufficiently detailed argument
>> for t' = a*e^(b*(t+c)). And even accepting this form, substantial growth
>> could still take centuries, depending on the values of a,b,c.
>> Your argument here seems awfully close to the claim that "the doubling time
>> of computer hardware efficiency is proportional to the computer operations
>> per second devoted to R&D in computer hardware, or within all of computer-aided
>> `humanity.'"
>If you mean the t' = e^t model, I agree absolutely that it's just as worthless
>as the Argument from Moore's Law; it's a simple analogy, having none of the
>reductionistic detail that's needed for a realistic discussion of AI
>trajectories. As you recall, my actual analysis clamped power at an arbitrary
>value and dealt strictly with optimization and intelligence. Obviously, I
>don't think that Moore's Law, or for that matter any simple equation for
>growth rates, is of real value.
>Can we return to the ABC/POI discussion now? Focus on the technical
>discussion, not the immoderate speculations <grin>. But seriously, what about
>the feedback into the system, which is my main argument?

Mere mention of the word "feedback" is not sufficient to argue for a sudden and sustained acceleration in growth rates, which is what you seem to claim.

I'm not saying your claim is false, just that you haven't presented a coherent *argument* for it. Lots of systems have feedback without having explosive growth. Granted, some abstract differential equation systems do have explosive growth, but such systems are very rare in practice, so your job is to present some argument why we should think our future growth is more like those systems than other systems.
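To make the distinction concrete, here is a toy comparison (my illustration, not part of the original exchange): dx/dt = x has positive feedback yet its solution x0*e^t stays finite at every time, while dx/dt = x^2 has only slightly stronger feedback and its solution x0/(1 - x0*t) hits a singularity at the finite time t = 1/x0. Both are "feedback systems"; only the second is explosive in the strict sense.

```python
import math

def exponential(t, x0=1.0):
    # dx/dt = x  ->  x(t) = x0 * e^t
    # Positive feedback, but the solution is finite for all finite t.
    return x0 * math.exp(t)

def hyperbolic(t, x0=1.0):
    # dx/dt = x^2  ->  x(t) = x0 / (1 - x0*t)
    # Slightly stronger feedback: the solution diverges as t -> 1/x0,
    # a finite-time singularity ("explosive growth").
    assert t < 1.0 / x0, "solution blows up at t = 1/x0"
    return x0 / (1.0 - x0 * t)

# Compare the two as t approaches the blow-up time t = 1 (for x0 = 1):
for t in (0.0, 0.5, 0.9, 0.99):
    print(t, exponential(t), hyperbolic(t))
```

The point of the sketch is that the mere presence of feedback does not pick out which of these two regimes a system is in; that takes a further argument about the form of the feedback.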

I searched for such an argument in your voluminous web pages, and finding the Moore's law argument, I tried to critique it. But you say you "renounced, repented, and abjured" that argument (though one can't tell from your web pages). Just yesterday you offered t' = e^t, apparently as an argument in support of "Why is a sharp jump upwards plausible at any given point?", but a few hours later you say it's just as worthless.

I am happy to consider simplified models of systems as a means to understanding. My complaint with your simple models isn't that they are too simple; it is that it is not clear why they are better models than equally simple models without explosive growth.

>> There really is a rich economic growth literature on when various equations
>> like this describe different growing systems, including intelligent systems.
>> Growth depends on many factors, and just because a previously fixed factor is
>> allowed to grow, that doesn't mean growth suddenly explodes.
>"Intelligence is not a factor, it is the equation itself." You've never
>responded to my basic assertion, which is that sufficient intelligence (which
>is probably achievable) suffices for nanotech; which in turn suffices to turn
>the planet into a computer; which in turn counts as "explosive growth" by my
>standards. It's difficult to see how the literature on the rise of
>agriculture relates...
>"Sufficient" = Wili Wachendon with a headband.
>"Achievable" = The end of my seed AI's trajectory, running on 10^13 ops.
>"Nanotech" = What Drexler said in _Engines of Creation_.

(Intelligence is an equation?)
The question is *how fast* a nanotech-enabled civilization would turn the planet into a computer. You have to make an argument about *rates* of change, not about eventual consequences.

Robin Hanson
RWJF Health Policy Scholar, Sch. of Public Health
140 Warren Hall, UC Berkeley, CA 94720-7360
510-643-1884   FAX: 510-643-8614