Re: AI big wins

Robin Hanson (hanson@econ.berkeley.edu)
Thu, 24 Sep 1998 14:49:10 -0700

Eliezer S. Yudkowsky writes:
>>>Well, my other reason for expecting a breakthrough/bottleneck architecture,
>>>even if there are no big wins, is that there's positive feedback involved,
>>...
>>Let me repeat my call for you to clarify what appears to be a muddled argument.
>...
>Sigh. Okay, one more time: The total trajectory is determined by the
>relation between power (raw teraflops), optimization (the speed and size of
>code) and intelligence (the ability to do interesting things with code or
>invent fast-infrastructure technologies).
>Given constant power, the trajectory at time T is determined by whether the AI
>can optimize itself enough to get an intelligence boost which further
>increases the ability at optimization enough for another intelligence boost.

Can I translate you so far as follows?
Let P = power, O = optimization, I = intelligence. For any X, let X' = the time derivative of X. The AI can work on improving itself, with its success given by functions A, B, C.

If the AI devoted itself to improving P, it would get P' = A(P,O,I), O'=I'=0.
If the AI devoted itself to improving O, it would get O' = B(P,O,I), P'=I'=0.
If the AI devoted itself to improving I, it would get I' = C(P,O,I), P'=O'=0.
(If it devotes fractions a,b,c of its time to improving P,O,I, it presumably gets P' = a*A, O' = b*B, I' = c*C.)
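Under this reading, whether the trajectory "bottlenecks" or "breaks through" depends on the shapes of A, B, C. A minimal numerical sketch, holding P constant as in the quoted scenario: the linear forms and the gain/decay constants below are purely illustrative assumptions (nothing in the exchange specifies them), but they show how the same feedback structure yields either outcome.

```python
def simulate(gain_oi, gain_io, decay=1.0, steps=20000, dt=0.001):
    """Euler-integrate a toy version of the O/I feedback loop:

        O' = gain_oi * I - decay * O    (intelligence boosts optimization)
        I' = gain_io * O - decay * I    (optimization boosts intelligence)

    P is held fixed. The linear forms are an assumption for illustration.
    With them, the loop 'breaks through' (grows without bound) when
    gain_oi * gain_io > decay**2, and 'bottlenecks' (decays to a small
    limit) otherwise."""
    O, I = 1.0, 1.0
    for _ in range(steps):
        dO = gain_oi * I - decay * O
        dI = gain_io * O - decay * I
        O += dO * dt
        I += dI * dt
    return O, I

# Weak feedback: the series of self-improvements peters out (bottleneck).
O_lo, I_lo = simulate(0.5, 0.5)

# Strong feedback: each boost funds a larger boost (breakthrough).
O_hi, I_hi = simulate(2.0, 2.0)
```

The point of the sketch is only that "positive feedback" by itself does not settle the question: the same equations converge or diverge depending on whether the product of the cross-gains exceeds the decay terms, which is what the convergence question below is probing.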

>Presumably the sum of this series converges to a finite amount.

Sum? Of what over what? Do you mean that, for the AI's choice of a, b, c, P, O, and I converge to some limit as time goes to infinity?

>If the amount
>is small, we say the trajectory bottlenecks; if the amount is large, we say a
>breakthrough has occurred. The key question is whether the intelligence
>reached is able to build fast-infrastructure nanotechnology and the like, or
>of exhibiting unambiguously better-than-human abilities in all domains.

I thought you had an argument for why "breakthrough" is plausible, rather than just listing it as one of many logical possibilities.

>... At this point, the key question for me is "How much of _Coding a
>Transhuman AI_ did you actually read?"

All of it.

Robin Hanson
hanson@econ.berkeley.edu     http://hanson.berkeley.edu/
RWJF Health Policy Scholar, Sch. of Public Health   510-643-1884
140 Warren Hall, UC Berkeley, CA 94720-7360         FAX: 510-643-8614