[Fwd: AI big wins]

Eliezer S. Yudkowsky (sentience@pobox.com)
Thu, 01 Oct 1998 20:10:15 -0500

Sigh. Another Hanson-only, apparently.

-- 
        sentience@pobox.com         Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/sing_analysis.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.

---------- Forwarded message ----------
Date: Thu, 24 Sep 1998 17:45:02 -0500
From: "Eliezer S. Yudkowsky" <sentience@pobox.com>
To: Robin Hanson <hanson@econ.Berkeley.EDU>
Subject: Re: AI big wins

Robin Hanson wrote:

>
> Eliezer S. Yudkowsky writes:
> >>>Well, my other reason for expecting a breakthrough/bottleneck architecture,
> >>>even if there are no big wins, is that there's positive feedback involved,
> >>...
> >>Let me repeat my call for you to clarify what appears to be a muddled argument.
> >...
> >Sigh. Okay, one more time: The total trajectory is determined by the
> >relation between power (raw teraflops), optimization (the speed and size of
> >code) and intelligence (the ability to do interesting things with code or
> >invent fast-infrastructure technologies).
> >Given constant power, the trajectory at time T is determined by whether the AI
> >can optimize itself enough to get an intelligence boost which further
> >increases the ability at optimization enough for another intelligence boost.
>
> Can I translate you so far as follows?
> Let P = power, O = optimization, I = intelligence.
> For any X, let X' = time derivative of X.
> The AI can work on improving itself, its success given by functions A,B,C.
> If the AI devoted itself to improving P, it would get P' = A(P,O,I), O'=I'=0.
> If the AI devoted itself to improving O, it would get O' = B(P,O,I), P'=I'=0.
> If the AI devoted itself to improving I, it would get I' = C(P,O,I), P'=O'=0.
> (If it devotes fractions a,b,c of its time to improving P,O,I, it presumably
> gets P' = a*A, O' = b*B, I' = c*C.)

Exactly. When P', O', and I' are all zero - when the AI can't redesign its
chips, optimize its existing abilities, or design new ones - the trajectory
bottlenecks.

I do have one minor dispute with your terminology, however: I think you
should substitute A' for A. With AI intelligence, the current capabilities
measure absolute limits, not how much "improvement" one can wreak. An AI of
low intelligence might be able to design very slow and stupid chips - so A is
nonzero - but be unable to improve on human levels. In other words, the
elegance of the equation is marred by the artificial initial values. Maybe
your terminology is better, since you can just set A to zero until the AI is
a competent (or transhuman) researcher in chip technologies, however long
that takes... Anyway, I'll continue to use your terminology.
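
To pin the terminology down, here is a minimal sketch of your P/O/I model.
The functional forms of A, B, and C are placeholders I made up purely for
illustration - only the update structure P' = a*A, O' = b*B, I' = c*C comes
from your equations:

# Sketch of the P/O/I trajectory model. The specific functions A, B, C
# below are invented placeholders; only the update structure comes from
# the formalization under discussion.

def A(P, O, I):
    # Chip-research rate: assumed zero until the AI is a competent
    # researcher (placeholder threshold on I), per my point above.
    return 0.0 if I < 2.0 else 0.1 * I

def B(P, O, I):
    # Optimization rate, with diminishing returns as the code gets tighter.
    return 0.05 * I / O

def C(P, O, I):
    # Intelligence-boost rate, scaled by raw power and code speed.
    return 0.01 * P * O / I

def step(P, O, I, a, b, c, dt=1.0):
    # Fractions a, b, c of the AI's effort go to improving P, O, I.
    return (P + a * A(P, O, I) * dt,
            O + b * B(P, O, I) * dt,
            I + c * C(P, O, I) * dt)

P, O, I = 1.0, 1.0, 1.0   # arbitrary starting units
for _ in range(100):
    P, O, I = step(P, O, I, a=0.0, b=0.5, c=0.5)

Setting a = 0 reproduces my usual assumption of constant P below.
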
> >Presumably the sum of this series converges to a finite amount.
>
> Sum? Of what over what? Do you mean that for the AIs choice of a,b,c, that
> P,O, and I converge to some limit as time goes to infinity?

Actually, I was speaking of O and I with constant P, since I think P' is
going to be zero - or the default human speed of doubling every eighteen
months - until the AI winds up on the post-Singularity side of the
trajectory; I don't think an AI that early on the trajectory can handle the
chip research. Given sufficient I, I think P goes to infinity, or as close to
it as makes no difference - a.k.a. Singularity.

But for constant P and constant O, as a simplification, I define the sum as
follows: given I'_1 = C(P, O, I), I'_2 = C(P, O, I + I'_1), and
I'_3 = C(P, O, I + I'_1 + I'_2), the total improvement in intelligence is the
sum of the series I'_1 + I'_2 + I'_3 + ... (Since the AI operates in discrete
steps, I say sum rather than integral.)

Let us assume the basic strategy of increasing I until a bottleneck occurs,
then increasing O until a bottleneck occurs, and so on in alternation. If
both reach bottlenecks simultaneously, and the AI is too dumb to have a
nonzero A, then a triple bottleneck has occurred. As previously stated, I
think nonzero A lies on the other side of a Singularity, so I usually assume
zero A and constant P. (Is this optimism or pessimism?)
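
To make the convergence question concrete, here is a toy version of that sum.
Both forms of C are invented; the point is only that the same loop either
peters out (bottleneck) or keeps compounding (breakthrough), depending
entirely on C:

def total_I_gain(C, P, O, I0, steps=1000):
    # Sum the discrete series I'_1 + I'_2 + ..., each increment
    # evaluated at the intelligence level reached so far.
    I, total = I0, 0.0
    for _ in range(steps):
        dI = C(P, O, I)
        if dI < 1e-9:   # increments have petered out: a bottleneck
            break
        I += dI
        total += dI
    return total

# Placeholder C that hits zero at I = 2: the series converges (bottleneck).
print(total_I_gain(lambda P, O, I: 0.2 * max(0.0, 2.0 - I),
                   P=1.0, O=1.0, I0=1.0))   # ~1.0

# Placeholder C with increasing returns: the increments compound
# (breakthrough); the total keeps growing as steps increases.
print(total_I_gain(lambda P, O, I: 0.01 * I,
                   P=1.0, O=1.0, I0=1.0))
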
> >If the amount
> >is small, we say the trajectory bottlenecks; if the amount is large, we say a
> >breakthrough has occurred. The key question is whether the intelligence
> >reached is able to build fast-infrastructure nanotechnology and the like, or
> >of exhibiting unambiguously better-than-human abilities in all domains.
>
> I thought you had an argument for why "breakthrough" is plausible, rather than
> just listing it as one of many logical possibilities.

I meant: "B and C have to bottleneck eventually, but when they do, will A be
very large?" And my answer was: if I, P, and O are large enough to begin
with.

And I further went on to say: if the initial conditions are P = 10^13 ops,
O = human, and I = architectural design, I think the OI bottleneck will be
P = 10^13 ops, O = far transhuman, I = ?transhuman, with A(P, O, I) =
nanotech. (?transhuman means somewhere between Wili Wachendon and a Power,
but I don't know where.) Of course, any term with "human" in it is loosely
defined, like humans themselves.

But if you mean "Why is a sharp jump upwards plausible at any given point?",
my answer is that, for any reasonable function f() of optimization and
intelligence, solving the differential equation y' = f(y) yields a curve that
is either flat or sharp: either the increases in I and O are self-sustaining,
yielding further increments, or they peter out. In technological progress
with constant intelligence, you have t' = t, which gives the exponential
growth we all know and love. If intelligence were a function of technology
(i = t), and given that intelligence sets the rate of technological growth, I
think a more realistic model is t' = e^t, which yields t = -log(k - time) for
an integration constant k - a curve that goes to infinity in finite time.
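
Written out (a routine check; tau is time, k an integration constant):

\[ t' = t \;\Longrightarrow\; t(\tau) = t_0 e^{\tau} \]
\[ t' = e^{t} \;\Longrightarrow\; \int e^{-t}\,dt = \int d\tau
   \;\Longrightarrow\; -e^{-t} = \tau - k
   \;\Longrightarrow\; t(\tau) = -\ln(k - \tau) \]

The first stays finite at every finite tau; the second blows up as tau
approaches k - infinite progress in finite time, which is the sharp jump I
mean.
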
> >... At this point, the key question for me is "How much of _Coding a
> >Transhuman AI_ did you actually read?"
>
> All of it.

I'm impressed. And honored, and glad to know it was readable.

-- 
        sentience@pobox.com         Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/sing_analysis.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.