Re: The Dazzle Effect: Staring into the Singularity

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Aug 16 2001 - 12:52:21 MDT


Charles Hixson wrote:
>
> Just a few data points.
> 1) Yesterday I reread Staring into the Singularity, and did a bit of
> arithmetic. Assuming that in March the computer speed was, indeed,
> doubling every 18 mos., and assuming, as indicated, that each successive
> doubling takes half the time of the previous one, then in March 2004 the
> curve goes vertical.

Say what? Okay, first of all, that was a metaphor, not a projection. And
second, it would be a metaphor that would only start *after* you had
human-equivalent AI researchers running on the existing silicon. We don't
have this now, and therefore there is no imaginable reason why the curve
would go vertical in March 2004. Today it's just the same ol' same ol'
doubling every eighteen months.
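
(For what it's worth, the arithmetic behind the March 2004 figure is just
a geometric series: 18 + 9 + 4.5 + ... months sums to 36 months, so if you
start the clock in March 2001 - my guess at the intended start date - the
series runs out in March 2004. A quick sanity check of that sum:

    # Back-of-the-envelope check of the "vertical in March 2004" arithmetic.
    # Assumption (mine): the clock starts in March 2001, the first doubling
    # takes 18 months, and each later doubling takes half the time of the
    # one before it.
    doubling_times = [18.0 / 2**k for k in range(50)]  # months: 18, 9, 4.5, ...
    print(sum(doubling_times))  # ~36 months, i.e. March 2001 + three years

The series converging on a finite date is exactly the "vertical" part of
the metaphor; the point is that nothing in today's hardware curve behaves
that way until the researchers themselves are running on the silicon.)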

If you do have human-equivalent AI, the intelligence curve probably goes
vertical after a couple of weeks, not a couple of years, because
improvement in software and the absorption of additional hardware should
be easily sufficient to carry a human-equivalent AI to superintelligence
and nanotechnology.

> 6) I think it was in Creating Friendly AI that the projection was made
> that a minimum system needed by a seed AI would be 32 processors running
> at 2 GHz.

No, that was a number I pulled out of thin air so as to have an example.
Moreover, it wasn't an example of a finished seed AI, but rather an AI in
the process of development.

> 7) So by the end of the year it should be possible to put together a
> system about twice the minimum strength for around $10,000.

The number was specifically chosen to be achievable using present-day
technologies, rather than, say, those of five years down the road. It's a
minimum strength for development, not human equivalence. If human
equivalence were that cheap, we probably really would have AI by now,
software problem or no.

> 10) The assertion has been made that the more computing power you have,
> the easier it is to end up with an AI (though not necessarily a friendly
> one). So by the middle of next year any reasonably profitable business
> should be able to put together enough computing power, if it chooses to.

Any profitable business could already put together enough hardware to
start developing, if it knew what it was doing. This was probably true
even back in 1995. But to get AI without knowing what you're doing, you
probably need substantially more hardware than exists in the human mind.

However, if the Moon were made of computronium instead of green cheese (as
Eugene Leitl put it), it would probably *not* take all that much work to
get it to wake up. At that scale (endless quintillions of brainpower) it
becomes possible to brute-force an evolutionary algorithm in which the
individual units are millions of brainpower apiece. If the Moon were made
of computronium instead of green cheese, I doubt it would take so much as
a week for me *or* Eugene Leitl to wake it up, if it didn't wake up before
then due to self-organization of any noise in the circuitry. If the Moon
were made of computronium, you could probably write one self-replicating,
error-prone program a la Tierra and see it wake up not too long
afterwards. A lunar mass is one bloody heck of a lot of computronium.
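
(To make the Tierra reference concrete, here is a toy sketch - Python
rather than Tierra's virtual assembly, with every name and parameter made
up for illustration - of the bare idea of a self-replicating, error-prone
program in a finite soup:

    # Toy sketch of a Tierra-like soup; not Tierra itself, just the shape
    # of the idea.  Byte-string "creatures" copy themselves with occasional
    # copying errors into a fixed-size soup, and the oldest creatures are
    # reaped when the soup fills up.  Parameters are illustrative.
    import random

    SOUP_SIZE = 1000       # maximum number of creatures in the soup
    MUTATION_RATE = 0.01   # chance that any copied byte comes out wrong

    def replicate(genome):
        """Copy a genome, introducing occasional single-byte errors."""
        return bytes(b if random.random() > MUTATION_RATE
                     else random.randrange(256)
                     for b in genome)

    soup = [bytes(32)]                 # start with one 32-byte ancestor
    for step in range(100000):
        parent = random.choice(soup)   # pick something to replicate
        soup.append(replicate(parent))
        if len(soup) > SOUP_SIZE:      # the "reaper": drop the oldest
            soup.pop(0)

In real Tierra the creatures are short programs in a custom instruction
set that execute their own copy loops, with mutation and a reaper built
into the virtual machine; the sketch above keeps only the
replicate-with-errors-into-a-finite-soup loop, none of the rest.)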

Friendliness is an entirely different issue, although if the evolutionary
process goes through a pattern roughly equivalent to human evolution,
specifically including imperfectly deceptive intelligent social organisms,
then there might be humanlike altruists in the resulting civilization.
The problem is probably that a computronium substrate makes it easier for
minds to amalgamate, meaning that evolution there could take a quite
different (and faster) course if it started with a Tierra algorithm. And
if there's a programmed evolutionary pattern using preallocated millions
of brainpower in individual units, then the evolutionary intelligence
curve for a population of a trillion such individuals will not remotely
resemble the growth curve for human intelligence over time - more like a
sudden, sharp snap from stupid individuals to superintelligent
individuals, without the opportunity to develop emotional sophistication
during a long intermediate stage of socially interacting non-transhuman
general intelligences.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


