Re: The Dazzle Effect: Staring into the Singularity

From: Charles Hixson (charleshixsn@earthlink.net)
Date: Thu Aug 16 2001 - 16:20:18 MDT


Eliezer S. Yudkowsky wrote:
> Charles Hixson wrote:
>
>>Just a few data points.
>>1) Yesterday I reread Staring into the Singularity, and did a bit of
>>arithmetic. Assuming that in March the computer speed was, indeed,
>>doubling every 18 mos. And assuming, as indicated, that each successive
>>doubling takes half the time of the previous one, then in March 2004 the
>>curve goes vertical.
>>
>
> Say what? Okay, first of all, that was a metaphor, not a projection. And
> secondarily, it would be a metaphor that would only start *after* you had
> human-equivalent AI researchers running on the existing silicon. We don't
> have this now and therefore there is no imaginable reason why the curve
> would go vertical in March 2004. Today it's just the same 'ol same 'ol
> doubling every eighteen months.
If you want to say metaphor rather than projection, that's reasonable.
I'm the one who threw numbers in. But even though they may not be human
equivalent, there are computers involved in nearly every phase of chip
design. So I think the reinforcement effect that you mentioned has
already started. Actually, I think it started years ago, but it's
hard to notice in the early part of the curve. And as each step of the
process becomes automated, the reinforcement effect gets stronger. (And
I don't have enough data points to tell what the -current- rate of
doubling is. But it sure has been getting shorter.)
(Note: I'm pretty sure that this automating of the manufacturing
processes is one of the steps needed for the injection of an intelligent
AI to have the effect that you postulated. And I expect it to be there.
It just isn't going to wait for the AI before showing up.)
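
For the record, the numbers I threw in amount to nothing more than
summing a geometric series. A quick Python sketch (taking March 2001 as
the start of the first 18-month doubling, which is my own assumption):

    # If the first doubling takes 18 months and each successive doubling
    # takes half as long, the total time for "infinitely many" doublings
    # is a geometric series: 18 + 9 + 4.5 + ... = 36 months.
    total = 0.0
    interval = 18.0              # months for the first doubling
    for n in range(64):          # 64 terms is far more than enough
        total += interval
        interval /= 2.0
    print(total)                 # -> 36.0 months

The series converges to 36 months, which is where the March 2004 figure
came from.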

A good part of the problem of prediction is that it is quite difficult
to know what the shape of the curve is. Another is guessing how
difficult the unsolved problems are. And trend curves are notorious
liars. (I remember one that said we would have infinite electrical
power generation by around 1984.) The most that can be hoped is that
they will serve as guideposts. That said, this curve, whatever its real
shape, has much more reason behind it than most I have seen. And, of
course, the cycles predicted by Moore's Law have been shrinking. There
was a time not long ago when I never expected to see a 25 GB disk drive.
(OK, that's a different, reinforcing, curve.)

>
> If you do have human-equivalent AI, the intelligence curve probably goes
> vertical after a couple of weeks, not a couple of years, because
> improvement in software and the absorption of additional hardware should
> be easily sufficient to carry a human-equivalent AI to superintelligence
> and nanotechnology.
>
Understood. But when I read that section, it seemed to me that a lot of
the effects being analysed had already kicked in, though at a slower
speed, because they had to be transduced through human engineers.

>
>>6) I think it was in C Friendly AI that the projection was made that a
>>minimum system needed by a seed AI would be 32 processors running at 2 GHz.
>>
>
> No, that was a number I pulled out of thin air so as to have an example.
> Moreover, it wasn't an example of a finished seed AI, but rather an AI in
> the process of development.
>
OK, but I was thinking the development could be started (though less
effectively) on an even lower-powered computer than that. How far it can
be carried... well, that may be a different matter. Clearly a
uniprocessor that takes all night to build a kernel is too "weak" for
much serious work, but perhaps it could be used to settle some
architectural questions, etc. And if not a seed, it could at least be an
oocyte.

Actually, I think there will turn out to be a lineal descent between the
work being done now and what eventually arises. And this is one of the
reasons it is important that the groundwork for friendliness be designed
as quickly as possible. The curve that I postulated was an immensely
optimistic/threatening one, but not totally beyond reason. A quite weak
reason, I grant you, but in this case perhaps one would wish to be
prepared early.

>
>>7) So by the end of the year it should be possible to put together a
>>system about twice the minimum strength for around $10,000.
>>
>
> The number was specifically chosen to be achievable using present-day
> technologies, rather than say those of five years down the road. It's a
> minimum strength for development. Not human equivalence. If human
> equivalence was that cheap we probably really would have AI by now,
> software problem or no.
>
It's a wild guess, but my guess is that the first fully capable
development system that gets a working seed will be able to leverage
its computing power, possibly via something like SETI@home, into a fully
capable system. Only the most tightly connected tasks would be done
locally, and all weakly connected or low-priority tasks would be farmed
out. So the total megaflops available wouldn't be measured by the speed
of the main host; the host would only run the code that was
time-sensitive. This would, of course, drastically decrease the speed of
the AI, but it would compensate by enabling it to crack tougher problems.
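
Just to make the farming-out idea concrete, here is a rough sketch of
the kind of dispatch rule I have in mind. The Task fields and the
threshold numbers are invented for illustration, not taken from any
existing design:

    # Hypothetical dispatcher: keep tightly-coupled, time-sensitive work
    # on the local host; farm weakly-coupled or low-priority work out to
    # remote volunteer machines, SETI@home style.
    from collections import namedtuple

    Task = namedtuple("Task", ["name", "coupling", "priority"])

    def dispatch(task, local_queue, remote_queue):
        # coupling: how much the task must exchange data with others (0-1)
        # priority: how time-sensitive the task is (0-1)
        if task.coupling > 0.8 or task.priority > 0.9:
            local_queue.append(task)     # latency matters; run it here
        else:
            remote_queue.append(task)    # latency doesn't; ship it out

    local, remote = [], []
    dispatch(Task("goal-tree update", 0.95, 0.5), local, remote)   # stays local
    dispatch(Task("background search", 0.1, 0.2), local, remote)   # farmed out

The point is that the main host's speed only bounds the time-sensitive
slice of the work, not the total.
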
>
>>10) The assertion has been made that the more computing power you have,
>>the easier it is to end up with an AI (though not necessarily a friendly
>>one). So by the middle of next year any reasonably profitable business
>>should be able to put together enough computing power, if it chooses to.
>>
>
> Any profitable business can already put together enough hardware that they
> could start developing if they knew what they were doing. This was
> probably true even back in 1995. But to get AI without knowing what
> you're doing, you probably need substantially more hardware than exists in
> the human mind.

But people will know what they are doing. They are making the system
easier to use. They are enabling it to solve problems better. They are
improving the user interface. They are consolidating the company's
resources. Etc. I would agree that none of these is sufficient by
itself, but some combination might be. Particularly if the system has to
figure out the answer to a problem the user is having; that alone would
exercise a good deal of what is needed to get an AI going. And Boeing is
currently working with a computer physical-modeling system to determine
how planes should be designed to minimize injury to passengers in a
crash. Not AI, but a big chunk of it.

So take the Boeing system, make it easy for the engineers and their
managers to use, and then let it be expanded to handle the projection of
desirable business strategies. It wouldn't take too many cycles of
expansion to get something that deserved the label of AI, even if it
wasn't at all like what we had been thinking of. And that's just one
company. IBM is more likely to do it on purpose.
>
> However, if the Moon were made of computronium instead of green cheese (as
> Eugene Leitl put it), it would probably *not* take all that much work to
> get it to wake up. At that scale (endless quintillions of brainpower) it

I'm not really sure that I accept this line of argument. Bridges are
strong, but they don't grow muscles. You need to start adding in power
systems so that they flex if they sense an earthquake. And control
systems. And sensors. A big adding machine would probably stay an
adding machine if nothing else were done. Easier, yes. Inevitable, I
don't think so. It could manage with much less efficient algorithms,
but it would still need to be aimed at something near the right goal.

> becomes possible to brute-force an evolutionary algorithm in which the
> individual units are millions of brainpower. If the Moon were made of
> computronium instead of green cheese I doubt it would take so much as a
> week for me *or* Eugene Leitl to wake it up, if it didn't wake up before
> then due to self-organization of any noise in the circuitry. If the Moon
> were made of computronium you could probably write one self-replicating
> error-prone program a la Tierra and see it wake up not too long
> afterwards. A lunar mass is one bloody heck of a lot of computronium.

OK. That way would work.
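
(A toy version of that, just to show the mechanics of error-prone
replication plus selection. The bit-string genome and the
count-the-ones fitness function are placeholders of my own, nothing
like Tierra's machine-code organisms, and obviously nothing like a
lunar mass of computronium:

    # Minimal mutate-and-select loop. Each "organism" copies itself with
    # errors; copies that score better crowd out the rest.
    import random

    def mutate(genome, rate=0.01):
        return [(not bit) if random.random() < rate else bit for bit in genome]

    def fitness(genome):
        return sum(genome)          # placeholder: just count the 1-bits

    population = [[random.random() < 0.5 for _ in range(64)] for _ in range(100)]
    for generation in range(500):
        offspring = [mutate(g) for g in population]           # error-prone copies
        population = sorted(population + offspring,
                            key=fitness, reverse=True)[:100]  # selection
    print(max(fitness(g) for g in population))                # climbs toward 64

Scale the units up from bit strings to "millions of brainpower" and
that's the brute-force picture.)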
>
> Friendliness is an entirely different issue, although if the evolutionary
> process goes through a pattern roughly equivalent to human evolution,

Actually, I think that if you want to depend on evolution to give you
friendliness, then you had better have a development environment
substantially different from that of people. There are an awful lot of
extinct species that aren't around any more.

> ...
>
> -- -- -- -- --
> Eliezer S. Yudkowsky http://singinst.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
>
>
But what really happened was that I just plugged the numbers in, and was
shocked. I hadn't been expecting to get a number less than 2005. My
"projections" have been 2005-2030 ever since I read the Vernor Vinge
paper. To come up with an earlier estimate rather than a later one was
startling. (That's why I titled it the Dazzle Effect... I was fairly sure
that once my "vision" recovered, I'd see something besides blinding lights.)

-- 
Charles Hixson

Copy software legally, the GNU way! Use GNU software, and legally make and share copies of software. See http://www.gnu.org http://www.redhat.com http://www.linux-mandrake.com http://www.calderasystems.com/ http://www.linuxapps.com/


