From: Eugene Leitl (Eugene.Leitl@lrz.uni-muenchen.de)
Date: Sat Jan 26 2002 - 03:23:28 MST
On Sat, 26 Jan 2002, Robert J. Bradbury wrote:
> I wonder whether that is really the case. The curves for CPU speed,
> data storage densities, and communications bandwidth are following
> fairly predictable, though different, paths. I could cite DNA
There's very little innovation in the current computer mainstream. Luckily,
the integration density (soon 10^9 devices/die, add two to three orders of
magnitude for WSI) and the speed of switches (soon 6 THz) continue
ramping up. These resources are very real, and would immediately translate
into real-world performance if rearranged in a better way (real-time
reconfigurable/evolvable asynchronous cellular hardware).
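
To make that last parenthesis concrete, here's a toy software sketch (purely
illustrative, not any real hardware): an asynchronous cellular array where
randomly picked cells update from their neighbours, and the rule is just a
lookup table in memory that a runtime process could rewrite or evolve.

import random

N = 64
cells = [random.randint(0, 1) for _ in range(N)]

# the "configuration": a rewritable lookup table from a (left, self, right)
# neighbourhood to the next state -- the part an evolutionary process could
# mutate while the array keeps running
rule = {(a, b, c): random.randint(0, 1)
        for a in (0, 1) for b in (0, 1) for c in (0, 1)}

for step in range(10000):
    i = random.randrange(N)        # asynchronous: one randomly chosen cell at a time
    cells[i] = rule[(cells[(i - 1) % N], cells[i], cells[(i + 1) % N])]

print("".join(map(str, cells)))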
> sequences and protein crystal structures in the GenBank and PDB
> databases as additional examples. If there are discontinuities, they
A setup that could solve small protein structures automatically within
hours, with good accuracy and sufficient throughput (it would likely be a
processing pipeline), would currently eliminate the need for a numerical
PFP solver. NMR structures have been getting a lot better recently, and you
could line up a few robots at a synchrotron beamline and a few people in
front of the screens.
> appear to be in the earlier parts of the curves rather than the later
> parts. For example, there is going to be a discontinuity in the
> "proteomics" information curve. Up until 2000, most of the
> protein-protein interactions were determined by relatively primitive
> lab experiments. From 2000 - 2002, Cellzome (www.cellzome.de) built
> an engine to apply robotic MassSpec analysis to the problem. Now they
> have 1/3 of the yeast proteome done. Now that proteomics is on a
> curve, I don't really expect to see any huge jumps in the development
> rate though.
Apropos, if anyone here is interested in getting a feel for biosciences,
without buying a lot of expensive dead tree, here's a free online library
of classics: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=Books
> If it's human-level you want, Blue Gene should be functional before
> 2006. *But* even if you have access to such hardware, it's a roomful of
We don't know what biological performance equivalent a given piece of
modern solid-state computing is capable of. Even if we knew, it doesn't
mean we could write the code. Not even a wild seed code.
> hardware. You have to make a strong case that (a) significant
> acceleration is possible using only software modifications [Ray and
> Hans seem to suggest we may be limited to 1-2 orders of magnitude with
> clever algorithms]; or (b) there is a rapid advancement of matter as
I keep making the case that nonlocal memory access bandwidth and latency
are the first bottleneck, and that the immutability of the numerical mill
is another. Blue Gene has very decent memory bandwidth (and latency, one
would estimate), but it's still hardwired. And we don't know how good the
routing mesh is going to be. Btw, for people who're too busy to read
/. (I usually am), we now have the first commercial non-rackmount clusters:
http://slashdot.org/article.pl?sid=02/01/22/1854218&mode=thread
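
You can feel the first bottleneck even from userland. A crude sketch (the
sizes are assumptions, and CPython's interpreter overhead hides most of the
gap; a C version is far more brutal) that walks the same array sequentially
and in random order:

import random, time

N = 1 << 22                    # ~4M ints; assumed working-set size
data = list(range(N))
seq_idx = list(range(N))       # sequential, cache/prefetch friendly
rnd_idx = seq_idx[:]
random.shuffle(rnd_idx)        # nonlocal, cache-hostile

def walk(idx):
    t0 = time.perf_counter()
    total = 0
    for i in idx:
        total += data[i]
    return time.perf_counter() - t0

t_seq, t_rnd = walk(seq_idx), walk(rnd_idx)
print("sequential %.2fs  random %.2fs  ratio %.1fx" % (t_seq, t_rnd, t_rnd / t_seq))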
> software. If you don't get (b) the SI hits a ceiling that can't be
> breached without humans supplying it with more advanced hardware.
> That presupposes that we would have the technology base at that time
> to manufacture such hardware.
I don't see any significant kinetic barriers to growth. An SI will, of
course, be very adept at manipulating matter, and people (a useful
idiot-savant AI is not an SI, and is arguably even harder to make). Many
semiconductor factories today are highly automated, because people are a
source of process pollution. We have pretty decent telepresence robots
already. And this is merely the macroscale; as soon as the system starts
controlling the nanoscale (arguably, the hardware necessary for AI will
require molecular circuitry, so there's your foot in the door already), the
kinetics become very, very impressive. (Assuming there are still people
left to be impressed.)
> So there could be a number of bumps in the road.
>
> The only smooth path I could envision for the singularity would
> be an underground breakout that takes advantage of the WWW.
> You would have to be able to co-opt a significant amount
> of underutilized resources to be able to manage an exponential
> growth path. Ultimately you still face the requirement for
Assuming P2P really starts to happen, there is considerable investment in
ultrabroadband user interconnect, and the hardware makes a few more
advances still, there's considerable substrate present.
> matter compilers. I don't think this will be feasible until
> you have many high-bandwidth connections to a large fraction
> of the installed computronium base -- the intelligence constraints
> on low bandwidth connections between fractional human brain
> equivalents seem to be a strong barrier.
Well, we're talking about >>2010, or so. Still a decade or more to go. 10
GBit Ethernet is not very fast (what's the bandwidth of your optic
nerve, after the 1:126 retinal compression?), but of course it depends on
how much crunch your atomic blocks have, and which coding they use.
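
To put rough numbers on that rhetorical question (all assumptions: ~10^6
optic nerve fibres per eye after the 126:1 photoreceptor-to-ganglion
compression, each carrying on the order of 10-100 bits/s of usable
information):

fibres = 1e6                   # assumed fibre count per optic nerve
bits_low, bits_high = 10, 100  # assumed usable information rate per fibre, bits/s
print("optic nerve: %.0f - %.0f Mbit/s" %
      (fibres * bits_low / 1e6, fibres * bits_high / 1e6))
print("10 GbE:      10000 Mbit/s")

On those numbers one 10 GbE link carries two to three orders of magnitude
more raw bits than an optic nerve; as said, what matters is how much crunch
sits behind each link and how the traffic is coded.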
> If humans realize what is going on and decide they don't want
> it to happen you run the risk that they will disconnect their
> computers from the net. So you have the additional handicap
> that it may only be able to sneak up on you if it operates
> in a severely constrained stealth mode.
User detection is trivial. First, at night most users are asleep. An
active user generates a stream of events, while a low-priority program on a
decent OS is effectively user-invisible (see the little priority-dropping
sketch below); the Folding@home client I'm running is unnoticeable even on
NT machines. Btw, while we're making good progress
http://folding.stanford.edu/cgi-bin/teampage?q=346
we've been falling back for a while now. We used to be 32nd; now we're 34th.
Team Extropy.org could use a few more processors. If you think PFP is lame,
you can pick something more to your liking from
http://directory.google.com/Top/Computers/Computer_Science/Distributed_Computing/Projects/
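
As for "effectively user-invisible" above, a minimal sketch of what such a
client does before it starts crunching (POSIX niceness; assume the
equivalent priority-class call on NT):

import os, sys

try:
    os.nice(19)                # lowest priority: only take cycles nobody else wants
except (AttributeError, OSError):
    sys.stderr.write("could not lower priority on this OS\n")

# ... background number-crunching goes here ...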
> This certainly is true. By 2010-2015, when human mind equivalent
> computational capacity becomes available to small groups, the
Robert, you're waaay too optimistic here.
> It gives you pause -- 1 billion muslims around the world.
> 10% of them devoting their computers to a DC project to
> evolve not a "Friendly AI" but an AI dedicated to advancing
> a radical muslim hegemony.
The only hegemony an expansive AI can enforce is autohegemony.
> That, IMO, is one of the problems with pushing AI technology.
> Unlike the situation with most humans, there may be no "built-in"
> human empathic perspectives in AIs. I'm deeply suspicious
> of any arguments that only beneficent AIs may be produced.
> If malevolent AIs are as easy as friendly AIs then we may
> have some serious problems.
A selfish AI is malevolent as a side effect. All AIs created by evolutionary
methods are selfish.
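
A toy illustration of why (an assumed model, nothing more): agents compete
for a fixed pool, the "generosity" gene is the fraction of their take they
give away, and fitness is what they keep, so selection quietly drives
generosity to zero.

import random

pop = [random.random() for _ in range(100)]   # generosity genes in [0, 1]

for generation in range(200):
    fitness = [1.0 - g for g in pop]          # fitness = what the agent keeps
    # fitness-proportional reproduction plus a little mutation
    pop = [min(1.0, max(0.0,
               random.choices(pop, weights=fitness)[0] + random.gauss(0, 0.02)))
           for _ in range(100)]

print("mean generosity after selection: %.3f" % (sum(pop) / len(pop)))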
> No, certainly not. But I think getting a smooth fit to the curve
> seems to require either a late start (lots of underutilized resources
> available) or robust technologies that allow compiling matter as
> software.
>
> Personally, I'd say 2016 is a better date than 2006.
I might as well roll a die.