Jeff Davis wrote:
> Would it be possible to construct a self-enhancing AI
> with today's hardware?
> I'm not asking if the software is available. Nor the
> electrical power. Nor the money. Nor whether you could cool
> it to prevent overheating. Rather I'm thinking more about
> whether memory densities and processor densities and speeds
> could be assembled in sufficient quantities in a small enough
> space so that inter-element communication speed wouldn't
> bottleneck the whole affair.
A very interesting question!
> To narrow the question down some, consider the following:
> A disk-shaped assembly, with coolant flow through the
> faces of the disk. Hardware density 50 percent, i.e. half
> hardware half coolant passages. Hardware in quantities to
> support a seed AI plus additional to support an SI according
> to the following human-equivalent definition: a thousand-fold
> increase in synaptic quantities, a thousand-fold increase in
> synaptic firing rates, with memory appropriate to support
> this capability level.
> Could it be built? And if so, would the disk be ten
> meters thick by a hundred across? A hundred meters by a
> thousand? Two hundred by five thousand?
The fastest neural net system I am familiar with is Genobyte's CAM-Brain Machine (CBM) system (see http://www.genobyte.com/cbm.html). It uses an array of 72 FPGA chips to simulate 1,152 simplified neurons, which doesn't sound like much. However, the designers have taken advantage of its electronic nature to allow it to rapidly switch between different sets of neurons. In effect, it can simulate 37,000,000 simplified neurons running at roughly human speed, or 1,152 neurons running at 10 MHz, or anything in between those two extremes.
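The trade-off between neuron count and update speed can be sanity-checked with a quick calculation. The 300 Hz "human speed" figure below is my own assumption (real neural firing rates vary widely); the other numbers are from the CBM description above.

```python
# Back-of-envelope check of the CBM's speed-for-count trade-off.
# 1,152 physical neuron circuits updated at 10 MHz gives a fixed budget
# of neuron-updates per second, which can be spread over more neurons
# running slower.

PHYSICAL_NEURONS = 1_152
UPDATE_RATE_HZ = 10_000_000   # 10 MHz, per the CBM description
HUMAN_RATE_HZ = 300           # assumed per-neuron update rate at "human speed"

total_updates = PHYSICAL_NEURONS * UPDATE_RATE_HZ
simulated_at_human_speed = total_updates // HUMAN_RATE_HZ
print(f"{simulated_at_human_speed:,} neurons at ~{HUMAN_RATE_HZ} Hz")
```

With these assumed numbers the result lands near the 37,000,000 figure quoted above, which suggests the CBM's effective-neuron claim is just this kind of time-multiplexing arithmetic.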
It looks to me like you could fit two of these devices into a standard 6' hardware cabinet, with room left over for power supply and networking equipment. Each module needs 400 MB/sec of bandwidth to communicate with neighboring modules, or 800 MB/sec per cabinet, so we could plausibly link 100 or so cabinets into a closely-coupled system using fast fiber-optic connections. That means we top out at a few times 10^9 well-connected neurons (and simplified ones, at that).
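Spelling out the cabinet arithmetic (all figures are the rough estimates from this paragraph, not vendor specs):

```python
# Sizing sketch for the closely-coupled cluster estimate.
MODULES_PER_CABINET = 2
NEURONS_PER_MODULE = 37_000_000      # effective neurons at ~human speed
BANDWIDTH_PER_MODULE_MBPS = 400      # MB/sec to neighboring modules
CABINETS = 100

total_neurons = MODULES_PER_CABINET * NEURONS_PER_MODULE * CABINETS
total_bandwidth = MODULES_PER_CABINET * BANDWIDTH_PER_MODULE_MBPS * CABINETS
print(f"~{total_neurons:.1e} neurons, {total_bandwidth:,} MB/sec aggregate")
```

So the 100-cabinet cluster works out to roughly 7 x 10^9 simplified neurons, with an aggregate inter-module bandwidth in the tens of GB/sec - which is why the fiber-optic interconnect becomes the limiting factor.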
Now, you should be able to get a larger system than that by subdividing the artificial brain into functional units, which are internally well-connected but have a lower volume of communication with other functional units. I'm not sure what volume of data transfer you need to allow - maybe Anders could shed some light on what the situation looks like in human brains?
At any rate, I would think you could get another order of magnitude in scale this way. If you allow a couple of years to build a specialized parallel networking system, we should be able to increase that to hundreds of functional units, and probably increase the size of a functional unit by another order of magnitude as well. That gives us maybe 10^12 simplified neurons to work with, at a cost of several billion dollars.
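One way to read that scaling argument, with every multiplier being a rough guess from the paragraph above (so treat the result as order-of-magnitude only):

```python
# Rough scaling path from one closely-coupled cluster to a full system
# of loosely-coupled functional units. All multipliers are guesses.

cluster = 2 * 37_000_000 * 100   # one closely-coupled cluster: ~7.4e9 neurons
unit = cluster * 10              # a functional unit ~10x larger, given a
                                 # specialized parallel networking system
units = 100                      # "hundreds" of functional units; take ~100

total = unit * units
print(f"~{total:.0e} simplified neurons")
```

That lands within an order of magnitude of the 10^12 figure, which is about as much precision as this kind of estimate supports.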
For comparison, the human brain has around 10^11 neurons, each of which probably does a lot more information processing than the simplified ones used in neural net machines. So for a few billion dollars we could make something with hardware roughly equivalent to the human brain. That puts your SI-capable hardware well out of reach for pure neural-net machines.
Does anyone see another approach that might work?
Billy Brown, MCSE+I