Volume of Human-Equivalent Intelligence WAS Re: High-tech weaponry

Raymond G. Van De Walker (rgvandewalker@juno.com)
Tue, 22 Jun 1999 20:57:48 PDT

On Sun, 13 Jun 1999 19:16:20 +1000 "Timothy Bates" <tbates@karri.bhs.mq.edu.au> writes:
>any chance of passing on the algorithm you used to calculate this ;)
>
>> I calculated that a human-equivalent-intelligence (HEI) would fit
>> into several thousand cubic microns,

I have a canned response that I sent to another correspondent; it got me a signed book from Halperin!

Mr. Halperin, I'm a professional computer engineer and an amateur but trained philosopher and biologist, and I don't think that even advanced nanotechnology can pack a human-equivalent brain into the volume of a blood cell. (TFI p. 208 is wrong.)

Let me show you the numbers: the human brain has about 10 billion (10^10) neurons. Each neuron has between 30 and 10,000 synapses, with one associated dendrite for each. If the axon is considered just another connection, its organizational importance is merely to reach distant dendrites. The geometric mean of 30 and 10,000 is about 550; rounding down, say that each cell has 300 dendrites. Then there are roughly 3x10^12 synapses in the proposed human-equivalent brain.
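
For concreteness, here's that arithmetic as a quick Python sketch (the neuron count and the rounded-down 300 synapses/neuron are the working figures just given):

    import math

    # Synapse count from the working figures above.
    neurons = 1e10                      # ~10 billion neurons
    geo_mean = math.sqrt(30 * 10_000)   # geometric mean of 30..10,000: ~548
    per_neuron = 300                    # rounded-down working figure
    print(f"geometric mean: {geo_mean:.0f} synapses/neuron")
    print(f"total synapses: {neurons * per_neuron:.1e}")   # 3.0e+12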

Now, I thought about ways to reduce this by editing the system, but they won't work. Like most real computing systems, the majority of the logic (>95% by weight or volume) is I/O (the cerebrum, cerebellum, gyri, and most of the rest of the encephalon). Neural networks are great for I/O: they're robust and compact compared to the digital systems they replace. You would not want to use anything else to construct the phenomenal (perceptual) systems of a robot.

So, for a first approximation, let's say we can custom-design the system so that we can store one synaptic weight per byte. This generously assumes that the connection pattern (i.e. which neuron has the synapse) is hard-wired or hard-coded into the simulation program. The synaptic weights have to change, because that's how the system learns. Since they change, they have to be recorded.

Therefore, the computer needs at least one byte per synapse, 3x10^12 bytes of storage.

Using Drexler's estimates for fluorine/hydrogen carbyne tapes, this could be stored in at least 1,500 cubic microns. (Drexler roughly estimated 2 GBytes/cubic micron; see the notes for Engines of Creation, p. 19.)
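
As a sketch, with Drexler's tape density as the only input:

    # Tape volume at one byte per synapse.
    synapses = 3e12
    tape_density = 2e9      # bytes per cubic micron (Drexler's rough figure)
    volume = synapses * 1 / tape_density
    print(f"{volume:.0f} cubic microns")   # 1500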

Now, we want the brain to run at human speed. Let's say that nanocomputers run 1 million times as fast as neurons; this is roughly right, because I'll assume mechanical nanocomputers. Mechanical nanocomputers would be more compact than quantum-electronic (QE) computers, and their speed more closely matches the mechanical carbyne tape drive. If we use QE computers, they will run 100x faster while being only about 50x bigger, but the apparent advantage will be cancelled because they will stall waiting for the tape drives. The result will be a slower or larger computer than the mechanical systems. This might be fixable; quite possibly an experienced nanoengineer could finesse this, if such a person existed. However, note that it just divides the computer volume by 2, and the tape remains the same size.
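
Here's that tradeoff as a sketch, taking the 100x-faster and 50x-bigger figures above at face value (i.e., ignoring the tape-stall penalty):

    # QE vs. mechanical processor volume, to first order.
    mech_cpus = 3e6              # mechanical CPU count (derived below)
    qe_cpus = mech_cpus / 100    # 100x faster, so 100x fewer needed
    qe_size_factor = 50          # but each QE CPU is ~50x bigger
    relative = qe_cpus * qe_size_factor / mech_cpus
    print(relative)              # 0.5: computer volume halves at best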

So, to get at least human speed, we need roughly 1/1,000,000 as many processors as simulated synapses: about 3x10^6. I assume that each one of these services a million simulated synapses. I'm going to throw in the CPUs for free (I know of pretty good CPUs with as few as 7,000 gates; see the Computer Cowboys web site).
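
The processor count, as a sketch using the 10^6 speedup assumed above:

    # Processors needed to run 3e12 synapses at human speed.
    synapses = 3e12
    speedup = 1e6                    # nanocomputer vs. neuron (assumed)
    cpus = synapses / speedup
    print(f"{cpus:.0e} CPUs")                         # 3e+06
    print(f"{synapses / cpus:.0e} synapses per CPU")  # 1e+06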

Using Drexler's estimates for random-access memory (20 MBytes/cubic micron), we can fit about 305 computers with 64 KBytes each into a cubic micron. The computers therefore take roughly 9.8x10^3 cubic microns.
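
The same calculation as a sketch (the 20 MBytes/cubic micron figure is Drexler's; the 64 KBytes of memory per CPU is my assumption):

    # Volume of the simulated-synapse processors.
    ram_density = 20e6            # bytes per cubic micron (Drexler)
    mem_per_cpu = 64 * 1024       # 64 KBytes per CPU (assumed)
    cpus_per_um3 = ram_density / mem_per_cpu   # ~305
    volume = 3e6 / cpus_per_um3
    print(f"{volume:.1e} cubic microns")       # ~9.8e+03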

The computers' program memories are therefore the major system expense. Can we get rid of them? Let's say the engineer goes for broke and designs a system with no computers at all. It's totally analog, maybe with frequency-modulated hysteresis devices acting as neurons and carbyne pushrods acting as dendrites. In this case the system volume should grow substantially, because the dendrites have to physically exist, each built from a few thousand carbon atoms, rather than just being simulated from 8 bits on <50 atoms of tape.
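
A rough atom budget makes the point; both per-dendrite figures here are the assumptions from the paragraph above (I've taken "a few thousand" as 2,000):

    # Atoms needed: physical dendrites vs. weights stored on tape.
    synapses = 3e12
    atoms_physical = 2_000    # "a few thousand carbon atoms" per dendrite
    atoms_taped = 50          # "<50 atoms of tape" per 8-bit weight
    print(f"analog: {synapses * atoms_physical:.0e} atoms")   # 6e+15
    print(f"tape:   {synapses * atoms_taped:.0e} atoms")      # 2e+14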

Possibly one could substitute a custom logic machine that _only_ processes neural nets? The problem with these is that they tend to be larger and more complex than the computers they replace. Random logic is bulkier and more power-hungry than the random-access memories that store software. Faster, maybe, but then we might stall waiting for the tape, right?

The computers therefore take about 9,800 cubic microns. The tape storing the synapses takes about 1,500 cubic microns. Now remember, this is a _low_ estimate. I actually think that the storage for a synapse would also have to hold the address of a neuron, adding 4 bytes of address to the byte of weight. This quintuples the tape system to 7,500 cubic microns. Also, the tape drive and computers might double in size; Drexler doubled them.

11,300 cubic microns is small. It's a cube about 22.5 microns on a side, roughly a fortieth of a millimeter, about 1/8 the size of a crystal of table salt. 17,300 cubic microns (storing synaptic addresses) is still small, about 25.9 microns on a side. Even 34,600 cubic microns (double everything) is small, maybe 32.6 microns on a side, about the size of a crystal of table salt.
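
The three scenarios, tabulated as a sketch:

    # Volume scenarios and their cube edge lengths.
    scenarios = {
        "1 byte/synapse":   9_800 + 1_500,        # 11,300 um^3
        "5 bytes/synapse":  9_800 + 7_500,        # 17,300 um^3
        "5 bytes, doubled": 2 * (9_800 + 7_500),  # 34,600 um^3
    }
    for name, vol in scenarios.items():
        print(f"{name}: {vol:,} um^3, edge {vol ** (1/3):.1f} microns")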

<snip stuff about Halperin's book>


