vladimir mendez (kancerman@hotmail.com)
Wed, 23 Jun 1999 17:00:21 GMT

>From: owner-extropians-digest@extropy.org (extropians-digest)
>Reply-To: extropians@maxwell.kumo.com
>To: extropians-digest@extropy.org
>Subject: extropians-digest V4 #170
>Date: Wed, 23 Jun 1999 06:11:07 -0600
>
>extropians-digest Wednesday, June 23 1999 Volume 04 : Number 170
>
>
>
>
>----------------------------------------------------------------------
>
>Date: Tue, 22 Jun 1999 22:37:13 +1000
>From: "Timothy Bates" <tbates@karri.bhs.mq.edu.au>
>Subject: Re: Volume of Human-Equivalent Intelligence WAS Re: High-tech
>
>ray said
> >>I calculated that a human-equivalent-intelligence (HEI) would fit
> >>into several thousand cubic microns,
>On Sun, 13 Jun 1999 19:16:20 +1000 I wrote
>any chance of passing on the algorithm you used to calculate this ;)
>
>and ray responded:
>
> > Let me show you the numbers: The human brain has 10 billion
> > neurons.
>maybe an underestimate by an order of magnitude, but that won't matter.
>
> > Each neuron has between 30 and 10,000 synapses, with one
> > associated dendrite for each.
>Well, let's just say a neuron has 30 to 10k synapses; that is what has
>been counted.
>
> >Taking the geometric mean of the number of dendrites, say
> > that each cell has 300 dendrites.
>
>You need the arithmetic mean to work out dendrites = neurons times
>dendrites/neuron. In the cortex there are many neurons with thousands of
>synapses, mostly on unmyelinated axons, and a myriad of local and diffuse
>neurotransmitter effects. These should be counted as well.
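>
>To see how much the choice of mean matters, here is a quick
>back-of-envelope sketch in Python (the 30-10,000 range and the 10^10
>neuron count are the figures quoted above):
>
>    import math
>
>    lo, hi = 30.0, 10000.0        # quoted range of synapses per neuron
>    neurons = 1e10                # ray's neuron count
>
>    geo = math.sqrt(lo * hi)      # geometric mean: ~548 (ray rounds to 300)
>    ari = (lo + hi) / 2           # arithmetic mean: ~5015
>
>    print(geo, neurons * geo)     # ~548  -> ~5.5e12 synapses
>    print(ari, neurons * ari)     # ~5015 -> ~5.0e13 synapses
>
>The two means differ by an order of magnitude, which is exactly the gap
>at stake here.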
>
> > Then there are roughly 3x10^12
> > synapses in the proposed human-equivalent brain.
>Except this could be as great as 10^13 or 10^15. But that won't matter.
>
> > Now, I thought about ways to reduce this by editing the system,
> > but they won't work. Like most real computing systems, the majority of
> > the logic (>95%) by weight or volume is I/O. (The cerebrum, cerebellum,
> > gyri, and most of the encephalon)
>The gyri ("outwellings", no more important than the sulcal surfaces), the
>cerebrum, and the "encephalon" (= telencephalon = the cerebrum again?) are
>not "I/O" unless you mean something much broader than "interface"
>typically implies.
>
> > Neural networks are great for I/O:
>What is I/O?
>
> > they're robust and compact compared to the digital systems they replace.
> > You would not want to use anything else to construct the phenomenal
> > systems of a robot.
>How do you "know" that a digital implementation of the brain will be more
>compact?
>
> > So, for a first approximation, let's say we can custom-design the
> > system so that we can store one synaptic weight per byte. This
> > generously assumes that the connection pattern (i.e. which neuron has
> > the synapse) is hard-wired or hard-coded into the simulation program.
> > The synaptic weights have to change, because that's how the system
> > learns. Since they change, they have to be recorded.
>A synapse in the cortex is an analog element: it passes along cable
>currents in proportion to its inputs as well as discrete action
>potentials. In addition it does spatial and temporal summation of action
>potential inputs. Synapses can also be mediated by second messengers,
>voltage gated, and auto-inhibitory, just for starters. This is MUCH more
>than 1 byte. It is several kB plus an on-board processor programmed in
>OCCAM. Per synapse!
>
> > Therefore, the computer needs at least one byte per synapse,
> > 3x10^12 bytes of storage.
>I think that everything from here on becomes irrelevant, given the above
>limitations.
>
> > Using Drexler's estimates for fluorine/hydrogen carbyne tapes,
> > this could be stored in at least 1500 cubic microns (Drexler roughly
> > estimated 2GBytes/cubic micron; see the notes for Engines of Creation,
> > p19).
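>
>As a sanity check on the arithmetic of that step (a sketch, using only
>the figures Ray quotes):
>
>    bytes_needed = 3e12       # one byte per synapse, from above
>    tape_density = 2e9        # drexler: ~2 GBytes per cubic micron
>    print(bytes_needed / tape_density)   # -> 1500.0 cubic microns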
> > Now, we want the brain to run at human speed. Let's say that
> > nanocomputers run 1 million times as fast as neurons; this is roughly
> > right, because I'll assume mechanical nanocomputers. Mechanical
> > nanocomputers would be more compact than quantum electronic computers.
> > They also have a speed that more closely matches the mechanical carbyne
> > tape drive. If we use the QE computers, they will run 100x faster,
> > while only being about 50x bigger, but the apparent advantage will be
> > cancelled because they will stall waiting for the tape drives. The
> > result will be a slower or larger computer than the mechanical
> > systems. This might be fixable; quite possibly an experienced
> > nanoengineer could finesse this, if such a person existed. However,
> > note that it just divides the computer-volume by 2, and the tape
> > remains the same size.
> > So, to get at least human speed, we need roughly 1/1,000,000 as many
> > processors as synapses, about 3x10^6. I assume that each one of these
> > is servicing a million simulated synapses. I'm going to throw in the CPUs
> > for free (I know pretty good CPUs that have as few as 7,000 gates; see
> > the web site for computer cowboys).
> > Using Drexler's estimates for random-access memory
> > (20MBytes/cubic micron), we can fit 305 64-KByte computers in a cubic
> > micron. The computers therefore take roughly 9.8x10^3 cubic microns.
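>
>The processor and memory arithmetic can be checked the same way (a
>sketch; 64K is taken as 65,536 bytes):
>
>    synapses = 3e12
>    speedup = 1e6                     # nanocomputer speed vs neurons
>    cpus = synapses / speedup         # -> 3e6 processors
>
>    ram_density = 20e6                # bytes per cubic micron (drexler)
>    mem_per_cpu = 65536               # 64 KBytes of program memory each
>    cpus_per_um3 = ram_density / mem_per_cpu   # -> ~305 computers
>    print(cpus / cpus_per_um3)        # -> ~9.8e3 cubic microns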
> > The computers' program memories are therefore the major system
> > expense. Can we get rid of them? Now let's say that the engineer goes
> > for broke, and designs a system with no computers. It's totally analog,
> > maybe with frequency-modulated hysteresis devices acting as neurons, and
> > carbyne pushrods acting as dendrites. In this case, the system volume
> > should grow substantially, because the dendrites have to physically
> > exist, each with a few thousand carbon atoms, rather than just being
> > simulated from 8 bits on <50 atoms of tape.
> > Possibly one could substitute a custom logic machine that _only_
> > processes neural nets? The problem with these is that they tend to be
> > larger and more complex than the computers they replace. Random logic
> > is bulkier and more power-hungry than the random-access memories that
> > store software. Faster, maybe, but then we might stall waiting for the
> > tape, right?
> > The computers therefore take about 9,800 cubic microns. The tape
> > storing the synapses takes about 1,500 cubic microns. Now remember,
> > this is a _low_ estimate. I actually think that the storage for a
> > synapse would have to store an address of a neuron as well, thus
> > having 4 bytes of address in addition to the byte of weight. This
> > quintuples the tape system to 7,500 cubic microns. Also, the tape
> > drive and computers might double in size. Drexler doubled them.
> > 11,300 cubic microns is small. It's a cube about 22.5 microns on
> > a side, about a fortieth of a millimeter, a small fraction of the
> > size of a crystal of table salt. 17,300 cubic microns (storing
> > synaptic addresses) is still small, about 25.9 microns on a side.
> > Even 34,600 cubic microns (double everything) is small, maybe 32.6
> > microns on a side, still well under the size of a crystal of table
> > salt.
> > <snip stuff about Halperin's book>
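>
>For what it's worth, the cube-root arithmetic at the end does check out
>(a one-liner):
>
>    for v in (11300, 17300, 34600):      # total volumes, cubic microns
>        print(v, round(v ** (1 / 3), 1))  # -> ~22.4, 25.9, 32.6 microns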
>
>------------------------------
>
>Date: 22 Jun 1999 15:04:17 +0200
>From: Anders Sandberg <asa@nada.kth.se>
>Subject: Re: Volume of Human-Equivalent Intelligence WAS Re: High-tech
>
>"Timothy Bates" <tbates@karri.bhs.mq.edu.au> writes:
>
> > > So, for a first approximation, let's say we can custom-design the
> > > system so that we can store one synaptic weight per byte. This
> > > generously assumes that the connection pattern (i.e. which neuron
> > > has the synapse) is hard-wired or hard-coded into the simulation
> > > program. The synaptic weights have to change, because that's how the
> > > system learns. Since they change, they have to be recorded.
> > A synapse in the cortex is an analog element: it passes along cable
> > currents in proportion to its inputs as well as discrete action
> > potentials. In addition it does spatial and temporal summation of
> > action potential inputs. Synapses can also be mediated by second
> > messengers, voltage gated, and auto-inhibitory, just for starters.
> > This is MUCH more than 1 byte. It is several kB plus an on-board
> > processor programmed in OCCAM. Per synapse!
>
>That is because we try to simulate it perfectly, not knowing whether
>every little bump and inflection in the membrane potential time series
>is important or not. Most likely this can be simplified a lot (since our
>biology has evolved to work in a noisy, messy environment where you
>cannot count on perfect fidelity). Still, one byte per synapse is likely
>too little, since learning most likely involves several state variables
>(just look at the BCM rule, or the "realistic" models people play with
>using more than 50 state variables), and you need to store which kinds
>of receptors are present.
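>
>To make that concrete: even a minimal BCM-style synapse carries a weight
>plus a sliding modification threshold, i.e. at least two real-valued
>state variables rather than one byte. A toy sketch (parameter values
>made up for illustration):
>
>    def bcm_step(w, theta, pre, post, lr=0.01, tau=100.0, dt=1.0):
>        # weight moves up or down depending on whether postsynaptic
>        # activity is above or below the sliding threshold theta
>        w += lr * pre * post * (post - theta) * dt
>        # theta tracks the recent average of post^2
>        theta += (post ** 2 - theta) * dt / tau
>        return w, theta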
>
> > > Therefore, the computer needs at least one byte per synapse,
> > > 3x10^12 bytes of storage.
> > I think that everything from here on becomes irrelevant, given the
> > above limitations.
>
>Not really; his original aim was to show that a micron-sized upload
>brain would not work. If his calculations are a million times too
>small, they still show that the micron-uploads do not work. The
>question seems to be whether we are talking about sugar cubes or small
>grains of salt here.
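>
>The scaling even works in our favour here: a factor of 10^6 in volume is
>only a factor of 10^2 in side length. One line of Python, using Ray's
>largest figure:
>
>    print((34600 * 1e6) ** (1 / 3))   # -> ~3.3e3 microns, i.e. ~3.3 mm
>
>So even a million-fold error leaves a cube a few millimetres on a side.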
>
>-----------------------------------------------------------------------
>Anders Sandberg Towards Ascension!
>asa@nada.kth.se http://www.nada.kth.se/~asa/
>GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y
>
>------------------------------
>
>Date: Tue, 22 Jun 1999 14:33:21 +0000
>From: "Nick Bostrom" <bostrom@ndirect.co.uk>
>Subject: Re: Volume of Human-Equivalent Intelligence
>
>Some comments on Raymond G. Van De Walker's epistle:
>
> > Let me show you the numbers: The human brain has 10 billion
> > neurons.
>
>Most recent estimates put it at 100 billion.
>
> > Now, I thought about ways to reduce this by editing the system,
> > but they won't work. Like most real computing systems, the majority of
> > the logic (>95%) by weight or volume is I/O. (The cerebrum, cerebellum,
> > gyri, and most of the encephalon) Neural networks are great for I/O:
> > they're robust and compact compared to the digital systems they replace.
> > You would not want to use anything else to construct the phenomenal
> > systems of a robot.
>
>Hmm. One way of estimating the brain's processing power is like this:
>
>The human brain contains about 10^11 neurons. Each neuron has about
>5*10^3 synapses, and signals are transmitted along these synapses at
>an average frequency of about 10^2 Hz. Each signal contains, say, 5
>bits. This works out to order 10^17 ops per second.
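>
>Spelled out (a one-line check; the product is ~2.5x10^17, i.e. of order
>10^17):
>
>    neurons, syn_per_neuron, rate_hz, bits = 1e11, 5e3, 1e2, 5
>    print(neurons * syn_per_neuron * rate_hz * bits)   # -> 2.5e+17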
>
>Another method, used by Moravec, is to look at one part of the
>nervous system whose function we can replicate on computers today
>(the retina). Then we multiply the resources needed for this
>computation by the factor by which the total brain is larger than the
>retina. This gives us the figure 10^14 ops per second for the brain,
>three orders of magnitude less than the first estimate.
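>
>The arithmetic behind that extrapolation looks roughly like this (the
>~10^9 ops figure for the retina and the ~10^5 brain/retina scale factor
>are Moravec's rough numbers, quoted from memory):
>
>    retina_ops = 1e9       # ops/sec to replicate retinal processing
>    scale = 1e5            # whole brain is ~10^5 times the retina
>    print(retina_ops * scale)   # -> 1e14 ops/sec for the brain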
>
>The second estimate presupposes that we can make some optimizations.
>Maybe intelligent design and new computational elements enable us to
>do several orders of magnitude better than mother nature with the
>same number of ops. Why couldn't the same be possible with regard to
>memory requirements? There is no evidence that the brain's memory
>system is not highly redundant, so with highly reliable artificial
>(or simulated) neurons one could get away with a lot less memory. We
>simply don't know.
>
>Another problem with the multiplicative approach - multiplying
>neurons, synapses per neuron, and resources per synapse - is that it
>might turn out that a lot of the brain's computing power is in
>higher-order interactions in the dendritic trees. Signals may not
>only be added, but multiplied, and with interesting time-integration
>effects that might be used by the brain. The multiplicative approach
>neglects these potential effects, and may thus underestimate the
>brain's computing power.
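>
>A sigma-pi unit is the standard toy model of such multiplicative
>interactions: the output is a weighted sum over *products* of inputs
>rather than over single inputs. A sketch (the groupings and weights are
>made up for illustration):
>
>    # plain additive unit: weighted sum of inputs
>    def additive(x, w):
>        return sum(wi * xi for wi, xi in zip(w, x))
>
>    # sigma-pi unit: weighted sum over products of input groups
>    def sigma_pi(x, w, groups):
>        total = 0.0
>        for wi, idx in zip(w, groups):
>            prod = 1.0
>            for i in idx:
>                prod *= x[i]
>            total += wi * prod
>        return total
>
>    x = [0.2, 0.9, 0.5]
>    print(additive(x, [1.0, 1.0, 1.0]))                # -> 1.6
>    print(sigma_pi(x, [1.0, 1.0], [(0, 1), (1, 2)]))   # -> 0.63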
>
> > Using Drexler's estimates for random-access memory
> > (20MBytes/cubic micron), we can fit 305 64-KByte computers in a cubic
> > micron. The computers therefore take roughly 9.8x10^3 cubic microns.
>
>Does this take into account that there might be a need for cooling?
>
> > 11,300 cubic microns is small. It's a cube about 22.5 microns on
> > a side, about a fortieth of a millimeter, a small fraction of the
> > size of a crystal of table salt. 17,300 cubic microns (storing
> > synaptic addresses) is still small, about 25.9 microns on a side.
> > Even 34,600 cubic microns (double everything) is small, maybe 32.6
> > microns on a side, still well under the size of a crystal of table
> > salt.
>
>As an estimate of precisely how small advanced nanotech could make a
>human-equivalent computer, I think we have to take this grain of
>salt with a grain of salt, so to speak. An uncertainty interval of a
>couple of orders of magnitude does not seem unreasonable.
>
>
>Nick Bostrom
>http://www.hedweb.com/nickb n.bostrom@l


