Re: Neuron Computational Requirements?

From: Robert Bradbury (bradbury@genebee.msu.su)
Date: Thu Apr 20 2000 - 10:24:11 MDT


On Wed, 19 Apr 2000, Paul Hughes wrote:

> Billy Brown wrote:
>
> > If we know that some particular set of neurons performs a certain function,
> > and we know we can duplicate said function with X MFLOPS, we have an
> > accurate estimate of the computational demands of replacing said neurons.
> > Period.
>
> Huh? Now you are making no sense at all. What "If"?? That's the whole point,
> no one has yet been able to duplicate any said function. You're caught in
> some weird self referential tautology! You might want to clear your head
> and start reading this discussion from the beginning since the crux of this
> argument has obviously lost you.

Not so. It may not be in Moravec's Robots book, but it is detailed quite
clearly in his Mind Children book. Moravec knows what the optic nerve
does, at least to the degree of edge recognition, aggregating adjacent
similarly colored shapes into "objects", etc., and has spent
20+ years attempting to program computers to do the same thing (namely,
to navigate by visually recognizing the world around them). The fact
that they can get a van to drive itself across the country with the
robot in control 95% of the time shows that they have a very *good*
estimate of the amount of computer power that *is* required to "see"
and drive an automobile.
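
To make "edge recognition" concrete, here is a toy sketch (my own
illustration, not Moravec's code) of the kind of multiply-add
convolution an edge detector grinds through for every pixel of every
frame; multiply the per-pixel cost by frame size and frame rate and
you get exactly the sort of MIPS estimate Moravec makes:

    # A minimal sketch of edge detection via a Sobel-style convolution.
    # Illustrative only; not Moravec's code.

    def edge_strength(image):
        """image: 2-D list of grayscale values; returns a map of edge
        magnitudes (borders left at zero)."""
        gx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
        gy = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
        rows, cols = len(image), len(image[0])
        out = [[0] * cols for _ in range(rows)]
        for r in range(1, rows - 1):
            for c in range(1, cols - 1):
                sx = sy = 0
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        p = image[r + dr][c + dc]
                        sx += gx[dr + 1][dc + 1] * p
                        sy += gy[dr + 1][dc + 1] * p
                out[r][c] = abs(sx) + abs(sy)       # cheap magnitude estimate
        return out

At roughly 18 multiply-adds per pixel, a 512x512 image at 30 frames/sec
already works out to something on the order of 10^8 multiply-adds per
second, before you even start grouping edges into objects.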

My comment to Eliezer was intended to point out that much of that
now seems to have been reduced to a $6.00 chip. So the
"software" that Moravec needs all the MIPS for can in fact be
redeveloped as "hardware" that doesn't need nearly so many MIPS.
The conclusion I draw from that is that *when* you understand
exactly what it is the neurons are "computing", you can develop
software and hardware that do it more effectively. One function
the brain does (usually poorly) is "spell check". I can write
programs that do it fairly fast, and if there were enough demand
people could build hardware that does it even faster.
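
As a trivial illustration of that point (a sketch of my own, and the
dictionary path is just an assumption), a software spell checker is
nothing more than a set-membership test, which silicon does far more
cheaply than neurons do:

    # A toy spell checker: load a word list once, then each check is a
    # constant-time hash lookup. Illustrative sketch only.

    def load_dictionary(path="/usr/share/dict/words"):   # path is an assumption
        with open(path) as f:
            return set(line.strip().lower() for line in f)

    def misspelled(text, dictionary):
        return [w for w in text.lower().split()
                if w.strip('.,;:!?"()') not in dictionary]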

>
> As the 1992 study of the Purkinje demonstrates, it was an extremely difficult
> task to simulate the function of a single neuron - and this was an admittedly
> simplified simulation at that. It took an i860 processor almost 60 minutes to
> simulate a single firing! So the question still remains how many xflops it's
> going to take to simulate a *typical* neuron of the human brain not including
> any of its self-maintenance functions.
>

So? It would take less-specialized hardware & software (say the program
they are using, translated into Bliss-10, running on a PDP-10 simulator
running on an old clunking PDP-11 [I actually wrote a simulator to
do this once, simulating 36-bit words on a 16-bit machine, fun...])
*MUCH* longer than an hour to run the simulation. If they increased
the number of synapses or made the granularity of their simulation
finer (down to, say, the atomic scale), it would take much longer as well.
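
To put rough numbers on that scaling (a back-of-envelope sketch of my
own; none of these constants come from the 1992 study), the cost of a
compartmental simulation grows roughly linearly in compartments, time
steps, and flops per compartment, so refining any of them inflates the
runtime proportionally:

    # Back-of-envelope scaling for a compartmental neuron simulation.
    # All constants below are illustrative assumptions, not figures
    # from the 1992 Purkinje study.

    def sim_hours(compartments, timesteps, flops_per_compartment_step,
                  machine_flops):
        total = float(compartments) * timesteps * flops_per_compartment_step
        return total / machine_flops / 3600.0

    # e.g. 1600 compartments, 100,000 time steps, ~500 flops each,
    # on a ~60 MFLOPS machine:
    print(sim_hours(1600, 100000, 500, 60e6))   # ~0.37 hours
    # Double the compartments or halve the time step and the runtime
    # doubles; go to atomic granularity and it explodes.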

As Anders pointed out, the degree of "accuracy" you get depends on
the hardware and software you have available and how many "compromises"
you have to make to get the results in something less than a month.
If you are satisfied that the abstractions in your model are sufficiently
accurate, or you spend the time to develop very specialized hardware, then
you can run things much faster.

> So far both Robin Hanson and Anders Sandberg admit that a single transistor
> isn't going to do the job. That much we all agree on. What exactly are you
> arguing?
>

True, because a single transistor is designed as a switch, while a neuron
is a multiply-adder. So what you really need is arrays of DSP chips.
These have been built; I believe Columbia has one that is pretty large.
But even the largest (10K+ DSP chips?) are still small compared with
the number of neurons (billions). They compensate for this by being
somewhat faster (millions of operations/sec vs. 100s-1000s/sec). In terms
of transistor counts, you can probably do this with 1K to 100K transistors
per neuron-equivalent, depending on how many "bits" of accuracy you need
(neurons are probably more than 1-bit devices but probably much less than
16-bit devices).
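
In code, that multiply-adder view of a neuron is just a dot product
followed by a threshold, which is the inner loop each DSP in such an
array would run. Here is a sketch under that simplifying assumption
(the 8-bit fixed-point scale is my own guess, matching the "more than
1 bit, much less than 16 bits" range):

    # A neuron modeled as a multiply-accumulate (MAC) unit: weighted
    # sum of inputs, then a threshold. The fixed-point precision is an
    # assumption for illustration.

    SCALE = 1 << 8    # 8 fractional bits (assumed precision)

    def neuron_mac(inputs, weights, threshold):
        """inputs and weights are lists of small integers (fixed-point)."""
        acc = 0
        for x, w in zip(inputs, weights):
            acc += x * w          # the multiply-add a DSP does per cycle
        return 1 if acc >= threshold * SCALE else 0

An array of DSP chips simply runs many of these loops in parallel, one
(or a few) per neuron being modeled.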

*However*, neurons may be relatively "fuzzy" devices, meaning they
can be more effectively implemented using analog rather than digital
logic. I think the transistor count drops by 1-2 orders of magnitude
when you implement multiply-adders in analog rather than digital form.

The real issue that I believe is being "poked" at here is whether or
not various brain functions are well enough understood that you can
implement hardware+software that produces the function for fewer "IPs"
than the brain requires. I'd say there is a whole *host* of them:
arithmetic, symbolic logic (to a degree), sorting, spell-checking,
language recognition, language translation (coming slowly), message
passing and routing, character "recognition", limited reading (OCR),
most game playing (the only exception left is Go), driving automobiles,
and complex simulations of phenomena from quantum mechanics to weather.
These are all "brain functions" where software and hardware have
trumped the brain completely. The areas we have to wrestle with
now are pattern recognition, common-sense knowledge bases, language
"comprehension", and a few more.

The question becomes: in 10, 20, 30 years, *which* brain functions
will still *not* be well enough understood to implement with
top-down designed software+hardware, so that we must
resort to atomic, molecular, or "systems"-level simulations of neural
networks on the hardware available at that time to produce those
functions?

Billy's point is that it doesn't matter how many MIPS the brain
takes to do something (especially things it wasn't designed to
do well, such as arithmetic), if we can build machines that do
those things. Since we already have people "seeing" with
retinal cells replaced by chips and "walking" with
spinal neurons replaced by chip signal "routers", the question
becomes how far into the complexity of the brain this can extend
before we get lost and have to resort to simulations of the neurons.
Looking at this from the programming perspective, we are going
top-down after functional equivalents. The question is whether
we can take that all the way to the bottom, or whether at some point
we have to resort to bottom-up simulations to fill in the things
we don't understand.

To my mind, it's going to get *very* interesting if someone develops
a chip for the Calvin "hexagons" and decides to augment some of their
neocortex with a chip add-on pack. Then you could envision replacing
the whole neocortical surface with a microelectronic
implementation once functional microelectronic circuit densities
exceed functional neuronal densities. You then get a combination
electronic-and-human where the higher thoughts are being carried out in
new, faster e-hardware, while all the old interfaces to the body and the
emotional strings in the reptilian part of the brain are still being
handled by the old bio-hardware.

Robert
