Re: Neuron Computational Requirements?

From: Robert Bradbury (bradbury@genebee.msu.su)
Date: Fri Apr 21 2000 - 05:33:57 MDT


On Thu, 20 Apr 2000, Paul Hughes wrote:

> John Clark wrote:
>
> > Paul Hughes <paul@planetp.cc> Wrote:
> >
> > >psychoactive drug induced states of consciousness are directly
> > > correlated with neurotransmitter activity within the neuron itself
> >
> > No doubt.
> >
> > >That alone is proof that there is *extra* computation taking place
> > >within the neuron.
> >
> > By the way, neurons are slow and neurotransmitters are super slow, even
> > today a transistor is several hundred million times faster.
>
> [Re: Moravec & neurotransmitters/hormones]: It's not that he questions
> its role; he never considers it in the first place.

From Moravec's perspective, this is a correct approach. In his
thinking, which is a "systems" approach, he would not care, for example,
how much of the processing power was in "directly" executed instructions
and how much was in "indirectly" executed (microcoded/virtual)
instructions. All he cares about is what function those instructions
*do* and how much processing power he needs to do it on alternate
software/firmware/hardware. In this situation it is even "wrong" to use
an "instructions per second" measure, as that really only applies to
common computer architectures. Moravec's vision software could be
recoded to run on a cellular automaton or pushed down to the hardware
level (as the vision-chip engineers appear to have done). In both cases,
applying a "MIPS" measure is a questionable thing to do.

In the brain it's done with neurons and neurotransmitters and hormones;
in the cellular automaton it's done via massively parallel data passing
with a little computation; in the hardware gate arrays it's done with
networks of electrical circuits. The point isn't how many MIPS are
involved but "What does it do?" and "What is the most efficient way
to do that?".

I'd add in passing, regarding Robin's comments that this may get
better indefinitely, that at some point we have to hit some limits.
I would expect that for each type of thing the brain does, for example
"edge recognition" in vision systems, or "pattern extraction" in hearing
systems, there should be an "optimal" architecture given the physical
structure of this universe.

> My entire concern is ultimately in regards to uploading my consciousness
> and to the necessary computational *complexity* required to assure that
> no parts of cognition, no matter how subtle, are ignored due to ignoring
> the role of computation within the neuron itself.

This is clearly stated. This is what I referred to as the "bottom-up"
problem. When you don't know what the neural net is *really* doing,
you are going to have to simulate it. An accurate simulation is going to
require a number of things, including the neurotransmitter/hormone
effects. As Anders pointed out, there are clever abstractions and
assumptions neurophysiologists can make to keep neuron simulations
tractable on currently available software and hardware.
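To make that concrete, here is a toy sketch (in Python) of the kind of abstraction I mean: a leaky integrate-and-fire neuron where a single "modulator" level stands in for neurotransmitter/hormone effects by shifting excitability. Every parameter here is made up for illustration, not a physiological measurement.

```python
# Toy leaky integrate-and-fire neuron with a neuromodulator term.
# All constants are illustrative assumptions, not measured values.

def simulate(input_current, modulator=0.0, steps=200, dt=1.0):
    """Integrate one neuron; `modulator` lowers the firing threshold,
    standing in for slow neurotransmitter/hormone state changes."""
    v, tau, v_rest = 0.0, 20.0, 0.0
    threshold = 1.0 - 0.5 * modulator  # modulation changes excitability
    spikes = 0
    for _ in range(steps):
        # Euler step of the leaky membrane equation
        v += dt * (-(v - v_rest) / tau + input_current)
        if v >= threshold:
            spikes += 1
            v = v_rest                 # reset after a spike
    return spikes

baseline = simulate(0.06)                 # unmodulated state
excited  = simulate(0.06, modulator=0.8)  # same input, modulated state
```

The same input current produces more spikes in the modulated state, which is the sort of effect an accurate upload simulation would have to carry along, however it chooses to abstract it.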

> Since Hans has based all the estimates I've seen only on the neural
> network nodes themselves, I've been urging that we also take into account
> the computational processes taking place within those nodes (neurons).
> Why has almost everybody missed this point?

I believe this is where you may be tripping up. Hans says *nothing*
about the computational capacity of the brain (neurons with or without
neurotransmitters & hormones). Hans *only* says things about the MIPS
requirements on common computer architectures that are required to
*do* what the combination of things in parts of the brain *does*.

The MIPS requirements that you are looking for (accurate simulations of
physiological (realistic) neural networks) are *entirely* unrelated to
the MIPS requirements discussed by Moravec. My suspicion is that the
work being done by deGaris on the artificial brain is closer to what
you need, since his chips are designed to emulate at least aspects
of the neural network, though at a much higher rate. If those chips
have inputs that allow for things like "ease of excitability" which
would be related to the states influenced by past neurotransmitters
and hormones in the brain, then deGaris has a pretty realistic simulator
and the question becomes simply one of when the #-of-neuron-circuits
times the rate-of-neuron-ops exceeds the equivalent capacity in the brain.
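As a back-of-the-envelope illustration of that crossover question (the brain figures are commonly cited orders of magnitude; the chip figures are pure assumptions, not deGaris's actual specifications):

```python
# Rough crossover estimate: when does hardware neuron-circuit capacity
# match the brain's?  All figures are order-of-magnitude assumptions.

BRAIN_NEURONS  = 1e11  # commonly cited order of magnitude
BRAIN_OPS_EACH = 1e3   # assumed neural ops/sec per neuron
brain_nops = BRAIN_NEURONS * BRAIN_OPS_EACH  # ~1e14 neural ops/sec

CIRCUITS_PER_CHIP = 1e4  # hypothetical neuron circuits per chip
OPS_PER_CIRCUIT   = 1e6  # silicon runs each circuit much faster

chips_needed = brain_nops / (CIRCUITS_PER_CHIP * OPS_PER_CIRCUIT)
# -> 1e14 / 1e10 = 1e4 chips at these assumed figures
```

The point of the exercise is only that the silicon rate advantage trades off directly against circuit count; plug in real chip numbers and the crossover date falls out.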

The figure of merit that you should use (to avoid this kind of confusion)
is something like NOpS (neural operations per second), where NOpS would
be divided into an arithmetic component (the multiply/add function I
previously mentioned) and perhaps some memory/minor-computational
component (the neurotransmitters & hormones) that effectively change
the state of the neuron and/or function as minor information carriers
(where the major information carrier is the architecture of the network
itself).
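A toy tally along those lines (the counts and rates are assumptions chosen only to show the split between the two components):

```python
# Sketch of a NOpS (neural operations per second) tally, separating the
# arithmetic multiply/add component from a slow state-update component
# for neurotransmitter/hormone effects.  All rates are assumptions.

def nops(neurons, synapses_per_neuron, firing_rate_hz,
         state_updates_per_sec=10.0):
    """Return (arithmetic, state, total) operation rates."""
    # one multiply/add per synapse per incoming spike
    arithmetic = neurons * synapses_per_neuron * firing_rate_hz
    # slow modulatory state changes: far fewer, but still counted
    state = neurons * state_updates_per_sec
    return arithmetic, state, arithmetic + state

arith, state, total = nops(1e11, 1e3, 10.0)
# arithmetic dominates: ~1e15 vs ~1e12 at these assumed figures
```

Even with the neurotransmitter/hormone bookkeeping included, the multiply/add term dominates by orders of magnitude at these assumed rates, which is consistent with treating modulation as a minor (but not ignorable) information carrier.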

Current computer architectures are entirely unsuitable for providing NOpS.
The coming processor-in-memory architectures and "Blue-Gene" from IBM
will be much closer to what is needed. With the right instruction
sets, they should be several orders of magnitude faster than conventional
computers at providing us with NOpS.

Robert



This archive was generated by hypermail 2b29 : Thu Jul 27 2000 - 14:09:40 MDT