Re: Neuron Computational Requirements?

From: Anders Sandberg (asa@nada.kth.se)
Date: Sat Apr 22 2000 - 05:30:49 MDT


Robin Hanson <rhanson@gmu.edu> writes:

> To me the most interesting questions are how good uploading technology
> is now, how fast it is improving, and when therefore uploading may
> be feasible. The whole point of the uploading strategy to achieving
> artificial intelligence is to port the software of a human brain
> *without* understanding how it works. The simplest strategy of this
> sort is to just understand how each type of neuron works, and scan a
> brain noting which neurons are of which types and are connected to
> which other neurons.
>
> Focusing on this neuron-level strategy, I want estimates for five
> parameters. Two are static parameters:
>
> 1) Neuron Inventory -- The number of distinct types of neurons, and
> the number of each type in a typical human brain.

I don't have my textbooks at hand, but we are still a long way from a
complete list. The problem is that it is not obvious whether to
categorize neurons as distinct types or not
(http://mickey.utmem.edu/papers/NUMBER_REV_1988.html#3 has a good
explanation) - some vary continuously along various dimensions, and
subtle biochemical differences might be hard to detect. Still, if the
grasshopper represents the largest possible number of unique neurons
(~200,000), we have an upper bound which is almost certainly much
larger than the number found in the human brain.

I think a likely number of functionally distinct types is ~1000. I'll
check with my copy of Shepherd, _The Synaptic Organization of the
Brain_ (it usually has good numbers).

[ Robert mentioned that maybe 25% of the genome is involved in brain
development. I have seen figures of around 40,000 genes expressed in
brain development, based on:

@Article{Adams93,
  author  = {M. D. Adams and M. B. Soares and A. R. Kerlavage and C. Fields and J. C. Venter},
  title   = {Rapid c{DNA} sequencing (expressed sequence tags) from a directionally cloned human infant brain c{DNA} library},
  journal = {Nat Genet},
  year    = 1993,
  volume  = 4,
  pages   = {373--380}
}

@Article{Sutcliffe88,
  author  = {J. G. Sutcliffe},
  title   = {m{RNA} in the mammalian central nervous system},
  journal = {Annu Rev Neurosci},
  year    = 1988,
  volume  = 11,
  pages   = {157--198}
}
]

40,000 genes are enough to create a near-infinite number of unique
neurons using self-organisation and morphogen gradients, but it is
unlikely that evolution has favored that in mammals; instead, many of
these genes (besides the metabolic housekeeping ones) likely control
gross connectivity - which neuron type connects to which other, where
and how.

> 3) Resolution required -- The spatial/chemical resolution of a scanning
> technology that would be required to sufficiently reliably distinguish
> between the types of neurons, and to distinguish which neurons connect
> to which others via which kinds of synapses. (Assume one is repeatedly
> slicing and scanning a cryogenically frozen brain.)

We need to identify synapses, which means a resolution on the order of
~1 micron. At this resolution the morphological differences can
probably be fully characterized, but we also need the ability to
identify the receptor types on the cell surfaces and possibly some of
the intracellular chemistry. I don't think we need much finer
resolution, since below that we get down into the region where noise
and diffusion play a significant role.
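Just to get a sense of scale, here is a little back-of-envelope Python
sketch of the raw data volume implied by scanning at that resolution.
The brain volume and bytes-per-voxel figures are my own assumptions
for illustration, nothing rigorous:

# Back-of-envelope data volume for scanning a whole brain at ~1 micron.
# Brain volume and bytes-per-voxel are assumptions for illustration.

brain_volume_cm3 = 1400          # ~1.4 liters, a typical adult brain
voxel_size_um = 1.0              # target resolution discussed above
bytes_per_voxel = 2              # assume ~2 bytes of label/intensity data

voxels_per_cm3 = (1e4 / voxel_size_um) ** 3   # 1 cm = 10^4 microns
total_voxels = brain_volume_cm3 * voxels_per_cm3
total_bytes = total_voxels * bytes_per_voxel

print(f"voxels: {total_voxels:.2e}")             # ~1.4e15 voxels
print(f"raw data: {total_bytes / 1e15:.1f} PB")  # a few petabytes, uncompressed

So even before worrying about the chemistry, the raw morphological
scan is on the order of petabytes.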

> Three parameters are dynamic, about *rates* of progress of these:
>
> 1) Neuron types modeled -- The fraction of neuron types for which
> we have reasonable computational models at *any* level of complexity.
> That is, where we have run such models and seen that they correspond
> reasonably closely to the behavior of actual neurons of that type.

I think this can be estimated using a bibliometric study.

http://theta.hippo.hscbklyn.edu/fox/Neuron.User.Manual/intro.html has
an interesting diagram (power.gif) which I think has held true since
it was made. Knowledge of channels has grown enormously, but I think
we will see a slowing down over time since the number of channel types
is finite. Computer hardware has grown faster than the abilities of
simulation software, which in turn has grown much faster than our
knowledge of the detailed morphology.
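If one wanted to make the bibliometric estimate concrete, a minimal
sketch would be to fit a growth curve to the cumulative number of
neuron types with published models and extrapolate. The year/count
data below are invented placeholders, not survey results:

# Minimal sketch of a bibliometric trend estimate: fit exponential
# growth to the cumulative number of neuron types with published
# computational models. The data below are invented placeholders.
import numpy as np

years = np.array([1980, 1985, 1990, 1995, 2000])
types_modeled = np.array([2, 5, 12, 30, 70])   # placeholder counts

# Fit log(count) = a*year + b, i.e. assume exponential growth for now.
a, b = np.polyfit(years, np.log(types_modeled), 1)
print(f"doubling time: {np.log(2) / a:.1f} years")

# Extrapolate to ask when ~1000 types (the inventory guess above)
# would be covered.
year_full = (np.log(1000) - b) / a
print(f"all ~1000 types modeled around: {year_full:.0f}")

A logistic fit would probably be more honest, given the saturation I
expect once the finite set of channels and cell types is covered.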

> 2) Average CPU/RAM per neuron -- A weighted average across neuron
> types of the CPU/RAM requirements for simulating each neuron type.
> The weights should be according to the fraction of each type of
> neuron in a human brain.

I think this can be estimated roughly for the detailed models (like
the Purkinje model or Traub's pyramidal model), although I don't have
the numbers at hand. There is likely a multiplicative states x
compartments demand on memory, where models with many compartments
will produce a heavier load than morphologically simple neurons. Those
Purkinje cells (one of the most heavily branching types around) will
likely be heavy, while we can hope the granule cells are lighter. CPU
estimates are harder, as there are a lot of clever simplifications and
tricks that can be done (one example is Alain Destexhe's trick of
using "canned" action potentials instead of exactly calculated (very
heavy) ones, concentrating instead on the dynamics when the neuron is
not firing).
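The states x compartments idea is easy to turn into numbers. The
compartment and state counts below are hypothetical stand-ins, not
figures from the actual Purkinje or Traub models:

# Rough per-neuron memory from a states-times-compartments model.
# Compartment and state counts are assumptions for illustration.

BYTES_PER_STATE = 8          # one double-precision variable per state

def neuron_memory_bytes(compartments, states_per_compartment):
    """Memory for the state vector of one multi-compartment neuron."""
    return compartments * states_per_compartment * BYTES_PER_STATE

# Hypothetical figures: a heavily branched Purkinje cell vs. a
# morphologically simple granule cell.
purkinje = neuron_memory_bytes(compartments=1600, states_per_compartment=10)
granule = neuron_memory_bytes(compartments=5, states_per_compartment=6)

print(f"Purkinje: ~{purkinje / 1e3:.0f} kB, granule: ~{granule} bytes")

# The weighted average Robin asks for is then just
# sum(fraction_i * memory_i) over all types, times ~1e11 neurons.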
 
> 3) Scanning costs -- The time/dollar cost per area scanned of scanning
> at the resolution required for uploading. Costs of scanning at
> other resolutions may be interesting if time trends in them can be
> more accurately estimated, and if one can estimate a scaling
> relationship between the costs of scanning at different resolutions.

I think we might see a nice decrease here. I have met Bruce
H. McCormick from Texas A&M University at the CNS conferences, and he
is proposing an automated microtome linked to a CCD camera that would
be able to slice and scan tissue at a very high rate for a quite
modest price. Of course, his proposal would currently only capture the
morphology of single dyed neurons, but after talking to him I got the
impression that the scanning costs can be decreased quite a bit. It
might not scale all the way down to 1 micron for an entire brain, but
it looks positive.
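To get a feel for the throughput such a device would need, here is a
toy calculation. The slice thickness, section size, pixel size and
camera rate are all my own assumptions, not the specifications of
McCormick's proposed instrument:

# Toy throughput estimate for slice-and-scan imaging of a whole brain.
# All figures are assumptions for illustration.

brain_height_mm = 150          # rough vertical extent of a human brain
slice_um = 1.0                 # slice thickness matching ~1 micron target
section_area_mm2 = 100 * 150   # rough cross-sectional area per slice
pixel_um = 1.0
pixels_per_s = 50e6            # assumed sustained camera/readout rate

n_slices = brain_height_mm * 1000 / slice_um
pixels_per_slice = section_area_mm2 * (1000 / pixel_um) ** 2
total_seconds = n_slices * pixels_per_slice / pixels_per_s

print(f"slices: {n_slices:.0f}")
print(f"scan time: {total_seconds / 3.15e7:.1f} years at this rate")

At these made-up rates a whole brain takes on the order of a year of
continuous scanning, so either the camera rate or the degree of
parallelism has to improve a fair bit.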

> The progress estimate of neuron types modeled help us estimate
> when it will be possible to upload at any cost, while the other
> estimates (together with estimates of how fast CPU/RAM costs fall)
> help us estimate how fast the cost of uploading is falling with time.
>
> Anders, tell me if I'm wrong, but it strikes me that we may well have
> enough experience modeling neurons to start to estimate these parameters.
> If so, constructing such estimates of when uploading will be feasible
> and how fast costs will fall seems a very high priority to me.

I think we can make a reasonable estimate of this. In fact, since I am
thinking of giving a talk on uploading at TransVision in London this
summer, it seems like a good exercise to do. I'll see what numbers I
can dig up.
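As a sketch of how the pieces would combine once real numbers are in
hand - every figure below is a placeholder assumption, not an estimate
I am actually making yet - the feasibility date falls out of the
per-neuron requirements divided by the falling hardware cost:

# Sketch of combining the estimates: when does simulating a whole
# brain become affordable, given per-neuron requirements and falling
# hardware costs? Every number below is a placeholder assumption.

NEURONS = 1e11               # neurons in a human brain
FLOPS_PER_NEURON = 1e6       # assumed average cost of one neuron model
DOLLARS_PER_GFLOPS = 1000.0  # assumed hardware price in the year 2000
HALVING_YEARS = 1.5          # assumed halving time of that price
BUDGET_DOLLARS = 1e9         # assumed acceptable project budget

needed_gflops = NEURONS * FLOPS_PER_NEURON / 1e9

year = 2000
while (needed_gflops * DOLLARS_PER_GFLOPS
       * 0.5 ** ((year - 2000) / HALVING_YEARS)) > BUDGET_DOLLARS:
    year += 1

print(f"real-time whole-brain simulation within budget around {year}")

The interesting part will of course be replacing the placeholders with
the neuron-model and scanning estimates discussed above.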

-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y


