On Fri, 1 Oct 1999, Robin Hanson posted a note about neuroscience developments:
I think we may be getting University Press Release "scammed" again (just like the U. of Wisconsin post about using gene chips to measure aging being billed as a "novel idea").
> FOR RELEASE: 30 SEPTEMBER 1999 AT 14:00 ET
> discovered that information can be stored in the brain with very high
> spatial density on the surface of every single neuron (Science 1/Oct/1999).
> A research team at the Max Planck Institute for Psychiatry applied the new
> method to investigate the so-called "long-term depression" (LTD), a very
> important molecular mechanism in the brain. Actually, mechanisms like
> long-term depression and long-term potentiation (LTP) are regarded by many
> researchers as the basis for memory formation in the brain.
I think we need a quick seminar from Anders on the possible differences in how the brain uses LTD & LTP.
> As the UV-laser stimulation allowed the release of the neurotransmitter
> glutamate from an inactive form of caged glutamate in a very small region on
> the neuron, the researchers could investigate how big the region on the neuron
> was that experienced LTD. They found that this region was no bigger than the
> resolution of their method, i.e., only a few micrometers.
This *isn't* news. The synapses have to be that size: cell bodies range from 5 to 50 micrometers across, so the synapses have to be much smaller.
> Thus, even single synapses may undergo long-term depression and each
> single synapse could be used to store information separately from its
> neighbour. One could compare this possibility for information storage
> in the brain with the "high density information storage" on a CD-ROM.
This *isn't* news either; I've always thought of synapses as having individual weights or strengths. If you compare the information density of a synapse with that of the pits on a CD-ROM or the magnetic domains on a hard disk on an ATOMIC-VOLUME basis, I bet the synapse loses. It certainly loses on information transmission time, because the disks are read at close to the speed of light, while in biology you have to wait for the molecules to diffuse across the synaptic cleft. The real problem is that we haven't designed read and write mechanisms small enough to match the size scale of the data being stored.
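To make the "synapse loses" bet concrete, here is a back-of-envelope sketch. All figures are rough, order-of-magnitude assumptions I'm supplying (atomic density of condensed matter, a ~1 micrometer synaptic bouton storing a handful of bits, standard CD pit dimensions, a ~20 nm synaptic cleft, and a typical free-diffusion coefficient for glutamate), not numbers from the press release:

```python
# Back-of-envelope comparison: atoms per stored bit, and readout latency.
# Every number below is an order-of-magnitude assumption, not a measurement.

ATOMS_PER_M3 = 1e29  # rough atomic density of water or polycarbonate

# Synapse: a bouton ~1 micrometer across; assume its weight encodes ~5 bits
# (i.e., a few dozen distinguishable strength levels).
synapse_volume = (1e-6) ** 3               # m^3
synapse_bits = 5
atoms_per_bit_synapse = synapse_volume * ATOMS_PER_M3 / synapse_bits

# CD-ROM pit: roughly 0.8 x 0.5 x 0.1 micrometers, about one bit per feature.
pit_volume = 0.8e-6 * 0.5e-6 * 0.1e-6      # m^3
atoms_per_bit_pit = pit_volume * ATOMS_PER_M3 / 1

# Latency: glutamate diffusing across the ~20 nm synaptic cleft,
# t ~ x^2 / (2D) in one dimension.
cleft_width = 20e-9                         # m
D_glutamate = 3e-10                         # m^2/s, approximate free diffusion
t_diffusion = cleft_width ** 2 / (2 * D_glutamate)

print(f"atoms per bit, synapse: {atoms_per_bit_synapse:.1e}")
print(f"atoms per bit, CD pit:  {atoms_per_bit_pit:.1e}")
print(f"diffusion time across cleft: {t_diffusion:.1e} s")
```

Under these assumptions the synapse burns on the order of 2e10 atoms per bit versus roughly 4e9 for a CD pit, and even the fastest step of chemical transmission (diffusion alone, ignoring vesicle release and receptor kinetics, which add most of the real ~1 ms synaptic delay) costs close to a microsecond.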
My take -- sound and fury signifying nothing.
The post about the advantages of neural nets encoding things temporally (or temporally and with amplitudes) says *much* more about the information processing density.