Re: NEURO: Advanced neurons?

Anders Sandberg
Tue, 18 Mar 1997 14:25:45 +0100 (MET)

On Mon, 17 Mar 1997, Eugene Leitl wrote:

> On Mon, 17 Mar 1997, Anders Sandberg wrote:
> > I don't know how shrinkable the brain is, but being the brother/collaborator
> > of an Amiga demo programmer I know that a clever hack can achieve plenty
> I still own a fully operational Amiga (A2000, this mail is being written
> on it),

Why am I not surprised? :-)

> and I have sure seen miraculous demos. However, we are talking not
> about a radical algorithm (as a raycasting renderer vs. a polygon one)
> written by a human hacker, we are talking about a long-term evolutionarily
> optimized connectionist hardware, which seems to operate at the threshold
> of what is at all possible with biological structures (speed, accuracy,
> etc.).

We'll see. Finding out if a given algorithm is maximally compressed seems
to be very hard (most likely NP-hard or even undecidable).

> What I _wanted_ to convey, is tanstaafl; that there is no free lunch, that
> there is a minimal computational work to be done to simulate a given
> physical system realistically. The harder, the smarter (= more complex)
> the system is. And that minimal threshold may lie quite high for such
> complex objects as a mammal brain.

I'm not entirely convinced about this. Remember that the brain is a messy
legacy system, and that biological neurons have plenty of biological
overhead a simulation would not need to reproduce.

> Human equivalents the size of a sugar
> cube, running at speeds >10^6 of realtime seem to reside firmly in the
> realm of science fiction, not even very good science fiction.

"Permutation City - ten million people on one chip" to quote the dust
jacket of the latest edition. I think the compression problem makes for
very good sf, actually.

> > parameters in each synapse (I'll chunk the synapses and dendrites they sit
> > upon into one unit here), we get 10^17 state variables. Nonlinearity,
> > dendritic computation and similar stuff just makes calculating the next
> > state more heavy, and doesn't increase the storage needs much. Each
> > update, (say) every millisecond might depend on the other states in each
> > neuron + signals from connecting neurons, giving around 10^26 flops.
> > Diffuse modulation doesn't increase the estimate much. Of course, this is
> > a very rough estimate and should be taken with a large grain of salt.
> How large large? Please give the exact weight, and the error range ;)
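For reference, the arithmetic behind the quoted estimate can be sketched
mechanically. This is a rough Python sketch; the figures of 10^11 neurons,
10^4 synapses per neuron and 100 parameters per synapse are my assumptions,
chosen to be consistent with the quoted totals of 10^17 state variables and
10^26 flops:

```python
# Back-of-envelope sketch of the quoted brain-simulation estimate.
# Assumed inputs (not all given in the quote above): 10^11 neurons,
# 10^4 synapses per neuron, ~100 state parameters per synapse.
NEURONS = 1e11
SYNAPSES_PER_NEURON = 1e4
PARAMS_PER_SYNAPSE = 1e2

# Total state variables: 10^11 * 10^4 * 10^2 = 10^17, matching the quote.
state_vars = NEURONS * SYNAPSES_PER_NEURON * PARAMS_PER_SYNAPSE

# Each state is updated every millisecond (10^3 updates per second), and
# each update reads on the order of the neuron's other states plus incoming
# signals -- call it ~10^6 dependencies (10^4 synapses * 10^2 parameters).
UPDATES_PER_SECOND = 1e3
DEPS_PER_UPDATE = SYNAPSES_PER_NEURON * PARAMS_PER_SYNAPSE

flops = state_vars * UPDATES_PER_SECOND * DEPS_PER_UPDATE
print(f"{state_vars:.0e} state variables, ~{flops:.0e} flops")
# prints "1e+17 state variables, ~1e+26 flops"
```

Of course, the per-update dependency count is the softest number here; it
only needs to be within a few orders of magnitude for the point to stand.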

Assume an ordinary statement requires one grain of salt (around 1 mm^3),
and contains around 20 words. The above paragraph contains 105 words in
its original form, so we get 6 mm^3 of salt (since it was a large grain
of salt I round upwards).
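In the same spirit, the salt dosage above can be computed mechanically (a
joke sketch in Python; the 20-words-per-grain rate and the 1 mm^3 grain
volume are the assumptions stated in the paragraph above):

```python
import math

# One ordinary statement (~20 words) requires one grain of salt (~1 mm^3).
WORDS_PER_GRAIN = 20   # words covered by one grain of salt
GRAIN_VOLUME_MM3 = 1   # volume of an ordinary grain

def salt_needed(word_count: int) -> int:
    """Return mm^3 of salt needed for a statement of the given length.

    Rounded upwards, since a *large* grain of salt was requested.
    """
    grains = math.ceil(word_count / WORDS_PER_GRAIN)
    return grains * GRAIN_VOLUME_MM3

print(salt_needed(105))  # the 105-word paragraph -> 6
```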

Anders Sandberg Towards Ascension!
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y