Re: The Joys Of Flesh

Eugene Leitl (Eugene.Leitl@lrz.uni-muenchen.de)
Thu, 12 Sep 1996 13:59:27 +0200 (MET DST)


John, sorry it took me so long to reply (I have a job to do here, and
reading/posting can easily turn into a full-time job. Whoever sets out to
install Mallinckrodt's DICOM should never underestimate his adversary.
So far, DICOM's score is better than mine. But we shall see...).

On Sat, 7 Sep 1996, John K Clark wrote:
> On Fri, 6 Sep 1996 Eugene Leitl Eugene.Leitl@lrz.uni-muenchen.de Wrote:
>
> [Moravec's "Mind Children" assumptions on neural circuitry arbitrary ]
>
> I don't think they were arbitrary, but I grant you some things were just
> educated guesses, it turns out that he almost certainly overestimated the

Guesses, right. Whether they were educated, dunno. Moravec seems to live
in an alternative reality: he keeps seeing linear log plots for computer
performance where I see none. We had them for some time in the past, yes,
but we have now been stuck in a saturation zone for years. We can get
beyond it if we assume a packaging revolution is imminent, but that would
only mean another saturation barrier springing up a few years later.

This has, admittedly, nothing to do with wet neuroscience, but I think he
is notoriously overoptimistic, possibly indiscriminately so in most areas.
(Btw, both Moravec & Tom Ray will speak on AL topics at the Medientage in
Munich. I will post a summary to the list, should anything novel crop up
there.)

> storage capacity of the human brain. As for it's information processing

I have no idea what a human brain can store. I can estimate how much
storage, _under certain optimistic assumptions_, a fairly accurate
representation of the human brain would need. The amount of storage
estimated clearly rules out any 2d semiconductor implementation. Too
many of these darned bits :(

> capacity, he could be off by quite a lot and it would change the time the
> first AI is developed by very little. The speed of computation has increased

I know this argument. Sounds good at first...

> by a factor of 1000 every 20 years. It might not continue at that frantic
> pace but ...

This is exactly my own argument: we don't have that pace even now. I have
not modelled this, and my maths is notoriously weak, but I think saturation
makes itself felt especially bitterly if it sets in during the early
stage of the exponential. The linear log plot premise ignores several
transition points where novel technologies have to be invented &
introduced into the market.

Alas, one cannot predict when a technological discontinuity will appear,
and on the way to the human equivalent we will need several of them: a
packaging revolution, a generic paradigm switch to maspar (massively
parallel) systems, radically new neural accelerator chip architectures,
and finally the advent of molecular circuitry. I dunno, this sequence
defines a pretty rigorous filter.

We will surely get there at some point, but trying to estimate how long it
will take us is somewhat pointless using today's knowledge.

> >Considering about 1 MBit equivalent for storage for a single
> >neuron
>
> I think that's thousands of times too high.

Let's see: a neuron has roughly 10k synapses. Let's say each neuron's
body has 16 bits of dynamics and a 32 bit ID (these numbers are quite
arbitrary, and barely register in the estimate next to the contribution
of the synapses).

Each synapse has roughly 8 bits of dynamics. Let's say it connects to a
neuron with a 32 bit ID plus an 8 bit neuron class ID. (That can be seen
either as too low, since it allows only 4 Gneurons to be addressed, or as
too high, since it ignores hypergridish connectivity, which is mostly
local and would allow relative addressing.) Let's add 8 bits for signal
delay, even if we might not need it, and 8 bits for synapse class ID,
which might seem a bit high. That gives us 8+32+8+8+8 = 64 bits/synapse.
Having 10k of them, this means about 640 kBits/neuron, not quite the
1 MBit (1000 kBits) I claimed, but within a factor of two of it, so the
figure was at least in the right ballpark.
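
To make the arithmetic explicit, here is a throwaway Python sketch (every
field width in it is just the arbitrary assumption above, nothing
authoritative):

  # Back-of-the-envelope storage estimate; all field widths are the
  # arbitrary assumptions discussed above.
  synapses_per_neuron = 10 * 1000
  bits_per_synapse = 8 + 32 + 8 + 8 + 8   # dynamics, target ID, neuron class, delay, synapse class
  bits_per_neuron = synapses_per_neuron * bits_per_synapse  # soma bits are negligible next to this
  neurons = 100 * 10**9
  total_bits = neurons * bits_per_neuron
  print(bits_per_synapse)   # 64
  print(bits_per_neuron)    # 640000, i.e. about 640 kbit/neuron
  print(total_bits)         # 6.4e16 bits for the whole brain

Several times 10^16 bits is also why I say 2d semiconductor storage is out.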

In any case, my brain faddled: what I actually had in mind was not MBits,
but MTransistors (really, this is not just a lame excuse).

A DRAM cell needs 1 transistor + 1 capacitor; an SRAM cell needs 4 to 6
times as many transistors. DRAM cells are too slow, and moreover the
neural chip's architecture (it would lead too far to explore it here in
detail) needs a large number of few-bit parallel comparators, so we'd
rather take the 6x estimate, which is more than conservative. That gives
roughly 3.8 MTransistors/neuron for the synapse storage alone. Moreover,
one needs address decoder circuitry, an adder tree, diverse lookup tables
or hardwired functions in random logic (different for different neuron
types), and a crossbar or perfect-shuffle router to send signal messages
to neighbour dies containing other neurons & to receive inputs, etc.
These are shared resources, to be sure, but one cannot assume a huge die,
since low yield would render it unsuitable for wafer-scale integration
(WSI), so they may well add another 0.5-1 MTransistor/neuron, which is
not exactly little. TI's new ASICs are supposed to have 125 MTransistor
complexity, at surely abysmal yield (the dies are huge).
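
Continuing the sketch for the transistor count (the 6 transistors/bit and
the ~1 MTransistor of shared overhead per neuron are assumptions, as
discussed above):

  # SRAM implementation of the per-neuron synapse storage, 6 transistors/bit.
  bits_per_neuron = 640 * 1000
  transistors_per_bit = 6                  # conservative SRAM figure, see above
  storage_transistors = bits_per_neuron * transistors_per_bit  # 3.84e6
  shared_overhead = 1 * 10**6              # decoders, adder tree, LUTs, router: a guess
  per_neuron = storage_transistors + shared_overhead
  print(per_neuron)                        # ~4.8 MTransistors/neuron
  print(125 * 10**6 // per_neuron)         # only ~25 neurons per 125 MTransistor die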

>
> >8 bits/synapse,
>
> A synapse may well be able to distinguish between 256 strength levels,
> perhaps more, Moravec said 10 bits/synapse, and in 1988 when he wrote his
> book it was reasonable to think that if you multiplied 10 bits by the number

Whether 8 or 10, it is an arbitrary number. The dynamics is certainly
larger than 1 bit, though. Moreover, there are different synapse types.

> of synapses in the brain you could get a good estimate of the storage
> capacity of the brain. It is no longer a reasonable assumption

Why?

> The most important storage mechanism of memory is thought to be Long Term
> Potentiation ( LTP). It theorizes that memory is encoded by varying the

Yeah, but I was not talking about memory storage, just computation.
(Actually, the two are indistinguishable: both are patterns of firing
activity. The distinction between storage & computation does not make
sense from the computational physics perspective, which demands maximal
locality.) It's an artefact of the early days of yore, erm, core: ferrite
core was dumb, the CPU smart. Now there are smart switches everywhere,
and no need to make the distinction anymore.

> strength of the 10^14 synapses that connect the 10^11 neurons in the human
> brain. It had been thought that LTP could be specified to a single synapse,
> so each synapse contained a unique and independent piece of memory, we now
> know that is not true. In the January 28 1994 issue of Science Dan Madison
> and Erin Schuman report that LTP spreads out to a large number of synapses on
> many different neurons.

The Hamiltonian is encoded by the physical synapse machinery. A
postsynaptic neuron is thought to sense a neurotransmitter release: a
spike triggers neurotransmitter vesicle (packet) release into the
synaptic cleft, the neurotransmitter diffuses & docks onto diverse
receptor channels, etc. etc.

If the synapse is not there, the Hamiltonian is different, so we wind up
in a slightly different region of the phase space, the shorthand for the
state of the current self. Failure of synapses, as well as cell death, is
a common event, so Darwinian evolution has presumably shaped the function
the Hamiltonian encodes to quench such tiny deviations, to be robust. But
not infinitely so. The hypervoxels in persona space are defined by this
Hamiltonian. Delete synapses, make the Hamiltonian fuzzy, and you will
fail to distinguish between two persons. It might be a pure ego thing,
but I'd rather remain me, not someone else. (At first, at least.)

> >a 10 k average connectivity
>
> A consensus number, although I have seen estimates as high as 100 k.

The 100 k figures mostly refer to convergent, not divergent connections,
afair. Btw, even time-sliced silicon has trouble achieving high
connectivities. The most natural approach would seem to tie connectivity
to bus width, which yields only a few hundred even on die-local,
kBit-wide buses.

> >and about 1 kEvents/synapse,
>
> Brain neurons seldom go above 100 firings a second, and at any one time
> only 1% to 10% are firing in the brain.

Yeah, but since the kind of coding used is unknown, I'd like to assume the
worst case. Spiking systems seem to perform better than smooth ones, btw.
The sampling theorem requires sampling at about twice the signal
bandwidth. Since this is hardware, I have to provision the worst-case
spiking bandwidth for each channel.
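
For concreteness, the gap between John's typical-activity figures and my
worst-case assumption (all numbers are taken from this thread, so treat
them as rough):

  # Synaptic event rates: worst case vs. typical activity.
  neurons = 100 * 10**9
  synapses_per_neuron = 10 * 1000
  worst_case = neurons * synapses_per_neuron * 1000     # 1 kEvents/s per synapse: 1e18 events/s
  typical = neurons * synapses_per_neuron * 100 * 0.05  # 100 Hz firing, ~5% of neurons active: 5e15
  print(worst_case / typical)                           # ~200x headroom the hardware must provide
  # Nyquist: sampling a ~1 kHz worst-case channel takes about 2 kHz per channel.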

>
> >assuming about 100*10^9 neurons,
>
> Sounds about right.
>
> >Moreover, this is _nonlocal_ MOPS
>
> I assume you're talking about long range chemical messages sent by neurons
> to other neurons, that would be one of the easier things to duplicate.

No, this is referring to data locality in simulation. Access
latency, and such. Chemical broadcast stuff (neuromodulation) is trivial
to simulate, agreed.

> [ diverse things I agree with snipped ]
>
> >I'd rather run at superrealtime, possibly significantly so.
> >100* seems realistic, 1000* is stretching it. Nanoists claim
> >10^6*, which is bogus.
>
> I think 10^9 would be closer to the mark. The signals in the brain move at
> 10^2 meters per second or less, light moves at 3 X 10^8 and nano-machines

Yes, but diamondoid rods move slower, as do electrical signals in real
wires. And you have to multiplex the raw bandwidth, which is admittedly
very high; you can lose 3 (and more) orders of magnitude to multiplexing.

> would be much smaller than neurons so the signal wouldn't have to travel
> nearly as far. Eventually the algorithms and procedures the brain uses could

Yes, they are small. Yet their connectivity is much smaller, and hence
they have to send packets over the same local links, in a (possibly
longish) sequence of hops, to reach nodes to which they have no direct
connection. Each link can send only one packet at a time, albeit quickly.
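
A crude way to see where the orders of magnitude go (the signal velocities
are John's figures; the multiplexing penalty is my guess, and the distance
advantage he mentions is ignored here):

  # Speedup from signal velocity alone, minus the multiplexing penalty.
  brain_signal_speed = 1.0e2    # m/s, neural signals (John's figure)
  em_signal_speed = 3.0e8       # m/s, light; rod logic and RC-limited wires are far slower
  raw_speedup = em_signal_speed / brain_signal_speed    # 3e6 before any distance advantage
  multiplex_penalty = 1.0e3     # 3+ orders of magnitude lost to shared local links (assumed)
  print(raw_speedup / multiplex_penalty)   # ~3e3, much closer to 100-1000x than to 10^9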

> be optimized and speed things up even more, but this would be more difficult.

What's this, again? Optimization? I am assuming one cannot collapse the
functionality to any noticeable degree without simulating all the
connectivity, i.e. that a minimal threshold of computation has to be done
(no free lunch), and that this threshold is pretty high.

> >no groundbreaking demonstration of such reversible logics
> >has been done yet.
>
> Not so, reversible logic circuits have been built, see April 16 1993 Science.

Notice "groundbreaking demonstration". The same applies to quantum
cryptography: it works, yes. But it's not worth the trouble.

> They're not much use yet because of their increased complexity, and with
> components as big as they are now the energy you save by failing to destroy
> information is tiny compared to more conventional losses. It will be
> different when things get smaller. Ralph Merkle, the leader in the field is
> quoted as saying " reversible logic will dominate in the 21'st century".

One hopes the effect will be big enough to be usable in future switches.
Can you expand a bit on what energy gains can be expected from them?

> >It bears about that many promise as quantum cryptography and
> >quantum computers, imo. (Very little, if any).
>
> There is not the slightest doubt that quantum cryptography works, and not
> just in the lab. Recently two banks in Switzerland exchanged financial

Yes, but what does it offer, and at what cost? What advantages does it
have over vanilla cryptography? And what about photon message spoofing
triggered by stimulated emission: can that be detected in principle?

> information over a fiber optic cable across lake Geneva using Quantum
> Cryptography. Whether it's successful in the marketplace depends on how
> well it competes against public key cryptography, which is easier to use and
> probably almost as safe if you have a big enough key. I don't want to talk
> about quantum computers quite yet, a lot has been happening in the last few
> weeks and I haven't finished my reading.

Your other post on QC has been very illuminating. Do you think these
theoretical values are achievable in reality, without assuming
unrealistically high demands on fabrication precision (structure
geometry, alignment, etc.)?

> >One tends always to forget that atoms ain't that little, at
> >least in relation to most cellular structures.
>
> It's a good thing one tends to forget that, because it's not true. An average
> cell has a volume of about 3 X 10^12 cubic nanometers, that's 3 thousand

Yes, but pay attention: "cellular structures", not cells. (An E. coli is
roughly 1x1x2 um in size; some bacteria are much smaller. I forget the
actual numbers, but it may be as little as 0.1x0.1x1 um.) A lipid bilayer
is a structure. An enzyme is a structure. A ribosome is a structure. The
C-C bond length is about the same, whether in protein or diamond. Look at
an average virus on a workstation screen: what zoom factor do you need
before you can discern individual atoms? Not a very high one. I'm claiming
you can't build a CAM cell in a cube smaller than 0.1 um (100 nm) on edge
by means of weak nanotech (molecular circuitry embedded in a protein
matrix). I think you can do better with Drexlerian nanotech by one order
of magnitude at most, which would mean 10 nm, i.e. less than 100 atoms
per edge. Not so very much, I am afraid.

> billion. Just one cubic nanometer of diamond contains exactly 176 carbon
> atoms.

Yes, but how many operations/s can you get out of those atoms? How many
bits can be stored in that volume? One bit? (Probably much less.) But
storing alone is pointless; the distinction between storing and
processing must fall, computational physics demands it. How big are your
(smart) bits, then? Much larger, I fear.
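
Counting atoms in those volumes, using John's 176 carbon atoms per cubic
nanometre of diamond (the cell edge lengths are my guesses from above):

  # Atoms inside the "smart bit" volumes discussed above.
  atoms_per_nm3 = 176                  # diamond, John's figure
  weak_cell = 100**3 * atoms_per_nm3   # 100 nm cube (weak nanotech): ~1.8e8 atoms
  strong_cell = 10**3 * atoms_per_nm3  # 10 nm cube (Drexlerian): ~1.8e5 atoms
  print(weak_cell, strong_cell)
  # Even the optimistic cell is ~10^5 atoms per storing/processing element,
  # a long way from one atom per bit.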

> >>No known physical reason that would make it
> >>[Nanotechnology] impossible.
>
> >No known physical reasons, indeed.
>
> I don't think there are any physical reasons why strong Nanotechnology is

What is physics and what is technology are virtually the same thing; the
distinction is fuzzy at best nowadays. A lot of constraints, all of them
"engineering", applied in sequence can make the gadget unviable.
Operability windows, and such. Now is this physics, or is this
engineering? Who cares, as long as the original demand, a functional
gadget, is not met.

I think my perspective is the engineer's, not the pure scientist's.

> impossible, that's why I don't put it in the same category as faster than
> light flight, anti-gravity, picotechnology or time travel. If you disagree

Apropos picotech, see the post on femtotech I forwarded to the >H list ;)

> with my statement then I want to know exactly what law of physics it would
> violate, not all the reasons that would make it difficult. I already know
> it's difficult, that's why we haven't done it yet.

Chemical reactions are governed by QM, right? Mechanosynthesis is a
chemical reaction, right? Should mechanosynthesis be the bottleneck,
ruling out autoreplication, the gadget is unviable, right? Now is this
engineering or science?

> >Just look at a diamondoid lattice from above, and look at
> >the periode constant. When zigzagged, C-C bond are a lot
> >shorter. Sterical things.
>
> If something is pretty rigid, like diamond, it's steric properties are the
> same as it's shape properties, at least to a first approximation. Often
> steric difficulties can be overcome just be applying a little force and
> compressing things a little. Naturally it is vital for a Nanotechnology

You want to deposit/abstract carbon atoms from above. What your tip sees
is a projection of the zigzagged sheet onto the plane. Now, I want to
deposit a perfect diamondoid lattice. First you abstract your hydrogen,
then _another_ tip (is this compatible with the 1*10^6 atoms/s estimate?)
must deposit your reactive moiety before surface-adsorbed species (which
are highly mobile) bang into your radical spot, possibly quenching it.

How much precision do I need? 100 pm? Sorry, this doesn't correlate with
my chemical intuition.

> engineer to remember that no molecule is ever perfectly rigid and at very
> short distances anything will look soft and flabby. Drexler is not ignoring

If I want to do computation with diamondoid rods, they are pretty rigid
on a small scale. I can't push with very long, thin rods, however; I have
to pull them. This is one of the reasons why you can't have the gigantic
connectivity you need for mind uploading directly in hardware & have to
emulate it instead, thus losing the speed advantage.

> this, he spends a lot of time in Nanosystems talking about it.
>
> >We want to know, whether a) a given structure can exist
>
> It is possible that some of the intermediate states of the object you want to
> construct would not be stable. I can see two ways to get around this problem.

According to Drexler's computation, the structure itself can exist, and
one _can_ do computation with diamondoid systems. The iffy thing is
mechanosynthesis, where excited states exist transiently.

> 1) Always use a jig, even if you don't need it.

Using scaffolding is an excellent idea, biotech often uses it.

> 2) Make a test. If you know you put an atom at a certain place and now it's
> mysteriously gone, put another one there again and this time use scaffolding.
> Neither method would require a lot of intelligence or skills on the part of
> the Nanotech machines.
>
> It's also theoretical possible that some exotic structures could not be built,
> something that had to be complete before it's stable, like an arch, but
> unlike an arch had no room to put temporary scaffolding around it to keep
> things in place during construction. It's unlikely this is a serious
> limitation, nature can't build things like that either.
>
> >b) whether we can build the first instance of this structure
>
> Assuming it can exist, (see above) the question of whether you can make it or
> not is depends entirely on your skill at engineering and has nothing to do
> with science.

But what determines your skill at engineering? Why can't organic
synthesis make anything truly complex? Because it can't do
mechanosynthesis. Why is biochemistry much more powerful? Basically,
because it uses highly constrained systems during synthesis, which are
created automagically as the protein folds. So whether you can create a
complex structure depends on your skill at mechanosynthesis. This is a
circulus vitiosus.

There _is_ a bootstrap problem, and it is not trivial. (Personally, I
think it is surmountable, using the complementary skills of SPM,
biochemistry & organic synthesis.)

> >c) this structure is sufficiently powerful to at least a
> >make a sufficiently accurate copy of itself.
>
> Depends entirely on the particular structure you're talking about and on the
> particular environment it is expected to be working in. Again, this is pure
> engineering.

The particular structure is a stiff diamondoid tip system, driven by a
diamondoid rod logic computer. So far, no trouble. This structure
deposits diamondoid solids of any geometry allowed by the nanolitho
constraints. Mechanosynthesis has not been shown to work; this is a
problem. Defects in the cloned structure must not have _any_ negative
impact on tip positioning accuracy, otherwise the autoamplification
sequence will be pitifully short, resulting in only a handful of viable
nanoagents. Nanotechnology which can't autoreplicate is worthless. That
is a problem.

One must be careful with words. Engineering denotes a class of problems
which are surmountable, provided enough work is invested. Physics defines
the class of structures/processes which are viable at all. However, you
can't always say a priori whether a structure/process is viable. The only
'proof' would be the first instance of such a structure, or, more weakly,
lots of rock-solid computer runs and SPM experimental data. We have
neither the former nor the latter. Drexler's case is speculative
engineering, augmented with some circumstantial evidence, not mere
engineering.

You can lick engineering.
You can't lick physics.

There is a difference.

> >That's a lot of constraints, and all of them physical.
>
> No, none of them are physical, all of them are engineering.

See above.

>
> >Claiming the problems to be merely engineering, is not good
> >marketing, imo.
>
> I wouldn't know, I'm no expert on marketing.

Nor am I, but without adequate R&D funds almost no relevant research will
get done. Marketing is social engineering. I thought you liked engineering?

> >There are excellent reasons to suspect this connectivity
> > [of neurons] to be crucial
>
> That's true, I don't think there is the slightest doubt. This vast
> connectivity is the very reason why biological brains are still much better
> than today's electronic computers at most tasks, in spite of it's appallingly
> slow signal propagation.
>
> > so you have to simulate this connectivity.
>
> Obviously, and I see absolutely nothing in the laws of Physics that would
> forbid Nano Machines from equaling or exceeding this connectivity.

Simulating connectivity and implementing it in hardware are two different
things. Neuronal connectivity can fluctuate, because the structure which
computes also has automanipulative capabilities. A diamondoid rod has no
such capabilities; it needs anabolism/catabolism performed by an external
unit capable of mechanosynthesis. Such a unit is _very_ big, and cannot
get access to a given rod without disassembling all the other structures
in front of it, recording their state, changing the circuitry, and then
rebuilding everything from memory, state included.

(John, nanoagents are supposed to manipulate atoms, not bits. At least
that's what they are designed to do. This is also the reason why Utility
Fog is suboptimal as a computational medium).

I dunno, hardly an economical solution. So rather give me hardware of
limited connectivity which can simulate arbitrarily high connectivities
by multiplexing the existing local bandwidth. This means losing
efficiency, resulting in much slower designs, but currently I do not see
how else this problem can be solved.

> >you can't do it directly in hardware
>
> Why not? The brain has a lot of connectivity, but a random neuron can't
> connect with any other random neuron, only the closest 10 thousand or so.

Yes, but try to rewire nodes aligned on a noisy grid with 10 k wires
each. Picture the nightmare of tangled wire as many 10k-wire cones
overlap: a jungle of steric hindrance. You have to construct the wires
either from within, as the brain does, or from the outside; either way
the machinery bloats drastically, becoming even bigger than its
biological counterpart.

I'd rather go for, e.g., a cubic primitive lattice of _very_ small, very
braindead computers which can only talk to their neighbours. This is the
CA paradigm, and it can emulate just about anything (it is
Turing-equivalent), unlimited connectivity included.

There is a price, of course, but first the good part: physical signalling
can be very fast, probably many km/s, and the entire circuit is just a
few mm in size, so that buys a lot.

However, due to multiplexing you lose some orders of magnitude. So, alas,
the upload is unlikely to run at 10^9 times realtime.
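
A toy estimate of what emulating 10 k connectivity on such a
nearest-neighbour lattice costs in multiplexing (the node count is an
arbitrary illustration, and real neural connectivity is mostly local, so
this is pessimistic):

  # Time-sharing of physical links on a cubic nearest-neighbour mesh.
  nodes = 10**6                       # assumed number of tiny CA-style processors
  side = round(nodes ** (1.0 / 3))    # ~100 nodes per edge
  avg_hops = side                     # mean path length grows roughly with the edge length
  links_per_node = 6                  # cubic primitive lattice
  virtual_links_per_node = 10 * 1000  # the 10 k connectivity to be emulated
  sharing = virtual_links_per_node * avg_hops / links_per_node
  print(sharing)                      # ~1.7e5 virtual channels per physical link
  # Several orders of magnitude of the raw speed advantage go into this.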

> The brain grows new connections and I don't see why a machine couldn't do
> that too if needed, but another way is to pre wire 10 thousand connections

The machine needs to build structures from the outside, not from within.

> and then change their strength from zero to a maximum value.

This is a possibly viable alternative. It requires strong Drexlerian
nanotechnology, however; simple molecular circuitry (weak nanotech),
which relies on the autoassembly of protein crystal growth, cannot do
that.

Since I tend to be conservative, I always choose the simpler route.

Autoassembling molecular circuitry is almost certainly viable, but is
strong nanotech?

> >you must start sending bits, instead of pushing rods
>
> Eugene! I know you can't mean that. Using the same logic you could say that a
> computer doesn't send bits, it just pushes electrons around, and the brain
> doesn't deal in information, it just pushes sodium and potassium ions around
> and Shakespeare didn't write plays, he just pushed ASCII characters around
> until the formed a particular sequence.

I was referring to emulation, not a hardwired implementation. Everybody
knows an emulation is always much slower than the real thing, unless
clever algorithms can be exploited. If you can't substitute something
algorithmically less demanding for what the neural circuitry does (which
is what I suspect), it's emulation time, alas. Wish it were different.

>
> John K Clark johnkc@well.com

'gene
_________________________________________________________________________________
| mailto: ui22204@sunmail.lrz-muenchen.de | transhumanism >H, cryonics, |
| mailto: Eugene.Leitl@uni-muenchen.de | nanotechnology, etc. etc. |
| mailto: c438@org.chemie.uni-muenchen.de | "deus ex machina, v.0.0.alpha" |
| icbmto: N 48 10'07'' E 011 33'53'' | http://www.lrz-muenchen.de/~ui22204 |