Re: "analog computer" = useless hypothesis?

From: Jim Fehlinger (fehlinger@home.com)
Date: Mon Apr 02 2001 - 18:48:06 MDT


"J. R. Molloy" wrote:
>
> Thank you for the input, Anders. Nice to hear from someone who keeps up to
> date on these matters. I'd thought it premature to write off analog computing
> as a useless hypothesis, but now it seems intelligence has a singular affinity
> for analysis of the digital computation kind (conveyance of this information
> is completely digital), so the tagline remains amended accordingly.

I should probably stay out of this henceforward, but I can't resist the
opportunity to quote some more from the book I bought weekend before last,
_Going Inside..._ by John McCrone. This will be what I would have marked
from this section in pink highlighter (if I were the sort of person who
did that to books). At the end are some comments of my very own (!),
so you can skip there if you're not interested in the McCrone.

From Chapter 3 "Ugly Questions About Chaos" (pp. 50-73):

"[T]he success of the digital computer is founded on precisely its ability
to squeeze out any uncertainty in its behaviour. Computers are built to
follow the same steps and come to the same answers every time they run a
program. The advantage of this is that once the design of the hardware or
software has been perfected, endless copies can be churned out, all with
identical characteristics and performance. By freezing the logic, the logic
can be mass-produced. But the flipside is that the smallest bug or mis-step
can bring the whole system crashing down. For a computer to work, chance
events have to be ruled out right down to the level of the electrical
components from which the hardware is built. A transistor is engineered
so that it can put up with things like slight fluctuations in the power
supply or changes in temperature without changing state. Some kinds of
electronic devices are analogue -- they produce a continuously varying
output -- but a transistor is a binary switch [in a computer anyway; not
in most audio amplifiers!]. That is the meaning of digital. It is
either on or off, a 1 or a 0. There is no room for shades of grey, only
black and white. A bit of information either exists, or it does not.

The assumption [of the back-propagation neural network folks of the 1980's]
was that brain cells were also basically digital devices. The brain might
be a pink handful of gloopy mush; brain cells themselves might be rather
unsightly tangles of protoplasm, no two ever shaped the same; but it was
believed that information processing in the brain must somehow rise above
this organic squalor. There might be no engineer to draw neat circuit
diagrams, but something about neurons had to allow them to act together
with logic and precision.

Brain cells certainly had a few suggestive features. To start with... they
have a separate input and output end... there is a direction in which
information flows... It is true that a few synapses... are found on
the cell body, and sometimes even on the axon itself, but generally
speaking dendrites collect the information and axons deliver the response...
Whether two cells are connected is a black and white issue...

There is a physical logic in the wiring patterns of the brain. Then, on
top of this, there is something quite plainly binary about the all-or-
nothing nature of a neuron's decision to fire...

So, despite the brain being made of flesh and blood, the propagation
of signals looks to have a digital clarity. But the question is whether
brains are exclusively digital in their operation. A computer...
relies on its circuits being completely insulated from any source of
noise which might interfere with the clockwork progression of 0s and
1s. But it is not so clear that brain cells are designed to be shielded
from the messy details of their biology. Indeed, a closer look at a
neuron soon suggests the exact opposite: it is alive to every small
fluctuation or nuance in its internal environment. Where a
transistor is engineered for stability, a brain cell trades in the
almost overwhelming sensitivity of its response...
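
[An aside from me, not McCrone: the "analog membrane, digital spike"
picture he's drawing maps neatly onto the textbook leaky
integrate-and-fire model. A toy Python sketch, with made-up constants
rather than anybody's measured values:

    # Leaky integrate-and-fire neuron: the membrane voltage is a
    # continuously varying (analog) quantity, but the output is an
    # all-or-nothing (digital-looking) spike at threshold.
    V_REST, V_THRESH, V_RESET = -70.0, -55.0, -75.0   # millivolts
    TAU, DT = 20.0, 1.0                               # milliseconds

    def simulate(input_current, steps=100):
        v, spike_times = V_REST, []
        for t in range(steps):
            # voltage leaks back toward rest while input pushes it up
            v += (DT / TAU) * ((V_REST - v) + input_current)
            if v >= V_THRESH:
                spike_times.append(t)   # emit the all-or-nothing spike
                v = V_RESET             # and reset the membrane
        return spike_times

    print(simulate(20.0))   # stronger input -> more frequent spikes,
    print(simulate(16.0))   # but every spike itself is identical

The graded, analog quantity shows up only as the *rate* of the
digital-looking spikes.]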

[D]epending on what mix of pores [ion channels, pumps, and receptors]
is built into an area of membrane -- something which itself can be
changed in minutes or hours -- ... a neuron can show a tremendous
variety of responses. A computer is made of standardised components.
One transistor is exactly like the next. But every bit of
membrane in the brain is individual. The blend of pores can be tailored
to do a particular job, and that blend can be fine-tuned at any time.
There is a plasticity that makes the outside of a neuron itself seem
like a learning surface, a landscape of competition and adaptation...

[I]n practice, there is nothing certain about any of the steps in [the]
chain [of an action potential in one neuron leading to neurotransmitter
release and stimulation of target neuron]. An axon may often not
even release any neurotransmitter, despite being hit by a full-strength
spike. The amount of neurotransmitter spilled into the gap can also
vary. Plus, there is a whole cocktail of other substances that may
or may not be released at the same time... So a spike may seem like
a digital event -- the all-or-nothing creation of a bit of information
guaranteed to reach a known destination -- but the same signal
might one moment be met with an instant and enthusiastic response,
the next only fizzle away into nothing, failing even to stir a cell's
own axon tip.
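
[Another aside from me: the unreliable-synapse point is easy to make
concrete. A cartoon model with invented probabilities, nothing more:

    import random

    def synapse(spike_arrived, p_release=0.3, mean_amount=1.0):
        # a "digital" input spike gets an analog, probabilistic
        # response: transmitter may not be released at all, and the
        # amount spilled into the gap varies from event to event
        if not spike_arrived or random.random() > p_release:
            return 0.0                    # the spike fizzles away
        return max(0.0, random.gauss(mean_amount, 0.3))

    # the same all-or-nothing spike, ten different outcomes:
    print([round(synapse(True), 2) for _ in range(10)])

Same input bit, different output every time.]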

Some of the variability in the behaviour of a neuron could be just noise...
[b]ut while there is undoubtedly a degree of noise in the brain, much
of the variability looks deliberate... [Neurons] appear to
thrive on being fluid. By using competition and feedback to fine-tune
their workings, they can adapt their response to meet the needs of the
moment...

[A] quite recently discovered fact is [that while it] had always been
assumed that it was the action of billions of synapses that added up
to make a state of consciousness... consciousness -- at least,
levels of attention and alertness -- seems able to influence the response
of individual synapses. In computer terms, the logic sounds alarmingly
backwards...

One crucial discovery was that a cell's output spike actually travels
both ways: it runs down the axon, but also back over the cell itself and
through its own dendrites. What this means is that synapses are told
whether or not they contributed to the last firing decision...

A still more surprising discovery [of the 1990s] was that a stimulated
dendrite releases a small dose of... nitric oxide (NO)... back across
the synaptic junction... [I]t appears to switch on certain enzymes
in an axon tip, prompting them to ramp up production of neurotransmitters...
Nitric oxide also has the secondary effect of relaxing the walls of
blood vessels, so its release probably increases the blood flow into
an active area of the brain [the brain's self-administered poppers ;-> ]...

[T]hen there are the feedback connections between cells... Here,
[Donald O.] Hebb [author of _The Organization of Behavior_ (1949)]
turned out to be more right than he imagined... [F]eedback connections
dominated the brain. They were everywhere, from short loops linking
neighbouring cells to long chains in which a signal might bounce right
around the brain before feeding back, much modified, to its source...
Some came back as part of the wash of input hitting a cell's dendrites,
but others formed synapses on the cell body or close to the axon hillock
where -- being closer to the action -- they could exert a much more
powerful effect...

As a biological organ, the brain could not help being a little noisy
and unpredictable in its workings... The brain... [uses] feedback to
adjust its circuits and competition to evolve its answers, which again
introduced an element of unpredictability. But ultimately, all this
feedback and competition appeared to be directed towards producing
a well-organized response... To the computer-minded, the foundations
might look soggy, but there did seem to be something concrete going
on...

The trouble with this charitable view was that there remained
something fundamentally different about brains and computers. Any
digitalism in the brain was a weak, blurred-edge, pseudo kind of
digitalism... Computers, on the other hand, were digital by nature...
So if a computer wanted to behave like a dynamic, feedback-tuned
system, it had to fake it...

For example, to make the neurons in a backprop network seem more
realistic... they broadcast... some figure between the full-off
of a 0 and the full-on of a 1... Surely, it would not take
too many... decimal places to render the problem of rounding...
completely irrelevant? A simulated neuron should be able to
show all the rich variety of output of a real one...
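
[Me again: the "pseudo-analog" unit McCrone describes is just the
standard sigmoid neuron of a backprop net. Taken one pass at a time,
his rhetorical question looks fair enough -- rounding really does seem
harmless -- which is exactly the intuition the chaos material below
pulls apart. A minimal sketch:

    import math

    def sigmoid_unit(inputs, weights):
        # graded output between the full-off of 0 and the full-on of 1
        total = sum(w * x for w, x in zip(weights, inputs))
        return 1.0 / (1.0 + math.exp(-total))

    exact   = sigmoid_unit([0.2, 0.9, 0.4], [1.5, -0.7, 2.1])
    rounded = round(exact, 6)   # keep only six decimal places
    print(exact, rounded)       # in one pass, the difference is tiny

In a single feedforward pass the rounding error really is negligible;
the trouble only starts when outputs get fed back in as inputs, over
and over.]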

There is a lot of science that can be done by concentrating on
situations so close to being digital as not to make a difference.
Yet there are clearly also a great many areas... where the
blurring of boundaries and the fluid nature of the relationships
cannot be ignored. The classic examples are the weather,
economics, social systems, condensed matter physics, quantum
mechanics, fluid dynamics, and anything to do with biology.
Such systems are not just accumulations of components, bits of
clockwork in which every gear is locked into a fixed relationship
with its fellows...

[There follows a brief summary of the mathematics of chaos and
complexity: the former dealing in iterated applications of a function
whose output is fed back to itself (generating fractals, the
Mandelbrot set, and so forth); the latter with the further wrinkle
that the chaos-generating functions themselves evolve in time.
Description of point, limit-cycle, and strange attractors.]

What startled [meteorologist Edward] Lorenz... was that the maths was
both deterministic and utterly unpredictable. It was deterministic
in that so long as the starting values were exactly the same,
his program would always crank out the same result... Yet the
merest hint of a change in those values and immediately there
was no telling where the situation might go...

Given that the real world is a continuous place, and so exact
starting points can never be measured, this means that it is
impossible -- as a matter of principle -- to predict the behaviour
of a feedback-dependent system...
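
[Aside: you can reproduce Lorenz's shock on any machine with the
textbook logistic map, x -> 4x(1-x) -- deterministic feedback at its
simplest. A sketch:

    a, b = 0.4, 0.4000000001    # starting values differ by 1e-10
    for step in range(1, 61):
        a, b = 4*a*(1 - a), 4*b*(1 - b)
        if step % 10 == 0:
            print(step, round(a, 4), round(b, 4), round(abs(a - b), 4))

Same rule, same machine, and by around step 40 the two "identical"
runs bear no resemblance to each other: the tenth decimal place of
the starting value has swallowed the whole calculation.]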

This was dismal news for scientists wedded to a reductionist view
of the world. It destroyed the belief that if you knew all the rules
governing a system, you could then predict its future. Chaos
theory said you could know the laws and still not predict...

Fortunately, chaos had an important saving grace. While the path
of any particular system could not be predicted, outcomes had a
tendency to group... A truly random system would be equally
likely to visit every point in the space of all possible
outcomes. But a chaotic system would have some kind of
attractor -- a region it preferred to inhabit...
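
[Aside: the saving grace is just as easy to see. Integrate Lorenz's
own three equations (a crude Euler sketch of my own; the parameters
are his classic ones) from two completely unrelated starting points --
the trajectories never agree, yet both stay trapped in the same
modest region of space, the strange attractor:

    def lorenz_x_range(x, y, z, steps=50000, dt=0.001,
                       s=10.0, r=28.0, b=8.0/3.0):
        lo, hi = x, x
        for _ in range(steps):
            x, y, z = (x + dt * s * (y - x),
                       y + dt * (x * (r - z) - y),
                       z + dt * (x * y - b * z))
            lo, hi = min(lo, x), max(hi, x)
        return lo, hi   # the range of x the trajectory visits

    print(lorenz_x_range(1.0, 1.0, 1.0))     # two very different
    print(lorenz_x_range(-8.0, 7.0, 25.0))   # starts, similar ranges

No telling where either system is at any given moment, but a good bet
about the neighbourhood it's in.]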

Computer designers wondered whether they could harness a chaotic
attractor to drive a new kind of neural network. A network
might be able to represent its memories or programs as an
attractor state distributed across the strength of its
connections. So rather than following a rigid step-by-step
summation of weights to produce an answer, the system would
be like a Hebbian feedback network in which input would
wander about a bit before it eventually fell into a basin
of attraction... Neuroscientists saw the same link...
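
[Aside: the sort of network being gestured at here sounds like the
attractor nets descended from Hopfield's 1982 model. A toy version of
my own -- store one pattern in the connection strengths, corrupt the
input, and watch it fall into the basin of attraction:

    import random

    pattern = [1, -1, 1, 1, -1, -1, 1, -1]   # the stored "memory"
    n = len(pattern)

    # Hebbian weights: cells that fire together, wire together
    W = [[pattern[i] * pattern[j] if i != j else 0 for j in range(n)]
         for i in range(n)]

    state = pattern[:]                  # start from a corrupted copy
    state[0], state[3] = -state[0], -state[3]
    for _ in range(5):                  # a few settling sweeps
        for i in random.sample(range(n), n):   # asynchronous updates
            total = sum(W[i][j] * state[j] for j in range(n))
            state[i] = 1 if total >= 0 else -1

    print(state == pattern)   # True: input wandered into the basin

No step-by-step program computed the answer; the answer is just where
the feedback dynamics come to rest.]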

There is much more that could be said about chaos and complexity;
they are huge subjects in their own right. But for mind
science, the point is that their mathematics must shake the
common conviction that computers and brains are fundamentally
the same. Chaos theory says that being digital matters.
There are consequences when decimal places get rounded off
as small errors can soon develop into large differences. But
much more importantly, there is a hidden energy in a feedback-
driven, chaos-harnessing system. There is both the push
of its competitions and the pull of its underlying dynamics --
the places its attractors want it to go. So brains and
computers might both process information, but as technology
stands -- even with the glamorous new field of artificial
neural networks -- they do it in a deeply different way...

Even then, complexity cannot be the whole story. The brain
still has a digital-like side. There is no escaping the
all-or-nothing nature of cell firing, or the precision with
which neurons make their connections... Human brains
can also -- with perhaps a bit of a struggle -- think logically.
We can reason in a sequential, linear fashion which appears
not unlike a computer program.

------------------------

It crossed my mind today that the reason for some
of the "tetchiness" surrounding this whole subject of
digital vs. analog may be the unspoken (and mistaken, I
think) assumption that if the brain **does** rely in some
fundamental way on analog processing, then that would mean
the party's over in terms of AI, SI, the Singularity, and
all the other fun stuff the Extropians have planned for
the coming century. It's certainly true that most of
the talk about AI on this list has been in terms of
software running on some sort of digital substrate
("computronium", or whatever), but that's not the only
way AI or SI could happen.
 
While digital integrated circuits might be the most
glamorous electronics on the market these days, don't
forget that there are still linear ICs being manufactured!
It's altogether conceivable that a non-biological AI or
upload could use molecular or nano-scale **linear** devices
as processing elements! Such a contraption might not be
quite as tame inside as we probably visualize -- most of us
probably think of some sort of 3D crystalline lattice of
nanotubes with nothing but electrons flashing around, and with
all the action happening in software, rather than something that
might look more like Babbage's (or Gibson and Sterling's)
Difference Engine, with gears and whirligigs making and
breaking connections or aiming little laser beams around,
or scurrying nanobots. Or what about an AI/SI made
out of honest-to-God biological tissue, but freed from the
confines of a human skull and serviced by nanobots (yes,
that idea gives me the creeps, too. Too _Last and First
Men_).

I get the impression that's the way Kurzweil thinks AI
will happen -- he talks about the brain as a "digitally-
controlled analog system", and I think he thinks we're
going to get to AI by reverse-engineering the human
brain.

Maybe a partly analog phase will be a necessary **transition**
on the way to all-digital AI and/or uploads. It's certainly
true that all-digital does have its attractions: if nothing
else, the idea of being able to halt the processor between
one clock cycle and the next, and being able to read out, save,
transmit, reload, and restart an AI with no interruption
in the flow of consciousness does sound sort of appealing.
But even a self-enhancing AI such as Eliezer imagines doesn't
have to be a digital computer. True, a "codic cortex" might
not be of much use in that case, but maybe the AI will have
to major in EE instead of in CS ;-> .

Here's a deep question: would an analog AI constructed out
of vacuum tubes have a "mellower" personality than one
made out of transistors? ;-> ;-> ;->

Jim F.


