Re: Emulation vs. Simulation

From: Jim Fehlinger (fehlinger@home.com)
Date: Tue Mar 27 2001 - 20:54:50 MST


hal@finney.org wrote:
>
> Jim Fehlinger, <fehlinger@home.com>, writes:
>
> > In George Lakoff's words "Functionalism... is the theory that
> > all aspects of mind can be characterized adequately without
> > looking at the brain...
>
> This seems to be a somewhat contradictory definition. The point is
> that the manner in which you "program" a brain to run an algorithm is
> by arranging the details of neural connection and tissue organization.

Well, the cognitivists of the 70's and 80's didn't bother about "the
details of neural connection and tissue organization". They were
concerned with abstract information-processing models which, as
far as they were concerned, might just as well be executing on a
digital computer as have anything to do with action potentials,
ion channels, and that grey and white glob of jello between the ears.

The progression of "hard" scientific psychology during the 20th
century (after the great days of William James) goes:

1. Behaviorism. Study observable stimuli and measurable behavior.
    What's inside the organism's skin is strictly off limits --
    the organism is a black box.

2. Cognitivism. You can theorize about what's inside the skin, but
    you needn't bother with all that slimy stuff itself. Just treat
    the organism as if it could be replaced by a computer hooked up
    to equivalent input and output devices.

3. The 90's thing (cognitive neuroscience, as it has come to be
    called). Whatever science-fictional Gedanken experiments can be
    cooked up about using computronium to simulate physical reality
    at arbitrary levels of detail, you aren't going to get anywhere
    figuring out how brains work unless you actually look at real
    brains, in detail.

> You can't be concerned about the algorithm without being concerned about
> the structural details.

Yes, but those can be abstracted into a formal, high-level description
of the program. As long as you've got a compiler or interpreter (a.k.a.
"virtual machine") on your computer for that high-level formal language,
then whether the physical computer's a Mac or a PC, the program will **function**
the same.
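
Here's a toy sketch of the point (the little instruction set is
invented purely for illustration): a "virtual machine" in a few
lines of Python. Any physical computer that can execute the
interpreter, whatever its construction, gives the same answer.

    def run(program):
        """Interpret (opcode, argument) pairs on a little stack machine."""
        stack = []
        for op, arg in program:
            if op == "PUSH":
                stack.append(arg)
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "MUL":
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
        return stack.pop()

    # (3 + 4) * 5 -- the same answer on a Mac, a PC, or pencil and paper.
    print(run([("PUSH", 3), ("PUSH", 4), ("ADD", None),
               ("PUSH", 5), ("MUL", None)]))    # prints 35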

> > "In the functionalist view, what is ultimately important...
> > are the algorithms, not the hardware on which they are executed...
>
> This is like saying that we care in a computer about the program it is
> running, but the contents of its memory and storage shouldn't concern us.

It's like saying we don't care whether the program is running on a
Univac I, a PDP-10, or a Pentium (apart from questions of performance);
whether the memory is magnetic drum, acoustic delay line, vacuum tube,
or CMOS. For that matter, the program could be run by a person
reading the code off sheets of paper, and using a pencil (and an
eraser) to write, erase, and re-write paper "storage" (I have worked
with people who learned to program that way, in places and at times
where real computers were scarce -- like in India 20 years ago).

> The problem is, it is the contents of its memory which determine what
> program it runs. So if we are concerned about the program, we must
> therefore be concerned about the contents of memory.

But the physical implementation of that memory is of no concern,
in the functionalist view.

Here's what McCrone has to say about this in _Going Inside_:

"With hindsight, it seems odder and odder that mainstream psychologists
were so intent on studying the mind without also studying the brain.
Even if they could not do actual experiments, there was already
enough known about neurology to help frame their theories. However,
a generation of researchers had grown up in the belief that
information processing was all about programs. It did not really
matter what kind of hardware a program was running on -- whether it
was flesh and blood or silicon -- so long as the underlying logic
was preserved. The brain was simply a particular implementation
of something more general. So how the brain might choose to arrange
its circuits was of marginal interest at best.

The justification for this view was put forward in 1936 by the
spiritual father of computing, the British mathematician Alan Turing.
Turing's proof was famously simple. He created an imaginary device,
later dubbed the Turing machine, which was nothing more than a
long strip of paper tape and a processing gate which could carry out
four operations. The gate could move the tape a step to the left
or the right and then it could either print or erase a mark. Given
an infinite amount of time and an infinite length of tape, Turing
demonstrated that this most rudimentary of computers could
crunch its way through any problem that could be reduced to a
string of 0's and 1's. Using a binary code to represent both
the data and the instructions which told the gate how to manipulate
the data, a Turing machine had all it needed to get the job
done.

For computer science, this proof was enormously important because
it said that all computers were basically the same... Whether
a machine used a single gate, or millions, or trillions;
whether it was built of paper tape, silicon chips, or something
really exotic like beams of laser light, the principles of its
operation would be identical...

In 1960, one of the founders of cognitive science, the Princeton
philosopher Hilary Putnam, seized on Turing's proof to argue
that it meant brains did not matter. If the processes of the
human mind could be expressed in computational form, then any
old contraption could be used to recreate what brains did.
The brain might be an incredibly complicated system, involving
billions of nerve cells all chattering at once -- not to
mention the biochemical reactions taking place within each
cell -- but, in the end, everything boiled down to a shifting
pattern of information. It was the logic of what the brain was
trying to do that counted. So given enough time, even the
simplest Turing machine could recreate these flows...

...[T]here was a noticeable difference between the way
computer scientists and psychologists talked about the issue.
Those on the computer side of the divide could be as bullish
as they liked. Many seemed convinced their creations were
practically conscious already; certainly, artificial
intelligence was only a matter of decades away. The
psychologists had to choose their words more carefully.
Yet what Turing's proof did mean was that they never need
feel guilty about failing to take a neuroscience class
or open a volume on neuroanatomy. During the 1970s and for
most of the 1980s, it was information theory which was
the future of mind science. So while a psychologist might be
embarrassed by not being up to date with the latest
programming tricks or computer jargon, a complete
ignorance of the brain was no bar to a successful career."

-- _Going Inside: A Tour Round a Single Moment of Consciousness_
   Chapter 2, "Disturbing the Surface", pp. 22-24
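
As a minimal sketch of the device McCrone is describing (mine, not
his; the standard formalization uses a transition table in place of
his four-operation gate, and the unary-successor machine below is
invented for illustration):

    def turing(tape, table, state="start", head=0):
        """Run a transition table: (state, symbol) -> (write, move, next)."""
        tape = dict(enumerate(tape))          # sparse tape; blank cells read "0"
        while state != "halt":
            symbol = tape.get(head, "0")
            write, move, state = table[(state, symbol)]
            tape[head] = write                # print or erase a mark
            head += 1 if move == "R" else -1  # step left or right along the tape
        return "".join(tape[i] for i in sorted(tape))

    successor = {
        ("start", "1"): ("1", "R", "start"),  # scan right across the 1's
        ("start", "0"): ("1", "R", "halt"),   # print one more 1, then stop
    }
    print(turing("111", successor))           # "1111": three becomes four, in unary

Given unbounded tape and time, tables of this kind suffice for
anything that can be reduced to strings of 0's and 1's.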

And here's what Edelman has to say about functionalism:

"A persuasive set of arguments states that if I can describe
an effective mathematical procedure (technically called an
algorithm...), then that procedure can be carried out by a
Turing machine. More generally, we know that **any**
algorithm or effective procedure may be executed by any
universal Turing machine. The existence of universal
machines implies that the **mechanism** of operation of
any one of them is unimportant. This can be shown in the
real world by running a given program on two digital computers
of radically different construction or hardware design and
successfully obtaining identical results...

On the basis of these properties, the workings of the brain
have been considered to be the result of a "functional"
process, one held to be describable in a fashion similar
to that used for algorithms. This point of view is
called functionalism (and in one of its more trenchant forms,
Turing machine functionalism). Functionalism assumes
psychology can be adequately described in terms of the
"functional organization of the brain" -- much in the way
that software determines the performance of computer
hardware...

This "liberal" position affirming the absence of any need
for particular kinds of brain tissue suffuses much of
present-day cognitive psychology...

For problems that can be solved consistently in a finite
amount of time, a Turing machine is as powerful as any
other entity for solving the problem, **including the brain**.
According to this analysis, either the brain is a computer,
or the computer is an adequate model or analogue for
the interesting things that the brain does.

This kind of analysis underlies what has become known
as the physical symbol system hypothesis, which provides
the basis for most research in artificial intelligence.
This hypothesis holds that cognitive functions are carried
out by the manipulation of symbols according to rules.
In physical symbol systems, symbols are instantiated in
a program as states of physical objects. Strings of
symbols are used to represent sensory inputs, categories,
behaviors, memories, logical propositions, and indeed
all the information that the system deals with...

If any of the forms of functionalism is a correct
theory of the mind, then the brain is truly analogous
to a Turing machine. And in that case, the relevant
level of description for both is the level of symbolic
representation and of algorithms, not of biology...

Why won't this position do? The reasons are many...

An analysis of the evolution, development, and structure
of brains makes it highly unlikely that they are
Turing machines. As we saw..., brains possess enormous
individual structural variation at a variety of
organizational levels. An examination of the means
by which brains develop indicates that each brain is
highly variable... [E]ach organism's behavior is
biologically individual and enormously diverse...

More damaging is the fact that an analysis of ecological
and environmental variation and of the categorization
procedures of animals and humans... makes it unlikely
that the world (physical and social) could function
as a tape for a Turing machine... The brain and
nervous system cannot be considered in isolation from
states of the world and social interactions. But
such states, both environmental and social, are
indeterminate and open-ended. They cannot be simply
identified by any software description...

What is at stake here is the notion of meaning.
Meaning, as Putnam puts it, 'is interactional.
The environment itself plays a role in determining
what a speaker's words, or a community's words,
refer to.' Because such an environment is open-ended,
it admits of no a priori inclusive description in
terms of effective procedures...

Now we begin to see why digital computers are a false
analogue to the brain. The facile analogy with
digital computers breaks down for several reasons.
The tape read by a Turing machine is marked unambiguously
with symbols chosen from a finite set; in contrast,
the sensory signals available to nervous systems are
truly analogue in nature and therefore are neither
unambiguous nor finite in number. Turing machines
have by definition a finite number of internal states,
while there are no apparent limits on the number of
states the human nervous system can assume (for example,
by analog modulation of large numbers of synaptic
strengths in neuronal connections). The transitions
of Turing machines between states are entirely
deterministic, while those of humans give ample appearance
of indeterminacy. Human experience is not based on
so simple an abstraction as a Turing machine; to get
our 'meanings' we have to grow and communicate in
a society."

-- _Bright Air, Brilliant Fire_,
   "Mind Without Biology: A Critical Postscript", pp. 220-225

Jim F.


