Re: Emulation vs. Simulation

From: hal@finney.org
Date: Thu Mar 29 2001 - 01:07:39 MST


Jim Fehlinger, <fehlinger@home.com>, writes:
> Here's what McCrone has to say about this in _Going Inside_:
>
> "With hindsight, it seems odder and odder that mainstream psychologists
> were so intent on studying the mind without also studying the brain.
> Even if they could not do actual experiments, there was already
> enough known about neurology to help frame their theories. However,
> a generation of researchers had grown up in the belief that
> information processing was all about programs. It did not really
> matter what kind of hardware a program was running on -- whether it
> was flesh and blood or silicon -- so long as the underlying logic
> was preserved. The brain was simply a particular implementation
> of something more general.

Makes sense up to this point.

> So how the brain might choose to arrange
> its circuits was of marginal interest at best.

Thud.

This final sentence does not follow. It is slipping between levels
of understanding. Philosophically, a computer can be conscious.
But psychologically, the only conscious systems around are brains.
While the details of brain structure are irrelevant to the abstract
philosopher, to the psychologists (which is what this paragraph is about)
they are highly relevant.

I, like most of us, adopt a position which is basically functionalism;
anyone who believes that uploading is possible (even gradual uploading)
is committed to it. Yet do any of us agree that how a brain arranges its
circuits is of only marginal interest? I don't see how. It is of
crucial interest in understanding brain behavior.

It's certainly true that, once this understanding is gained, the same
functional properties can be realized in other media, such as computer
circuits. But the paragraph is discussing how theories of consciousness
are created and tested. The most devout, strict functionalist will find
his philosophy fully supportive of the notion that his theories of mind
must be consistent with the physical properties that implement the mind,
namely the structure of the brain.

> In 1960, one of the founders of cognitive science, the Princeton
> philosopher Hilary Putnam, seized on Turing's proof to argue
> that it meant brains did not matter. If the processes of the
> human mind could be expressed in computational form, then any
> old contraption could be used to recreate what brains did.

Of course. You could make a computer capable of running a conscious
program out of tin cans and string, in principle.

But this says NOTHING that implies that we should ignore the brain in
trying to understand how it works. All it means is that, once that
understanding is gained, we can at least in principle implement the
same functional relationships on a computer.
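
To make this concrete, here is a minimal sketch in Python (the
Peano-style adder is my own toy stand-in for the tin cans and string):
two radically different implementations of the same function, producing
identical results that betray nothing about the machinery underneath.

    # Implementation 1: the hardware's native adder.
    def add_native(a, b):
        return a + b

    # Implementation 2: addition as repeated increment -- absurdly
    # inefficient "tin cans and string", but functionally identical.
    def add_peano(a, b):
        total = a
        for _ in range(b):
            total += 1
        return total

    for a, b in [(2, 3), (17, 25), (0, 99)]:
        assert add_native(a, b) == add_peano(a, b)
    print("Same function, radically different implementations.")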

> And here's what Edelman has to say about functionalism:
>
> "A persuasive set of arguments states that if I can describe
> an effective mathematical procedure (technically called an
> algorithm...), then that procedure can be carried out by a
> Turing machine. More generally, we know that **any**
> algorithm or effective procedure may be executed by any
> universal Turing machine. The existence of universal
> machines implies that the **mechanism** of operation of
> any one of them is unimportant. This can be shown in the
> real world by running a given program on two digital computers
> of radically different construction or hardware design and
> successfully obtaining identical results...

Sure, keeping in mind that the details are unimportant only in the
philosophical sense. In terms of learning what the program is, even a
raw dump of the memory is useless without understanding how the data is
stored in memory in the first place. So to learn about the program and
how it works, understanding these details is of crucial importance. This
is the distinction which the anti-functionalists seem eager to overlook.
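
A minimal sketch of the point (the four bytes are an arbitrary example,
not from any real dump): the very same bits of a memory dump read as
entirely different values depending on which storage convention you
assume, so the dump alone tells you nothing.

    import struct

    raw = b'\x00\x00\x80\x3f'   # four bytes from a hypothetical dump

    print(struct.unpack('<i', raw)[0])   # little-endian int:   1065353216
    print(struct.unpack('>i', raw)[0])   # big-endian int:      32831
    print(struct.unpack('<f', raw)[0])   # little-endian float: 1.0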

> More damaging is the fact that an analysis of ecological
> and environmental variation and of the categorization
> procedures of animals and humans... makes it unlikely
> that the world (physical and social) could function
> as a tape for a Turing machine... The brain and
> nervous system cannot be considered in isolation from
> states of the world and social interactions. But
> such states, both environmental and social, are
> indeterminate and open-ended. They cannot be simply
> identified by any software description...

There is a germ of truth in this critique. Technically a TM does not
interact with a changing environment. It has a static set of input and
output tapes. As a model for some forms of computation, this is adequate,
but to model a physical system which is not fully self-contained it is
not enough.

I don't view this as of crucial importance, because the basic idea
still holds. Modern computers are open systems just like brains; they
interact with their environments. I don't know if anyone has formalized
this notion of "open" computation. But the general idea is still valid,
that a computer interacting with an environment is every bit as powerful
in its information-processing capabilities as a brain interacting with
that environment.
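
For what it's worth, here is a minimal sketch of what I have in mind
(the transition rule and the coin-flip environment are invented for
illustration, not any formal model): a perfectly deterministic
transition function that consumes a fresh percept from the environment
at every step, instead of reading a tape fixed in advance.

    import random

    def step(state, percept):
        # Deterministic: the same (state, percept) always yields
        # the same (new_state, action).
        new_state = (state + percept) % 8
        action = 'act' if new_state > 4 else 'wait'
        return new_state, action

    state = 0
    for t in range(5):
        percept = random.choice([0, 1])   # the open-ended environment
        state, action = step(state, percept)
        print(t, percept, state, action)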

We should also note that recent work on quantum computers suggests that
physical systems may be exponentially more efficient than Turing Machines
at certain tasks (a TM can still simulate a quantum computer, just
intolerably slowly, so nothing uncomputable is involved). If efficiency
is the concern, we can fix this by simply positing Deutsch's Universal
Quantum Computer in place of the Universal Turing Machine. The basic
point remains true, that information processing is a fundamental
physical process which can be carried out by many kinds of systems,
from brains to computer chips.

> Now we begin to see why digital computers are a false
> analogue to the brain. The facile analogy with
> digital computers breaks down for several reasons.
> The tape read by a Turing machine is marked unambiguously
> with symbols chosen from a finite set; in contrast,
> the sensory signals available to nervous systems are
> truly analogue in nature and therefore are neither
> unambiguous nor finite in number.

Nonsense! If sensory signals were truly analog they would have an
infinite amount of precision and therefore carry an infinite amount
of information. This is fundamentally impossible by quantum theory
if nothing else. Any measuring device, whether a retinal cell or a
magnetometer, has a finite precision.
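
A back-of-the-envelope sketch (the range and noise figures are
illustrative assumptions, not measurements of any real receptor): a
sensor with a finite noise floor can distinguish only finitely many
signal levels, hence carries only finitely many bits per reading.

    import math

    signal_range = 1.0     # assumed full-scale output of the sensor
    noise_floor  = 0.001   # assumed RMS noise; no device has zero

    levels = signal_range / noise_floor   # distinguishable levels
    bits   = math.log2(levels)            # information per reading

    print(f"{levels:.0f} levels ~ {bits:.1f} bits per reading")
    # 1000 levels ~ 10.0 bits: a lot, but emphatically finite.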

> Turing machines
> have by definition a finite number of internal states,
> while there are no apparent limits on the number of
> states the human nervous system can assume (for example,
> by analog modulation of large numbers of synaptic
> strengths in neuronal connections).

Further nonsense! Are brains immune to the Bekenstein Bound? Does
Edelman really think the information storage capacity of the human brain
is INFINITE?
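
The Bekenstein Bound makes the point quantitative. A rough sketch (the
radius and mass are ballpark assumptions for a human brain): the bound
I <= 2*pi*R*E / (hbar*c*ln 2) caps the bits any region of radius R and
energy E can hold.

    import math

    hbar = 1.0546e-34    # J*s, reduced Planck constant
    c    = 2.998e8       # m/s, speed of light

    R = 0.1              # m: rough radius enclosing a brain (assumption)
    m = 1.5              # kg: rough brain mass (assumption)
    E = m * c**2         # J: mass-energy of the region

    bits = 2 * math.pi * R * E / (hbar * c * math.log(2))
    print(f"{bits:.1e} bits")   # ~4e42: enormous, but finite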

> The transitions
> of Turing machines between states are entirely
> deterministic, while those of humans give ample appearance
> of indeterminacy. Human experience is not based on
> so simple an abstraction as a Turing machine; to get
> our 'meanings' we have to grow and communicate in
> a society."

Here I think he has a technical point. But note, first, that Turing
machines can approximate indeterminacy, and second, that we might use
quantum computers in place of TMs if it should turn out that quantum
uncertainty plays a fundamental role in brain behavior.
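
On the first point, a minimal sketch of deterministic "indeterminacy":
a seeded pseudorandom generator produces sequences that look like coin
flips, yet replay bit-for-bit identically from the same seed.

    import random

    def flips(seed, n=10):
        rng = random.Random(seed)   # fully deterministic given the seed
        return [rng.randint(0, 1) for _ in range(n)]

    print(flips(42))   # looks like random coin flips...
    print(flips(42))   # ...but is exactly reproducible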

I am surprised that these quotes (which I appreciate Jim taking the
time to find and present) are what passes for intelligent commentary on
these issues. There are arguments against functionalism which are far
more profound than what McCrone and Edelman offer. They focus on one
weak point, which is that there is no agreed-upon way to unambiguously
describe what constitutes an implementation of a given computation.

This is an issue which we have debated on this list at length over the
years, and it is discussed in many other forums as well. Chalmers has
done a good job of tackling this problem head-on (and it nearly bounced
him into dualism). Such arguments are much more difficult to deal with
than claiming that brains have more power than TMs because they are
analog, for Pete's sake.

Hal


