Re: Emulation vs. Simulation

From: Jim Fehlinger (fehlinger@home.com)
Date: Thu Mar 29 2001 - 21:04:10 MST


hal@finney.org wrote:
>
> I, like most of us, adopt a position which is basically functionalism;
> anyone who believes that uploading is possible (even gradual uploading)
> believes in it.

There are some exceedingly slippery issues here. Even the trans-cognitivists
(that's my coinage) like Edelman still believe that a non-Turing system like
the brain can be **simulated** by a Turing machine operating at a low
enough level. But that's **not** what the 1970s-1980s cognitivists were
talking about doing (even though it's what many folks on this list do talk about --
but that's because we read too much SF, dontcha know? :-> ).
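
To make the "low enough level" point concrete, here's the sort of thing
I mean -- a toy sketch (mine, not Edelman's; the model and every
parameter in it are purely illustrative) of the standard trick for
putting a continuous system onto a digital machine: pick a time step,
pick a float precision, and integrate:

    # Minimal leaky integrate-and-fire neuron, discretized with Euler steps.
    # The continuous membrane equation  tau * dV/dt = -(V - V_rest) + R*I
    # becomes a finite update rule -- which is the whole point: once you fix
    # a time step and a float precision, a Turing machine can run it.

    TAU, V_REST, V_THRESH, V_RESET, R = 20.0, -65.0, -50.0, -70.0, 10.0  # ms, mV, Mohm
    DT = 0.1  # ms -- the chosen level of discretization

    def simulate(current_na, steps):
        v, spikes = V_REST, []
        for t in range(steps):
            dv = (-(v - V_REST) + R * current_na) / TAU
            v += DT * dv                  # Euler step: continuous -> discrete
            if v >= V_THRESH:             # threshold crossing => spike
                spikes.append(t * DT)
                v = V_RESET
        return spikes

    print(simulate(current_na=2.0, steps=10000))  # spike times (ms) for a 2 nA input

Crank DT down and the discrete trace tracks the continuous one as
closely as you like -- which is all "simulation by a Turing machine
operating at a low enough level" needs to mean here.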

It does seem likely to me that what Edelman objects to as a difference
in **kind** between his view of the "hardware" of consciousness and
the traditional cognitivist approach is really just a matter of degree.
For one thing, the traditional cog-sci/AI folks clung for a long time
to the notion that you could get genuine AI if you could just come up
with the right Lisp program to run on a 1 MIPS, 1-megabyte-class
PDP-10. That hope looks pretty forlorn these days.

Edelman argues, plausibly, that biological brains have a number of
special characteristics -- their lavish, fine-grained structure, their
multiple levels of stochastic variability, and their
variation-precedes-selection dynamics -- that are so different in degree
from all contemporary hardware and software as to constitute a major
qualitative gap.

When Edelman argues his points, he's looking back over his shoulder at the
MIT/PDP-10/Lisp crowd. He's assuredly **not** looking forward to the sort of
science-fictional scenarios that color the discussions on this list (I'd be
really, really surprised if he's ever heard of Greg Egan).

> Yet do any of us agree that how a brain arranges its
> circuits is of only marginal interest? I don't see how. This is of
> crucial interest in understanding brain behavior.

Again, **we** may not, but apparently the legit cognitivists did. This
is a fact about the recent history of science, and I'm willing to take
the word of folks like Putnam, Lakoff, and Edelman that that's what
happened.

> > And here's what Edelman has to say about functionalism:
> >
> > ...we know that **any** algorithm or effective procedure
> > may be executed by any universal Turing machine.
>
> Sure, keeping in mind that the details are unimportant only in the
> philosophical sense.

Yes, of course. In the real world, the practical questions would
totally swamp the philosophical ones. For one thing, a "universal
Turing machine" has unlimited time and memory. Real computers have
hard physical memory limits, which means that you probably **couldn't**
shoehorn Netscape onto a Univac I (all of 1,000 words of mercury
delay-line memory), even if you were crazy enough to want to try.

> I don't view this as of crucial importance, because the basic idea
> still holds. Modern computers are open systems just like brains; they
> interact with their environments. I don't know if anyone has formalized
> this notion of "open" computation. But the general idea is still valid,
> that a computer interacting with an environment is every bit as powerful
> in its information-processing capabilities as a brain interacting with
> that environment.

Well, again, most of the computers in the world today are **not** doing
anything meaningful to the computers themselves; their behavior only
makes sense in the context of the human systems they serve. An
exception is Edelman's own prototype of what he calls a "noetic
system": Darwin IV -- a simulation with the sort of architecture that
Edelman claims is the necessary infrastructure for what he calls
"primary consciousness". Machines like that **are** doing things
meaningful to themselves, even if it's just reaching for or batting
away objects of various colors and shapes.

> The basic point remains true, that information processing is a fundamental
> physical process which can be carried out by many kinds of systems,
> from brains to computer chips.

"Information" is a slippery, slippery thing to define. Edelman and
Tononi devote a great deal of discussion to the definition of "information"
in _A Universe of Consciousness_.
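
(For the curious: the raw ingredient of their discussion is plain
Shannon information, and the "neural complexity" measure they build up
is -- as I understand it -- assembled from mutual-information terms
between subsets of a system and the rest of it. A toy version of that
ingredient, with a joint distribution I've simply made up:)

    import math
    from collections import defaultdict

    def entropy(p):
        # Shannon entropy (bits) of a distribution given as {outcome: prob}
        return -sum(q * math.log2(q) for q in p.values() if q > 0)

    def mutual_information(joint):
        # MI(A;B) = H(A) + H(B) - H(A,B), from a joint distribution over (a, b)
        pa, pb = defaultdict(float), defaultdict(float)
        for (a, b), q in joint.items():
            pa[a] += q
            pb[b] += q
        return entropy(pa) + entropy(pb) - entropy(joint)

    # Two binary "neural subsets" that agree 90% of the time -- made-up numbers:
    joint = {(0, 0): 0.45, (1, 1): 0.45, (0, 1): 0.05, (1, 0): 0.05}
    print(mutual_information(joint))  # ~0.53 bits shared between the subsets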

> > The facile analogy with digital computers breaks down [because]
> > the sensory signals available to nervous systems are
> > truly analogue in nature and therefore are neither
> > unambiguous nor finite in number.
>
> Nonsense! If sensory signals were truly analog they would have an
> infinite amount of precision and therefore carry an infinite amount
> of information.

Yes, it struck me that Edelman was putting his argument badly here
even as I was typing it out. But reading between his lines, in light
of arguments elsewhere in his books, he is making a legitimate point
about the lack of predetermined categorical boundaries in the signals
coming from the real world, and about the chaotic indeterminacy of the
nervous system's reaction to those signals.

It was careless phrasing on his part, though.
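
(The precise version of Hal's objection, by the way, is Shannon's
channel-capacity theorem: any real "analog" signal comes with noise,
and a noisy analog channel carries a strictly finite number of bits
per second -- C = B * log2(1 + S/N). Back-of-the-envelope, with numbers
I'm making up purely for illustration:)

    import math

    def capacity_bits_per_sec(bandwidth_hz, snr_linear):
        # Shannon-Hartley capacity of a noisy analog channel
        return bandwidth_hz * math.log2(1 + snr_linear)

    # Illustrative only: a 3 kHz channel at 30 dB SNR (S/N = 1000)
    print(capacity_bits_per_sec(3000, 1000))  # ~29,900 bits/s -- finite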

> > Turing machines have by definition a finite number of internal states,
> > while there are no apparent limits on the number of
> > states the human nervous system can assume (for example,
> > by analog modulation of large numbers of synaptic
> > strengths in neuronal connections).
>
> Further nonsense! Are brains immune to the Bekenstein Bound? Does
> Edelman really think the information storage capacity of the human brain
> is INFINITE?

Again, I cringed a bit when I was typing this. I almost edited
it out, but I decided to leave Edelman's argument intact, warts and all.
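
(For anyone who wants to put a number on Hal's Bekenstein point: the
bound is I <= 2*pi*R*E / (hbar * c * ln 2) bits for a region of radius
R and total energy E, and plugging in rough brain-sized values -- my
guesses, not anybody's measurement -- gives something enormous but
emphatically finite:)

    import math

    HBAR, C = 1.0546e-34, 2.998e8  # J*s, m/s

    def bekenstein_bound_bits(radius_m, mass_kg):
        # Upper bound on the information content of a region of space:
        # I <= 2*pi*R*E / (hbar * c * ln 2), with E = m * c^2
        energy = mass_kg * C**2
        return 2 * math.pi * radius_m * energy / (HBAR * C * math.log(2))

    # Rough human-brain numbers: ~10 cm radius, ~1.4 kg
    print(f"{bekenstein_bound_bits(0.1, 1.4):.1e}")  # ~3.6e42 bits -- huge, but finite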

It crossed my mind that, since Edelman gives hints in his books
that he is an audiophile as well as a music lover, he should
have been better prepared for this sort of discussion by the
enormous amount of ink that's been spilled in the audio press
about the merits of analog vs. digital recordings ;-> .

> I am surprised that these quotes (which I appreciate Jim taking the
> time to find and present) are what passes for intelligent commentary on
> these issues. There are arguments against functionalism which are far
> more profound than what McCrone and Edelman offer. They focus on one
> weak point, which is that there is no agreed-upon way to unambiguously
> describe what constitutes an implementation of a given computation.
>
> Such arguments are much more difficult to deal with
> than claiming that brains have more power than TMs because they are
> analog, for Pete's sake.

And that would be an oversimplification of Edelman's position, too,
I think. I don't think you can throw out Edelman's whole argument
because of his infelicitous characterization of the resolution
of analog signals and processing elements as "unlimited".

However, since you seem to agree that it's a good idea to actually
look at brains in detail to see how they work, rather than ignoring
them completely and just writing programs to duplicate their
input/output behavior at the highest level of abstraction, I guess
there's really no conflict, anyway.

Jim F.


