Re: PHIL: Eliminative Materialism

Mark Crosby
Sun, 31 Aug 1997 18:56:47 -0700 (PDT)

Eric Watt Forste wrote:
< Personally, I'm very interested in looking at some of the psychology
and philosophy of Buddhist scholars, and this stuff is not as
accessible to me (for various reasons) as the work of European
scholars, so I haven't yet done much of it. >

This certainly sounds like an interesting approach (see below after my
brief ‘review’ of Churchland).

In criticizing eliminative materialism I wanted to note that I wasn’t
against departing from the traditional science paradigms in some
areas. Especially regarding ‘folk psychology’, there are certainly some
Jungian and Freudian concepts that could be better replaced with
computationalist terms. I was primarily ranting against the tendency
of some (not Eric, probably not Churchland) to dismiss any talk of
anything other than bare ‘brain states’.

After my post, I decided it was time to pull Churchland’s book out of
my to-read stack and see what he actually had to say in this regard.

The last section of ch.9 in Paul Churchland's _The Engine of Reason,
the Seat of the Soul_ argues that "human massively parallel prototype
activators" are different from "serial algorithm executors", or Turing
machines, and that this can account for the superiority of human
pattern recognition over rule-based machines: "this capacity for
activating relevant prototype vectors, makes algorithmic plodding
largely unnecessary". This insight is one of Churchland's greatest
contributions to cognitive science.
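Churchland's contrast might be caricatured in code. The sketch below is my own toy illustration, not Churchland's model: a stored set of prototype vectors classifies a noisy, never-before-seen input by proximity in one step, with no serial rule-matching. All names, vectors, and labels here are invented for the example.

```python
# Toy illustration (my own, not Churchland's) of "prototype activation":
# classify a stimulus by finding the nearest stored prototype vector,
# rather than by serially executing an explicit chain of rules.
import math

def activate_prototype(stimulus, prototypes):
    """Return the label of the prototype vector closest to the stimulus."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(prototypes, key=lambda label: distance(stimulus, prototypes[label]))

# Stored prototypes: points learned from experience (high-dimensional in a
# real network; 3-D here for readability).
prototypes = {
    "face": (0.9, 0.8, 0.1),
    "tree": (0.1, 0.9, 0.7),
    "rock": (0.2, 0.1, 0.3),
}

# A noisy input the system has never seen still activates the nearest
# prototype directly -- the "algorithmic plodding" is unnecessary.
print(activate_prototype((0.8, 0.7, 0.2), prototypes))  # -> face
```

The point of the caricature is only that the answer falls out of a similarity measure over stored vectors, not out of a rule base.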

Churchland also makes excellent points about the fallacies of the old
functionalists: they "cared relatively little
about exactly what processes take place inside us, so long as they
implement the right input-output function [and their] presumption
about the nature of those processes was mistaken: it portrayed them as
algorithmic to the core." I would agree with Churchland that "it does
matter what physical processes take place inside us, and they are not
just executing a program" (p251).

BUT, on the other hand, I don't think "high-dimension activation
vectors [only] have intrinsic meaning within the human neural
architecture" (p244). Churchland probably does not imply that 'only'
that I have inserted because, at the end of ch.9, he adds: "No one
claims that the silicon retina is conscious ... Its representations
will not be a target of or a part of something's consciousness until
it is embedded in a larger cognitive system..."

This "larger cognitive system" also has its own degrees of freedom,
including the ability to ALSO use rule-based approaches, plans (cf.
Psycoloquy) and goals that subvert the "intrinsic meaning" by adapting
existing
tools to perform new functions in a constructivist manner. Some, e.g.,
Patrick Hayes, see this larger cognitive system as a virtual machine.
It is in this sense - some independence, the ability to form abstract
representations, and to both select one's environment and one's
responses to it - that the larger cognitive system can be considered
'software' *in comparison* to the lower-level 'hardware'. Having
programs and algorithmic repertoires can actually free us up to think
of other things when engaged in mundane tasks. The key to freedom here
is having various levels of processing.
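The division of labor I have in mind could be sketched as follows. This is my own toy caricature, not drawn from Churchland or any cited author; the percepts, responses, and function names are all invented: a low-level recognizer handles routine inputs automatically, so the higher, rule-based planning level only engages on novelty.

```python
# Toy caricature (my own invention) of multiple levels of processing:
# routine percepts are absorbed by an automatic lower level, freeing the
# deliberate upper level to attend to other things.

ROUTINE_RESPONSES = {"red_light": "brake", "green_light": "proceed"}

def low_level(percept):
    """'Hardware': automatic, prototype-like lookup for familiar situations."""
    return ROUTINE_RESPONSES.get(percept)  # None signals novelty

def high_level(percept, goals):
    """'Software': deliberate, rule-based planning, invoked only when needed."""
    return f"deliberate about {percept} given goals {goals}"

def act(percept, goals):
    # Routine percepts never reach the planner -- that is the "freedom"
    # an algorithmic repertoire buys for the larger cognitive system.
    return low_level(percept) or high_level(percept, goals)

print(act("red_light", ["get home"]))    # -> brake (handled automatically)
print(act("fallen_tree", ["get home"]))  # novelty: deliberation engaged
```

The "freedom" in the text is just the fact that `high_level` is never consulted for percepts the lower repertoire already covers.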

It is important not to take this software / hardware dichotomy too
rigidly, but applied in moderation it can allow adaptive functions to
emerge. Contrary to what Churchland supposedly says elsewhere about
eliminative materialism, I would suggest that, at these higher levels
of aggregation, Churchland's "prototype activation vectors" become the
symbols, schemas, analogies & stories of the linguistic levels of the
mind. This emergent array of functionality is how I define 'software'
as opposed to some strictly deterministic program.

Regarding Buddhist philosophy and AI, perhaps the psychophysicists
have done the most to apply some of these principles to neuroscience.
For example, one intriguing paper by Robert A.M. Gregson is called
"n-Dimensional Nonlinear Psychophysics: Intersensory interaction as a
network at the edge of chaos". BTW, if you do a search on
Robert+Gregson, you’ll mostly get back links to stuff by an old
Extropians list participant, Ben Goertzel, who has a lot of stuff
online along these lines. He’s back in the U.S. and working on an AI /
Web-search project called WebMind. Another serious psychophysics
researcher, whose works seem more practical to me than the Roger
Penrose school of quantum consciousness, is psychiatrist Gordon Globus.

Another essay I recently found that tries to reconcile
computationalism with Eastern philosophies is Copthorne Macdonald's
"An Energy / Awareness / Informational Interpretation of Physical and
Mental Reality" essay. Macdonald
used to write for Mother Earth News and despite some spiritualist red
flags, the essay is really pretty good. He does endorse the
psychophysics of Gordon Globus, but he also develops the systems
science notion of levels of reality, and systems, that I find so
important. He points, in particular, to the work of Ervin Laszlo in
the 70s & 80s (cf. _The Systems View of the World_) and develops the
notions of directed evolution that I find intriguing.

Macdonald also describes Roger Sperry's emergent interactionism:
"emergent entities then interact causally with each other at their own
level". But then he proposes: "The assumption that awareness is
fundamental suggests a somewhat simpler and more direct explanation of
brain/mind relationship". Basically, he takes the line, similar to
that of Globus, that "[neural] correlates represent physically
embodied information about the attended-to quale, and are available
for use in unconscious 'computational' processing - perhaps to
initiate behavior, or to flag (by creating an emotion) a danger or
opportunity. I am suggesting that ... qualia are generated in
relatively simple neuronal systems and that the selective attention to
these qualia produces neuronal data that is used in the [higher-level]
computational phase of the evaluation process."

This is very similar to what I was proposing for 'hardware' vs.
'software' levels of the mind with regard to Churchland and the claims
of eliminative materialism. Macdonald cites Libet's work on sensation
vs. decision lag time to conclude: "The implication here is that even
though conscious intentions may arise from unconscious sources, if one
is sufficiently alert it is possible to make a conscious choice to
abort the intended behavior before it actually begins".

This is made possible by having a system with multiple levels of
control, as described in the current thread about Stephen Thaler’s
Creativity Machine.

Mark Crosby
