RE: Tech Review contra Kurzweil and self as software

From: Amara D. Angelica (amara@kurzweilai.net)
Date: Thu Jul 12 2001 - 20:20:46 MDT


Ray Kurzweil has written a detailed response to Searle's arguments, which
will appear shortly in a book published by the Discovery Institute
(www.discovery.org) titled "Are We Spiritual Machines? Ray Kurzweil vs. the
Critics of Strong AI."

A few excerpts relevant to David Weinberger's article in Darwin magazine:

Searle's Chinese Room Argument Can Be Applied to the Human Brain Itself.
Although it is clearly not his intent, Searle's own argument implies that
the human brain has no understanding. He writes:

"The computer ... succeeds by manipulating formal symbols. The symbols
themselves are quite meaningless: they have only the meaning we have
attached to them. The computer knows nothing of this, it just shuffles the
symbols."

Searle acknowledges that biological neurons are machines, so if we simply
substitute the phrase "human brain" for "computer" and "neurotransmitter
concentrations and related mechanisms" for "formal symbols," we get:

"The [human brain] ... succeeds by manipulating [neurotransmitter
concentrations and related mechanisms]. The [neurotransmitter concentrations
and related mechanisms] themselves are quite meaningless: they have only the
meaning we have attached to them. The [human brain] knows nothing of this,
it just shuffles the [neurotransmitter concentrations and related
mechanisms]."
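The substitution above is purely mechanical, which is part of the point: a few
lines of Python (a sketch of mine, not anything from Kurzweil's text) can
perform it by blind string replacement, with no understanding of either
sentence.

```python
# Kurzweil's substitution performed by blind string replacement -- the
# program "knows nothing" of what either version means (the exact
# wording here is illustrative, condensed from the quoted passage).
searle = ("The computer succeeds by manipulating formal symbols. "
          "The symbols themselves are quite meaningless: they have only "
          "the meaning we have attached to them. The computer knows "
          "nothing of this, it just shuffles the symbols.")

# Ordered so the longer phrase "formal symbols" is replaced before the
# bare word "symbols".
substitutions = [
    ("computer", "human brain"),
    ("formal symbols", "neurotransmitter concentrations and related mechanisms"),
    ("symbols", "neurotransmitter concentrations and related mechanisms"),
]

rewritten = searle
for old, new in substitutions:
    rewritten = rewritten.replace(old, new)

print(rewritten)
```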

Of course, neurotransmitter concentrations and other neural details (e.g.,
interneuronal connection patterns) have no meaning in and of themselves. The
meaning and understanding that emerges in the human brain is exactly that:
an emergent property of its complex patterns of activity. The same is true
for machines. Although the "shuffled symbols" do not have meaning in and
of themselves, the emergent patterns have the same potential role in
nonbiological systems as they do in biological systems such as the brain. As
Hans Moravec has written, "Searle is looking for understanding in the wrong
places ... [he] seemingly cannot accept that real meaning can exist in mere
patterns." ...

[Searle's] descriptions illustrate a failure to understand the essence of
either brain processes or the nonbiological processes that could replicate
them.... While the man [in the Chinese room] may not see it, the
understanding is distributed across the entire pattern of the program itself
and the billions of notes he would have to make to follow the program. I
understand English, but none of my neurons do. My understanding is
represented in vast patterns of neurotransmitter strengths, synaptic clefts,
and interneuronal connections. Searle appears not to understand the
significance of distributed patterns of information and their emergent
properties.

Searle writes that I confuse a simulation with a recreation of the real
thing. What my book (and my chapter in this book) actually talks about is a
third category: functionally equivalent recreation. I am not talking about a
mere "simulation" of the human brain as Searle construes it, but rather
functionally equivalent recreations of its causal powers. As I pointed out,
we already have functionally equivalent replacements of portions of the
brain to overcome such disabilities as deafness and Parkinson's disease....

We have already created detailed replications of substantial neuron
clusters. These replications (not to be confused with the simplified
mathematical models used in many contemporary "neural nets") recreate the
highly parallel analog-digital functions of these neuron clusters, and such
efforts are also scaling up exponentially. This has nothing to do with
manipulating symbols, but is a detailed and realistic recreation of what
Searle refers to as the "causal powers" of neuron clusters. Human neurons
and neuron clusters are certainly complicated, but their complexity is not
beyond our ability to understand and recreate using other mediums. The pace
of brain reverse engineering is only slightly behind the availability of
brain scanning and neuron structure information.... There are many
contemporary examples, but I will cite just one: a comprehensive
model of a significant portion of the human auditory processing system that
Lloyd Watts (www.lloydwatts.com) has developed from both neurobiology
studies of specific neuron types and brain interneuronal connection
information....
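For concreteness, here is what a non-symbolic recreation of a neuron's
dynamics can look like in miniature. This is a standard textbook leaky
integrate-and-fire model, far cruder than the detailed replications described
above and purely my own illustration: the program evolves a continuous
membrane voltage rather than manipulating symbols, and spiking emerges from
the dynamics.

```python
# Minimal leaky integrate-and-fire neuron (toy sketch; all parameter
# values are conventional textbook choices, not from Kurzweil's text).
def simulate_lif(input_current, dt=0.001, tau=0.02,
                 v_rest=-0.07, v_thresh=-0.054, v_reset=-0.08, r=1e7):
    """Return spike times (seconds) for a list of input currents (amps)."""
    v = v_rest
    spikes = []
    for step, i in enumerate(input_current):
        # Membrane voltage leaks toward rest while driven by the input.
        v += (-(v - v_rest) + r * i) * (dt / tau)
        if v >= v_thresh:          # threshold crossed: emit a spike
            spikes.append(step * dt)
            v = v_reset            # reset after spiking
    return spikes

# One second of constant 2 nA input produces a regular spike train.
spikes = simulate_lif([2.0e-9] * 1000)
```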

Virtual personalities can claim to be conscious today, but such claims are
not convincing. They lack the subtle and profound behavior that would make
such claims compelling. But the claims of nonbiological entities some
decades from now -- entities that are based on the detailed design of human
thinking -- will not be so easily dismissed....

Inevitably, Searle comes back to a criticism of "symbolic" computing: that
orderly sequential symbolic processes cannot recreate true thinking. I
think that's true. But that's not the only way to build machines or
computers. So-called computers (and part of the problem is the word
"computer," because machines can do more than "compute") are not limited to
symbolic processing. Nonbiological entities can also use the emergent
self-organizing paradigm, and indeed that will be one great trend over the
next couple of decades, a trend well under way.
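As a small illustration of that paradigm (again my own sketch, not an example
from Kurzweil), a classic Hopfield network stores a pattern in distributed
connection weights. No individual weight or rule "contains" the pattern, yet
the network as a whole recovers it from a corrupted input -- meaning residing
in the pattern of interactions, not in any symbol.

```python
# Tiny Hopfield network: distributed storage and emergent recall.
import numpy as np

rng = np.random.default_rng(0)

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])   # pattern to store
n = len(pattern)

# Hebbian learning: weights are the outer product, no self-connections.
w = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(w, 0.0)

# Start from a corrupted copy (two units flipped) and update units
# asynchronously in random order until the state settles.
state = pattern.copy()
state[[1, 4]] *= -1
for _ in range(5):                     # a few sweeps suffice here
    for i in rng.permutation(n):
        state[i] = 1 if w[i] @ state >= 0 else -1

print(np.array_equal(state, pattern))  # prints True
```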



This archive was generated by hypermail 2b30 : Fri Oct 12 2001 - 14:39:44 MDT