After writing the above two paragraphs, and re-reading the
relevant passages in the book, I realized that although I seemed
to understand these comments when I first read them, I am no
longer sure that I grasp the point the authors were trying to
make here. Choosing one out of "billions" of alternatives
requires only about 30 bits, since 2^30 is roughly a billion, so
making such a choice in a fraction of a second calls for no more
than a few tens of bits per second with a preassigned code (which
is clearly **not** what the authors mean), but Edelman and Tononi
do not specify
precisely the quantity that must be "better than any present-day
engineer could dream of". There is a reference in the end notes
to the book _The User Illusion: Cutting Consciousness Down To
Size_ by Tor Norretranders (Viking, New York, 1998); Chapter 6 of
this book is called "The Bandwidth of Consciousness", and there
is a diagram on p. 145, taken from the 1971 work of Karl
Kupfmuller, giving the information flow from the senses through
the brain and consciousness to the motor apparatus. This diagram
gives 11 million bits/sec coming in through the senses, 10
billion bits/sec as the throughput of the brain (conservatively
based on an estimate of 10 billion neurons each processing 1
bit/sec), and 20 bits/sec as the throughput of consciousness.
Edelman and Tononi seem to be saying that the bandwidth of
consciousness should be identified with the figure Kupfmuller
gives as the bandwidth of the brain (or at least with that
portion of it participating in the dynamic core), not limited to
the bit rate associated with the "chunking" capacity.
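
As a back-of-the-envelope check on these figures (my own
arithmetic, not the authors'), the numbers work out as follows:

    import math

    # Bits needed to single out one of N equally likely alternatives.
    print(math.log2(1e9))        # ~29.9: "billions" needs only 30-odd bits

    # Kupfmuller's figures as reported by Norretranders (p. 145):
    sensory_input = 11e6         # bits/sec arriving through the senses
    brain_throughput = 10e9      # bits/sec: 10 billion neurons x 1 bit/sec
    conscious_throughput = 20    # bits/sec attributed to consciousness

    # The brain-to-consciousness ratio Edelman and Tononi seem to
    # have in mind:
    print(brain_throughput / conscious_throughput)   # 5e8
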
Development and early experience may be thought of as adding
dimensions to the dynamic core. "[I]t is likely that among the
earliest conscious dimensions and discriminations are those
concerned with the body itself -- mediated through structures in
the brain stem that map the state of the body and its relation to
both the inside and outside environment... Equally early and
central are the dimensions provided by value systems indicating
salience for the entire organism... [T]his early, bodily based
consciousness may provide the initial dominant axes of the
N-dimensional neural reference space... [A]s increasing
numbers... of signals [from the world ('nonself')] are
assimilated, they would be discriminated according to modality
and category in reference to these initial dimensions that
constitute the protoself... With the accession of new dimensions
related to language and their integration in the dynamic
core... higher-order consciousness appears in humans... A
discriminable and nameable self, developed through social
interactions, can now be connected to the simultaneous experience
of the scenes of primary consciousness and conceptually-based
imagery in which experiences of all kinds are linked... Thus,
one can see development and experience as a progressive increase
in the complexity of the dynamic core, both in terms of the
number of available dimensions and the number of points in the
corresponding N-dimensional space that can be differentiated"
(UoC pp. 174-175).

As I admitted at the beginning of this article, I do not have the
expertise to offer serious scientific criticism of Edelman's
books, but it appears that they have not been well-received among
some scholars in neuroscience, cognitive science, and philosophy.
John Horgan serves up some of this negative reaction as gossip in
_The End of Science_ (p. 171): "Francis Crick spoke for many of
his fellow neuroscientists when he accused Edelman of hiding
'presentable' but not terribly original ideas behind a 'smoke
screen of jargon.' Edelman's Darwinian terminology, Crick added,
has less to do with any real analogies to Darwinian evolution
than with rhetorical grandiosity... The philosopher Daniel
Dennett of Tufts University remained unimpressed after visiting
Edelman's laboratory. In a review of Edelman's _Bright Air,
Brilliant Fire_, Dennett argued that Edelman had merely presented
rather crude versions of old ideas. Edelman's denials
notwithstanding, his model **was** a neural network, and reentry
**was** feedback, according to Dennett. Edelman also
'misunderstands the philosophical issues he addresses at an
elementary level,' Dennett asserted. Edelman may profess scorn
for those who think that the brain is a computer, but his use of
a robot to 'prove' his theory shows that he holds the same
belief, Dennett explained".

Edelman himself states in _Bright Air, Brilliant Fire_ that "The
two concepts of the theory that have come under the most intense
attack are those of neuronal groups and of selection itself.
Horace Barlow and, separately, Francis Crick have attacked the
notion of the existence of groups... Crick's claim is that
neuronal groups have little evidence to support them. He also
asserts that neuronal group selection is not necessary to support
ideas of global mapping. Finally, he claims that he has not
found it possible to make a worthwhile comparison between the
theory of natural selection and what happens in the developing
brain" (BABF pp. 94-95). Edelman goes on to answer these
criticisms on pp. 95-98.

This layman finds Edelman's description of how sensory maps form
reentrantly-connected classification couples, which facilitate
perceptual categorization through correlated activity, quite
clear and persuasive -- although Crick points out (in _The
Astonishing Hypothesis_, Simon & Schuster, London, 1994) that
Edelman did not originate this idea: "In 1974, the psychologist
Peter Milner published a very perceptive paper" in which he
"proposed the idea of correlated firing to solve the binding
problem". In the same 1974 paper, Crick says, Milner anticipated
Edelman's discussion of recursive synthesis: "[Milner] argued
that... early cortical areas would have to be involved in visual
awareness as well as the higher cortical areas. He suggested
that this could be implemented by a mechanism involving the
numerous backprojections from neurons higher in the visual
hierarchy to those lower down" (_The Astonishing Hypothesis_,
p. 232 [and footnote]). This is in a context in which Crick
warns against premature reliance on reentry as an explanatory
tool: "Another possible tack is to ask whether awareness
involves, in some sense, the brain talking to itself. In neural
terms this might imply that reentrant pathways -- that is, one
that, after one or more steps, arrives back at the starting
point -- are essential, as Gerald Edelman has suggested. The
problem here is that it is difficult to find a pathway that is
not reentrant... [W]e must use the reentry criterion with care"
(Ibid., p. 234).
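
To make the idea of correlated activity binding two maps together
concrete, here is a toy sketch (entirely my own construction, not
Edelman's or Milner's model): two small maps respond to different
features of the same unlabeled stimuli, and the coupling between
any pair of units is strengthened whenever they happen to fire
together, so the correlations present in the world end up written
into the reentrant connections.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 8
    coupling = np.zeros((n, n))      # map1 unit -> map2 unit strengths

    for _ in range(1000):
        obj = rng.integers(n)        # an unlabeled "object" in the world
        map1 = np.zeros(n); map1[obj] = 1.0            # e.g. a visual map
        map2 = np.zeros(n); map2[(obj * 3) % n] = 1.0  # e.g. a tactile map
        coupling += 0.01 * np.outer(map1, map2)        # Hebbian correlation

    # One strong entry per object: co-occurring features have been bound
    # together without any category ever being specified in advance.
    print(np.round(coupling, 1))
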
Daniel Dennett sounds a similar warning about Edelman's use of
reentry as an explanatory device (in _Consciousness Explained_,
p. 268): "It is not just philosophers' theories that need to be
made honest by modeling at this level [the level of AI models
such as John Anderson's (1983) ACT* or Rosenbloom, Laird, and
Newell's (1987) Soar]; neuroscientists' theories are in the same
boat. For instance, Gerald Edelman's (1989) elaborate theory of
're-entrant' circuits in the brain makes many claims about how
such re-entrants can accomplish the discriminations, build the
memory structures, co-ordinate the sequential steps of problem
solving, and in general execute the activities of a human mind,
but in spite of a wealth of neuroanatomical detail, and
enthusiastic and often plausible assertions from Edelman, we
won't know what his re-entrants can do -- we won't know that
re-entrants are the right way to conceive of the functional
neuroanatomy -- until they are fashioned into a whole cognitive
architecture at the grain-level of ACT* or Soar and put through
their paces". A footnote continues: "Edelman (1989) is one
theorist who has tried to put it all together, from the details
of neuroanatomy to cognitive psychology to computational models
to the most abstruse philosophical controversies. The result is
an instructive failure. It shows in great detail how many
different sorts of question must be answered, before we can claim
to have secured a complete theory of consciousness, but it also
shows that no one theorist can appreciate all the subtleties of
the problems addressed by the different fields. Edelman has
misconstrued, and then simply dismissed, the work of many of his
potential allies, so he has isolated his theory from the sort of
sympathetic and informed attention it needs if it is to be saved
from its errors and shortcomings".

Ascending further in Edelman's infrastructural hierarchy, I found
the discussions of global mappings, which wed motor activity to
sensory classification n-tuples as an inseparable component of
perceptual categorization, and which involve the operations of
"cortical appendages" such as the cerebellum and basal ganglia,
somewhat vaguer and harder to grasp. They are also highly
speculative, as Edelman admits in the Preface to _The Remembered
Present_: "The description of cortical appendages... contains
specific models of cerebellar, hippocampal, and basal ganglion
function. In formulating them, I have risked being accused of
indulging in speculative neurology. I have nonetheless attempted
to keep the functional aspects of these models well within the
known properties of these brain structures. The models, risky
though they may be, are intended to reinforce the view that the
cortical appendages are temporal organs necessary for the
eventual emergence of rich conscious activity" (RP pp. xix-xx).
Finally, it seems to me that there is an element of the deus ex
machina in the notion of conceptual categorization, in which the
brain is supposedly linking correlated maps of its own activity.
Still, Edelman manages to weave all this into a plausible story
leading straight up the mountainside to the summit of primary
consciousness itself (and even further, to what he calls
higher-order consciousness), and since his avowed purpose is to
demonstrate the feasibility, at this point in history, of
constructing a first draft of such an ambitious but
scientifically plausible story, this reader was able to tolerate
the threadbare patches. "Above all, the present essay is
intended to provoke thought by scientists about a subject
considered in most scientific circles to be beyond scientific
reach. To show the feasibility of constructing a brain-based
theory of consciousness is perhaps all that can be reasonably
expected at this stage of knowledge. On that basis, others may
build more firmly when the necessary facts and methods become
available" (RP p. xx).

As suggested by some of Dennett's remarks quoted above, Edelman
seems perfectly poised, straddling as he does the fields of
neurobiology, psychology, philosophy, and computer science, to
maximally irritate his colleagues with his claims to have gone
farther than any prior theorist in building the scaffolding for a
complete theory of consciousness (and with the sort of immodesty
illustrated by the gossipy John Horgan [_The End of Science_,
p. 167], who quotes Edelman making remarks such as "I'm
**astonished** that people don't sit and put these things
together"). It seems likely that only an eminence grise with a
big ego and a Nobel already under his belt would be in a position
to risk career and reputation by embarking on work spanning
fields in some of which he would be considered an arrogant and
blundering outsider. Another possible factor in negative
reactions to a selectionist theory of the brain such as Edelman's
(though not necessarily on the part of those scholars I have
specifically mentioned) may have to do with the antipathy and
anger that any invocation of the name of Darwin still stirs up,
almost a century and a half after the publication of _The Origin
of Species_, in some intellectual circles. For example, the
encroachment of Darwinian theory into the social sciences of
psychology and sociology (the newer Darwinian annexes of these
fields were once known as "biosocial approaches" or
"sociobiology", but I see that the term "evolutionary psychology"
is more common today), the implications of which were at first
largely ignored by mainstream social scientists, began to provoke
a strong intellectual counterreaction with the publication in
1975 of Edward O. Wilson's _Sociobiology: The New Synthesis_.
The complex politics of this counterreaction (spearheaded,
ironically enough, by some of Wilson's fellow biologists, but the
alarm thus sounded certainly rallied the defenses of fields in
the humanities which were "under attack") are recounted in
intricate detail in the recently-published _Defenders of the
Truth: The Battle For Science in the Sociobiology Debate and
Beyond_ by Ullica Segerstrale (Oxford University Press, 2000).

Of likeliest interest to those on this list are the discussions
in Edelman's books concerning constraints on the design of
conscious artifacts and the role of digital computers both in
theories about the brain and in the future construction of
brain-like devices. When electronic digital computers came into
existence after World War II, they were like no kind of machine
that had ever been seen before. The spectacle of a huge array of
electronic components connected to a teletypewriter, accepting
and parsing commands typed at the keyboard and responding with
human-readable strings of symbols, might easily suggest to a
thoughtful person the sort of experiment which, when described by
Alan Turing, became a basis for speculation about whether such
symbol manipulation could be developed into an intelligent
conversation between a human and an electronic device connected
to a typewriter. Douglas
R. Hofstadter (in _Le Ton beau de Marot_, Basic Books, New York,
1997, p. 88) has described the impact that words issuing from a
machine can have on a naive person as the "Eliza effect": the
idea that "lay people have a strong tendency -- indeed, a great
willingness -- to attribute to words produced by a computer just
as much meaning as if they had come from a human being. The
reason for this weakness is clear enough: Most people experience
language only with other humans, and in that case, there is no
reason to doubt the depth of its rootedness in experience.
Although we all can chit-chat fairly smoothly while running on a
mere half a cylinder, and often do at cocktail parties, the
syntactically correct use of words absolutely drained of every
last drop of meaning is something truly alien to us". Given that
in the early days of digital computers almost everyone was more
or less naive in this regard, even those most intimate with their
construction and operation, the fact that the newly-invented
digital computers almost immediately suggested themselves as a
basis for "artificial intelligence" is hardly surprising.

To the mathematicians such as John von Neumann who worked out the
instruction sets and devised the earliest programming languages
for these machines, there must have been an added appeal. Up to
that time, any large-scale computational effort, such as the
compilation of tables of functions (the need for which funded
Charles Babbage's dreams of a mechanical digital computer in the
19th century), or the calculations needed by the Manhattan
Project, which had physicist Richard Feynman coordinating and
cross-checking teams of pink-collar workers operating adding
machines, or even the detailed exposition and checking of an
intricate proof in mathematical logic, such as those contained in
Russell and Whitehead's _Principia Mathematica_, was a laborious,
numbingly tedious, and error-prone undertaking. The heights of
mathematical and logical reasoning were also restricted to an
elite who, like those initiated into the secrets of literacy
itself in an earlier age, were considered to be possessed of
intellectual powers above the common herd of humanity, fluent in
a realm of discourse constituting one of the pinnacles of
achievement of the human mind. Of course, the comprehension and
invention of mathematical knowledge isn't the same as the labor
involved in making use of it, but given that those capable of the
former also often had to be the ones to perform, or at least
organize and supervise, the latter, the anticipation that a
digital computer would be able to perform such tasks, so
difficult for human beings, with astonishing speed and freedom
from errors, must have caused the breath to catch in the throat
of many slide-rule-toting workers in physics and the more
intensely mathematical sciences. The computational prowess of
digital computers also probably engendered (as the Eliza effect
did with ordinary language) dreams of being able to develop these
machines, evidently already capable of astounding feats of
mathematical computation, into devices capable of mathematical
reasoning and invention.

There was a counter-trend in the nascent field of robotics during
the 1950's and 1960's, led in its final days by Frank Rosenblatt,
composed of researchers who were more interested in exploring the
capabilities of networks of analog electronics than in the new
digital symbol manipulators. This led to a battle between the
analog and digital approaches to AI for students and funding,
made all the more urgent by the fact that the new digital toys
were **expensive**, which culminated at the end of the 1960s with
the triumph of the digital, symbol-based pursuit of AI and the
abandonment of "connectionist" approaches to AI for the next
decade (see, for example, _The Brain Makers_, H. P. Newquist,
Prentice Hall, 1994, pp. 71-75). However, already by this time
at least one AI researcher, from the heart of the symbolic AI
faction at MIT, had come to be dismayed by how easily people
could be led to mistake parlor tricks of syntactic manipulation
for something more genuinely intelligent (see Hofstadter, _Le Ton
beau de Marot_, pp. 87-89). This dismay led Joseph Weizenbaum to
create his famous demonstration program Eliza, and later to write
a book about it and his disillusionment with the field of AI, as
it was then practiced (_Computer Power and Human Reason_,
W. H. Freeman, San Francisco, 1976).

It seems to me, though, that suspicion of hyperbole in discourse
about the prospects for intelligent digital computers was quite
widespread throughout the 1960s, even in popular culture. One
need only recall the repeated instances, in the original _Star
Trek_ television series, of Captain Kirk being able to use simple
logical paradoxes to disable an otherwise invincible intelligent
computer or android by trapping its CPU in a loop, or to talk the
machine into turning itself off or blowing itself up, as a
demonstration that a computer is brittle, or lacks depth, or that
its seeming intelligence is "fake" in some way that a human's is
not. (All while Mr. Spock, the show's master logician, looked on
in awe. Well, it was also implausible that Kirk could beat Spock
at chess -- no doubt Kirk's intellectual superiority was
stipulated in Shatner's contract -- that's what it means to be
the leading man!). It's easy to dismiss this negative attitude
as fear of losing one's job to automation, as fear of
relinquishing the uniqueness of humanity, or as resistance to
scientific progress in the tradition of clinging to
geocentrism, spontaneous generation of microbes, vitalism, or
special creation of species, but the continued insistence by
philosophers such as John Searle that semantics cannot be
generated from syntax alone suggests some deeper point that's
struggling to be made. Whatever it is, it's hard to articulate
-- Searle's own "Chinese Room" illustration simply begs the
question of whether the implementation of an algorithm could
generate understanding of a human language. Imagining an
algorithm for understanding Chinese being carried out by an
English-speaking human with no personal knowledge of Chinese who
interprets a set of rules written in English is a clever literary
device, but it makes the idea of machine intelligence no less
plausible (as a thought experiment, not a practical endeavor)
than the notion that such an algorithm could be implemented on a
digital computer.

The belief that the human capacity for speech is a divine gift
may still be widespread, but among scientists it is generally
accepted that language is a capacity that evolved along with
human beings, and as such it exists because of its practical
value in human life. Its utterance is a useful behavior
exhibited by embodied beings in intimate interaction with the
world and with their neighbors, and it is apprehended by fellow
beings with a history of equally rich and extensive contact with
the world. The details of the matching among speech, speakers,
and the world blur and shift with variations in time, place, and
context. The notion that a body of such utterances or their
symbolic representations, together with a set of rules for
transforming them into each other and into new utterances,
independently encodes a set of "facts" or "truths" about the
world when divorced from the community of speakers, mired in the
contingent messiness of the world, who originally anchored those
utterances to reality, is an idea that doesn't seem to have much
credence among philosophers these days (at least as far as this
layman can tell). It is somewhat analogous to the hopes of
turn-of-the-century mathematical logicians that the truth of
mathematics could be put on a firm foundation by deducing all of
mathematics from a few logical principles also safely out of
reach of the fuzzy contingencies of the world. Now that digital
computers are cheap and widespread, no longer rare objects of
veneration, and now that the Eliza effect has had the opportunity
to wear thin, it is probably also the case that most computer
programmers, at least, would no longer ascribe intelligence to a
disembodied digital computer performing logical operations on a
body of symbolically-encoded facts about the world (such as
Douglas Lenat's Cyc). Such a system may have its uses, as a
front end for a database perhaps, but only a marketing department
would call it artificial intelligence.

Edelman is the first author I have encountered who seems (on
initial reading, at least) to be clearly and sensibly
articulating the unease that Searle and others apparently feel
about the notion of intelligent computers. Edelman depicts the
contrast between brains and digital computers in a way that, for
this reader at least, does not cause the impatience with which I
respond to Searle's Chinese Room argument. To Edelman, the
crucial distinction is between the conscious brain as a
selectional system constrained by value, from which
context-dependent categories with fuzzy boundaries spontaneously
emerge, and the digital computer as an instructional system in
which categories with predefined boundaries are explicitly
programmed. Edelman points to the intersection of two streams of
unpredictable variation (the variation in the repertoires of the
brain, and the variation in the signals from the world) as the
seed from which categorization of the unlabelled world and
consciousness (and later, intelligence) can emerge -- the kernel
of the "soul", if you will. Edelman also treats the body, with
its linked sensory and motor activity, as an inseparable
component of the perceptual categorization underlying
consciousness. Edelman claims affinity (in BABF, p. 229) between
his views on these issues and those of a number of scholars (a
minority, says Edelman, which he calls the Realists Club) in the
fields of cognitive psychology, linguistics, philosophy, and
neuroscience, including John Searle, Hilary Putnam, Ruth Garrett
Millikan, George Lakoff, Ronald Langacker, Alan Gould, Benny
Shanon, Claes von Hofsten, and Jerome Bruner (I do not know if
the scholars thus named would acknowledge this claimed affinity).

On further reflection, the distinction Edelman is attempting to
draw between selectionism and instructionism, at least as applied
to the simulation of a selectionist system on a computer, seems
less clear-cut. In the case of a living organism, in which the
body, nervous system, and sensory and motor capabilities are
all products of natural selection in an evolutionary sequence
going all the way back to the origin of life on Earth, the
distinction between the living brain and any sort of computer
remains clear. However, when human designers intervene to choose
the initial characteristics for a simulated selectional system,
the arbitrariness of the choice of these initial conditions --
especially the details of the values according to which the
adjustment of synaptic weights will occur, but also choices about
how large and fine-grained the repertoires will be, as well as
the details of the body and its sensory and motor capabilities --
blurs somewhat the distinction that Edelman is trying to draw.
At what point, for example, does the definition of a value
criterion become too high-level a specification, crossing some
boundary to become direct programming of a category and hence,
instructionism? It seems to me that this is an empirical
question, to be answered by the actual construction of such
simulations and observation of the results.
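
To make the question concrete, here is a minimal sketch (my own
construction, with made-up names, not anything taken from
Edelman's Darwin automata) of the two styles: a low-level value
signal that merely modulates which connections get strengthened,
versus a category boundary written directly into the program.

    import numpy as np

    rng = np.random.default_rng(1)
    n_in, n_out = 16, 4
    weights = rng.normal(scale=0.1, size=(n_out, n_in))  # initial repertoire

    def value(sensed_red, sensed_blue):
        # In the spirit of "red is good, blue is bad": says nothing about
        # categories, only how much the current state matters to the
        # organism.
        return sensed_red - sensed_blue

    def selectional_step(stimulus, sensed_red, sensed_blue, lr=0.05):
        # Value-modulated strengthening of whatever happened to be active;
        # no category boundary appears anywhere in this function.
        winner = np.argmax(weights @ stimulus)
        weights[winner] += lr * value(sensed_red, sensed_blue) * stimulus

    def instructional_step(stimulus):
        # The instructionist alternative: the designer writes the
        # category boundary directly into the program.
        if stimulus[:8].sum() > stimulus[8:].sum():
            return "red-thing"
        return "blue-thing"

    # e.g. one selectional step on a stimulus that happens to be "red":
    selectional_step(rng.random(n_in), sensed_red=1.0, sensed_blue=0.0)

The question raised above is how elaborate value() can become
before the system is, in effect, instructional_step() in disguise.
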
Designing and manufacturing a robot that bootstraps itself up
from a very low-level set of value criteria to a complex
behavioral repertoire of great generality will always be more
expensive than seeking a shortcut to a high-level specification
of behavior that gets a more limited-scope job done, even at the
cost of curtailing the artifact's ultimate ability to deal with
novelty. At least, this is likely to be the case until the
barely-imaginable day when the technology of consciousness
becomes well established, and the hardware required to implement
such complexity is cheap enough not to be a significant economic
burden. Until then, no venture capitalist is likely to accept a
proposed business plan simply because the product envisioned
contains the seed of a "soul". John Horgan illustrates this "so
what?" attitude in _The End of Science_ (pp. 168-170) in his
reaction to Edelman's latest recognition automaton, Darwin IV:
"Edelman and his coworkers had built four robots, each named
Darwin, each more sophisticated than the last. Indeed, Darwin 4,
Edelman assured me, was not a robot at all, but a 'real
creature'. It was 'the first nonliving thing that learns,
okay?'... What is its end goal? I asked. 'It **has** no end
goals,' Edelman reminded me with a frown. 'We have given it
**values**. Blue is bad, red is good.' Values are general and
thus better suited to helping us cope with the polymorphous world
than are goals, which are much more specific... I asked how this
robot differed from all the others built by scientists over the
past few decades, many of which were capable of feats at least as
impressive as those achieved by Darwin 4. The difference,
Edelman replied, his jaw setting, was that Darwin 4 possessed
values or instincts, whereas other robots needed specific
instruction to accomplish any task. But don't all neural
networks, I asked, eschew specific instructions for general
learning programs? Edelman frowned. 'But [with] all of those,
you have to exclusively define the input and output...'
Edelman... noted that most artificial-intelligence designers
tried to program knowledge in from the top down with explicit
instructions for every situation, instead of having knowledge
arise naturally from values. Take a dog, he said. Hunting dogs
acquire their knowledge from a few basic instincts. 'That is
more efficacious than any bunch of Harvard boys writing a program
for swamps!' Edelman guffawed... But Darwin 4 is still a
computer, a robot, with a limited repertoire of responses to the
world, I persisted; Edelman was using language metaphorically
when he called it a 'creature' with a 'brain'... If a computer,
[Edelman] said, is defined as something driven by algorithms, or
effective procedures, then Darwin 4 is not a computer. True,
computer scientists might program robots to do what Darwin 4
does. But they would just be faking biological behavior, whereas
Darwin 4's behavior is authentically biological. If some random
electronic glitch scrambles a line of code in his creature,
Edelman informed me, 'it'll just correct like a wounded organism
and it'll go around again. I do that for the other one and it'll
drop dead in its tracks.' Rather than pointing out that all
neural networks and many conventional computer programs have this
capability, I asked Edelman about the complaints of some
scientists that they simply did not understand his theories..."

I notice that Edelman's observations on the subject of conscious
artifacts correspond to some of the reflections of this list's
own Eliezer S. Yudkowsky, in his Web document _Coding a
Transhuman AI 2.0a_ (http://www.singinst.org/CaTAI.html). Some
excerpts from Yudkowsky (I trust I am not quoting them too far
out of context): "It's actually rather surprising that
the vast body of knowledge about human neuroscience and cognition
has not yet been reflected in proposed designs for AIs. It makes
you wonder if there's some kind of rule that says that AI
researchers don't study cognitive science. This wouldn't make
any sense, and is almost certainly false, but you do get that
impression". Edelman (BABF pp. 13-14) says that the cognitive
scientists don't pay enough attention to neuroscience, either:
"[T]he cognitivist enterprise rests on a set of unexamined
assumptions. One of its most curious deficiencies is that it
makes only marginal reference to the biological foundations that
underlie the mechanisms it purports to explain. The result is a
scientific deviation as great as that of the behaviorism it has
attempted to supplant. The critical errors underlying this
deviation are as unperceived by most cognitive scientists as
relativity was before Einstein and heliocentrism was before
Copernicus" (BABF p. 14).