The high-level components in Edelman's model of the
infrastructure of primary consciousness (UoC pp. 107-110; RP
pp. 95-98, Chap. 9; BABF pp. 117-123) are, first of all, the
process of perceptual categorization of combined sensory and
motor activity described above, which Edelman calls "nonself",
symbolized as C(W): "C(W) is the neural basis for perceptual
categorization of W, the exteroceptive input -- peripheral,
voluntary motor, proprioceptive, and polymodal sensory signals --
and is mediated by the thalamus and cortical areas" (RP p. 156
[Fig. 9.1]).  Second, the correlation of the activity of the
evolutionarily older, hedonic systems of the brain, which Edelman
calls "self", carried out not by the cortex but by the
hippocampus, amygdala, and septum: "C(I) is the neural basis
for categorization of I, the interoceptive input -- autonomic,
hypothalamic, endocrine. It is evolutionarily earlier, driven by
inner events, mediated by limbic and brain-stem circuits coupled
to biochemical circuits, and it shows slow phasic activity" (RP
p. 156). Third, these two types of categorizations both feed the
delay loop through the hippocampus, the convolved and sequenced
output of which returns to the cortex to be stored as conceptual
categorizations: "C(W)*C(I) represents the neural basis of
interaction and comparison of two categorical systems that
occurs, for example, at the hippocampus, septum, and cingulate
gyri. C[C(W)*C(I)] is the neural basis of conceptual
recategorization of this comparison, which takes place in the
cingulate gyri, temporal lobes, and parietal and frontal cortex"
(RP p. 156). These conceptual categorizations, based on both
value and perception, are stored by the cortex in what Edelman
calls the "value-category memory": "Unlike the system of
perceptual categorization, this conceptual memory system is able
to **categorize responses in the different brain systems** that
carry out perceptual categorization and it does this according to
the demands of limbic-brain stem value systems. This
'value-category' memory allows conceptual responses to occur in
terms of the **mutual** interactions of the thalamocortical and
limbic-brain stem systems" (BABF p. 119).
The final element in the primary consciousness model is what
Edelman terms the "key reentrant loop" (RP p. 162) which connects
cortical areas associated with value-category memory to those
areas carrying out current perceptual categorization: "The key
circuits underlying primary consciousness contain particular
reentrant pathways connecting this self-nonself memory system to
primary and secondary repertoires carrying out perceptual
categorization in all modalities -- smell and taste, sight,
hearing, touch, proprioception... [In other words], the various
memory repertoires dedicated to storage of the categorization of
**past** matches of value to perceptual category are reentrantly
connected to mapped classification couples dealing with
**current** sensory input and motor response. By such means, the
past correlations of category with value are now themselves
interactive in real time with current perceptual categorizations
**before** they are altered by the value-dependent portions of
the nervous system. A kind of bootstrapping occurs in which
current value-free perceptual categorization interacts with
value-dominated memory before further contributing to alteration
of that memory" (RP p. 97). "[T]his 'bootstrapping process' takes
place in all sensory modalities in parallel and simultaneously,
thus allowing for the construction of a complex scene. The
coherence of this scene is coordinated by the conceptual
value-category memory even if the individual perceptual
categorization events that contribute to it are causally
independent" (BABF p. 119).
Once established by the concurrent developmental selection of
brain and body morphologies and experiential selection of the
population of potential activation patterns of the secondary
repertoire, primary consciousness is a self-perpetuating process.
A somewhat misleadingly sequential description of what is
actually interactive and overlapping activity would be that the
process of primary consciousness continually answers the
following implicit questions: while (conscious) {1. What's out
there now? 2. How do I feel about it? 3. What shall I do about
it?} These questions are jointly answered by the activation of a
particular set of global mappings and the conceptual categories
(including value-category concepts) that link them, out of all
the possibilities in the secondary repertoire, within a couple of
hundred milliseconds after the initial sampling of sensory data
(UoC pp. 27, 150-151). The behavior constituting the answer to
question 3 has consequences which may be beneficial (pleasant) or
harmful (painful) to the organism (as determined by the values
resulting from the evolutionary history of the organism's
ancestors). These consequences will modify future answers to
questions 2 and 3, in future circumstances in which there is a
similar answer to question 1. "In an animal with primary
consciousness... [t]he reward or threat of a scene consisting of
both causally connected and causally unrelated events is assessed
in terms of past experience of that individual animal, and that
scene drives behavior... The number of brain areas contributing
to the... global mappings that are simultaneously engaged is
large, fluctuating, and subject to various linkages. Still, what
appears to be a large number of circuits and cells that are
actually engaged at any one time is only a small fraction of the
number of combinations that are possible in the repertoire of a
selectional brain" (UoC p. 205). Consciousness continues as long
as the neuronal groups in the thalamocortical system are capable
of assuming the functionally integrated yet highly differentiated
states characteristic of global mappings.
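The implicit question loop above might be caricatured in code as
follows; the two-item world, the approach/avoid response repertoire,
and the running value estimate are hypothetical stand-ins meant only
to show how the consequences of an action feed back into future
answers to questions 2 and 3:

    # Hedged sketch of the while-(conscious) loop (all specifics are invented).
    import random

    value_memory = {}   # answer to question 1 -> learned answer to question 2

    def feel(scene):                                 # 2. How do I feel about it?
        return value_memory.get(scene, 0.0)

    def act(scene, feeling):                         # 3. What shall I do about it?
        return "approach" if feeling >= 0 else "avoid"

    def consequence(scene, action):
        # Stand-in for the world: approaching "food" is beneficial, approaching
        # a "predator" is harmful, avoidance is neutral.
        if action == "approach":
            return 1.0 if scene == "food" else -1.0
        return 0.0

    for _ in range(100):                             # while (conscious) { ... }
        scene = random.choice(["food", "predator"])  # 1. What's out there now?
        feeling = feel(scene)
        action = act(scene, feeling)
        outcome = consequence(scene, action)
        # Consequences modify future answers to 2 and 3 for similar scenes.
        value_memory[scene] = 0.9 * feeling + 0.1 * outcome
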
When this differentiated activity is lost, as during an epileptic
seizure or slow-wave sleep, unconsciousness results: "During an
epileptic seizure, within a short time the discharge of neurons
in much of the cerebral cortex and thalamus becomes
hypersynchronous; that is, most neurons discharge at high rates
and almost simultaneously... The loss of consciousness during a
seizure is therefore associated with a dramatic reduction in the
complexity of the diverse repertoire of neural states that are
normally available" (UoC pp. 70-71). "The stereotyped
'burst-pause' mode of activity during [slow-wave] sleep affects a
large number of neurons dispersed over the entire brain.
Furthermore, the slow, oscillatory firing of such distributed
populations of neurons is highly synchronized globally, in sharp
contrast with waking, when groups of neurons dynamically assemble
and reassemble into continuously changing patterns of firing...
While the patterns of neural firing are remarkably diverse and
the repertoire of available neural states is large during waking,
the repertoire of available neural states is greatly reduced
during slow-wave sleep. Corresponding to this dramatic reduction
in the number of differentiated neural states, consciousness is
diminished or lost, just as it is in generalized epileptic
discharges. Thus, it appears that consciousness requires not
just neural activity, but neural activity that changes
continually and is thus spatially and temporally differentiated.
If most groups of neurons in the cortex discharge synchronously,
functional discriminations among them are obliterated, brain
states become extremely homogeneous, and with the shrinking
repertoire of brain states that are available for selection,
consciousness itself is lost" (UoC pp. 72-74).
Because consciousness is the result of widely distributed neural
activity throughout the cortex, local damage to the cortex may
result in focused impairments in performance, but seldom in
unconsciousness: "Despite occasional claims to the contrary, it
has never been conclusively shown that a lesion of a restricted
portion of the cerebral cortex leads to unconsciousness... The
only localized brain lesions that result in loss of consciousness
typically affect the so-called reticular activating system. This
highly heterogeneous system, which is located in the
evolutionarily older part of the brain -- upper portions of the
brainstem (upper pons and mesencephalon) and extends into the
posterior hypothalamus, the so-called thalamic intralaminar and
reticular nuclei, and the basal forebrain -- projects diffusely
to the thalamus and cortex. It is thought to 'energize' or
'activate' the thalamocortical system and facilitate interactions
among distant cortical regions... During wakefulness, when this
system is active, thalamocortical neurons are depolarized, fire
in a tonic or continuous way, and respond well to incoming
stimuli. During dreamless sleep, this system becomes less active
or inactive; thalamocortical neurons become hyperpolarized, fire
in repetitive bursts and pauses, and are hardly responsive to
incoming stimuli. Moreover, if this system is lesioned, all
consciousness is lost, and a person enters a state of coma" (UoC
p. 54).
Edelman goes on to consider the bases of language and
higher-order consciousness in human beings (UoC Chap. 15; RP
Chap. 10, 11; BABF Chap. 12), but I will stop here with the
completion of his model of primary consciousness. Primary and
higher-order consciousness are given schematic diagrams in RP
p. 96 (Fig. 5.1 A and B); BABF pp. 120 (Fig. 11-1), 132
(Fig. 12-4); the same diagrams are repeated in UoC pp. 108
(Fig. 9.1), 194 (Fig. 15.1) but with the captions reversed!  Most
roboticists would be quite happy, and would consider it quite
useful, to be able to construct artifacts exhibiting what Edelman
calls primary consciousness. Edelman believes that, in the
biological realm, this capability is quite old: "Which animals
have consciousness? ... Going backward from the human referent,
we may be reasonably sure... that chimpanzees have it. In all
likelihood, most mammals and some birds may have it... If the
brain systems required by the present model represent the
**only** evolutionary path to primary consciousness, we can be
fairly sure that animals without a cortex or its equivalent lack
it. An amusing speculation is that cold-blooded animals with
primitive cortices would face severe restrictions on primary
consciousness because their value systems and value-category
memory lack a stable enough biochemical milieu in which to make
appropriate linkages to a system that could sustain such
consciousness. So snakes are in (dubiously, depending on the
temperature), but lobsters are out. If further study bears out
this surmise, consciousness is about 300 million years old" (BABF
pp. 122-123).
Edelman and his colleagues have constructed a number of computer
simulations which are based on the principles of the "Theory of
Neuronal Group Selection" (TNGS), and which function quite
differently, Edelman says, from the computer programs, robotic
devices, and neural networks produced heretofore by roboticists
and the artificial intelligence community. Edelman claims that
his own artifacts exhibit behavior resulting from the spontaneous
emergence within the simulation of perceptual categorization,
through selection of synaptic weights and network firing patterns
constrained by value: "I have called the study of such devices
**noetics** from the Greek **noein**, to perceive. Unlike
cybernetic devices that are able to adapt within fixed design
constraints, and unlike robotic devices that use cybernetic
principles under programmed control, noetic devices act on their
environment by selectional means and by categorization on value"
(BABF p. 192). In such artifacts, "[c]ertain specialized
networks... reflect the relative adaptive value to the automaton
of the effects of its various motor actions and sensory
experience... Selective amplification of synapses [is] made to
depend on adaptive value as registered by such internal
structures... External or explicit **categorical** criteria for
amplification (such as those a programmer might provide) are
**not** permitted, however" (RP pp. 59-62).
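A minimal sketch of what value-constrained amplification might look
like, assuming a simple eligibility trace over recently coactive
synapses and a one-bit internal value signal (both of my own
choosing; the amplification rules Edelman actually uses are more
elaborate):

    # Toy value-dependent synaptic amplification (illustrative assumptions only).
    import numpy as np

    rng = np.random.default_rng(1)
    weights = rng.uniform(0.0, 1.0, size=(16, 16))   # a small repertoire of synapses

    def amplify(weights, eligibility, pre, post, value, decay=0.9, lr=0.05):
        # Synapses active in the recent past are tagged by the eligibility trace
        # and strengthened only in proportion to the internal value signal; no
        # external category labels enter the update.
        eligibility = decay * eligibility + np.outer(post, pre)
        return weights + lr * value * eligibility, eligibility

    eligibility = np.zeros_like(weights)
    for t in range(200):
        pre = (rng.random(16) < 0.2).astype(float)    # sparse input firing
        post = (weights @ pre > 0.2 * weights.sum(axis=1)).astype(float)
        value = float(post.sum() > 4)   # crude internal value: "more response is better"
        weights, eligibility = amplify(weights, eligibility, pre, post, value)
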
Among those "noetic" devices described in the books are models of
the visual cortex (RP pp. 72-90; UoC pp. 114-120) and a series of
recognition automata named "Darwin" (Darwin II: ND Chap. 10;
Darwin III: RP pp. 57-63, BABF pp. 91-94; Darwin IV: UoC
pp. 90-91). Darwin IV seems to be the same device described as
"under construction" in BABF and there called NOMAD: "Neurally
Organized Multiply Adaptive Device" (BABF pp. 192-193). These
models were built to answer the question "Can a prewired network
or congeries of networks based on selective principles and
reentry respond stably and adaptively to structural inputs to
yield pattern recognition, categorization, and association
without prior instructions, explicit semantic rules, or forced
learning?" despite the fact that "while the three major premises
of the theory of neuronal group selection (developmental
selection leading to a primary repertoire, synaptic selection to
yield a secondary repertoire, and reentry) can all be stated
reasonably simply, their actual operation in interacting
nonlinear networks is highly complex" (ND p. 271). "Unlike the
standard artificial intelligence paradigm, selective recognition
automata avoid preestablished categories and programmed
instructions for behavior... Although programming is used to
instruct a computer how to simulate the neuronal units in a
recognition automaton, **the actual function of these units is
not itself programmed**" (RP p. 58).
Darwin III, for example, "has a four-jointed arm with touch
receptors on the part of its arm distal to the last joint,
kinesthetic neurons in its joints, and a movable eye... Although
it sits still, it can move its eye and arm in any pattern
possible within the bounds imposed by its mechanical arrangement.
Objects in a world of randomly chosen shapes move at random past
its field of vision and occasionally within reach of its arm and
touch" (BABF p. 191). "After a suitable period of experience
with various moving stimuli, the eye of Darwin III in fact begins
to make the appropriate saccades and fine tracking movements with
no further specification of its task other than that implicit in
the value scheme" (RP p. 62).  "Values are arbitrary: in a given
example of Darwin III, they have a specific structure and
correspond to various kinds of evolutionarily determined
characteristics that contribute to phenotypic fitness. Such
low-level values are, for example, 'Seeing is better than not
seeing' -- translated as 'Increase the probability that, when the
retina and its visual networks are stimulated, those synapses
that were active in the recent past (and thus potentially
involved in the behavior that brought about any increased
stimulation) will change in strength'" (RP p. 59).  "In Darwin
III, pairs of higher-order networks form classification
couples... [O]ne network... responds to local visual features...
The other responds to kinesthetic features as the last joint of
the arm traces contours by light touch (since the low-level value
for touch is 'More pressure is better than less pressure' this
joint will tend to seek edges). The visual and kinesthetic
repertoires are reentrantly connected. This reentry allows
correlation of the responses of these different repertoires that
have disjunctively sampled visual and kinesthetic signals, and it
yields a primitive form of categorization" (RP p. 62). "No two
versions of Darwin III so constituted show identical behavior
but, provided their low-level values are similar, their behavior
tends to converge in terms of particular kinds of categorization
upon a given value. Most strikingly, however, if the
**lower-level values** (expressed as biases acting upon synaptic
changes) are removed from the simulation, these automata show no
convergent behavior" (RP p. 63).
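A toy sketch of a classification couple, assuming two small linear
maps and a Hebbian-style reentrant update (my stand-ins, not the
actual Darwin III circuitry):

    # Toy classification couple with reentry (illustrative only).
    import numpy as np

    rng = np.random.default_rng(2)
    visual_map = rng.normal(size=(8, 12))        # responds to local visual features
    kinesthetic_map = rng.normal(size=(8, 6))    # responds to traced-contour features
    reentrant = np.zeros((8, 8))                 # reentrant connections between the maps

    def couple_step(reentrant, visual_input, kinesthetic_input, lr=0.1):
        v = np.tanh(visual_map @ visual_input)            # one map samples vision
        k = np.tanh(kinesthetic_map @ kinesthetic_input)  # the other samples touch
        # Reentry: correlated responses of the disjunctively sampling maps
        # strengthen their mutual connections; the joint pattern is the "category".
        reentrant = reentrant + lr * np.outer(v, k)
        return reentrant, reentrant @ k

    # Repeated presentations of the same object, seen and traced, build up a
    # joint response pattern specific to that object.
    obj_vis, obj_kin = rng.normal(size=12), rng.normal(size=6)
    for _ in range(20):
        reentrant, category = couple_step(reentrant, obj_vis, obj_kin)
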
The preservation of "realism" in these models by strict adherence
to the TNGS and avoidance of "shortcuts" to categorization and
behavior exacts a high price in complexity: "There is a dilemma
in modeling the degree of complexity underlying the function of
higher neural networks. On the one hand, any representation in a
machine must be very limited as compared with real neural
networks. On the other hand, the internal design of even a
highly simplified and minimal model of a classification
couple... must be highly complex as compared with computer logic.
This is so because of the minimal size requirements on
repertoires, the parallelism of classification couples, the
nonlinearity of network behavior, and the deliberate avoidance
of semantic or instructional components in the design of the
machine" (ND p. 272). "While Darwin... is not a model of an
actual nervous system, it does set out to approach one of the
problems that evolution had to solve in forming complex nervous
systems -- the need to form categories in a bottom-up manner from
structures in the environment. Five key features of the model
make this possible: (1) Darwin... incorporates selective networks
whose initial specificities enable them to respond without
instruction to unfamiliar stimuli; (2) degeneracy provides
multiple possibilities of response to any stimulus, at the same
time providing functional redundancy against component failure;
(3) the output of Darwin... is a **pattern** of response, making
use of the simultaneous responses of multiple degenerate groups
to avoid the need for very high specificity and the combinatorial
disaster that this would imply; (4) reentry within individual
networks vitiates the limitations described by Minsky and Papert
(1969) for a class of perceptual automata (perceptrons) lacking
such connections; and (5) reentry between communicating networks
with different functions gives rise to new functions, such as
association, that each network alone could not carry out.
Neither the representative transformations nor the limited
generalizations performed by Darwin... require the direct
intervention of a classifying observer, either through
instruction or through forced learning" (ND pp. 288-289).
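Features (2) and (3) in this list, degeneracy and pattern-coded
output, can be illustrated in a few lines; the group count and
threshold below are arbitrary choices of mine:

    # Degenerate groups and pattern-coded responses (illustrative only).
    import numpy as np

    rng = np.random.default_rng(3)
    groups = rng.normal(size=(200, 16))   # many structurally different groups

    def respond(stimulus, threshold=4.0):
        # Several nonidentical groups respond to the same stimulus, and the
        # output is the pattern over all of them, not a single best match.
        return (groups @ stimulus) > threshold

    stimulus = rng.normal(size=16)
    pattern = respond(stimulus)
    print(int(pattern.sum()), "of", len(groups), "groups join the response pattern")
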
In the design of such bottom-up categorization devices, one
element that cannot be avoided is the necessity to choose a set
of initial value constraints: "Value is a sign of **nested**
selective systems -- a result of **natural selection** that
yields alterations of the phenotype that can then serve as
constraints on the **somatic selection** occurring in an
individual's nervous system. Unlike evolution, somatic selection
can deal with contingencies of immediate environments that are
rich and unpredictable -- even ones that have never occurred
before -- by enabling an individual animal to categorize critical
features of these environments during short periods. But we
again emphasize that neuronal group selection can consistently
accomplish this categorization only under the constraint of
inherited values determined by evolution. The nest of systems is
a beautiful one, guaranteeing survival for each species in terms
of what may be called necessary prejudice -- one required for
survival under behavioral control by a selectional brain" (UoC
p. 92).
At many points in these books, Edelman stresses his belief that
the analogy which has repeatedly been drawn during the past fifty
years between digital computers and the human brain is a false
one (BABF p. 218), stemming largely from "confusions concerning
what can be assumed about how the brain works without bothering
to study how it is physically put together" (BABF p. 227). The
lavish, almost profligate, morphology exhibited by the multiple
levels of degeneracy in the brain is in stark contrast to the
parsimony and specificity of present-day human-made artifacts,
composed of parts whose variability is deliberately minimized and
which are chosen from a relatively limited number of categories
of almost identical units.
Statistical variability among (say) electronic components occurs,
but it's usually merely a nuisance that must be accommodated,
rather than an opportunity that can be exploited as a fundamental
organizational principle, as Edelman claims for the brain. In
human-built computers, "the small deviations in physical
parameters that do occur (noise levels, for example) are ignored
by agreement and design" (BABF p. 225). "The analogy between the
mind and a computer fails for many reasons. The brain is
constructed by principles that ensure diversity and degeneracy.
Unlike a computer, it has no replicative memory. It is
historical and value driven. It forms categories by internal
criteria and by constraints acting at many scales, not by means
of a syntactically constructed program. The world with which the
brain interacts is not unequivocally made up of classical
categories" (BABF p. 152).
This contrast between the role of stochastic variation in the
brain and the absence of such a role in electronic devices such
as computers is one of the distinctions between what Edelman
calls "instructionism" (RP p. 30), which has also been called
"functionalism" or "machine functionalism" (RP p. 30; BABF
p. 220), and what he calls "selectionism" (UoC p. 16; RP
pp. 30-33).  Up to the present, all human artifacts and machines
(including computers and computer programs) have been based on
functionalist or instructionist design principles. In these
devices, the parts and their interactions are precisely specified
by a designer, and precisely matched to expected inputs and
outputs. This is a construction approach based on cost
consciousness, parsimonious allocation of materials, and limited
levels of manageable complexity in design and manufacture. The
workings of such artifacts are "held to be describable in a
fashion similar to that used for algorithms".
By analogy to the hardware-independence of computer programs,
functionalist models of neural "algorithms" underlying cognition
and behavior have attempted to separate these functions from
their physical instantiation in the brain: "In the functionalist
view, what is ultimately important for understanding psychology
are the algorithms, not the hardware on which they are
executed... Furthermore, the tissue organization and composition
of the brain shouldn't concern us as long as the algorithm 'runs'
or comes to a successful halt" (BABF p. 220).  In Edelman's
view, the capabilities of the human brain are much more
intimately dependent on its morphology than the functionalist
view admits, and any attempt to minimize the contribution of the
brain's biological substrate by assuming functional equivalence
with the sort of impoverished and rigid substrates characteristic
of modern-day computers is bound to be misleading.
On the other hand, "selectionism", according to Edelman, is
quintessentially characteristic of biological systems (such as
the brain), whose fine-grained structure (not yet achievable by
human manufacturing processes, but imagined in speculations about
molecular electronics, nanotechnology, and the like) permits
luxuriantly large populations of statistically-varying components
to vie in Darwinian competition based on their ability to
colonize available functional niches created by the growth of a
living organism and its ongoing interaction with the external
world. The fine-grained variation in functional repertoires
matches the fine-grained variation in the world itself: "the
nature of the physical world itself imposes commonalities as well
as some very stringent requirements on any representation of that
world by conscious beings... [W]hatever the mental representation
of the world is at any one time, there are almost always very
large numbers of additional signals linked to any chunk of the
world... [S]uch properties are inconsistent with a fundamental
**symbolic** representation of the world considered as an
**initial** neural transform. This is so because a symbolic
representation is **discontinuous** with respect to small changes
in the world..." (RP p. 33).
Edelman's selectionist scenarios are highly dynamic, both in
terms of events within the brain and in terms of the interaction
of the organism with its environment: "In the creation of a
neural construct, motion plays a pivotal role in selectional
events both in primary and in secondary repertoire development.
The morphogenetic conditions for establishing primary repertoires
(modulation and regulation of cell motion and process extension
under regulatory constraint to give constancy and variation in
neural circuits) have a counterpart in the requirement for
organismic motion during early perceptual categorization and
learning." (ND p. 320). "Selective systems... involve **two
different domains of stochastic variation** (world and neural
repertoires). The domains map onto each other in an individual
**historical** manner... Neural systems capable of this mapping
can deal with novelty and generalize upon the results of
categorization. Because they do not depend upon specific
programming, they are self-organizing and do not invoke
homunculi. Unlike functionalist systems, they can take account
of an open-ended environment" (RP p. 31).
A human-designed computer or computer program operates upon input
which has been coded by, or has had a priori meaning assigned by,
human beings: "For ordinary computers, we have little difficulty
accepting the functionalist position because the only meaning of
the symbols on the tape and the states in the processor is **the
meaning assigned to them by a human programmer**. There is no
ambiguity in the interpretation of physical states as symbols
because the symbols are represented digitally according to rules
in a syntax. The system is **designed** to jump quickly between
defined states and to avoid transition regions between them..."
(BABF p. 225). It functions according to a set of deterministic
algorithms ("effective procedures" [UoC p. 214]) and produces
outputs whose significance must, once again, be interpreted by
human beings.
A similar "instructionist" theory of the brain, based on logical
manipulation of coded inputs and outputs, cannot escape the
embarrassing necessity to posit a "homunculus" to assign and
interpret the input and output codes (BABF pp. 79, 80 [Fig. 8-2],
8). In contrast, a "selectionist" theory of the brain based on
competition among a degenerate set of "effective structures" (UoC
p. 214) can escape this awkwardness, with perceptual categories
of evolutionary significance to the organism spontaneously
emerging from the ongoing loop of sensory sampling continuously
modified by movement that is characteristic of an embodied brain
(UoC pp. 81, 214; ND pp. 20, 37; RP p. 532). It's clear that
Edelman, in formulating the TNGS (UoC Chap. 7; see also ND
Chap. 3; RP Chap. 3, p. 242; BABF Chap. 9), has generalized to the
nervous system the insights he gained from his earlier work in
immunology, which also relies on fortuitous matching by a
biological recognition system (BABF Chap. 8) between a novel
antigen and one of a large repertoire of variant
proto-antibodies, with the resulting selection being
differentially amplified to produce the organism's immune
response (BABF p. 76 [Fig. 8-2]).
Despite his dismissive attitude toward traditional "top-down",
symbolic approaches to artificial intelligence, and toward the sorts
of neural-network models in which specific roles are assigned to
input and output neurons by the network designer, Edelman does
not deny the possibility that conscious artifacts can be
constructed (BABF Chap. 19): "I have said that the brain is not a
computer and that the world is not so unequivocally specified
that it could act as a set of instructions. Yet computers can be
used to **simulate** parts of brains and even to help build
perception machines based on selection rather than instruction...
A system undergoing selection has two parts: the animal or organ,
and the environment or world... No instructions come from events
of the world to the system on which selection occurs, [and]
events occurring in an environment or world are unpredictable...
[W]e simulate events and their effects... as follows: 1. Simulate
the organ or animal... making provision for the fact that, as a
selective system, it contains a generator of diversity --
mutation, alterations in neural wiring, or synaptic changes that
are unpredictable. 2. Independently simulate a world or
environment constrained by known physical principles, but allow
for the occurrence of unpredictable events. 3. Let the simulated
organ or animal interact with the simulated world or the real
world without prior information transfer, so that selection can
take place. 4. See what happens... Variational conditions are
placed in the simulation by a technique called a pseudo-random
number generator... [I]f we wanted to capture randomness
absolutely, we could hook up a radioactive source emitting alpha
particles, for example, to a counter that would **then** be
hooked up to the computer" (BABF p. 190).
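Read literally, these four steps invite something like the following
sketch; every concrete choice in it (the weight perturbations, the
world statistics, the amplification rule) is an illustrative
assumption of mine, not a description of NOMAD or of Darwin IV:

    # Toy rendering of the four-step selectionist recipe (illustrative only).
    import random

    def make_creature(n=8):
        # Step 1: simulate the organism, with a built-in generator of diversity
        # (here, randomly perturbed "synaptic" weights).
        return [random.gauss(0.0, 1.0) for _ in range(n)]

    def world_event(n=8):
        # Step 2: independently simulate a world whose events are unpredictable.
        return [random.gauss(0.0, 1.0) for _ in range(n)]

    def interact(creature, event):
        # Step 3: no prior information transfer; responses that happen to match
        # the event are differentially amplified (selection, not instruction).
        response = sum(w * x for w, x in zip(creature, event))
        if response > 0:
            return [w + 0.1 * x for w, x in zip(creature, event)]
        return creature

    # The pseudo-random number generator supplies the "variational conditions"
    # the passage describes; a physical noise source could replace it.
    random.seed()

    creature = make_creature()
    for _ in range(1000):          # Step 4: see what happens.
        creature = interact(creature, world_event())
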