A Spring-Powered Theory of Consciousness (4 of 6)

From: Jim Fehlinger (fehlinger@home.com)
Date: Mon Jun 19 2000 - 21:57:49 MDT


Given that today's electronic technology is still one of relative
scarcity (in terms of the economic limits on complexity),
constructing a device possessing primary consciousness, using the
principles of the TNGS, may not currently be feasible: "In
principle there is no reason why one could not by selective
principles simulate a brain that has primary consciousness,
provided that the simulation has the appropriate parts.
But... no one has yet been able to simulate a brain system
capable of concepts and thus of the **reconstruction** of
portions of global mappings... Add that one needs multiple
sensory modalities, sophisticated motor appendages, and a lot of
simulated neurons, and it is not at all clear whether
presently-available supercomputers and their memories are up to
the task" (BABF pp. 193-194).

In a biological system, much of the physical complexity needed to
support primary consciousness is inherent in the morphology of
biological cells, tissues, and organs, and it isn't clear that
this morphology can be easily dismissed: "[Are] artifacts
designed to have primary consciousness... **necessarily**
confined to carbon chemistry and, more specifically, to
biochemistry (the organic chemical or chauvinist position)[?]
The provisional answer is that, while we cannot completely
dismiss a particular material basis for consciousness in the
liberal fashion of functionalism, it is probable that there will
be severe (but not unique) constraints on the design of any
artifact that is supposed to acquire conscious behavior. Such
constraints are likely to exist because there is every indication
that an intricate, stochastically variant anatomy and synaptic
chemistry underlie brain function and because consciousness is
definitely a process based on an immensely intricate and unusual
morphology" (RP pp. 32-33). Perhaps the kinds of advances
projected for the coming decades by such writers as Ray Kurzweil,
who predicts, based on a generalized Moore's Law, that a new
technological paradigm (based on three-dimensional networks of
carbon nanotubes, or something of the sort) will emerge when
current semiconductor techniques reach their limits in a decade
or two, will ease the current technical and economic limits on
complexity and permit genuinely conscious artifacts to be
constructed according to principles suggested by Edelman.

Edelman seems ambivalent about the desirability of constructing
conscious artifacts: "In principle... there is no reason to
believe that we will not be able to construct such artifacts
someday. Whether we should or not is another matter. The moral
issues are fraught with difficult choices and unpredictable
consequences. We have enough to concern ourselves with in the
human environment to justify suspension of judgment and thought
on the matter of conscious artifacts for a bit. There are more
urgent tasks at hand" (BABF pp. 194-195). On the other hand,
"The results from computers hooked to NOMADs or noetic devices
will, if successful, have enormous practical and social
implications. I do not know how close to realization this kind
of thing is, but I do know, as usual in science, that we are in
for some surprises" (BABF p. 196).

Meanwhile, there is also the question of whether shortcuts can be
taken to permit the high-level, linguistically-based logical and
symbolic behavior of human beings to be "grafted" onto
present-day symbol-manipulation machines such as digital
computers, without duplicating all the baggage (as described by
the TNGS) that allowed higher-order consciousness to emerge in
the first place. A negative answer to this question remains
unproven, but despite such recent tours de force as IBM's "Deep
Blue" chess-playing system, Edelman is unpersuaded that
traditional top-down AI will ever be able to produce
general-purpose machines able to deal intelligently with the
messiness and unpredictability of the world, while at the same
time avoiding a correspondingly complex (and expensive) messiness
in their own innards. Edelman cites three maxims that summarize
his position in this regard: 1. "Being comes first, describing
second... [N]ot only is it impossible to generate being by mere
describing, but, in the proper order of things, being precedes
describing both ontologically and chronologically"
2. "Doing... precedes understanding... [A]nimals can solve
problems that they certainly do not understand logically... [W]e
[humans] choose the right strategy before we understand why...
[W]e use a [grammatical] rule before we understand what it is;
and, finally... we learn how to speak before we know anything
about syntax" 3. "Selectionism precedes logic." "Logic is... a
human activity of great power and subtlety... [but] [l]ogic is
not necessary for the emergence of animal bodies and brains, as
it obviously is to the construction and operation of a
computer... [S]electionist principles apply to brains
and... logical ones are learned later by individuals with brains"
(UoC pp. 15-16).

Edelman speculates that the pattern-recognition capabilities
granted to living brains by the processes of phylogenetic and
somatic selection may exceed those of logic-based Turing
machines: "Clearly, if the brain evolved in such a fashion, and
this evolution provided the biological basis for the eventual
discovery and refinement of logical systems in human cultures,
then we may conclude that, in the generative sense, selection is
more powerful than logic. It is selection -- natural and somatic
-- that gave rise to language and to metaphor, and it is
selection, not logic, that underlies pattern recognition and
thinking in metaphorical terms. Thought is thus ultimately based
on our bodily interactions and structure, and its powers are
therefore limited in some degree. Our capacity for pattern
recognition may nevertheless exceed the power to prove
propositions by logical means... This realization does not, of
course, imply that selection can take the place of logic, nor
does it deny the enormous power of logical operations. In the
realm of either organisms or of the synthetic artifacts that we
may someday build, we conjecture that there are only two
fundamental kinds -- Turing machines and selectional systems.
Inasmuch as the latter preceded the emergence of the former in
evolution, we conclude that selection is biologically the more
fundamental process. In any case, the interesting conjecture is
that there appear to be only two deeply fundamental ways of
patterning thought: selectionism and logic. It would be a
momentous occasion in the history of philosophy if a third way
were found or demonstrated" (UoC p. 214).

Edelman's latest book (_A Universe of Consciousness_ [UoC],
coauthored with his colleague from the Neurosciences Institute,
Giulio Tononi) continues to be based on the Theory of Neuronal
Group Selection as developed by Edelman in his earlier books, but
contains some new ideas and a shift in nomenclature reflecting a
more information-theoretic and abstract point of view. To
introduce these new ideas, Edelman and Tononi give equations (in
UoC Chap. 10 and Chap. 11) for some mathematically-defined
quantities based on the notion of the statistical entropy of a
system, "a (logarithmic) function reflecting the number of
possible patterns of activity that the system can take, weighted
by their probability of occurrence" (UoC p. 121). Using
statistical entropy as a basis, Edelman and Tononi go on to
define: 1. the **integration** of a system, a measure of "the
loss of entropy that is due to the interactions among its
elements" due to the fact that "if there are any interactions
within the system, the number of states that the system can take
will be **less** than would be expected from the number of states
that its separate elements can take" (UoC p. 121); 2. the
**mutual information** among subsets of a system, measuring "the
total amount of statistical dependence (loss of entropy) between
any chosen subset of elements and the rest of a system"; 3. an
**index of functional clustering**, measuring "the relative
strength of the interactions within a subset of elements compared
to the interactions between that subset and the rest of the
system", i.e., the integration of the subset divided by the
mutual information between the subset and the rest of the system
(UoC pp. 122-123); and finally 4. **neural complexity**, the
"averag[e of] the mutual information between each subset of a
neural system and the rest of the system for all possible
bipartitions of the system" (UoC p. 130). "[T]he value of the
average mutual information will be high if, on average, each
subset can take on many different states **and** these states
make a difference to the rest of the system" (UoC p. 130).
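
To make these definitions concrete, here is a rough Python sketch (my
own construction, not anything from UoC; Edelman and Tononi evaluate
these measures for linear systems using covariance matrices, whereas
this toy simply counts the states of binary units). It estimates the
four quantities for a small system of binary "neuronal groups" whose
joint activity is supplied as a list of samples:

  # Rough estimates (not from UoC) of the quantities defined above, for a
  # toy system of binary "neuronal groups". Joint activity is supplied as
  # a list of samples, each a sequence of 0s and 1s, one entry per group.
  from itertools import combinations
  from collections import Counter
  from math import log2

  def entropy(samples, subset):
      """Joint entropy, in bits, of the groups indexed by `subset`."""
      counts = Counter(tuple(s[i] for i in subset) for s in samples)
      n = len(samples)
      return -sum((c / n) * log2(c / n) for c in counts.values())

  def integration(samples, subset):
      """I(X): sum of individual entropies minus the joint entropy."""
      return sum(entropy(samples, (i,)) for i in subset) - entropy(samples, subset)

  def mutual_information(samples, subset):
      """MI between `subset` and the rest of the system."""
      everything = tuple(range(len(samples[0])))
      rest = tuple(i for i in everything if i not in subset)
      return (entropy(samples, subset) + entropy(samples, rest)
              - entropy(samples, everything))

  def cluster_index(samples, subset):
      """CI: integration within `subset` divided by its MI with the rest."""
      mi = mutual_information(samples, subset)
      return float("inf") if mi == 0 else integration(samples, subset) / mi

  def neural_complexity(samples):
      """C_N: mutual information averaged over all bipartitions. Every
      bipartition is visited twice (once from each side), but since MI is
      symmetric the average is unaffected. Feasible only for tiny systems."""
      n = len(samples[0])
      subsets = [s for k in range(1, n) for s in combinations(range(n), k)]
      return sum(mutual_information(samples, s) for s in subsets) / len(subsets)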

A high value of neural complexity means that a system is both
highly differentiated (consists of a large number of
functionally-specialized subunits) and highly integrated (the
activities of the subunits have a significant effect on the whole
system): "[H]igh values of complexity correspond to an optimal
synthesis of functional specialization and functional integration
within a system. This is clearly the case with systems like the
brain -- different areas and groups of neurons do different
things (they are differentiated); at the same time they interact
to give rise to a unified, conscious scene (they are integrated).
By contrast, systems whose individual elements are either not
integrated (such as a gas) or not specialized (like a homogeneous
crystal) will have minimal complexity" (UoC pp. 130-131).
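
Applying the sketch above to three toy regimes (again an illustration
of my own, not the simulations described in UoC) makes the point about
the two extremes:

  # Toy illustration of the gas / crystal contrast, using the sketch above.
  # These are stand-ins of my own, not the simulations reported in UoC.
  import random
  random.seed(0)

  N_GROUPS, N_SAMPLES = 6, 20000

  # "Gas": every group fires independently; high entropy, no integration.
  gas = [[random.randint(0, 1) for _ in range(N_GROUPS)]
         for _ in range(N_SAMPLES)]

  # "Crystal": all groups fire in lockstep; integrated but undifferentiated.
  crystal = [[random.randint(0, 1)] * N_GROUPS for _ in range(N_SAMPLES)]

  # Differentiated and (locally) integrated: three specialized pairs, each
  # internally coherent but carrying different information from the others.
  def specialized():
      a, b, c = (random.randint(0, 1) for _ in range(3))
      return [a, a, b, b, c, c]
  paired = [specialized() for _ in range(N_SAMPLES)]

  for name, data in (("gas", gas), ("crystal", crystal), ("paired", paired)):
      print(name, round(neural_complexity(data), 2))
  # Typical result: gas near 0.0 (no subset tells you anything about the
  # rest), crystal near 1.0 (every subset predicts the rest but can take
  # only two states), paired near 1.5, the highest of the three.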

Edelman and Tononi have introduced the notions of neural
complexity and mutual information deliberately to avoid problems
associated with the application of traditional information theory
to the brain: "[A] number of applications of information theory
in biology have been fraught with problems and have had a
notoriously controversial history. This is the case largely
because at the heart of information theory as originally
formulated lies the sensible notion of an external, intelligent
observer who encodes messages using an alphabet of symbols.
So-called information-processing views of the brain, however,
have been severely criticized because they typically assume the
existence in the world of previously **defined** information
(begging the question of what information is) and often assume
the existence of precise neural codes for which there is no
evidence... The standard approach would be to measure
information by the number and probability of the states of the
system that are discriminable from the point of view of an
external observer. To avoid the fallacy of assuming a
'homunculus' watching the brain and interpreting its activity
patterns from the outside, however, we must get rid of the
privileged viewpoint of an external observer. In other words,
differences between activity patterns should be assessed only
with reference to the system itself... A noisy TV screen, for
example, goes through a large number of 'activity patterns' that
may look different to an external observer, but the TV, by
itself, cannot tell the difference among them; they make no
difference to it. Since there is no homunculus watching the
enchanted loom or TV screen of the brain, the only activity
patterns that matter are those that make a difference to the
brain itself" (UoC pp. 126-127).

Edelman and Tononi define the quantity which they call mutual
information precisely to eliminate the question-begging in the
definition of information, and to obviate the need for an outside
observer: "How can we measure... differences that make a
difference within a system like the brain? A simple approach is
to consider the system as its own 'observer'... all we need to do
is to imagine dividing the system in two and considering how one
part of the system affects the rest of the system, and vice
versa" (UoC pp. 127-128). The metric of neural complexity
further generalizes the notion of mutual information by averaging
it across all such possible bipartitions, and considering the
result as a measure of the overall information content of the
system. "[A] complex brain is like a collection of specialists
who talk to each other a lot" (UoC p. 126).

The variation of complexity with extremes of integration and
differentiation is illustrated (UoC p. 132 [Fig. 11.3]) by a set
of successive frames of bitmaps representing the activity of a
simulated primary visual cortical area, in which the connectivity
in the simulation has been adjusted to correspond to 1. an old,
deteriorated cortex, in which individual neuronal groups are still
active but there has been a loss of inter-group connections; 2. a
young, immature cortex in which each group is uniformly connected
to every other; or 3. a normal adult cortex in which inter-group
connectivity corresponds to that observed experimentally in the
primary visual cortex (neuronal groups with similar orientation
responsivity connected preferentially to each other, connection
strength decreasing with increasing topographic distance). The
pictures for case 1 show a random-dot pattern like snow on a TV
screen, corresponding to high entropy but a lack of functional
integration (the gaseous extreme); the pictures for case 2 show
alternating black and white bands rolling across the frames,
corresponding to high integration but low entropy (the
crystalline extreme) due to hypersynchronous firing like that in
slow-wave sleep or generalized epilepsy; while the pictures for case 3
show continually changing patterned activity, corresponding to
both high (but less than case 1) entropy **and** high (but less
than case 2) functional integration, resulting in maximal
complexity. The patterned activity in a normal cortex can be
more or less complex depending on its level of arousal, since the
firing patterns of the thalamocortical neurons are responsive to
the neuromodulatory effects of the diffusely projecting value
systems. The "tonic" pattern, typical of waking, undergoes a
transition to a burst-pause pattern typical of slow-wave sleep,
corresponding to a dramatic reduction in the activity of
noradrenergic and serotoninergic systems during that sleep stage
(UoC p. 134; see also UoC p. 91).

A further quantity, "complexity matching", is not discussed in
formal detail, but is defined as "the change in neural complexity
that occurs as a result of the encounter with external stimuli"
(UoC p. 137). This quantity reflects the fact that "[f]or a small
value of the **extrinsic** mutual information between a stimulus
and a neural system, there is generally a large change in the
**intrinsic** mutual information among subsets of units within the
neural system... According to this analysis, extrinsic signals
convey information not so much in themselves, but by virtue of
how they modulate the intrinsic signals exchanged within a
previously experienced neural system... [H]igh values of
complexity matching indicate a 'high degree of adjustment of
inner to outer relations'... The same stimulus, say, a Chinese
character, can be meaningful to Chinese speakers and meaningless
to English speakers even if the extrinsic information conveyed to
the retina is the same. Attempts to explain this difference that
are based solely on the processing of a previously coded message
in an information channel beg the question of where this
information comes from. The concept of matching in a selectional
system easily resolves the issue" (UoC pp. 137-138).
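
Written out as a formula (this is just a literal restatement of the
verbal definition quoted above, not an equation taken from the book),
complexity matching would be the neural complexity observed while a
stimulus is present minus the complexity of the intrinsic activity
alone:

    C_M(X; S)  =  C_N(X) [stimulus S present]  -  C_N(X) [no stimulus]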

In Edelman's earlier books, the momentary state of the
thalamocortical system of the brain of an organism exhibiting
primary consciousness was characterized as a linked set of active
global mappings, composed of the widely distributed,
reentrantly-connected set of currently-active neuronal groups.
This set of global mappings, one activation pattern selected out
of all the possibilities of the secondary repertoire, was spoken
of as constantly morphing into its successor in a probabilistic
trajectory influenced both by the continued bombardment of new
exteroceptive input (actively sampled through constant movement)
and by the organism's past history (as reflected by the strengths
of all the synaptic connections within and among the groups of
the primary repertoire). The evolving state of the
thalamocortical system in the conscious brain is given a new
characterization in Edelman's latest book by means of the
"dynamic core hypothesis" (UoC Chap. 12), which is described
using the notions of functional integration and complexity
formalized in the previous two chapters. Edelman and Tononi
define a "dynamic core" as "a cluster of neuronal groups that are
strongly interacting among themselves and that have distinct
functional borders with the rest of the brain at the time scale
of fractions of a second" (UoC p. 144). The term was chosen to
"emphasize both its integration and its constantly changing
composition. A dynamic core is therefore a process, not a thing
or a place... [I]t is, in general, spatially distributed, as
well as changing in composition, and thus cannot be localized to
a single place in the brain" (UoC p. 144).

The two tenets of the dynamic core hypothesis reframe the earlier
picture of consciousness (that of an evolving succession of sets
of active global mappings) as a dynamic core: "1. A group of
neurons can contribute directly to conscious experience only if
it is part of a distributed functional cluster that, through
reentrant interactions in the thalamocortical system, achieves
high integration in hundreds of milliseconds. 2. To sustain
conscious experience, it is essential that this functional
cluster be highly differentiated, as indicated by high values of
complexity" (UoC p. 144). "While we envision that a functional
cluster of sufficiently high complexity can be generated through
reentrant interactions among neuronal groups distributed
particularly within the thalamocortical system and possibly
within other brain regions, such a cluster is neither coextensive
with the entire brain nor restricted to any special subset of
neurons" (UoC p. 144).

The concept of consciousness as a dynamic core, and of the brain
as its own observer, are used by Edelman and Tononi as a
springboard to tackle the difficult problem of qualia (UoC
Chap. 13): "The specific quality, or 'quale' of subjective
experience -- of color, warmth, pain, a loud sound -- has seemed
beyond scientific explanation" (UoC p. 157). To provide a
scientific basis for qualia, the authors first recast the dynamic
core as an N-dimensional space, where N is the number of neuronal
groups currently participating in the core (a "large number, say,
between 10^3 and 10^7" [UoC p. 165]): "Since a functional cluster
identifies a single, unified physical process, it follows that
the activity of these N neuronal groups should be considered
within a single reference space [a set of axes having a common
origin]" (UoC p. 165). "The number of points that can be
differentiated in this N-dimensional space -- which make a
difference to it -- is vast, as indicated by high values of
complexity... however, a large number of participating neuronal
groups alone [a large value of N] is not a guarantee of high
complexity... If, for example, the firing of the N neuronal
groups... were synchronized to an extreme degree, as is the case
during epileptic seizures, the actual repertoire of neural states
available to the dynamic core would be... just a few positions in
the N-dimensional space" (UoC p. 166). "[E]very discriminable
point in the N-dimensional space defined by the dynamic core
identifies a conscious state, while a trajectory joining points
in this space would correspond to a sequence of conscious states
occurring over time. Contrary to common usage by many
philosophers and scientists, we suggest that... **every conscious
state deserves to be called a quale**" (UoC p. 168). The
conscious experience of a quale therefore corresponds to the
discrimination of one particular state (one particular point in
the N-dimensional space) out of all the possible states of the
dynamic core.
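
A back-of-envelope calculation (my own, treating each neuronal group
as a simple on/off unit, which is of course a drastic simplification)
shows how steeply synchrony collapses the available repertoire:

  # Back-of-envelope: hypersynchrony collapses the repertoire of the core.
  # Each of N neuronal groups is treated here as binary (active or inactive).
  from math import log2

  for n_groups in (10, 100, 1000):
      independent = 2 ** n_groups    # every combination is a distinct state
      lockstep = 2                   # all groups on, or all groups off
      print(f"N = {n_groups}: up to {log2(independent):.0f} bits if the groups"
            f" vary independently, {log2(lockstep):.0f} bit in lockstep")

Actual repertoires lie somewhere between these two extremes, which is
what the complexity measure is intended to capture.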

Note that the authors do **not** identify qualia with neuronal
groups themselves. A particular neuronal group in the visual
cortex, for example, may fire when light of a particular
wavelength impinges on the retina, but it can only contribute to
a quale (a conscious discrimination) if it shares the same neural
reference space (has a high degree of functional integration, or
mutual information) with other neuronal groups. "Even then,
there would still be no notion that the system is dealing with
visual aspects of a stimulus, rather than... some other
modality... [unless] the neural reference space include[s] other
neuronal groups that are (or are not) responding to auditory,
tactile, or proprioceptive inputs. We would also need neuronal
groups whose firing is correlated with the particular position
your body is in and its relation to the environment -- the
so-called body schema. In addition, we would need neuronal
groups whose firing is correlated with your sense of familiarity
and of congruence with the situation you are in and neuronal
groups indicating whether salient events are occurring. And so
on and so forth until there is a neural reference space that is
sufficiently rich to allow discrimination of the conscious state
corresponding to the pure perception of a given color from
billions of other conscious states" (UoC pp. 166-167). This rich
neural reference space must, in fact, be the N-dimensional space
corresponding to the dynamic core of consciousness.

On the other hand, neuronal groups that are functionally
disconnected from the dynamic core may be thought of as
generating "smaller neural spaces spanned by a few axes that have
a separate origin [from the N-dimensional space of the dynamic
core]. An example of such a small, functionally disconnected
space may correspond to, for instance, neurons responding to the
fluctuations of blood pressure" (UoC pp. 165-166). This
independence from the dynamic core of consciousness accounts for
the fact that while "the firing of warm-sensitive neurons in the
brain produces a quale of warmth,... the firing of neurons
sensitive to blood pressure fails to produce any corresponding
quale, or any subjective feeling of what it is like to have high
blood pressure" (UoC p. 158).

The number of qualia available to an individual organism,
corresponding to the dimensionality of the N-dimensional space
representing the dynamic core, varies among individual organisms
depending on each organism's history and experience. For
example, a wine connoisseur who has acquired the ability to
discriminate Cabernets from Pinots, where once there was only the
ability to discriminate wine from water, has simply made
additional discriminatory dimensions available to the
connoisseur's dynamic core, "thereby adding a large number of
subtler differentiations among conscious states" (UoC p. 174).
Qualia can also be lost to conscious experience. For example,
neuronal groups in the fusiform gyrus, which selectively fire in
response to color, may continue to participate in the dynamic core
even if damage to the retinas eliminates all sensory input
corresponding to wavelengths of light. These neuronal groups
will even continue to be occasionally active in the visual cortex
of a blind person in the absence of all such sensory input,
contributing to qualia for colors in dreams, memory, and
imagination. However, if this area of the cortex is damaged,
then a person will not only lose the capacity to respond to color
as a sensory stimulus, but will lose the capacity to remember,
imagine, or dream in color. The lesion effectively reduces the
dimensionality of the N-dimensional space of the dynamic core
(UoC pp. 53, 160).

The intuitive notion of similarity and dissimilarity of conscious
states corresponds to the notion of geometric distance between
points in an N-dimensional space with a certain metric. The
metric of this space is chosen so that points along axes
corresponding to the submodalities of a particular sensory
modality are closer to each other than points along axes
corresponding to different modalities -- in other words, the axes
corresponding to submodalities of a particular modality are
"bundled" together. "It is often remarked that red is as
irreducible and different from blue as it can possibly be. This
irreducibility corresponds to the fact that different groups of
neurons fire when we perceive red and when we perceive blue,
thereby defining two irreducible dimensions of the N-dimensional
space underlying conscious perception. Yet we also know that as
different as red and blue may seem subjectively, they are much
closer to each other than they are, say, to the blaring of a
trumpet. In short, the phenomenal space obeys a certain metric
within which certain conscious states are closer than others.
According to our hypothesis, the topology and metric of this
space should be described in terms of the appropriate neural
reference -- the dynamic core -- and must be based on the
interactions among the neuronal groups participating in it" (UoC
pp. 168-169).
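
Purely as an illustration (the axis names, weights, and quadratic form
below are my own; UoC gives no formula for this metric), one way such
a "bundled" metric could behave is to charge more for shifting
activity between modalities than for redistributing it within one:

  # Illustrative "bundled" metric, not the metric proposed in UoC.
  from math import sqrt

  MODALITIES = {                       # hypothetical axis layout
      "vision":   ["red", "blue"],
      "audition": ["trumpet"],
  }
  WITHIN, BETWEEN = 1.0, 4.0           # cross-modality shifts weigh more

  def distance(a, b):
      """Distance between two states given as {axis: activity} dicts."""
      d2 = 0.0
      for axes in MODALITIES.values():
          diffs = [a.get(x, 0.0) - b.get(x, 0.0) for x in axes]
          d2 += WITHIN * sum(d * d for d in diffs)   # differences inside the bundle
          d2 += BETWEEN * sum(diffs) ** 2            # net activity moved across bundles
      return sqrt(d2)

  red, blue, trumpet = {"red": 1.0}, {"blue": 1.0}, {"trumpet": 1.0}
  print(distance(red, blue))      # ~1.41: different axes, but one bundle
  print(distance(red, trumpet))   # ~3.16: the cross-modality move is larger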

The apparent similarity between a human-made gadget or electronic device
discriminating among stimuli and a person being asked to make the
same discrimination (such as "a photodiode that can differentiate
between light and dark and provide an audible output, compared to
a conscious human being performing the same task and giving a
verbal report" [UoC p. 32]) is misleading. One might be tempted
to ask "Why should the simple differentiation between light and
dark performed by the human being be associated with conscious
experience, while that performed by the photodiode is not?" (UoC
p. 32). The answer of Edelman and Tononi is that "[T]o a
photodiode, the discrimination between darkness and light is the
only one available, and therefore it is only minimally
informative. To a human being, in contrast, an experience of
complete darkness and an experience of complete light are two
special conscious experiences selected out of an enormous
repertoire, and their selection thus implies a correspondingly
large amount of information and discrimination among potential
actions" (UoC pp. 32-33). "The enormous variety of discriminable
states available to a conscious human being is clearly many
orders of magnitude larger than those available to anything we
have built. Whether we can verbally describe these states
satisfactorily or not, billions of such states are easily
discriminable by the same person, and each of them is capable of
bringing about different consequences" (UoC p. 32).

It is a well-known fact of psychology that humans cannot keep
more than a few distinct chunks of information (about seven
digits, for example, or about four visualized objects)
simultaneously in mind (UoC p. 26). This has been ungenerously
interpreted as meaning that
the "bandwidth" of human consciousness is between 1 and 16 bits
per second (UoC p. 150). Edelman and Tononi assert that this is
an incorrect interpretation: the bandwidth of consciousness
should be calculated based on a definition of information as the
discarding of alternatives: "The ability to differentiate among a
large repertoire of possibilities constitutes information, in the
precise sense of 'reduction of uncertainty'. Furthermore,
conscious discrimination represents information **that makes a
difference**, in the sense that the occurrence of a given
conscious state can lead to consequences that are different, in
terms of both thought and action, from those that might ensue
from other conscious states" (UoC pp. 29-30). In the human
brain, this ruling out of alternatives takes place within as
little as 100 or 150 milliseconds: "Since we can easily
differentiate among billions of different conscious states within
a fraction of a second, we have concluded that the
informativeness of conscious experience must be extraordinarily
high, indeed, better than any present-day engineer could dream
of" (UoC p. 150).

The limitation on the simultaneous conscious juggling of discrete
"chunks", claim Edelman and Tononi, "is a limit not on the
information content of conscious states, but merely on how many
nearly independent entities can be discriminated within a
**single** conscious state **without interfering with the
integration and coherence of that state**" (UoC p. 26).
Consciousness is an inherently Gestalt phenomenon -- it "wants"
to be integrated: "In terms of the dynamic core, such a capacity
limitation reflects an upper limit on **how many partially
independent subprocesses can be sustained within the core without
interfering with its integration and coherence**. Indeed, it is
likely that the same neural mechanisms responsible for the rapid
integration of the dynamic core are also responsible for this
capacity limitation" (UoC p. 150). It would seem that, just as
it would be difficult to simulate primary consciousness on a
digital computer, conscious brains return the compliment -- they
have a hard time simulating the discrete registers and memory
locations used by such machines during the course of computation
(at least without resorting to pencil and paper).


