Re: Contextualizing seed-AI proposals

From: Jim Fehlinger (fehlinger@home.com)
Date: Sat Apr 14 2001 - 09:25:11 MDT


"Eliezer S. Yudkowsky" wrote:

> I wrote:

> > I've never quite been able to figure out which side of the
> > cognitive vs. post-cognitive or language-as-stuff-of-intelligence
> > vs. language-as-epiphenomenon-of-intelligence fence this document
> > [CaTAI 2.2, http://www.singinst.org/CaTAI.html ] comes down on...

> "Hm, the old debate about symbols, representational power,
> and so on. I'm well out of it."

Sigh. I freely admit to being an amateur (and worse than that --
a dilettante) in this field, as in all others, but my impression
is that this "old debate" is far from settled, and that you can't
afford to be "well out of it" if you want to make a serious
contribution to AI.

The precise role of "symbols" in the functional hierarchy of an
intelligent organism is a wickedly-beckoning trap for human
philosophers, mathematicians, and AI researchers **precisely** because
they're so impressed by, and enamored of, human language. They
have, historically, succumbed to the temptation to stick it in
where it doesn't belong. That tendency is now being successfully
combatted, I gather. You, as an AI researcher, need to know
all about that, and to take it very seriously indeed (though certainly
not **just** because **I** say so, for God [or Ifni]'s sake! ;->).

In particular, you need to armor yourself, in your pursuit
of the truth of these matters, against what I perceive to be
your own particular weakness -- your burning desire to find
a rocket fuel to power the runaway Singularity you so desperately
long for. Your motives for that may be laudable -- saving the
human race, and all that -- but the truth needs to come
first.

Pay attention to the business at hand, Eliezer. Even an Algernon
can do no better than that. The Singularity will take care of
itself. Or not. Whatever. Don't try to push the river, as the
Taoists would say.

> So if you want to know whether CaTAI is symbolic or connectionist AI, the
> answer is a resounding "No". This is one of the old, false dichotomies
> that helped to cripple the field of AI.

No, no, a thousand times no!!! Symbolic vs. "connectionist" isn't
the point (or at least it's far from being the whole story).
Which gives me hope, because it means the attractors inside **your**
head may yet spontaneously undergo a phase transition or two (you're
liable to pop through the roof when they do, yelling Eurisko, Eureka,
or something unprintable ;-> ;-> ).

> Thoughts are built from structures of concepts ("categories",
> "symbols"). Concepts are built from sensory modalities. Sensory
> modalities are built from the actual code.

This is my excuse for another excerpt from _Going Inside_.
Yes, this is about biological brains, and not the design of
any hypothetical synthetic intelligence, but as the only
known example of the sort of thing the AI community has been
looking to build, the **human** brain needs to be examined
seriously, and not just dismissed as an inconveniently
squishy instantiation of some hypothetical (and perhaps
nonexistent) Turing machine, as Edelman has pointed out.

From Chapter Five, "A Dynamical Computation" (pp. 106-110):

"[T]he brain [is] a lot like a computer in having a highly
structured way of dealing with information; on the other
[hand], its circuitry [is] basically fluid... These two ways
of looking at the brain, the computational and dynamic, seemed
[irreconcilable]. However, in the 1990s this started to
change rapidly as people began to understand the idea of
complexity and saw how it could embrace exactly such a
hybrid system.

[C]omplexity is not so much a new concept as a realization
that the principles behind Darwinian evolution could be
expanded to form an intellectual framework for tackling a
great many of the things... that fit so poorly into the
traditional reductionist mould of scientific thought. A
complex system is one that is fundamentally dynamic,
even chaotic, but which has found ways of harnessing this
dynamism. It has developed machinery that channels
activity down certain predictable and constructive paths
to inflate a state of organisation...

A well-adapted system becomes meta-stable. It is built of
dynamic material, yet a network of internal checks and
balances holds its structure firm, or at least firm-ish.
Like the brain, it looks solid enough -- it hangs together
for sufficient time to do the job for which it is
designed -- but it is not solid really... [A] complex
system can begin to evolve hierarchies of structure.
Having evolved a first level of organization... it can
go on to add layer upon layer of fresh structure to
develop into something that is super-complex.

Life is the most familiar example of this. The DNA
molecule is a device for milking structure from the
randomness of organic chemistry... A strand of DNA
is no more than a recipe for churning out a particular
brew of proteins at a given time. But this list has
been so carefully tuned over millennia of evolution
that when it is transcribed, the resulting mix cannot
help but self-assemble to form a cell. The proteins
coming off the DNA manufacturing line will jostle about
on individually chaotic trajectories, their fate
apparently ruled by chance, yet the blend is so precisely
specified that the proteins will find themselves falling
into the same old conjunctions, time and time again...

DNA acts as a kind of digital bottleneck. It does
information processing on a generational timescale,
trapping knowledge about what protein mixtures work and
applying that to each new cycle of birth and growth.
DNA is computer-like in that it has the necessary
rigidity to make particular kinds of structure as soon
as the winds of chemical chance begin to blow through
it. But it also has just enough plasticity to act
as a memory, to be changed by events...

The brain is similar in that it has a machinery... which
can milk organization from the disorder of the moment...
[T]he brain appears to have hierarchies of digital-like
bottlenecks for distilling order from a flow of events.
The channelling starts right down at the synapse level,
perhaps even at the membrane pore level. A synaptic
junction has the right mix of rigidity and plasticity
to turn a soupy mix of electro-chemistry into the
transmission of a signal... [S]omething reasonably
definite happens... The synapse also learns in some
way... becoming either fatigued or sensitised and, over
a longer timescale, generally strengthened or weakened.

Stacked up above the synapse are layer upon layer of
further bottlenecks. Neurons turn a complex mix of
synaptic activity into a crisp output decision...
After neurons come topographical mappings and
hierarchies of such mappings... Synapses, neurons,
and cortex mappings are sufficient to raise the level
of activity from raw electro-chemistry to orderly
neural representations... [F]or these maps to
be woven into a graded field of consciousness -- an
awareness with a bright, focused foreground and a
dimly-perceived yet still organized periphery -- they
must feed through some form of system-wide, whole-brain
transition... probably achieved by the cortex sheet...
using a clutter of sub-cortical organs as a bottleneck
to connect back to itself and so focus aspects of
its own activity...

[T]he brain... has many levels at which it wrings
information from basically dynamic processes...
Computation is about stability; dynamism is about
instability; but complexity is about getting just the
right pitch of meta-stability out of a basically
dynamic system so that it keeps moving forward, never
falling back... The brain... evolves its way to
a state of output. Everything has to happen in concert.
An individual synapse, neuron, or mapping area can only
arrive at a meaningful level of activity after the
entire stack of representation across the brain has
had time to settle down. All the eddies of competition
and feedback have to have the chance to play themselves
out and reach some kind of fleeting balance...

[I]n a computer, information is something compartmentalised
and stable. But in the brain, information had a trajectory --
it had to develop. And by the time a neuron or map had
settled into some kind of output state -- although by now,
even the word 'output' had dubious connotations --
this activity would as much represent what the rest of
the brain thought about the response as what the cell or
mapping area felt about the stimulus that provoked the
firing in the first place. Information only became
information when each fragment of brain activation came
also to reflect something about the whole."
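
To make McCrone's talk of "settling" a bit more concrete, here
is a toy sketch in Python -- entirely my own illustration, owing
nothing to McCrone and certainly nothing to CaTAI. It's a tiny
Hopfield-style network whose units keep flipping until the whole
state reaches a fleeting balance with itself: a cartoon of
meta-stability and relaxation, nothing remotely like a real
cortex.

# A toy Hopfield-style network: a crude cartoon of "settling", not a
# brain model. Three random +1/-1 patterns are stored in Hebbian
# weights; the network is then seeded with a corrupted copy of one
# pattern and updated asynchronously until no unit wants to change --
# i.e. until every unit's "decision" is consistent with what the rest
# of the network is doing.

import random
random.seed(0)

N = 64
patterns = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(3)]

# Hebbian weights: each stored pattern digs an attractor basin into the dynamics.
W = [[0.0] * N for _ in range(N)]
for p in patterns:
    for i in range(N):
        for j in range(N):
            if i != j:
                W[i][j] += p[i] * p[j] / N

def settle(state, max_sweeps=50):
    """Asynchronously update units until the whole state stops changing."""
    state = list(state)
    for _ in range(max_sweeps):
        changed = False
        order = list(range(N))
        random.shuffle(order)
        for i in order:                      # each unit listens to all the others
            field = sum(W[i][j] * state[j] for j in range(N))
            new = 1 if field >= 0 else -1
            if new != state[i]:
                state[i] = new
                changed = True
        if not changed:                      # a fleeting balance has been reached
            break
    return state

# Corrupt roughly 15% of pattern 0, then let the dynamics clean it up.
probe = [(-s if random.random() < 0.15 else s) for s in patterns[0]]
recovered = settle(probe)
overlap = sum(r * p for r, p in zip(recovered, patterns[0])) / N
print("overlap with the stored pattern:", overlap)

The only point of the toy is that no single unit's "output" means
anything until the entire network has had time to settle -- which
is about as close as a few lines of code can get to McCrone's
remark about information having a trajectory.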

> Intelligence ("problem-solving", "stream of consciousness")
> is built from thoughts. Thoughts are built from structures
> of concepts ("categories", "symbols"). Concepts are built from
> sensory modalities. Sensory modalities are built from the
> actual code.

Too static, I fear. Also, too dangerously perched on
the edge of what you have already dismissed as the "suggestively-
named Lisp token" fallacy.
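
To show what I mean by "too static": read as a strict one-way
pipeline, that hierarchy looks something like the deliberately
cartoonish Python sketch below. This is the reading I'm objecting
to, not a claim about how CaTAI actually works, and every name
and toy rule in it is mine, invented purely for illustration.

# A deliberately cartoonish, strictly feed-forward reading of the
# quoted hierarchy. Each layer is "built from" the one below and
# nothing ever feeds back down; nothing settles, nothing develops
# a trajectory.

def sensory_modality(raw_signal):
    # "Sensory modalities are built from the actual code."
    return [x > 0.5 for x in raw_signal]             # toy feature detection

def concept(features):
    # "Concepts are built from sensory modalities."
    return "bright" if sum(features) > len(features) / 2 else "dark"

def thought(concepts_seen):
    # "Thoughts are built from structures of concepts."
    return "the scene is mostly " + max(set(concepts_seen), key=concepts_seen.count)

def intelligence(thoughts_so_far):
    # "Intelligence is built from thoughts."
    return thoughts_so_far[-1]                       # report the latest thought

signal = [0.9, 0.8, 0.2, 0.7]
print(intelligence([thought([concept(sensory_modality(signal))])]))

Each layer here is "built from" the one below, nothing ever feeds
back down, and nothing settles. Contrast that with the settling
sketch above, where the "answer" at every level only exists once
the whole system has stopped arguing with itself.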

Fee, fie, foe, fum.
Cogito, ergo sum.

Please understand that I'm **not** succumbing to mysticism
or oceanic holism here (or at least, I hope I'm not!). I'm
not even saying that the panorama of alternating soupy
and sticky, complexity-generated levels of hierarchy that
McCrone describes above couldn't one day be simulated on
some "honkin' big" computer (it won't have the "Intel inside"
sticker on it, though! ;->).

Don't give up, Eliezer! You may have to pay more attention
to the near end of the tunnel, rather than the far end, though.
The light there remains, whether or not any of us will get to
step into it. Remember, vegetable before dessert ;-> ;-> .

Jim F.


