Contextualizing seed-AI proposals

From: Jim Fehlinger (fehlinger@home.com)
Date: Thu Apr 12 2001 - 17:06:23 MDT


I've been on the lookout, ever since I first stumbled across the
transhumanist and Extropian community on the Web three years ago,
for any sign that a consensus might be emerging as to how
quasi-human artificial intelligence is likely to come into
existence (never mind, for the moment, SI, Sysops, computronium,
or the Singularity), and how such speculations might tie in with
the larger context of the cognitive science / neuroscience
interplay that's been simmering in recent years. I've been aware
of the latter ever since running into Edelman's _Bright Air,
Brilliant Fire_ in 1992, long before I knew about the Extropians,
and that book definitely caused me to prick up my ears (I had
gotten bored with the perennially-unfulfilled promise of AI, a
boredom only momentarily relieved by the appearance of Moravec's
_Mind Children_ in 1988).

It has been my impression that the cognitive-science-era AI
prototypes of ca. 1970-1990, such as Eurisko, Automated
Mathematician, and Copycat, were all primarily or exclusively
**linguistic** in intent: both their input and output domains
were sets of sentences in a natural language, or in a restricted
subset of a natural language, or strings of symbols or
expressions in a formal language, or mathematical expressions.

It strikes me that a late-90's view of the relationship between
"knowledge" and language (as espoused, for instance, by Hilary
Putnam, George Lakoff, Gerald Edelman and others) would likely
view the latter as far too coarse a net in which to completely
capture the former. Rather, language functions more like a sort
of shorthand which relies, for its usefulness, on the fact that
two interlocutors have biologically similar bodies and brains,
and already share a great many similar experiences.

Thus, in the view of neurobiologist Walter Freeman, as reported
in -- you guessed it! -- _Going Inside: A Tour Round a Single
Moment of Consciousness_ by John McCrone, for each interlocutor,
"the brain [brings] the full weight of a lifetime's experience to
bear on each moment as it [is] being lived. Every second of
awareness as a child or as an adolescent [will] in some measure
be part of [vis] consciousness of the present. That [is] what
being a memory landscape really [means]." McCrone goes on with a
direct quote of Freeman: "The cognitive guys think it's just
impossible to keep throwing everything you've got into the
computation every time. But that is exactly what the brain does.
Consciousness is about bringing your entire history to bear on
your next step, your next breath, your next moment." (p. 268).
A linguistic exchange between two such "memory landscapes" relies
for its power on the fact that the receiver can **recreate** a
conscious state similar (enough) to that of the sender, rather
than on the unlikely interpretation that the message somehow
encodes all the complexity of the conscious state of the sender:
"A word is no more than a puff of air, a growl in the throat. It
is a token. But saying a word has the effect of grabbing the
mind of a listener and taking it to some specific spot within
[vis] memories." (p. 293).

Gerald Edelman and Giulio Tononi, in _A Universe of
Consciousness_, call this phenomenon "complexity matching":

"For a small value of the **extrinsic** mutual information
between a stimulus and a neural system, there is generally a
large change in the **intrinsic** mutual information among
subsets of units within the neural system. This change can be
measured by a quantity, called complexity matching..., which is
the change in neural complexity that occurs as a result of the
encounter with external stimuli.

According to this analysis, extrinsic signals convey information
not so much in themselves, but by virtue of how they modulate the
intrinsic signals exchanged within a previously experienced
nervous system. In other words, a stimulus acts not so much by
adding large amounts of extrinsic information that need to be
processed as it does by amplifying the intrinsic information
resulting from neural interactions selected and stabilized by
memory through previous encounters with the environment... At
every instant, the brain goes far 'beyond the information given,'
and in conscious animals its response to an incoming stimulus is
therefore a 'remembered present.'

...

This conclusion is consistent with an everyday observation: The
same stimulus, say, a Chinese character, can be meaningful to
Chinese speakers and meaningless to English speakers even if the
extrinsic information conveyed to the retina is the same.
Attempts to explain this difference that are based solely on the
processing of a previously coded message in an information
channel beg the question of where this information comes from.
The concept of matching in a selectional system easily resolves
the issue." (_A Universe of Consciousness_, pp. 137-138).
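(To put that passage in rough symbols: what follows is only a
paraphrase in the spirit of the Tononi/Sporns/Edelman definitions,
not the exact formulas from their papers, and the symbols C_N, C_M,
MI, X, and S are my own shorthand. "Neural complexity" C_N(X) sums,
over subset sizes k, the average mutual information between subsets
of the system X and the rest of the system; "matching" C_M is the
change in C_N brought about by a stimulus S whose own mutual
information with the system is small:

    C_N(X)    =  \sum_{k=1}^{n/2} \langle MI( X_j^k ; X \setminus X_j^k ) \rangle_j

    C_M(X;S)  \approx  C_N(X \mid S\ \text{present}) - C_N(X \mid S\ \text{absent}),
                       \quad \text{with}\ MI(S;X)\ \text{small}

The claim in the passage is that the quantity on the second line can
be large even when MI(S; X) itself is tiny -- the stimulus selects
and amplifies structure that is already there.)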

This newer, neurobiologically based analysis (which, for lack of a
better term, I'm going to call the "post-cognitivist" view [*])
suggests that both the external world and the brain are much
"denser", in some sense, than any network of logic-linked sentences
in a formal or natural language can be. The 1970-1990
cognitive-science-era view, by contrast, was based on the assumption
that the whole world **can** be captured in a net of words:

"Cognitivism is the view that reasoning is based solely on
manipulation of semantic representations. Cognitivism is based on
objectivism ('that an unequivocal description of reality can be
given by science') and classical categories (that objects and
events can be 'defined by sets of singly necessary and jointly
sufficient conditions')([_Bright Air, Brilliant Fire_] p. 14).
This conception is manifest in expert systems, for example, or
any cognitive model that supposes that human memory consists
only of stored linguistic descriptions (e.g., scripts, frames,
rules, grammars)...

Edelman characterizes these computer programs as 'axiomatic
systems' because they contain the designer's symbolic categories
and rules of combination, from which all the program's subsequent
world models and sensorimotor procedures will be
derived. Paralleling the claims of many other theorists, from
Collingwood and Dewey to Garfinkel and Bateson, he asserts that
such linguistic models 'are social constructions that are the
results of thought, not the basis of thought.' ([BABF] p. 153) He
draws a basic distinction between what people do or experience and
their linguistic descriptions (names, laws, scripts): 'Laws do
not and cannot exhaust experience or replace history or the
events that occur in the actual courses of individual
lives. Events are denser than any possible scientific
description. They are also microscopically indeterminate, and,
given our theory, they are even to some extent macroscopically
so.' (_Bright Air, Brilliant Fire_ pp. 162-163)... 'By taking
the position of a biologically-based epistemology, we are in some
sense realists' (recognizing the inherent 'density' of objects
and events) 'and also sophisticated materialists' ([BABF] p. 161)".
[from an on-line review of _Bright Air, Brilliant Fire_ at
http://cogprints.soton.ac.uk/documents/disk0/00/00/03/35/cog00000335-00/123.htm ]

If sentences in a natural language "[grab] the mind of a listener
and [take] it to some specific spot within [vis] memories" (as
John McCrone quotes Walter Freeman as saying), then, to take a
culinary analogy, we might imagine these "spots" as fuzzy regions
embedded in the space of consciousness like raisins baked into a
muffin batter: there are many more points in the muffin-space of
consciousness than correspond to the regions occupied by the
raisins, and the spatial relationships among the raisins
themselves are maintained only by the supporting muffin.
This brings to mind Wittgenstein's famous and enigmatic
proposition in the _Tractatus_, "What we cannot speak about we
must pass over in silence".

To digress into the personal realm, the times when I have felt
most painfully the inadequacy of language to capture the
subtleties of the flow of consciousness have been in the context
of intensely emotional interpersonal relationships that have gone
off the rails -- when such relationships are working, the
synchronization between minds takes place effortlessly, and the
consonance of linguistic interchanges is just another
manifestation of the underlying consonance of outlook; but when
they go sour, meta-discussions about where things went wrong are
usually pointless, and language, whether spoken or written, is
far too crude an instrument for the job. At such times, one
feels like an angst-ridden character in a French New Wave film,
puffing meaningfully on a cigarette and staring silently into
space.

This leads me to ask: In the light of the shifts in theories
about the mind which seem to have been taking place in the
1990's, in which neuroscience-based views of the human brain (as
exemplified, for instance, by the theories of Gerald M. Edelman)
seem to be eclipsing the symbolic modelling of cognitive science,
does **anyone** here still believe that an AI could actually
operate in a purely linguistic domain? I suppose the last gasp of
the purely inferential approach to AI, in which the sentences the
AI inputs and outputs form part of, and sit at the same level as,
the web of sentences the AI is **made of**, was Douglas Lenat's
Cyc, and I haven't heard much encouraging news from that direction
lately.
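To make concrete what I mean by "purely inferential", here is a toy
sketch (in Python, and of my own invention -- it has nothing to do
with Cyc's actual machinery or scale, and the predicate and facts
are made up for illustration). The program's entire stock of
"knowledge" is a bag of symbolic assertions, new assertions are
derived from old ones by rule, and the sentences it reads and writes
live at exactly the same level as the sentences it is made of:

    # A toy "purely inferential" system: everything it is, and everything
    # it does, is sentences. Nothing underneath corresponds to ever
    # having *seen* a bird. (Illustrative sketch; names are invented.)

    facts = {("isa", "Tweety", "Bird"), ("isa", "Bird", "Animal")}

    def transitivity(fs):
        # if (isa ?x ?y) and (isa ?y ?z), conclude (isa ?x ?z)
        return {("isa", x, z)
                for (p1, x, y1) in fs if p1 == "isa"
                for (p2, y2, z) in fs if p2 == "isa" and y2 == y1}

    def forward_chain(fs, rules):
        # apply the rules until no new assertions appear (a fixed point)
        while True:
            new = set().union(*(rule(fs) for rule in rules)) - fs
            if not new:
                return fs
            fs = fs | new

    print(forward_chain(facts, [transitivity]))
    # prints, in some order: ('isa', 'Tweety', 'Bird'),
    # ('isa', 'Bird', 'Animal'), ('isa', 'Tweety', 'Animal')

Every answer such a system can ever give is just another tuple of
the same kind; the "knowledge" bottoms out in tokens whose only
meaning is their role in other sentences, which is exactly what the
post-cognitivists are complaining about.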

Not to be coy, I will admit that I'm thinking in particular about
the philosophy of seed AI sketched in Eliezer Yudkowsky's CaTAI
2.2[.0] ( http://www.singinst.org/CaTAI.html ). I've never quite
been able to figure out which side of the cognitivist
vs. post-cognitivist or language-as-stuff-of-intelligence vs.
language-as-epiphenomenon-of-intelligence fence this document
comes down on. There are frustratingly vague hints of **both**
positions.

-------------------------

FOR EXAMPLE, the following opinions, statements, and passages seem
to come down on the side of the cognitivists (I started by just
scanning down the document from the beginning, but then began
skipping ahead faster long before I got to the end; there are, no
doubt, many more examples than the ones reproduced here):

"The task is not to build an AI with some astronomical level of
intelligence; the task is building an AI which is capable of
improving itself, of understanding and rewriting its own source
code. The task is not to build a mighty oak tree, but a humble
seed."

"Intelligence, from a design perspective, is a goal with many,
many subgoals."

"...the functionality of evolution itself must be replaced -
either by the seed AI's self-tweaking of those algorithms, or by
replacing processes that are autonomic in humans with the
deliberate decisions of the seed AI."

"A seed AI could have a "codic cortex", a sensory modality
devoted to code, with intuitions and instincts devoted to code,
and the ability to abstract higher-level concepts from code and
intuitively visualize complete models detailed in code."

The "world-model" for an AI living in [a] microworld [of billiard
balls] consists of everything the AI knows about that world - the
positions, velocities, radii, and masses of the billiard
balls... The "world-model" is a cognitive concept; it refers to
the content of all beliefs..."

"I mention that list of features to illustrate what will probably
be one of the major headaches for AI designers: If you design a
system and forget to allow for the possibility of expectation,
comparison, subjunctivity, visualization, or whatever, then
you'll either have to go back and redesign every single component
to open up space for the new possibilities, or start all over
from scratch. Actualities can always be written in later, but
the potential has to be there from the beginning, and that means
a designer who knows the requirements spec in advance."

"[T]he possession of a codic modality may improve the AI's
understanding of source code, at least until the AI is smart
enough to make its own decisions about the balance between
slow-conscious and fast-autonomic thought."

"There's an AI called "Copycat", written by Melanie Mitchell and
conceived by Douglas R. Hofstadter, that tries to solve analogy
problems in the microdomain of letter-strings... Without going
too far into the details of Copycat, I believe that some of the
mental objects in Copycat are primitive enough to lie very close
to the foundations of cognition."

"Most of the time, the associational, similarity-based
architecture of biological neural structures is a terrible
inconvenience. Human evolution always works with neural
structures - no other type of computational substrate is
available - but some computational tasks are so ill-suited to the
architecture that one must turn incredible hoops to encode them
neurally. (This is why I tend to be instinctively suspicious of
someone who says, 'Let's solve this problem with a neural net!'
When the human mind comes up with a solution, it tends to phrase
it as code, not a neural network. 'If you really understood the
problem,' I think to myself, 'you wouldn't be using neural
nets.')"

"Eurisko, designed by Douglas Lenat, is the best existing example
of a seed AI, or, for that matter, of any AI." [OK, this is
actually from "The Plan To Singularity", at
http://sysopmind.com/sing/PtS/vision/industry.html ]

-------------------------

ON THE OTHER HAND, there are many other statements that seem to
come down firmly on the side of the post-cognitivists:

"To think a single thought, it is necessary to duplicate far more
than the genetically programmed functionality of a single human
brain. After all, even if the functionality of a human were
perfectly duplicated, the AI might do nothing but burble for the
first year - that's what human infants do."

"Self-improvement - the ubiquitous glue that holds a seed AI's
mind together; the means by which the AI moves from crystalline,
programmer-implemented skeleton functionality to rich and
flexible thoughts. In the human mind, stochastic concepts -
combined answers made up of the average of many little answers -
lead to error tolerance; error tolerance lets concepts mutate
without breaking; mutation leads to evolutionary growth and rich
complexity. An AI, by using probabilistic elements, can achieve
the same effect..."

"AI has an embarassing tendency to predict success where none
materializes, to make mountains out of molehills, and to assert
that some simpleminded pattern of suggestively-named LISP tokens
completely explains some incredibly high-level thought process...
In the semantic net or Physical Symbol System of classical AI, a
light bulb would be represented by an atomic LISP token named
light-bulb."

"As always when trying to prove a desired result from a flawed
premise, the simplest path involves the Laws of Similarity and
Contagion. For example, ... any instance of human deduction
which can be written down (after the fact) as a syllogism must be
explained by the blind operation of a ten-line-of-code process -
even if the human thoughts blatantly involve a rich visualization
of the subject matter, with the results yielded by direct
examination of the visualization rather than formal deductive
reasoning."

"There are several ways to avoid making this class of mistake.
One is to have the words "Necessary, But Not Sufficient" tattooed
on your forehead. One is an intuition of causal analysis that
says 'This cause does not have sufficient complexity to explain
this effect.' One is to be instinctively wary of attempts to
implement cognition on the token level."

"The Law of Pragmatism: Any form of cognition which can be
mathematically formalized, or which has a provably correct
implementation, is too simple to contribute materially to
intelligence."

"Classical AI programs, particularly "expert systems", are often
partitioned into microtheories. A microtheory is a body of
knowledge, i.e. a big semantic net, e.g. propositional logic,
a.k.a. suggestively named LISP tokens... Why did the microtheory
approach fail? ... First, microtheories attempt to embody
high-level rules of reasoning - heuristics that require a lot of
pre-existing content in the world-model... We are not born with
experience of butterflies; we are born with the visual cortex
that gives us the capability to experience and remember
butterflies."

"We shouldn't be too harsh on the classical-AI researchers.
Building an AI that operates on "pure logic" - no sensory
modalities, no equivalent to the visual cortex - was worth
trying... But it didn't work. The recipe for intelligence
presented by CaTAI assumes an AI that possesses equivalents to
the visual cortex, auditory cortex, and so on..."

"[T]houghts don't start out as abstract; they reach what we would
consider the "abstract" level by climbing a layer cake of ideas.
That layer cake starts with the non-abstract, autonomic
intuitions and perceptions of the world described by modalities.
The concrete world provided by modalities is what enables the AI
to learn its way up to tackling abstract problems."

"Many classical AIs lack even basic quantitative interactions
(such as fuzzy logic), rendering them incapable of using methods
such as holistic network relaxation, and lending all interactions
an even more crystalline feeling. Still, there are classical AIs
that use fuzzy logic. What's missing is flexibility, mutability,
and above all richness; what's missing is the complexity that
comes from learning a concept."

"It turns out that [Douglas Lenat's] Eurisko's "heuristics" were
arbitrary pieces of LISP code. Eurisko could modify heuristics
because it possessed "heuristics" which acted by splicing,
modifying, or composing - in short, mutating - pieces of LISP
code... In a sense, Eurisko was the first attempt at a seed AI -
although it was far from truly self-swallowing, possessed no
general intelligence, and was created from crystalline
components."

-------------------------

SOMETIMES we get both points of view in the same sentence:

"[N]eural networks are very hard to understand, or debug, or
sensibly modify. I believe in the ideal of mindstuff that both
human programmers and the AI can understand and manipulate. To
expect direct human readability may be a little too much; that
goal, if taken literally, tends to promote fragile, crystalline,
simplistic code, like that of a classical AI."

"[D]efining concepts in terms of other concepts is what classical
AIs do... I can't recall any classical AIs that constructed
explicitly multilevel models to ground reasoning using semantic
networks..." [**ground** reasoning using **semantic** networks?]

"[E]ven if the mind were deprived of its ultimate grounding and
left floating - the result wouldn't be a classical AI. Abstract
concepts are learned, are grown in a world that's almost as rich
as a sensory modality - because the grounding definitions are
composed of slightly less abstract concepts with rich
interactions, and those less-abstract concepts are rich because
they grew up in a rich world composed of interactions between
even-less-abstract concepts, and so on, until you reach the level
of sensory modalities." [grounding **definitions**??]

"In a human, these features are complex functional adaptations,
generated by millions of years of evolution. For an AI, that means
you sit down and write the code; that you change the design, or
add design elements (special-purpose low-level code that directly
implements a high-level case is usually a Bad Thing), specifically
to yield the needed result."
[some might comment that if a piece of code is the kind you
can "sit down and write", then it is ipso facto too high-level
to constitute "mindstuff"]

"Mindstuff is the basic substrate from which the AI's permanently
stored cognitive objects (and particularly the AI's concepts) are
constructed. If a cognitive architecture is a structure of pipes, then
mindstuff is the liquid flowing through the pipes."
[permanently stored cognitive objects?]

"Human scientific thought relies on millennia of accumulated
knowledge, the how-to-think heuristics discovered by hundreds
of geniuses. While a seed AI may be able to absorb some of
this knowledge by surfing the 'Net..."
[**knowledge** from surfing the 'Net?]

-------------------------

Anyway, you get the idea -- it's been really, really hard for me
to contextualize this document in terms of my other reading, and
I don't think this is entirely due to the limitations of my own
intellect ;-> .

At the risk of offering what might be interpreted as a gross
impertinence (but isn't intended that way at all), here is my
take on CaTAI. I believe that Eliezer has absorbed enough of
what's going on to have realized at some level that the symbolic
logic, computer-programming, classical-AI approach (the approach he
calls, in passages quoted above, fragile, crystalline, and
simplistic) is in trouble. However, I think he believes he has to
cling to this approach, to some degree, in order to see his way
clear to a self-improving AI -- it needs **source code** that can
be raked over by that codic cortex (telling quote: "I believe in
the ideal of mindstuff that both human programmers and the AI
can understand and manipulate."). The self-improvement, of
course, is necessary in order to be able to get to his vision of
the Singularity, which is not a neutral goal in Eliezer's case --
he sees it as the salvation of the human race (another telling
quote: "The leap to true understanding, when it happens, will
open up at least as many possibilities as would be available
to a human researcher with access to vis own neural source
code." [but what if there **is** no "neural source code"?]). This has
led to the confused tone of his document, the mixing of levels, the
arguments being dragged toward the post-cognitivist point of view
while remaining stubbornly framed in the old cognitivist
language.

In a sense, the emotional tone of this position is similar to
what I believe I was sensing a while ago in the discussion of
digital vs. analog computing. I think there are folks who
believe an AI has **got** to be digital, or the party's over,
just as Eliezer seems to believe that an AI has **got** to have
source code, or the party's over.

I'm a little dismayed to find these self-imposed blinders among
the bright lights of this list. I think we've all been spending
too much time around computers, folks -- they're lots of fun, but
they're not the whole world, and in fact as far as future
ultratechnology is concerned, a veer **away** from the digital,
computational model of AI is **just** the sort of unsurprising
surprise we should all half expect, a paradigm shift we should all
be prepared for.

As McCrone says in _Going Inside_ (Chapter 12, "Getting It
Backwards"): "[P]ersonally speaking, the biggest change for me
was not how much new needed to be learnt, but how much that was
old and deeply buried needed to be unlearnt. I thought my
roundabout route into the subject would leave me well prepared.
I spent most of the 1980s dividing my time between computer
science and anthropology. Following at first-hand the attempts
of technologists to build intelligent machines would be a good
way of seeing where cognitive psychology fell short of the mark,
while taking in the bigger picture -- looking at what is known
about the human evolutionary story -- ought to highlight the
purposes for which brains are really designed [**]. It would be a
pincer movement that should result in the known facts about the
brain making more sense.

Yet it took many years, many conversations, and many false starts
to discover that the real problem was not mastering a mass of
detail but making the right shift in viewpoint. Despite
everything, a standard reductionist and computational outlook on
life had taken deep root in my thinking, shaping what I expected
to see and making it hard to appreciate anything or anyone who
was not coming from the same direction. Getting the fundamentals
of what dynamic systems were all about was easy enough, but then
moving on from there to find some sort of balance between
computational and dynamic thinking was extraordinarily difficult.
Getting used to the idea of plastic structure or guided
competitions needed plenty of mental gymnastics...

[A]s I began to feel more at home with this more organic way of
thinking, it also became plain how many others were groping their
way to the same sort of accommodation -- psychologists and brain
researchers who, because of the lack of an established vocabulary
or stock of metaphors, had often sounded as if they were all
talking about completely different things when, in fact, the same
basic insights were driving their work."

Jim F.

[*] In _Bright Air, Brilliant Fire_, Edelman speaks of this
"post-cognitivist" view as characterizing a "Realists Club":

"It appears that the majority of those working in cognitive
psychology hold to the views I attack here. But there is a
minority who hold contrary views, in many ways similar to mine.
These thinkers come from many fields: cognitive psychology,
linguistics, philosophy, and neuroscience. They include John
Searle, Hilary Putnam, Ruth Garrett Millikan, George Lakoff,
Ronald Langacker, Alan Gauld, Benny Shanon, Claes von Hofsten,
Jerome Bruner, and no doubt others as well. I like to think of
them as belonging to a Realists Club, a dispersed group whose
thoughts largely converge and whose hope it is that someday the
more vocal practitioners of cognitive psychology and the
frequently smug empiricists of neuroscience will understand that
they have unknowingly subjected themselves to an intellectual
swindle. The views of this minority will be reflected in what I
have to say, but obviously they vary from person to person. The
reader is urged to consult these scholars' works directly for a
closer look at the diversity of their thoughts and
interpretations."

-- _Bright Air, Brilliant Fire_, "Mind Without Biology: A
   Critical Postscript", section "Some Vicious Circles in the
   Cognitive Landscape", p. 229

_Going Inside: A Tour Round a Single Moment of Consciousness_ by
John McCrone, which gives an overview of recent developments and
shifts of opinion at the border between neuroscience and
cognitive science, adds more names to this list
( http://www.btinternet.com/~neuronaut/webtwo_book_intro.html ).

[**] Another trend in CaTAI, and in a lot of SF-ish and computerish
dreaming about AI, is the burning desire to jettison human "emotional
weakness" (remember Forbin's comment in _Colossus_, "I wanted an
impartial, emotionless machine -- a paragon of reason...").
Telling quote: "Freedom from human failings, and especially human
politics... A synthetic mind has no political instincts; a synthetic
mind could run the course of human civilization without politically-imposed
dead ends, without observer bias, without the tendency to rationalize."
Again, this seems profoundly out of sync with recent, post-cognitivist
thinking about human intelligence.
