Re: Contextualizing seed-AI proposals

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Apr 14 2001 - 12:14:32 MDT


Jim Fehlinger wrote:
>
> "Eliezer S. Yudkowsky" wrote:
>
> > I wrote:
>
> > > I've never quite been able to figure out which side of the
> > > cognitive vs. post-cognitive or language-as-stuff-of-intelligence
> > > vs. language-as-epiphenomenon-of-intelligence fence this document
> > > [CaTAI 2.2, http://www.singinst.org/CaTAI.html ] comes down on...
>
> > "Hm, the old debate about symbols, representational power,
> > and so on. I'm well out of it."
>
> Sigh. I freely admit to being an amateur (and worse than that --
> a dilettante) in this field, as in all others, but my impression
> is that this "old debate" is far from settled, and that you can't
> afford to be "well out of it" if you want to make a serious
> contribution to AI.

I know it's far from settled. My point is that I *grew up* - as an AI guy
- with this argument resounding in my ears, and my sheer *annoyance* with
the massive *sloppiness* of the arguments in this area is part of what
powered my entrance into AI. People seemed to get emotionally attached to
their positions, for reasons that I understand but still can't sympathize
with. If it turns out that yet another layer of functional complexity is
necessary over and above the CaTAI model, I'll deal with it! My annoyance
is with what I named the Physicist's Paradigm - the idea that one major
trick has to be THE ANSWER; or, almost as annoying, the idea that a bunch
of little tricks must be the answer and that there are no major tricks.
It takes lots and lots of major tricks, and even if you have a bunch of
major tricks you can't be sure you've found them all. I think I know the
role of symbols in humans, and how to generalize that role to generic
minds, but it's really not the most important thing I know about AI. It's
in the top ten, but not the top five.

> The precise role of "symbols" in the functional hierarchy of an
> intelligent organism is a wickedly-beckoning trap for human
> philosophers, mathematicians, and AI researchers **precisely** because
> they're so impressed by, and enamored of, human language.

Is it a wickedly-beckoning trap for neuroanatomists, evolutionary
psychologists, and cognitive scientists?

> They
> have, historically, succumbed to the temptation to stick it in
> where it doesn't belong. That tendency is now being successfully
> combatted, I gather.

I can only speak for myself, but I do not feel compelled to attach moral
significance to language or the lack of it. I am not impressed by, nor
enamored of, either language, or those theories that hold language to be
irrelevant. I know what language is, what it does, what functionality it
serves on both the individual and social scales, the role it plays in the
mind, and the size of that role. And if I turn out to be wrong, I'll
deal.

The tendency to attach moral significance to theories of cognition is the
absolute scourge of AI. Even I got zapped in the one instance I tried,
but by and large I don't do it at all, and that accounts for at least a third
of my lead over the rest of the pack. My theories are not symbolic of
human society, or logic, or capitalism, or communism, or individualism, or
anything else, except insofar as these theories enable the construction of
artificial minds and that act is morally valent. The theories themselves
are simply theories.

> In particular, you need to armor yourself, in your pursuit
> of the truth of these matters, against what I perceive is
> your own particular weakness -- your burning desire to find
> a rocket fuel to power the runaway Singularity you do desperately
> long for. Your motives for that may be laudable -- saving the
> human race, and all that -- but the truth needs to come
> first.

Now there's a sword that truly and beautifully cuts both ways. Since I
already believe that successfully constructing an AI would save the human
race, I have no need whatsoever to believe that the theories themselves
are morally valent. They're just theories and can be modified at will. I
get my kicks from imagining the outcome. And of course, I try to
structure those "kicks" in such a way that I don't have an emotional
attachment to the hypotheses that predict that outcome, or that tell me
how to create that outcome, or even the hypotheses that say the outcome is
possible; I try to be very careful to regard those things as hidden
variables.

> No, no, a thousand times no!!! Symbolic vs. "connectionist" isn't
> the point (or at least it's far from being the whole story).

Of course it's not the whole story. The amount of philosophizing that's
gone into this is tremendous; it ranges from Searle's Chinese Room to the
Eliza Effect to "Artificial Intelligence Meets Natural Stupidity" to
"Godel Escher Bach" to "The Emperor's New Mind" to the entire "The Mind's
I". My point is that I read all this stuff when I was, say, fifteen or
so. And, having moved on, I would now say that fighting about the role of
language happens because people are hanging enormous weights of
functionality from the tiny little bits of the mind they know about,
whether those tiny little bits are LISP tokens or Hopfield networks.
Hence the Physicist's Paradigm.

> > Intelligence ("problem-solving", "stream of consciousness")
> > is built from thoughts. Thoughts are built from structures
> > of concepts ("categories", "symbols"). Concepts are built from
> > sensory modalities. Sensory modalities are built from the
> > actual code.
>
> Too static, I fear. Also, too dangerously perched on
> the edge of what you have already dismissed as the "suggestively-
> named Lisp token" fallacy.

Static?! There are four layers of behavior here! Some phenomena will be
emergent, some will be deliberate, but in both cases they will be holistic
and not just copies of the lower-level behaviors. *Four layers* of
different behaviors at different levels, not counting the "code" layer.
That may not be chaos but it ain't exactly crystalline either.
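If it helps to see what I mean by "four layers of behavior," here is a toy
Python sketch of the decomposition - purely illustrative, with every class
name and field being a placeholder of mine rather than CaTAI terminology.
The only point it makes is structural: each layer is built out of the layer
below it, and each contributes its own behavior instead of just echoing the
lower level.

# Illustrative only: code -> modalities -> concepts -> thoughts -> mind.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Modality:
    """Built directly from code: a feature detector over raw input."""
    name: str

    def perceive(self, raw: bytes) -> List[float]:
        # Stand-in for real feature extraction.
        return [float(b) for b in raw[:4]]

@dataclass
class Concept:
    """Built from one or more modalities: a reusable category."""
    label: str
    grounded_in: List[Modality]

@dataclass
class Thought:
    """Built from a structure of concepts, not from any single concept."""
    concepts: List[Concept]

@dataclass
class Mind:
    """Intelligence: a stream of thoughts directed at a problem."""
    stream: List[Thought] = field(default_factory=list)

    def think(self, thought: Thought) -> None:
        self.stream.append(thought)

Obviously a real modality or concept is nothing like a four-float vector or
a labeled list; the point is only that the levels compose rather than
collapse into one another.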

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


