Re: Contextualizing seed-AI proposals

From: Jim Fehlinger (fehlinger@home.com)
Date: Sat Apr 14 2001 - 22:22:38 MDT


hal@finney.org wrote:
>
> Do animals "bring the full weight of a lifetime's experience to bear
> on each moment as it is being lived?"

Yes, I should think so, if you're thinking of vertebrates like us.
This is what I said a year ago
( http://www.lucifer.com/exi-lists/extropians.2Q00/5580.html )
about Edelman's opinion of the matter:

> Edelman believes that, in the biological realm, this capability is
> quite old: "Which animals have consciousness? ... Going backward from
> the human referent, we may be reasonably sure... that chimpanzees have it.
> In all likelihood, most mammals and some birds may have it... If the
> brain systems required by the present model represent the
> **only** evolutionary path to primary consciousness, we can be
> fairly sure that animals without a cortex or its equivalent lack
> it. An amusing speculation is that cold-blooded animals with
> primitive cortices would face severe restrictions on primary
> consciousness because their value systems and value-category
> memory lack a stable enough biochemical milieu in which to make
> appropriate linkages to a system that could sustain such
> consciousness. So snakes are in (dubiously, depending on the
> temperature), but lobsters are out. If further study bears out
> this surmise, consciousness is about 300 million years old" (BABF
> pp. 122-123).

> ...some animal babies are able to function very well immediately
> after birth. Horse colts run with the herd within hours. Evidently
> they are able to function very well with virtually no experience.

Well, I'm no more an ethologist than I am a neuroscientist or
a philosopher. But I'd say that horses are an ill-chosen example
for the point you're trying to make. Pick an invertebrate instead.
Of course, nearly **any** animal, vertebrate or not, comes into
the world as a ready-to-go, turnkey system compared to a human
being.

Apropos of your observation, I can give you some quotes from a post
I made late last year about the book _Darwin Machines and the Nature
of Knowledge_ by Henry C. Plotkin (the post is in the archive's
blind spot at the moment; it'll be back on the shelf
when it comes back from the binder's ;-> ). [It's getting so that I
have so much relevant material on this subject in my e-mail archive,
I can just cut and paste!] I quoted:

> p. 137
>
> [The] period of time between receiving a set of
> genetic instructions and their implementation
> through development to the point where those
> selfsame instructions might be returned, via
> reproduction, to the gene pool, has been given a
> variety of names... Konrad Lorenz... called it
> generational deadtime. It is a lag-time that is an
> invariant feature of any system whose construction
> takes time and which is based on a set of
> instructions that cannot be continually updated. In
> the case of sexually reproducing organisms, each
> organism is cast off and cut off from the gene pool
> with a fixed set of instructions that cannot be
> altered. It cannot "dip" back into the gene pool to
> augment these instructions if it finds they are not
> good enough. In this sense, genetically, it is on
> its own.
>
> p. 145
>
> [One way] for an organism to reduce the amount of
> significant change it has to deal with... is to
> reduce the period of time between conception and
> reproductive competence... This means that the
> ratio of life-span length to numbers of offspring is
> low... This is a characteristic 'life-style' of
> animals known to ecologists as r-strategists...
> These r-strategists usually live less than one
> year...; they develop rapidly; they are usually of
> small body size; and they normally reproduce over
> just a single period. This is the life-history
> strategy common to most invertebrate animals and
> clearly one that is... relatively successful...
>
> p. 146
>
> [A] second very general way of dealing with
> change... [is to] change the phenotypes so they can
> change with and match the changing features of the
> world. This, in turn, comprises two general
> strategies. The one results in changes between
> phenotypes, and relies on the chance or radical
> component of the primary heuristic [the mechanism of
> classical Darwinian evolution, involving genetic
> variation and selection among competing
> phenotypes]... [Large] numbers of offspring are
> produced,... each... different from the others...
> r-strategists... often do combine a short life-span
> with a quite prominent radical component of the
> primary heuristic...
>
> p. 147
>
> Another... strategy is to match change with
> change... by giving rise to change **within**
> phenotypes... I will call this the tracking option.
>
> p. 149
>
> How can such changes be tracked? The only way to do
> it is to evolve tracking devices whose own states
> can be altered and held in that altered condition
> for the same period of time as the features of the
> world which they are tracking... Such tracking
> devices would be set in place by the usual
> evolutionary processes of the primary heuristic and
> hence would operate within certain limits... These
> additional knowledge-gaining devices comprise a...
> secondary... heuristic.
>
> ...[T]here are two such classes of semi-autonomous
> knowledge-gaining devices or secondary heuristics
> that can track change in this way. The immune
> system is one; the intelligence mechanisms of the
> brain are the other.

> To me this suggests a greater role for genetically determined structures
> versus experiential ones. We aren't really bringing our whole weight of
> experience to bear on every moment, or rather, that is only the smallest
> part of what we are bringing.

Certainly not "versus"! We're bringing to bear the experience within a
single lifetime, **plus** the experience accumulated across eons of
evolutionary history, encoded in the genome. See Plotkin, above.
I don't know what metric you'd use to try to decide whether the primary
or secondary heuristic is more "important"; the latter certainly
couldn't have come into existence without the former.

> Messages, verbal communications, succeed not so much because we share
> enough experiences with others that we can reconstruct their minds;
> rather, they succeed because our brains are structured so as to be able
> to elicit meaning from these communications.

Since our brains' physical structure is determined by **both** genetics
and experience, "meaning" must depend on both. Oh, and throw in the
body and the whole social and physical milieu, too. Undoubtedly, there's
been some genetic "streamlining" that ensures that all normal human beings
learn to speak. That genetic predisposition certainly doesn't mean that
anybody is born knowing English, or French, or Chinese (though it probably
constrains **all** human languages in ways only dimly understood). Contrast
this with reading and writing, which, before the advent of mass education
in industrialized countries, were considered as miraculous an accomplishment
as learning to play the piano is today. They are still hard enough to teach
that not everyone is guaranteed to achieve literacy: dyslexia seems to get
written about in _Time_ magazine every few months (Tom Cruise is supposed
to be dyslexic, etc.), whereas being born aphasic is rare enough that I
can't recall ever having seen a magazine or newspaper article about it.

So again, think of "experience" in a broader sense, one that also
encompasses the evolutionary history of the human race.

However, to examine the effect of experience in the ordinary sense
on language acquisition, have a look at the sad story of "Genie".
See, for example, http://bioethics.georgetown.edu/hsbioethics/unit3_4.htm
(there are references at the end), or do a Google search on the keywords
Rymer and Genie.

> Now, what does this say about AI? Not that AIs must be structured "just
> like us" in order to be able to communicate with us. It is plausible
> that convergent evolution would produce brains by independent mechanisms
> which can communicate. And since we are designing AIs which can function
> in human society, I don't think there is much danger that they will be
> unable to communicate with us.

Depends on what you mean by "communication". If communication takes
place through the kind of complexity matching described by Edelman and
Tononi, then similarity of physical structure, life history, and
phylogenetic history provides much of the reassurance that communication
is "really" taking place (we don't have AIs or extraterrestrials yet,
so this question only comes up in practice with non-human terrestrial
animals at the moment).

However, in _Word & Object_ (1960), W. V. O. Quine develops his thesis
of the indeterminacy of translation, which holds even among human
languages. Quine imagines a linguist visiting a heretofore undiscovered
tribe and attempting, by "radical translation", to compile a
jungle-to-English translation manual for the unknown language. Such a
translation comprises a consistent system of analytical hypotheses, but
not necessarily a unique one.

In Chapter II, "Translation and Meaning", Section 15,
"Analytical Hypotheses" (pp. 71-72), Quine writes:

"Whatever the details of its expository devices of word translation
and syntactical paradigm, the linguist's finished jungle-to-English
manual has as its net yield an infinite **semantic correlation** of
sentences: the implicit specification of an English sentence, or
various roughly interchangeable English sentences, for every one of
the infinitely many possible jungle sentences. Most of the semantic
correlation is supported only by analytical hypotheses, in their
extension beyond the zone where independent evidence for translation
is possible. That those unverifiable translations proceed without
mishap must not be taken as pragmatic evidence of good lexicography,
for mishap is impossible...

[C]ountless native sentences admitting no independent check...
may be expected to receive radically unlike and incompatible
English rendering under ... two [different] systems [of
translational hypotheses].

There is an obstacle to offering an actual example of two such
rival systems of analytical hypotheses. Known languages are
known through unique systems of analytical hypotheses established
in tradition or painfully arrived at by unique skilled linguists.
To devise a contrasting system would require an entire duplicate
enterprise of translation, unaided even by the usual hints
from interpreters. Yet one has only to reflect on the nature
of possible data and methods to appreciate the indeterminacy...
There can be no doubt that rival systems of analytical
hypotheses can fit the totality of speech behavior to perfection,
and can fit the totality of dispositions to speech behavior as
well, and still specify mutually incompatible translations of
countless sentences insusceptible of independent control."

Section 16 "On Failure to Perceive the Indeterminacy" (pp. 73-77):

"A ... cause of the failure to appreciate the point is confusion
of it with the platitude that uniqueness of translation is absurd.
The indeterminacy that I mean is more radical. It is that rival
systems of analytical hypotheses can conform to all speech dispositions
with each of the languages concerned and yet dictate, in countless
cases, utterly disparate translations; not mere mutual paraphrases,
but translations each of which would be excluded by the other
system of translation. Two such translations might even be patently
contrary in truth value, provided there is no stimulation that would
encourage assent to either.

...

Something of the true situation verges on visibility when the
sentences concerned are extremely theoretical. Thus who would
undertake to translate 'Neutrinos lack mass' into the jungle
language? If anyone does, we may expect him to coin words or
distort the usage of old ones. We may expect him to plead in
extenuation that the natives lack the requisite concepts; also
that they know too little physics. And he is right, except
for the hint of there being some free-floating, linguistically
neutral meaning which we capture, in 'Neutrinos lack mass',
and the native cannot.

...

Observation sentences peel nicely; their meanings, stimulus
meanings, emerge absolute and free of residual verbal taint...
Theoretical sentences such as 'Neutrinos lack mass,' or the
law of entropy, or the constancy of the speed of light, are
at the other extreme. It is of such sentences above all that
Wittgenstein's dictum holds true: 'Understanding a sentence
means understanding a language.' Such sentences, and countless
ones that lie intermediate between the two extremes, lack
linguistically neutral meaning.

There is no telling how much of one's success with analytical
hypotheses is due to real kinship of outlook on the part of the
natives and ourselves, and how much of it is due to linguistic
ingenuity or lucky coincidence. I am not sure that it even
makes sense to ask. We may alternately wonder at the
inscrutability of the native mind and wonder at how very
much like us the native is, where in the one case we have
merely muffed the best translation and in the other case we
have done a more thorough job of reading our own provincial
modes into the native's speech.

...

One frequently hears it urged that deep differences of language
carry with them ultimate differences in the way one thinks,
or looks upon the world. I would urge that what is most
generally involved is indeterminacy of correlation. There is
less basis of comparison -- less sense in saying what is good
translation and what is bad -- the farther we get away from
sentences with visibly direct conditioning to non-verbal
stimuli and the farther we get off home ground."

> Ultimately what is necessary for communication is a common language,
> and common understanding about the world. There is nothing surprising
> about this; it is completely obvious. We will never communicate with
> AIs who don't understand the meaning of the words we use. And to have
> this understanding is to understand the world and the language.

There's another one of those slippery concepts -- "understanding".
And what will it take for those AIs to understand the meaning of
the words we use? That's the big question.

> This was the insight which led to Lenat's CYC project. It is an attempt
> to teach a computer how the world works, in the hopes that this would
> allow us to communicate with it. So far, CYC appears to be a failure,
> from what I have read. This does not necessarily mean that the idea
> is fundamentally wrong; rather, perhaps the mechanism being used is not
> appropriate for representing the world.

That was the main point of my post: the dubiousness (in the opinion of
Edelman and other neuroscientists of the past decade or so) of the old
cognitivist notion that statements in formal logic, plus an inference
engine linking them together, can possibly be the internal **basis** for
conscious interaction with the world by an autonomous intelligence.
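
For concreteness, here's a toy sketch (mine, not Lenat's; the facts,
the rule, and the names are invented purely for illustration) of the
kind of thing that picture has in mind: a store of logical assertions
plus a forward-chaining inference engine that grinds out their
consequences. The question is whether anything built on this pattern
alone could be the **basis** of understanding, rather than just a
useful tool.

    # Toy forward-chaining inference over hand-coded assertions, in the
    # spirit of "statements in formal logic plus an inference engine".
    # All facts and the single rule below are made up for illustration.
    facts = {("isa", "Fido", "Dog"), ("isa", "Dog", "Mammal")}
    rules = [
        # If X isa Y and Y isa Z, conclude X isa Z (transitivity).
        lambda fs: {("isa", x, z)
                    for (p1, x, y1) in fs if p1 == "isa"
                    for (p2, y2, z) in fs if p2 == "isa" and y1 == y2},
    ]

    def forward_chain(facts, rules):
        """Apply every rule repeatedly until no new facts appear."""
        while True:
            new = set().union(*(rule(facts) for rule in rules)) - facts
            if not new:
                return facts
            facts = facts | new

    print(forward_chain(facts, rules))
    # -> includes the derived fact ('isa', 'Fido', 'Mammal')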

> Ultimately, however we achieve AI, the machine will have to be able
> to learn about the world. This doesn't mean it must smell and taste
> and dance; humans unfortunate enough to have been trapped in paralyzed
> bodies have still developed full language skills. This proves that it
> is possible to learn enough about the world by being told, to understand
> it and speak about it.

By being told? No. That's another piece of this whole subtle question.
Being **embodied** is probably necessary for intelligence.

Good night!

Jim F.


