Re: Contextualizing seed-AI proposals

Date: Thu Apr 12 2001 - 20:39:24 MDT

Jim writes:
>Thus, in the view of neurobiologist Walter Freeman...
>"the brain [brings] the full weight of a lifetime's experience to
>bear on each moment as it [is] being lived. ...
>Consciousness is about bringing your entire history to bear on
>your next step, your next breath, your next moment." (p. 268).
>A linguistic exchange between two such "memory landscapes" relies
>for its power on the fact that the receiver can **recreate** a
>conscious state similar (enough) to that of the sender, rather
>than on the unlikely interpretation that the message somehow
>encodes all the complexity of the conscious state of the sender:

It is true that in intimate conversation where there is a true "meeting of
the minds", you do end up imagining the mind you are speaking with. But I
don't think this effect is as universal as is being presented here. I can
think of a number of examples where no such presumption is necessary.

One example would be relatively structured or formal interchange.
If you exchange simple greetings, there is no particular deep meaning
to look for. Likewise if you are engaging in a structured question and
answer session, you don't necessarily have to construct a mental model
of the other person. You're looking for information, and only the bare
minimum of experience necessary to resolve ambiguities is relevant.

A larger example is animal consciousness. Animals are not linguistic.
Do animals "bring the full weight of a lifetime's experience to bear
on each moment as it is being lived?" I don't see that animal brains
are so different from humans that the answer can be different. So if
it is true for humans, it must be true for animals as well. Yet some
animal babies are able to function very well immediately after birth.
Foals run with the herd within hours of birth. Evidently they are able
to function very well with virtually no experience.

To me this suggests a greater role for genetically determined structures
than for experiential ones. We aren't really bringing the whole weight of
our experience to bear on every moment; or rather, that is only the
smallest part of what we are bringing. Instead, we are bringing the whole
weight of billions of years of evolution to that moment. Our brain
structures determine what we experience.

Verbal messages succeed not so much because we share enough
experiences with others that we can reconstruct their minds;
rather, they succeed because our brains are structured so as to
elicit meaning from these communications.

Now, what does this say about AI? Not that AIs must be structured "just
like us" in order to communicate with us. It is plausible that something
like convergent evolution would produce, by independent mechanisms,
brains which can communicate. And since we are designing AIs to function
in human society, I don't think there is much danger that they will be
unable to communicate with us.

Ultimately what is necessary for communication is a common language,
and common understanding about the world. There is nothing surprising
about this; it is completely obvious. We will never communicate with
AIs who don't understand the meaning of the words we use. And to have
this understanding is to understand the world and the language.

This was the insight which led to Lenat's CYC project. It is an attempt
to teach a computer how the world works, in the hopes that this would
allow us to communicate with it. So far, CYC appears to be a failure,
from what I have read. This does not necessarily mean that the idea
is fundamentally wrong; rather, perhaps the mechanism being used is not
appropriate for representing the world.

Ultimately, however we achieve AI, the machine will have to be able
to learn about the world. This doesn't mean it must smell and taste
and dance; humans unfortunate enough to have been trapped in paralyzed
bodies have still developed full language skills. This proves that it
is possible, simply by being told, to learn enough about the world to
understand it and speak about it. Something like this must be an element of any
AI teaching effort.


This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:59:45 MDT