I wonder if an AI might learn language most efficiently via holophrastic
utterances, as a baby does. Some gestural languages seem to have this
character (but I don't know much about ASL etc; see url below), in which
groups or sequences of gestures are not merely iconic or deictic (pointing
to objects) or pantomime but each encode a whole phrase or sentence, as it
were: who did what to whom. Children's post-babble utterances are often
like this, in which a single word (a verb, say) encodes an elliptical
sentence. Parents pretty quickly pick this up, I'm told.
The trouble with applying this notion to AI minds-in-boxes is that they
will have an altogether different being-in-the-world from that of
organisms. They have no inherited template behavioral grammars, no
autonomous groping 'babbling' that gets shaped swiftly by feedback from
their interaction with a nurturant and sometimes resistant world. Still, I
wonder if AIs might learn to communicate and build up their interior
world-models more rapidly and effectively if they could be taught in a
fluid holophrastic fashion.
Let their robot fingers do the talking.
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:56:39 MDT