From: "Mike Lorrey" <mlorrey@datamann.com>
> Nah, vacation in peace. 'nother Pina Colada please....
Yes, of course...
wasting away again in Margaritaville...
or sitting quietly, doing nothing...
...or watching the Robots show off what they've learned
http://www.eet.com/story/OEG20010813S0081
SEATTLE - Robots stole the show at the recent International Joint Conference
on Artificial Intelligence hosted by the American Association for Artificial
Intelligence.
Besides the fifth annual RoboCup, which featured robots playing soccer, other
competitions included robots rescuing people trapped in a collapsed building,
interactive hors d'oeuvre-serving robots trying to be the most conversational
"butler," plus a national Botball tournament among high-school teams of robot
builders from all over the country.
"We look to the natural world for inspiration; every stance must express an
intention," said Damian Isla, researcher at the Massachusetts Institute of
Technology (MIT) Media Lab, about his "dog," Duncan, in the paper "A Layered
Brain Architecture for Synthetic Characters," co-authored by graduate student
Robert Burke and professor Bruce Blumberg.
Like Sony's cute dog robots, MIT's Duncan endears itself by design. The
animated canine can perform all the standard dog commands, plus he "herds"
sheep. More important, Duncan builds an internal model of his world from his
senses to perform unstructured commands like "find" a sheep that has strayed.
"We chose a dog because they are very empathetic around people . . . and
canine psychology helped formulate our brain model," said Isla.
The brain model permits Duncan to extract "perceptions" of objects from his
stream of raw sensory inputs and to formulate actions that respond
appropriately to those perceptions. To do all that, his brain model needed a
short-term memory that melds the senses of sight and sound, coordinating the
two by location so that an object's visual appearance is bound to the same
place as its sound. From there an action can be chosen and the navigation
subsystem invoked to plan a route that carries the action out.
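To make that layering concrete, here is a rough Python sketch of how such a
perception/short-term-memory/action loop could be organized. Every class and
name below is invented for illustration; the actual MIT architecture is far
richer than this.

# Illustrative sketch of a layered perception -> memory -> action loop,
# loosely following the article's description. Not the MIT system itself.
from dataclasses import dataclass

@dataclass
class Percept:
    modality: str      # "vision" or "sound"
    label: str         # e.g. "sheep"
    location: tuple    # (x, y) in the world

@dataclass
class WorldObject:
    label: str
    location: tuple

class ShortTermMemory:
    """Melds sight and sound: percepts arriving from roughly the same
    location are bound to one remembered object."""
    def __init__(self, radius=1.0):
        self.radius = radius
        self.objects = []

    def integrate(self, percept):
        for obj in self.objects:
            dx = abs(obj.location[0] - percept.location[0])
            dy = abs(obj.location[1] - percept.location[1])
            if dx <= self.radius and dy <= self.radius:
                obj.location = percept.location   # co-located percepts, one object
                return obj
        obj = WorldObject(percept.label, percept.location)
        self.objects.append(obj)
        return obj

    def find(self, label):
        return [o for o in self.objects if o.label == label]

def plan_route(goal_location):
    """Stand-in for the navigation subsystem."""
    return ["walk toward", goal_location]

class Brain:
    def __init__(self):
        self.memory = ShortTermMemory()

    def sense(self, percepts):
        for p in percepts:
            self.memory.integrate(p)

    def act(self, command):
        # An unstructured command like "find sheep" is answered from memory,
        # then handed to the navigation layer.
        if command.startswith("find "):
            targets = self.memory.find(command.split(" ", 1)[1])
            if targets:
                return plan_route(targets[0].location)
        return None

duncan = Brain()
duncan.sense([Percept("vision", "sheep", (4, 2)), Percept("sound", "sheep", (4, 2))])
print(duncan.act("find sheep"))    # -> ['walk toward', (4, 2)]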
At the Seattle conference, more than 2,600 attendees listened to over 200
presentations, which included technical papers that fleshed out all the
aspects of creating a robotic intelligence in the likeness of the living, with
sessions like "Cognitive Robotics," "Spatial vs. Temporal Reasoning,"
"Causality," and "Belief Revision."
Whole sessions were devoted to such subtopics of cognitive modeling as
modeling with diagrams, modeling with categorization and modeling with
"perceptual grounding." The newest approach, grounded models, differs from
past approaches in that it encourages the robot to synthesize its concepts
directly from its sensors. In contrast, previous robotic "reasoning" was based
on axiomatic systems - diagrams or categories - preprogrammed into the robot
ahead of time by its "creator."
Grounded models instead start with an empty set of categories that
self-organize depending on their sensory inputs. As explained by professor
Josefina Sierra-Santibáñez of the Escuela Técnica Superior de Informática at the
Universidad Autónoma (Madrid, Spain), "grounded models are based on
conceptualization of information gathered by sensors and support a form of
intuitive reasoning, which can become the basis of [self-organized]
axiomatizations." Intuitive reasoning, for Sierra-Santibáñez, ensues when a
grounded robot tries to fit new sensory inputs into the categories it has
self-organized from previous sensory inputs.
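The gist of that self-organization can be sketched with a simple
online-clustering loop. The code below is a generic illustration of grounded
category formation, not the method in the paper; the threshold and the
distance measure are arbitrary choices for the example.

# Categories start empty and self-organize from raw sensor vectors; new
# inputs are "intuitively" fitted into whatever has been organized so far.
import math

class GroundedCategories:
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.prototypes = []        # one prototype vector per category
        self.counts = []

    def _distance(self, a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def observe(self, sensor_vector):
        """Fit the input into an existing category, or create a new one."""
        if self.prototypes:
            dists = [self._distance(sensor_vector, p) for p in self.prototypes]
            best = min(range(len(dists)), key=dists.__getitem__)
            if dists[best] <= self.threshold:
                # refine the category toward the new evidence
                n = self.counts[best] + 1
                self.prototypes[best] = [
                    (p * self.counts[best] + x) / n
                    for p, x in zip(self.prototypes[best], sensor_vector)
                ]
                self.counts[best] = n
                return best
        self.prototypes.append(list(sensor_vector))
        self.counts.append(1)
        return len(self.prototypes) - 1

cats = GroundedCategories(threshold=0.5)
print(cats.observe([0.1, 0.2]))    # -> 0 (first category created)
print(cats.observe([0.15, 0.25]))  # -> 0 (fits the same category)
print(cats.observe([5.0, 5.0]))    # -> 1 (a new category self-organizes)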
Other sessions covered specific types of cognitive functions, such as
planning. The planning sessions focused on improvements in traditional forward
chaining (deducing possible future events from current events) as well as
high-profile problems, such as dealing with incomplete knowledge and
uncertainty, or applying rule-of-thumb heuristics to simplify problems.
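For readers unfamiliar with the term, forward chaining itself is easy to
sketch: keep applying rules whose preconditions are already known facts until
nothing new can be deduced. A toy version, with made-up robot facts:

def forward_chain(facts, rules):
    """facts: set of strings; rules: list of (preconditions, conclusion) pairs."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for preconditions, conclusion in rules:
            if conclusion not in facts and preconditions <= facts:
                facts.add(conclusion)     # deduce a new fact from current facts
                changed = True
    return facts

rules = [
    ({"battery_charged", "motors_ok"}, "can_move"),
    ({"can_move", "map_loaded"}, "can_navigate"),
]
print(forward_chain({"battery_charged", "motors_ok", "map_loaded"}, rules))
# prints all five facts, including the deduced 'can_move' and 'can_navigate'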
New approaches
Several papers addressed combinatorial "explosions." One such paper showed a
new approach invented by Emmanuel Guere and Rachid Alami, researchers at the
French National Center for Scientific Research's Laboratory for Analysis and
Architecture of Systems (Toulouse, France).
Their "Shaper" algorithm solves combinatorial explosions in planning
algorithms by first simplifying a multidimensional task state space into a
"shape." Shaper then used this distilled version of the data when comparing
competing task scenarios, thereby avoiding the "explosion" of different
combinations that result from planning with a fully detailed state space,
according to the authors.
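The paper's details go beyond what the article reports, but the general trick
of abstracting a detailed state space before searching it can be sketched as
follows. The "shape" function and the toy domain here are invented for
illustration; this is not the Shaper algorithm itself.

from collections import deque

def shape(state):
    """Project a detailed state onto a coarse summary (its "shape")."""
    return (state["room"], state["holding"] is not None)

def actions(state):
    """Enumerate (action_name, successor_state) pairs from a state."""
    moves = []
    for room in ("kitchen", "hall", "lab"):
        if room != state["room"]:
            moves.append(("go_" + room, {**state, "room": room}))
    return moves

def plan(start, goal_room):
    seen = set()
    queue = deque([(start, [])])
    while queue:
        state, steps = queue.popleft()
        if state["room"] == goal_room:
            return steps
        key = shape(state)
        if key in seen:       # scenarios with an already-seen shape are treated
            continue          # as equivalent and pruned, shrinking the search
        seen.add(key)
        for name, successor in actions(state):
            queue.append((successor, steps + [name]))
    return None

print(plan({"room": "kitchen", "holding": None}, "lab"))   # -> ['go_lab']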
Machine learning sessions had separate tracks for hardware-robot learning and
software-agent learning, as well as tracks for reinforcement learning,
knowledge acquisition and inductive logic.
Some research, such as the MinPath learning navigator for wireless personal
digital assistants (PDAs), is already leading to practical real-world
applications. MinPath works by learning the behavior of wireless PDA visitors
to a Web site. It then suggests short-cut links to new visitors that can
drastically reduce their connection delays.
This artificial intelligence consists of an algorithm that solves the "deep
links" problem for PDAs - that is, some Web sites make you click through a
series of descending menus that bog down bandwidth-limited wireless PDAs.
MinPath learns the short-cut links to deep-link locations that other wireless
PDA users have found helpful, according to the authors, professors Pedro
Domingos and Daniel Weld at the University of Washington, Seattle, and
graduate student Corin Anderson.
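The flavor of that idea, stripped of MinPath's actual machinery, looks roughly
like this: learn from logged click trails which deep pages visitors usually
end up at, then offer those destinations as direct links. The frequency model
below is a hypothetical stand-in for the authors' learned model.

from collections import Counter, defaultdict

class ShortcutSuggester:
    def __init__(self):
        self.destinations = defaultdict(Counter)  # page -> Counter of final pages

    def train(self, click_trails):
        """click_trails: list of page sequences, e.g. ['home', 'sports', 'scores']."""
        for trail in click_trails:
            final = trail[-1]
            for page in trail[:-1]:
                self.destinations[page][final] += 1

    def suggest(self, current_page, k=2):
        """Return up to k likely deep destinations reachable from current_page."""
        return [page for page, _ in self.destinations[current_page].most_common(k)]

model = ShortcutSuggester()
model.train([
    ["home", "sports", "soccer", "scores"],
    ["home", "sports", "soccer", "scores"],
    ["home", "weather", "seattle"],
])
print(model.suggest("home"))   # -> ['scores', 'seattle']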
Neural networks were cited in several papers as being instrumental in solving
unstructured problems, such as the so-called road-sign problem. Robots often
confront learning situations - passing a road sign, for instance - where they
should filter irrelevant information from the event stream while waiting for
whatever the road sign was warning about.
Professor Fredrik Linåker from the University of Skövde (Sweden) and professor
Henrik Jacobsson of the University of Sheffield (England) described a method
that enables a neural network to learn information from a road sign and then
postpone responding to it until the cited event occurs.
The technique involved abstracting to a higher level, where only events
grounded in sensory-motor interactions - rather than every perceived object -
enter the higher-level event stream. Working from this trimmer, more relevant
set of events, the neural network not only learned the road-sign problem but
also kept the robot from losing track of the sign when unrelated events
intervened, like the car in front of it slamming on its brakes.
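Reduced to a toy, the delayed-response structure of the road-sign problem
looks like this. The hand-written filter and rules below merely illustrate the
control flow that, in the actual work, a trained neural network learns.

def salient(event):
    """Low-level filter: only events tied to sensory-motor decisions go up."""
    return event["kind"] in ("road_sign", "junction", "obstacle")

def drive(raw_event_stream):
    pending_turn = None
    actions = []
    for event in filter(salient, raw_event_stream):
        if event["kind"] == "road_sign":
            pending_turn = event["direction"]       # remember, don't act yet
        elif event["kind"] == "obstacle":
            actions.append("brake")                 # unrelated intervening event
        elif event["kind"] == "junction" and pending_turn:
            actions.append("turn_" + pending_turn)  # the postponed response
            pending_turn = None
    return actions

stream = [
    {"kind": "road_sign", "direction": "left"},
    {"kind": "scenery"},       # filtered out at the lower level
    {"kind": "obstacle"},      # car ahead brakes
    {"kind": "scenery"},
    {"kind": "junction"},
]
print(drive(stream))   # -> ['brake', 'turn_left']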
Of the invited papers, one by researcher Wolfgang Wahlster from the German
Research Center for Artificial Intelligence (DFKI; Saarbrücken, Germany)
described a European initiative called SmartKom, which is designed to enable
Europeans from one country to understand Europeans from any other country even
though they are speaking different languages.
SmartKom follows on the heels of the Verbmobil initiative, which ended last
year. Verbmobil was a seven-year, $80 million effort to enable Europeans to
speak over the phone to each other with automatic language translation. But
according to Wahlster, DFKI will up the ante for SmartKom by adding
distinctive gestures to spoken English, French, German and Spanish.
Communications will then be "delegated" to a virtual
assistant who will translate not only the speaker's language, but also the
natural hand-waving gestures of the locale.
_____________________________________________________________________
Stay hungry,
--J. R.
Useless hypotheses, etc.:
consciousness, phlogiston, philosophy, vitalism, mind, free will, qualia,
analog computing, cultural relativism, GAC, Cyc, Eliza, cryonics, individual
uniqueness, ego
Everything that can happen has already happened, not just once,
but an infinite number of times, and will continue to do so forever.
(Everything that can happen = more than anyone can imagine.)
We won't move into a better future until we debunk religiosity, the most
regressive force now operating in society.