ROBOT: Self-organizing Autonomous Incremental Learner

From: J. R. Molloy (jr@shasta.com)
Date: Fri Aug 17 2001 - 12:02:14 MDT


http://www.siliconvalley.com/docs/news/svfeatures/084346.htm
BY ROBERT S. BOYD
Knight Ridder Newspapers

EAST LANSING, Mich. -- Would you like it if your kids came equipped with
``good'' and ``bad'' buttons you could push to make them behave?

Of course not. Nobody wants to raise a child to be a robot. Yet that's the way
John Weng, a robotics expert at Michigan State University here, is teaching a
robot to learn like a child -- to obey spoken commands, trundle down a hall,
find and pick up toys with its mechanical hand.

Weng is breeding a new kind of ``intelligent'' robot that learns in a novel
way: by experience, the way animals and people do. He said this approach to
learning will be cheaper, faster and more flexible than traditional robot
training methods, which mostly are limited to what a human programmer tells
the machine to do.

Instead of stuffing its computer brain with elaborate instructions, like Deep
Blue, the IBM chess champion, Weng teaches his robot a few basic skills and
then lets it learn on its own by interacting with its environment.

He compares the process to teaching a baby to walk by holding its hands and
then letting go. In his lab, a human trainer first controls the robot's
actions manually, then sets it free to perform its new tricks on its own.

Weng calls his machine a developmental robot, because -- unlike most
traditional robots -- it ``develops'' its new abilities through practice,
gaining skill with each training session.

``Humans mentally raise the developmental robot by interacting with it,'' he
said. ``Human trainers teach robots through verbal, gestural or written
commands in much the same way as parents teach their children.''

Named SAIL (for Self-organizing Autonomous Incremental Learner), Weng's
robot-in-training wanders the halls of Michigan State's Engineering Building,
responding to touch, voices and what it ``sees'' with its stereoscopic vision
system.

It works something like AIBO, the robotic toy dog from Sony Corp. that
responds to pats on the head, but on a vastly more sophisticated level.

Five feet tall and black-skinned, SAIL has a boxy torso, a round head, two
eyes, one arm and hand, and a wheelchair base to roll around on. A more
human-looking successor, nicknamed Dave, is on the drawing boards for next
summer.

According to Weng, a developmental robot acquires its smarts in two ways: The
first is ``supervised learning'' under the direct control of a human teacher.
Then comes ``reinforcement learning,'' in which the trainer lets the robot
operate on its own but rewards it for successful action and penalizes it for
failure.

In supervised learning, for example, Becky Smith, one of Weng's students,
steers SAIL down a corridor by pushing touch sensors on its shoulders. ``To
train the baby robot to get around, we take it for a walk,'' she said.

After a few practice sessions, Smith lets the machine go free. She said SAIL
needs only one lesson to learn to move in a straight line, but 10 sessions to
get the hang of going around corners on its own.

In another type of lesson, the human trainer speaks an order -- such as ``go
left,'' ``arm up'' or ``open hand'' -- then makes the robot perform the action
by pushing one of the 32 control sensors on its body.

``The robot associates what it hears with the correct action,'' Weng said.
After 15 minutes' training, SAIL could follow such commands correctly 90
percent of the time, he said.
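The article doesn't describe Weng's algorithm, but the training loop it sketches -- hear a command, feel the matching sensor press, and later answer similar commands with the remembered action -- can be illustrated with a toy nearest-neighbor associator. The feature vectors and action names below are invented for illustration, not taken from SAIL.

```python
import math

# Hypothetical sketch of the supervised phase described above: the robot
# stores (sensed command, demonstrated action) pairs, then answers a new
# command with the action of the most similar stored example.
class CommandAssociator:
    def __init__(self):
        self.memory = []  # list of (feature_vector, action) pairs

    def train(self, features, action):
        """Trainer speaks a command and pushes the matching body sensor."""
        self.memory.append((features, action))

    def respond(self, features):
        """Pick the action whose stored command sounds most similar."""
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        _, action = min(self.memory, key=lambda m: dist(m[0], features))
        return action

robot = CommandAssociator()
robot.train([0.9, 0.1], "go left")   # invented acoustic features
robot.train([0.1, 0.9], "arm up")
print(robot.respond([0.8, 0.2]))     # prints "go left"
```

A real system would extract features from audio and generalize far beyond stored examples, but the associate-then-recall structure is the same.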

To strengthen the robot's newfound skills, it next attends an advanced class
of reinforcement learning. The trainer lets the robot ``explore the world on
its own, but encourages and discourages its actions by pressing its `good'
button or `bad' button,'' Weng explained.

Alternatively, instead of pressing the buttons, a trainer says ``Good'' when
SAIL does what it's supposed to do, and barks ``Bad'' when it makes a mistake.

Numbers in SAIL's computer brain are adjusted to reflect these experiences.
The next time, presumably, it will do better.

``The `good' and `bad' commands speed up the learning process,'' said Weng.
``Mind and intelligence emerge gradually from such interactions.''

Weng's research on developmental robots is supported by the Defense
Department's Advanced Research Projects Agency and by the National Science
Foundation. Microsoft and Siemens AG, the German electronics giant, also have
contributed money. So far, about $1 million has been spent, he said.

Eventually, Weng hopes ordinary people will be able to buy and train their own
robots to do household chores, take Grandpa for a walk or simply entertain.

``Anyone can train a highly improved developmental robot of the future -- a
child, an elderly person, a teacher, a worker,'' Weng predicts. ``You could
personalize it and teach it tricks. You could have a competition -- say, my
robot can dance better than your robot.''

Weng is by no means the only researcher who is struggling to make robots that
can learn on their own. Many others are developing machines that learn to
maneuver in their surroundings with minimum human control.

For example, Ron Arkin, an expert on robot behavior at the Georgia Institute
of Technology in Atlanta, is developing robots that can explore previously
unknown environments. The Defense Department is financing his work on teams of
mobile robots that can scout out hostile territory and report what they find.

``We cannot foresee all the events that may occur to a robot,'' Arkin said.

At Carnegie Mellon University in Pittsburgh, Christopher Atkeson uses a
version of reinforcement learning to train robots. ``The goal is to reduce the
amount of expensive expert human input into robot programming,'' he said.

Even AIBO, the toy robot dog, ``learns'' from experience. If you pat its head
when it puts out its paw to you, that increases the probability that it will
put out its paw again.

But Weng insists that SAIL comes closest to mimicking a real child's learning
process. ``It's like teaching a kid to ride a bicycle -- you push him first
and then let go,'' he said. ``Nobody else does it this way.''
***************************

--J. R.

Useless hypotheses, etc.:
 consciousness, phlogiston, philosophy, vitalism, mind, free will, qualia,
analog computing, cultural relativism, GAC, Cyc, Eliza, and ego.

     Everything that can happen has already happened, not just once,
     but an infinite number of times, and will continue to do so forever.
     (Everything that can happen = more than anyone can imagine.)

We won't move into a better future until we debunk religiosity, the most
regressive force now operating in society.



This archive was generated by hypermail 2b30 : Fri Oct 12 2001 - 14:40:10 MDT