Anders Sandberg:
> I met him a few weeks back at a course in Trieste. He has some
> interesting ideas about the cerebellum as a modelling/inverse
> kinematics system which I don't believe in as an explanation for the
> cerebellum, but makes a lot of sense for a robot architecture. One
> subsystem models the expected outcome of actions, the other calculates
> what to do in order to achieve an outcome. He showed some videos of
> the various robots at the project, and they were quite impressive
> (although robots on presentation videos are *always* impressive, of
> course).
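That forward model / inverse model pairing is easy enough to caricature in
code. A toy sketch (entirely my own, nothing to do with Kawato's actual
architecture): one model learns to predict the outcome of a command, the
other learns to pick the command that produces a desired outcome, both fit
from the same movement data.

# Toy forward/inverse model pair for a made-up one-dimensional "arm".
# Names, dynamics, and data below are invented purely for illustration.
import numpy as np

class ForwardModel:
    """Predicts the expected outcome (next position) of a motor command."""
    def __init__(self):
        self.w = np.zeros(2)                   # weights for [position, command]

    def fit(self, positions, commands, next_positions):
        X = np.column_stack([positions, commands])
        self.w, *_ = np.linalg.lstsq(X, next_positions, rcond=None)

    def predict(self, position, command):
        return self.w @ np.array([position, command])

class InverseModel:
    """Computes the motor command needed to reach a desired outcome."""
    def __init__(self):
        self.w = np.zeros(2)                   # weights for [position, target]

    def fit(self, positions, targets, commands):
        X = np.column_stack([positions, targets])
        self.w, *_ = np.linalg.lstsq(X, commands, rcond=None)

    def command(self, position, target):
        return self.w @ np.array([position, target])

# Movement data from a made-up linear plant: x_next = 0.9*x + 0.5*u
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 500)
u = rng.uniform(-1, 1, 500)
x_next = 0.9 * x + 0.5 * u

fwd, inv = ForwardModel(), InverseModel()
fwd.fit(x, u, x_next)      # learn to predict what an action will do
inv.fit(x, x_next, u)      # learn which action achieves a desired outcome

cmd = inv.command(position=0.2, target=0.8)    # inverse model proposes a command
print(fwd.predict(0.2, cmd))                   # forward model predicts ~0.8

The sketch only shows the division of labor, of course; the interesting part
of the real architecture is presumably how the two get trained against each
other on an actual arm.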
The third chapter of _Robo sapiens_ is online at:
http://www.discover.com/sept_00/featbiobot.html
also see: http://robosapiens.mit.edu/
"...Robot researchers take pains to distinguish themselves from robot pundits.
But there is something so magical about the creation of artificial living
creatures -- mechanical entities with lifelike behavior -- that even the
soberest of these inventors wonder about what lies ahead for their creation, and
for humankind.
There is no shortage of soothsayers who prognosticate about the shape of
millennia to come: Kevin Warwick, Rodney Brooks, Bill Joy, Hans Moravec, and
their fellows. Robotics, they all agree, will form the future...."
(There's a photo of Kawato on page 55.)
With respect to your comment: "robots on presentation videos are *always*
impressive, of course", Mark Tilden has this to say:
Tilden is always wary of what he calls "Wizard of Oz" demonstrations: "'Pay no
attention to that graduate student behind the curtain -- I am the great and
powerful roboticist of Oz.' If you see a machine with a whacking great big cable
or antenna coming off it going to a supercomputer run by various grad students,
then you are looking at a Wizard of Oz demonstration. If it repeats a behavior
pattern, then you've got a puppeteer on tape. There are over 10,000
special-effects masters on the planet right now. But if you count the number of
people who are really researching robots, who are trying to go beyond fiction,
the total is small compared to the people who are making Jim Henson machines
that run using human control."
According to _Robo sapiens_, imitation is the essence of human intelligence. We
learn various kinds of tasks by watching, so Kawato recorded human movement for
DB (dynamic brain) to learn from.
One of the most striking achievements in this area was Christopher Atkeson's
open-loop juggling at Georgia Tech. Open-loop is the technical term for
control without feedback -- blindfolded juggling, in this case, since the
robot did not use its vision; it kept the balls going purely by repeating its
movement precisely.
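In control terms, "open loop" just means no sensor in the loop: the command
sequence gets replayed blind. A toy contrast, invented for illustration (this
is not Atkeson's juggler):

# Open-loop vs. closed-loop control, schematically. The command values and
# the zero-error sensor stub are made up for the demo.
recorded_commands = [0.10, 0.35, 0.60, 0.35, 0.10]      # hypothetical torques

def open_loop(commands):
    # Blindfolded: replay the tape exactly, trusting precise repetition.
    for u in commands:
        yield u

def closed_loop(commands, read_error, gain=0.5):
    # For contrast: feedback corrects each command using the sensed error.
    for u in commands:
        yield u - gain * read_error()

print(list(open_loop(recorded_commands)))
print(list(closed_loop(recorded_commands, read_error=lambda: 0.0)))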
Robots can already do many things that humans can't. For example, the dexterous
arm robot can balance a pole in the palm of its hand for two or three hours.
(Try to do it for fifteen minutes.) So how does this teach us something about
the human brain?
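(As an aside, pole balancing on its own is the textbook inverted-pendulum
problem. A minimal PD-controller sketch -- gains and dynamics invented here,
certainly not DB's actual controller -- looks roughly like this:)

# Balance a pole by accelerating the "hand" under it, using a linearized
# model of the pole about the upright position. Purely illustrative.
g, length = 9.81, 1.0            # gravity (m/s^2), pole length (m)
kp, kd = 20.0, 6.0               # PD gains on pole angle and angular velocity
theta, theta_dot = 0.2, 0.0      # start 0.2 rad off vertical
dt = 0.01

for step in range(500):          # simulate 5 seconds
    hand_accel = kp * theta + kd * theta_dot        # push the pivot under the pole
    theta_ddot = (g * theta - hand_accel) / length  # linearized pole dynamics
    theta_dot += theta_ddot * dt
    theta += theta_dot * dt

print(f"angle after 5 s: {theta:.4f} rad")          # ~0: the pole stays upright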
Stefan Schaal (Dynamic Brain project member and neurophysicist at the University
of Southern California) says they are looking into principles of learning and
self-organization that allow a system to develop and become intelligent by
itself. A major premise in biology is that there is structure in the world and
this structure is recognized by the brain. Based on this structure, the brain
can bootstrap itself, through learning and self-organization, to become better
and better.
This is the process that Dynamic Brain researchers would like to understand.
It's clear this can't happen from nothing, so what must be happening is that the
genome is giving us some information about how to learn -- what the right
learning algorithms are for the problems that the system is facing. And
researchers have to figure out what these basic ideas, or biases, from the
genetic coding are. But from there onward they believe that the brain
can self-organize.
This is a different way of thinking. In the end, it is basically a huge neural
network that self-organizes automatically to reach the level of intelligence
and competence that we see in biological systems.
In the textbooks, a neural network is a network of many simple units with simple
capacities. These units are connected in such a way that the connections can
change depending on the circumstances. The idea, which became widely known in
the 1980s, is to model the neurons of the brain, which can knit themselves into
new configurations to create memories and skills. DB researchers aren't doing
neural networking in the old-fashioned sense; that approach was pretty much
gone by the end of the eighties. "It's much more statistical learning -- we
have a very clear understanding of the statistical properties of the learning
systems we use," says Schaal. In statistical learning, the machine has a statistical model of how
its sensory data are generated, and it uses learning algorithms that exploit
this knowledge to assure statistical convergence to good learning results. DB
learns much more from 'statistical insights' than from any attempt by the
researcher to say, "oh, this is how the brain seems to connect neurons, let's
put it together this way and try to understand it later."
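To make "statistical learning" a bit more concrete, here is a minimal sketch
of locally weighted regression, the kind of technique Schaal and Atkeson are
known for in motor learning. The data and bandwidth are invented for the
demo; this is not DB's actual learning code.

import numpy as np

def locally_weighted_regression(x_query, X, y, bandwidth=0.3):
    """Predict y at x_query by fitting a weighted linear model around it."""
    # Gaussian weights: nearby training points count more than distant ones.
    w = np.exp(-((X - x_query) ** 2) / (2 * bandwidth ** 2))
    # Weighted least squares on [1, x] (intercept plus slope).
    A = np.column_stack([np.ones_like(X), X])
    W = np.diag(w)
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return beta[0] + beta[1] * x_query

# Noisy samples of an unknown "sensorimotor" mapping (made up for the demo).
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, 200)
y = np.sin(X) + 0.1 * rng.standard_normal(200)

print(locally_weighted_regression(0.0, X, y))    # close to sin(0.0) = 0.0
print(locally_weighted_regression(1.5, X, y))    # close to sin(1.5) ~ 0.997

The point of such methods is exactly what Schaal describes: the statistical
assumptions are explicit, so you know when and why the fit will converge.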
From a biological point of view, what DB researchers are inspired by is the
cerebellum, as you've pointed out. The cerebellum is a huge learning machine, a
real-time learning machine. It has tremendous capacity, and we don't know how to
duplicate that. What DB researchers have developed doesn't come even close to
the cerebellum, but it outperforms anything else out there in this domain. Not that
it's better than everybody else, but in this domain, when it comes to motor
learning, nobody can compete with what they are developing. But they have been
developing DB especially for this purpose. So it is like they have a special
domain and there they are good. But if you put DB in another domain, it will
suck (Schaal's words).
For example, the Cog project at MIT has the goal of making something cognitive,
a robot that can think with self-awareness, like a human being. Schaal thinks
that maybe his project is a little bit more boring. He just basically wants to
have a humanoid robot to do computational neuroscience. And he focuses on very
boring details of robotics... small problems. He makes very incremental progress
with these things. He doesn't have a POV about cognition or consciousness, or
believe that the robot is ever necessarily going to be conscious. He believes,
as Kawato says, that this might happen by itself at some point, if we just do
our work and understand more and more about the brain.
Stay hungry,
--J. R.
3M TA3
(As AI passes me on the way to technological singularity, it can look in the
rear view mirror at the letters beneath my initials. In the mirror the letters
read: EAT ME, as that is what some people believe AI will do to us all.)