Outtakes from PBS:
Years of effort have gone into Cyc. There are millions of facts and
relationships in its database, but it still has less understanding of the
world than a young child. Creating a machine that has common sense is an
immense challenge. (Lenat) will say "right" to that. Some AI researchers
believe that the best way to build machines that can understand the world the
way we do is to make them even more like us. A team at MIT's Artificial
Intelligence Lab has been working on Cog, a humanoid robot which experiences
the world in much the same way we do - through a human-like body and senses
which mimic our own. (Brooks) Inside every AI researcher is someone who wants
to build the ultimate intelligence. And I think all of us are ultimately
inspired by HAL 9000. We want to build that intelligence at the human level.
From my work I came to believe that it was important to have a body, to
interact in the world, and to interact with people in a human-like way in
order for the system to be able to learn how to be human. I think in hindsight
that HAL was a technological mistake, in thinking that it could be as
intelligent as it was without a body. So I've been pushing at looking at
robots with a human form. (Brian Scassellati) The Cog project started about
seven years ago. And
in that time, we've changed our focus a little bit - the way we look at
things and the kinds of questions we ask. But we've always been focused on two
goals really - building a complete engineering system, trying to put together
all of these different pieces, and also trying to understand something about
human intelligence by building a real machine. The robot has a visual system.
There are two cameras in each eye; one gives it a wide field of view...
...one by one with its corresponding amplitude, and you can see in the output,
you very quickly form a very accurate reconstruction of that individual.
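The reconstruction described here - adding basis images one by one, each scaled by its corresponding amplitude - is the core of eigenface-style face recognition. A minimal sketch in NumPy, using an invented toy "training set" of random 8x8 images in place of real photographs (the data, image size, and component counts are illustrative assumptions, not the system described above):

```python
import numpy as np

# Hypothetical training set: 20 tiny 8x8 grayscale "faces",
# flattened to 64-dimensional vectors. Real systems use photographs.
rng = np.random.default_rng(0)
faces = rng.random((20, 64))

# Principal components ("eigenfaces") of the training set.
mean_face = faces.mean(axis=0)
centered = faces - mean_face
# SVD of the centered data: the rows of vt are the eigenfaces.
_, _, vt = np.linalg.svd(centered, full_matrices=False)

def reconstruct(face, n_components):
    """Rebuild a face by summing eigenfaces, each scaled by its amplitude."""
    result = mean_face.copy()
    for eigenface in vt[:n_components]:
        amplitude = (face - mean_face) @ eigenface  # projection coefficient
        result += amplitude * eigenface             # add weighted component
    return result

target = faces[0]
full = reconstruct(target, 20)     # every component: essentially exact
partial = reconstruct(target, 5)   # a few components: rough approximation
print(np.allclose(full, target))   # → True
```

With only a handful of components the reconstruction is already a recognizable approximation, which is why the transcript says an accurate likeness forms "very quickly" as amplitudes are added.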
Recognizing faces from photographs is easy for a computer, compared to the
problem of recognizing what's happening in a scene from real life. The holy
grail of vision research is scene analysis - not just recognizing faces or
objects, but describing and understanding the dynamics of a whole scene. At
MIT's Artificial Intelligence Lab, researchers have been developing a system
to analyze the events taking place in the surrounding environment. During a
typical day, hundreds of pedestrians and vehicles move around the lab in
patterns which the computer has learned to recognize. Having learned the
normal patterns of activity, the computer can also recognize when something
unusual happens. Systems like this will one day be in use in places like
embassies and airports, analyzing the flow of people and objects and alerting
security forces to any suspicious activities. The Forest of Sensors is a
project that has been running for about the past three years, and we've had
five cameras running continuously at the MIT Artificial Intelligence Lab,
looking down from the 7th floor at the surrounding area and tracking the cars
and people and trash and whatever else happens to be moving in the environment.
And the goal of the project is essentially to model the environment - the
objects in it and what those objects do over time. In the upper
left, we have the view from a single camera. Next to it we see where the
camera believes objects are moving. Here comes an object and you can see in
the lower right that object being tracked...
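The loop described above - detect which pixels are moving, group them into an object, and follow that object from frame to frame - can be approximated with simple frame differencing. A toy sketch in NumPy on synthetic frames; the scene, the bright "car", and the brightness threshold are all invented for illustration and are far simpler than the actual Forest of Sensors system:

```python
import numpy as np

def moving_pixels(prev_frame, frame, threshold=10):
    """Pixels whose brightness changed by more than the threshold are 'moving'."""
    return np.abs(frame.astype(int) - prev_frame.astype(int)) > threshold

def centroid(mask):
    """Centre of mass of the moving pixels: a crude object position."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return (ys.mean(), xs.mean())

# Synthetic 20x20 scene: a bright 3x3 "car" sliding one pixel right per frame.
frames = []
for t in range(5):
    frame = np.zeros((20, 20), dtype=np.uint8)
    frame[8:11, 2 + t:5 + t] = 255
    frames.append(frame)

# Track: compare each frame with the previous one and log the centroid.
track = []
for prev, cur in zip(frames, frames[1:]):
    mask = moving_pixels(prev, cur)
    track.append(centroid(mask))

print(track)  # the x coordinate drifts steadily right as the object moves
```

A real system would add background modeling robust to lighting changes, and - as the transcript notes - would learn the statistics of normal trajectories so that an unusual path stands out as an anomaly.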
...1960's, when Clarke was writing 2001, most computers looked like this - an
IBM workhorse, used by medium-sized businesses to run their accounts. It was
manned by specially trained personnel, driven by a 100-year-old technology -
punched cards - and its output device was a teletype. What would a computer be
like in the year 2001? To find out, Clarke and Kubrick consulted experts in a
new branch of computer science called "AI" - artificial intelligence. Many of
the experts, then as now, were at MIT - the Massachusetts Institute of
Technology. AI pioneer Marvin Minsky was one of the advisors on the film.
(Marvin Minsky) Kubrick came to visit MIT when he was planning the film. In
fact he went to all the universities. He was particularly interested not in
whether the computers would be intelligent by 2001 but in the vision itself of
the film. He wanted to get some idea of what would be the state of computer
graphics in the future and what the computers would do for the spaceship and
that sort of thing. And I think he got all of that right. (Rod Brooks) Back in
1968, AI was dealing with teletype input where you could type a very simple
phrase or a very simple mathematical equation and the system could echo back
with some analysis of it. There was no real-time vision; there was hardly any
vision at all - maybe of a single photograph, taking many hours to compute.
There was no speech understanding. So to have a computer which could interact
with people on the same basis that people used was something that was clearly
far in the future. [Aram Khachaturian's "Gayane Ballet Suite" plays] In the
early days of AI in the 1960's, the ultimate test for machine intelligence was
considered to be the game of chess...
This archive was generated by hypermail 2b30 : Sat May 11 2002 - 17:44:22 MDT