> Franklin Wayne Poley writes:
> > Which do you see as the bigger technical hurdle: recognition of objects in
> > the household or locomotion, once they are recognized?
> The problem is both building highly dynamic maps, and navigating in
> them fast enough, while not failing frequently (falling down the
> stairs, etc.).
I think we can gain a lot of understanding by analyzing this situation in
as much detail as possible so I'll take it as far as I can and perhaps
someone else can shed some light on it. Words like "highly dynamic
maps" and "navigating" can be somewhat ambiguous. So we study exactly what
is happening. It is not so different from what I have done many times in
diagnostic work for mentally-physically handicapped people. We start
with a family report that Johnny, who has some mental and physical
handicaps, doesn't pick up after himself, is untidy, etc. Other than the
standardized psychological tests we try to zero in on what Johnny is doing
when he crosses the room and picks something up. Exactly where do the
difficulties lie? Is it a visual handicap? Perceptual? A motor
dysfunction? If so, exactly what among many possible motor dysfunctions?
> I was very positively impressed by Hans Moravec during
> last Stanford meet, discussing these points
Yes, I think he is one of the top people in his field. Wasn't his PhD work
on the "Stanford Cart"? So let's call our robopatient SC. What is SC's
standing on the following criteria?
(1) In traversing a room SC encounters a visual field with a large number
of objects demarcated from one another by borders/edges, patterns etc.
Is SC able to ITEMIZE ALL OBJECTS in the room, given enough time,
computing power etc? In other words, at the level of a basic visual system
required to detect (itemize) all objects in the field, is SC's artificial
visual system up to the task? Forget about speed, computing power,
programming etc. at this level. It is a question of object recognition.
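The "itemize all objects" question can at least be made concrete. Here is a minimal sketch, assuming the visual field has already been reduced to a binary grid of object/background pixels (a big assumption; that reduction is most of the hard work). Objects demarcated by borders then fall out as connected regions:

```python
# Toy sketch of "itemizing" a visual field: count the 4-connected
# regions of 1s (object pixels) in a binary grid via flood fill.
# Real vision systems are far more complex; this only illustrates
# the counting step once segmentation is done.

def itemize_objects(grid):
    """Count 4-connected regions of 1s in a binary grid."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and not seen[r][c]:
                count += 1
                stack = [(r, c)]  # flood-fill this object
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < rows and 0 <= x < cols
                            and grid[y][x] == 1 and not seen[y][x]):
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return count

# Two separate "objects" in a 4x5 field:
field = [
    [1, 1, 0, 0, 0],
    [1, 0, 0, 1, 1],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0],
]
print(itemize_objects(field))  # 2
```
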
Now I don't have a definitive answer at this time. I think the best
people to answer that question might be private companies selling
artificial vision systems. I came across the web site for a company called
"Cognex" which billed itself as number one in this field but they still
haven't replied. We are told otherwise that these systems can pick human
faces out of a crowd and use them to match with police records and
Kurzweil (1999) tells us some banks use face recognition technology.
Then there is iris recognition and finger print recognition. Last week,
Discovery did a report on the robotic milking machines which half of the
new milk parlours in Europe now use. Any machine which can recognize a
cow's udder and teat can't be too "visually handicapped". But I still
wonder if there is some new discovery required before a good, all-purpose
general visual system can be developed.
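For what it's worth, the face-matching systems described above generally reduce each face to a feature vector and match by distance against a database of records. A toy nearest-neighbour sketch, with entirely invented vectors and record names:

```python
import math

# Hypothetical illustration of face matching: each face is assumed to
# have been reduced to a numeric feature vector; matching is then just
# nearest-neighbour search. The records and numbers are made up.

def nearest_record(query, records):
    """Return the record name whose feature vector is closest (Euclidean)."""
    best_name, best_dist = None, float("inf")
    for name, vec in records.items():
        dist = math.dist(query, vec)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

records = {
    "record_A": (0.1, 0.9, 0.3),
    "record_B": (0.8, 0.2, 0.5),
}
print(nearest_record((0.75, 0.25, 0.5), records))  # record_B
```
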
This is crucial to my whole investigation. Do we require "only" huge
amounts of additional labour and funding to advance P2 and P3 to P4 and P5
etc. or do we need some new inventions? Let me illustrate that another
way. Let's suppose we use "cyborg" technology for comparison. Kurzweil
(1999) foresees great progress in this technology before 2099 and I just
saw a report that Michael Saylor is going to invest in it. However, before
the things Kurzweil foresees come to pass some new inventions/innovations
are required. It is almost impossible to rationally predict when these
things will happen or how much they will cost. For example, researchers
can now "fuse" a few rat or leech neurons with a computer chip. Who can
say what it will cost, etc. to do that on a larger scale and with human
neurons?
Next, even if it can be done while keeping the tissue alive, how do we
control the signals from biological to mechanical sub-systems and
vice-versa? These developments and more are very "iffy". On the other
hand, it looks to me like advancing P3 to PX which will surpass human
equivalency (a Moravec concept) in many ways requires "only" quantitative
contributions. That is a very important distinction because if it is
correct it means we can rationally and convincingly spell out what
benefits investors/consumers will get from spending another $100 m. or
$100 b. or $100 t. So that is why I do these analyses. The next being...
(2) Locomotion/motion. Suppose SC can identify everything in the room. Can
it get to these objects? The gross movements involved in going to the site
where there is, let us say, a collection of electric cords aren't a
problem as long as the cords can be recognized and the commands given to
relocate. SC may wheel, crawl or walk its way there. No problem. The fine movements
on site could be a problem. But again it doesn't look to me like the kind
of problem which more time and money can't solve. Range-finding is the key
issue (artificial depth perception). Range finders which can guide a
robotic gripper to a cow's teat or the gas cap of a car must be pretty
efficient. So I'm not worried about this problem area. If early models of
humanoid need a "walker" as a third leg to stabilize their walking that is
not a problem. Humanoids could even be built with 3, 4, 5 or 6 legs for
stability.
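On the range-finding point: with a calibrated stereo camera pair, depth follows from the standard triangulation relation Z = f * B / d (focal length f in pixels, baseline B in metres, disparity d in pixels). A minimal sketch, with invented camera numbers:

```python
# Stereo range-finding sketch: standard triangulation  Z = f * B / d.
# The focal length, baseline, and disparity below are invented for
# illustration, not taken from any real robot.

def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth in metres from pixel disparity between a stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (object at infinity?)")
    return focal_px * baseline_m / disparity_px

# e.g. f = 700 px, B = 0.12 m, disparity = 42 px -> about 2.0 m
print(stereo_depth(700, 0.12, 42))
```
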
(3) Artificial Brain Power. I am proceeding on the assumption that
whatever problems we throw at SC, there is a computer powerful enough to
handle them. Prior to that, I am assuming that the commands required for
SC to cross a room and pick up the cords etc. can rest on software
which "only" requires a large amount of code-writing time.
I think that's about it for an overview of this kind of analysis. The
problem looks overwhelming for a doctoral student (like Moravec when he
was working on this initially). But how much money does a doctoral student
get for equipment etc? He's lucky to get a few thousand. And even as a
professor over a lifetime career he's lucky to get a few million. As you
said earlier Eugene, even $100 m. for Honda Humanoids is a small sum
compared to what is needed.
If these big problems only require big expenditures of labour and
money, that would be very good news for the robotics industry BECAUSE THEY
CAN BE ACCURATELY ESTIMATED. It is a pure guess as to how much it will
cost to build a nanoassembler or to do the genetic/medical research
required to double a human lifespan. But it should be possible to estimate
the number of hours required to write a program so that SC can carry on a
standard conversation as long as somebody out there (in psyc, law,
linguistics etc.) knows all the rules for conversing. Do they? This is the
kind of question the proposed ed tv program would continue with, asking,
criterion-by-criterion, what does it take to attain "human
equivalency" for our consumer robot?
The benefits in theory are enormous. The aspect of machine psychology of
particular interest here is work performance. Now I'm not saying that
research should take the humanoid or nonhumanoid path, but as long as the
Japanese are going ahead with a number of humanoid projects I'm happy to
follow their progress.
Is there some aspect of artificial sensation-perception,
artificial behaviour or artificial brain which could not be brought up to
human equivalency for WORK PERFORMANCE ONLY by investing huge amounts of
money and labour? I don't care if PX can breathe like a human or chew gum
like a human or has the skin texture of a human. I only care if it can do
work like a human. If it can, I think we'll pay every cent we have to
develop it. And the GPP (Gross Planetary Product) is about $50 t. So
there's the summary and bottom line of what I am investigating for this
proposed ed tv program. If you can think of any work-related criterion
which would require a new invention rather than more funding/labour please
let me know.
This is a fascinating class of human...unique. We've never had roboticists
on the planet before. So I'm very interested in what people like Moravec,
Warwick, Brooks etc. say. They are a different kind of cat as the saying
goes. For sure they all tell first rate "scary stories" and I'd invite any
of them to a Hallowe'en party.
This archive was generated by hypermail 2b29 : Thu Jul 27 2000 - 14:11:34 MDT