Re: Robots, but philosophers (or, Hal-2001)

From: Franklin Wayne Poley (culturex@vcn.bc.ca)
Date: Tue Sep 26 2000 - 16:56:18 MDT


-------------------------------------------------------------------------------
Machine Psychology:
               <http://users.uniserve.com/~culturex/Machine-Psychology.htm>
-------------------------------------------------------------------------------

> Date: Tue, 26 Sep 2000 08:58:41 -0700
> From: hal@finney.org
> Reply-To: extropians@extropy.org
> To: extropians@extropy.org
> Subject: Re: Robots, but philosophers
>
> Franklin writes:
> > Correct me if I am in error, but the attitude of this (Extropians) list is
> > that AI to surpass human equivalency is either here (in private and/or
> > military budgets) or almost here (within a decade). Minsky OTOH sees AI as
> > having a long way to go to get to that level. I'll forward a small sample
> > of my exchanges with him.
>
> There is a diversity of opinion, but I think only a small but vocal
> minority expects to see AI so fast. I suspect that more people would
> agree with estimates roughly in line with Moravec's 2030-2040 time frame,
> possibly as early as the 2020s.
>
> Hal

How about 2001, Hal? Could it be that by 2001, someone somewhere will
already have AI machinery to surpass human equivalency? If anyone wants to
review the lengthy exchange with Minsky, let me know off-list and I can
give you access to the archives of
EDTV-Robotics-State-Of-The-Art@egroups.com. What I am doing with that
by-invitation-only list is to work out the script for a proposed ed tv
program with a local (Vancouver) tv studio. Unless I am satisfied that the
script covers the state-of-the-art (in the public domain anyway) I don't
want to go ahead with it.
   In summary, here is the argument that AI now is at a stage comparable
to the man-on-the-moon program from a 1960 perspective. In other words, we
mostly need quantitative extensions of what we know now and the
qualitative aspect of this project is not overwhelming. That is, we can
now see the areas which require innovation or invention and we can
reasonably assume that the breakthroughs will be made. The sheer magnitude
of the project should not be a deterrent. If we know how to reach the
objective and it is worthwhile to do so, so what if it costs hundreds of
billions?
   In my 1976 text (with Al Buss), "Individual Differences", Chapter 3 is
on Intelligence. Pages 41-43 give brief descriptions of 19 primary mental
abilities. We can proceed by different models but the Thurstone-Cattell
model has advantages for inter-disciplinary communication. Cattell's CAB
uses a similar set of factors. If you measure all 19 factors you have a
pretty good estimate of 'g' or general intelligence. If our machinery can
competently churn through all those problems entailed in the 19 factors
and surpass human equivalency, I think we have one helluva machine. Maybe
we will call it Hal!
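The idea of estimating 'g' from a battery of factor scores can be sketched in a few lines. This is purely illustrative and not from the original post: the factor names, the norm-group numbers, and the unit-weighted averaging of standardized scores are all assumptions made for the sketch; Cattell's CAB has its own scoring scheme.

```python
# Illustrative sketch (assumptions, not the CAB's actual procedure):
# estimate 'g' as a unit-weighted composite of standardized primary
# mental ability scores, in the spirit of the Thurstone-Cattell model.
from statistics import mean, stdev

# Hypothetical raw scores for a small norm group on four of the
# primary factors (V = Verbal, N = Number, S = Spatial, M = Memory).
norm_group = {
    "V": [52, 48, 60, 55, 45],
    "N": [70, 65, 80, 60, 75],
    "S": [30, 35, 28, 40, 32],
    "M": [90, 85, 100, 95, 80],
}

def z_score(x, sample):
    """Standardize a raw score against the norm group's mean and SD."""
    return (x - mean(sample)) / stdev(sample)

def estimate_g(subject_scores):
    """Unit-weighted average of standardized factor scores."""
    zs = [z_score(subject_scores[f], norm_group[f]) for f in norm_group]
    return mean(zs)

# A subject scoring somewhat above the norm group on every factor.
subject = {"V": 58, "N": 78, "S": 36, "M": 96}
print(round(estimate_g(subject), 2))
```

With all 19 factors measured the same composite would, as the text says, give a fair estimate of 'g'; a machine could be run through the same battery and scored identically.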
   Minsky didn't like the psychometric approach. In the archives you will
see that he says that's because it "doesn't deal with underlying
structures". He is correct. But AI is approached in different ways by
different people. In his book, "Society of Mind" he says "This book tries
to explain how minds work" (p. 17). I am not trying to explain how minds
work. I am the laziest man in the whole wide world. I want machines to do
my thinking for me. I want them to hand me the results of the problems
presented to them, and I don't care much how their artificial minds work
as long as they do that.
   By illustration, arithmetic ability is used enough in the testing of
real human intelligence that it comes out as a factor of its own. My
little $5 calculator is a wizard at Factor N problems, a fact that I
appreciate whenever I recall that when I started graduate school we used
cumbersome Monroe calculators that clanked and crashed around and broke
down every other day and were painfully slow. It doesn't matter to me
whether my $5 digital calculator solves Factor N problems the way a real
human intelligence does. I credit it with human equivalency. Besides, my mental
arithmetic is such a rote procedure that you have to wonder if real
human intelligence in general is any loftier than artificial humanoid
intelligence.
   So we go through the other 18 factors and we realize that AI machines
already surpass human equivalency in many ways. They are superior to
humans in Reasoning Factors (arithmetic/logic/mathematics) if we care to
take the time to write programs for all of these disciplines. They are
superior in various kinds of Memory Factors. They are superior in many
kinds of Visualization tasks. It is with respect to the Verbal Factors
that machines are most at a disadvantage. But even here consider that
spelling is sometimes used in psychometrics and machines meet the human
equivalency criterion in spelling, as they do in 'definitions'. As I
posted yesterday, I think there are 4 areas where innovation/invention
rather than sheer massive investment of time and labour MIGHT be required
to build Hal-2001 so as to meet human equivalency criteria: (1) voice
recognition in difficult circumstances, e.g. with lots of noise and a weak
signal; (2) visual object recognition or itemization of objects in
difficult circumstances, e.g. with lots of clutter and poorly defined
borders to objects; (3) conversational ability; (4) reading ability, i.e.
the ability to deal with text-question-answer sequences.
   But note that in all 4 of these areas significant progress has been
made. Maybe Hal-2001 exists already as a private or military project. If
so, what does it mean? It means you could converse with it as you could
with a human. You could ask it for expert advice in law, medicine,
engineering and scores of other professions and it would give expert
answers. Given that it is this learned, it could build replicas of itself
if there is a need to do so. It could work on problems (like advancing
military science or the space program) tirelessly, 24 hours a day, more
or less error-free, and more competently than teams of human experts.
Given that learning abilities are already built into the psychometric
model, it can add to its store of learning. In other words, it is a
learning machine as well as an AI machine.
   What can't it do? It will probably fall short of the creativity of
the most capable humans. I would expect some creative output in the
arts (Kurzweil gives examples in his writings) but I wouldn't expect plays
to surpass those of Shakespeare. I don't expect a Robo-Mozart. However,
given that those 4 areas above are the most difficult problem-areas in
such a development I see no reason why such a machine could not be built
now. And its value should justify hundreds of billions of dollars in
R&D. Indeed, Hal-2001 may exist already.

FWP



This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:39:12 MDT