Re: Robots, but philosophers (or, Hal-2001)

From: Samantha Atkins (samantha@objectent.com)
Date: Thu Sep 28 2000 - 03:10:29 MDT


Franklin Wayne Poley wrote:
>
> On Wed, 27 Sep 2000, Samantha Atkins wrote:
>
> > Franklin Wayne Poley wrote:
> >
> > > How about 2001, Hal? Could it be that by 2001, someone somewhere will
> > > already have AI machinery to surpass human equivalency?
> >
> > Not on this planet. Maybe you want to call on Ashtar High Command or
> > some such. :-) They've been feeding us all of this tech anyway, don't
> > ya know?
>
> How about the "Andromeda Strain", some little AI seed that will grow by
> genetic algorithms until it knows all?
>

Where do you propose this seed came from? Do you honestly think we are
capable of creating it next year? Don't confuse fiction with reality.
We live in sci-fi times but let's keep some groundedness for the sake of
halfway decent real-world plots.

> >
> > Qualitative is not overwhelming?
>
> I listed 4 particular areas. The fact that significant progress has been
> made in all 4 seems hopeful to me. For example, a computer programmer from
> Australia sent me 390 k. on a program for grade one reading (if anyone
> wants it just let me know off-list). I'm not even sure there is a
> qualitative problem to be overcome here. Can you give an example of a
> passage of grade one reading for which we can't program questions with
> correct answers? If not, the programmers go on to grade two and so on.
> What are the inventions required in the other 3 areas I mentioned?
>

Sigh. If you are not sure there are qualitative problems, then go read
the literature.
 
> > This has to be a joke. We have no
> > idea what qualia even are among other "qualitative" problems of reaching
> > human level intelligence.
>
> I think the specialists working in these areas will be able to give a very
> clear statement on what they need to progress. For example, if it is a
> problem which has to do with edge detection for object
> recognition/itemization they will be able to say so. That is what the
> proposed EDTV-Robotics-State-Of-The-Art program needs to know. I'm not
> interested in writing the script for another "golly gosh" ed tv program to
> "wow" the public and provide a little education at the same time. I need
> these precise statements of what is needed to progress, e.g. huge amounts of
> additional labor using known technology or inventions of something new.
>

No. They don't give a very clear statement except of those areas we
already have a fair amount of understanding of. In the tougher areas we
cannot yet clearly state what is needed because we don't have that much
clarity on what the functional parts are or how they hook together or
what the resultants are (like the hard problem of consciousness). And
no, we will not solve the problems in one year or even have the full
statement of what is needed.

 
> > We have relatively poor grasp of even higher
> > level issues like concept formation and usage.
>
> And I can list these esoteric and mentalist notions until the cows come
> home. How about consciousness, common sense, comprehension,
> contemplation....? Watson's 1913 ms. in Psych Review exorcised mentalism
> from scientific psychology. My own personal philosophy is dualistic so I
> don't take the "strong behaviourist" position but I use it for practical
> purposes. For a generation before Binet's very practical approach to
> intelligence testing in 1905, psychologists spent enormous amounts of time
> with this mentalistic navel gazing. They got nowhere.

Those "esoteric and mentalist notions" are a large part of what human
consciousness and human-level function are about. If you are going to
play silly reductionist games then I wish you luck, but don't pretend the
result is a true human-level consciousness. Concept formation is most
certainly not "esoteric". It is the meat and potatoes of a truly
intelligent entity capable of learning, and especially capable of
consciously directed concept manipulation. I would like to see you get
something like human self-awareness and thinking without it.

A lot has happened since 1905 that does not permit such slighting of
these core aspects of higher intelligence.

> I don't mean to be harsh because I think a number of disciplines are
> required to solve the problems presented in AI, but I wonder how many AI
> workers have ever studied the history of trying to measure/observe/define
> real human intelligence, let alone ever given an IQ test? What I see is a
> lot of people treading the same ground with artificial humanoid
> intelligence that philosophers-psychologists-educators-physicians had trod
> with real human intelligence before practical, operational psychometrics
> came along. And it is expected they would do so. After all aren't they
> trying to simulate real human intelligence?
>

Depends a lot on what you call "real", doesn't it? That itself is not a
closed topic. Operational psychometrics are not the be-all and end-all
of such questions.
 

> > Lots of theory, no
> > satisfying fully general and full powered learning programs. No model
> > we are even happy about for describing what humans do with
> > percept-concept-more abstract concept chains.
>
> Just give one example of such a chain which cannot be
> verbalized. Skinner's dictum was "If it can be verbalized it can be
> programmed".

I have nothing at all to say to anyone who is so backwards as to quote
Skinner in this conversation.

- samantha



This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:39:18 MDT