Re: Deep Blue column in TIME, May 19th

Mark Crosby (crosby_m@rocketmail.com)
Wed, 25 Jun 1997 08:51:05 -0700 (PDT)


Michael Lorrey wrote:
<David Gelernter may be a professor of computer science at Yale, but
any informed individual can see that he is not only misinformed about
his own field, but hopelessly behind the times with respect to our
knowledge of neurophysiology if his column in the May 19th issue is
any indication of his knowledge of progress in artificial intelligence
and brain/machine interface.>

I doubt very much if Gelernter writes a "column" for TIME! You should
do a little more research about someone before you flame them based on
a heavily-edited, popular-magazine sidebar. As Damien Broderick
pointed out, Gelernter is the author of _Mirror Worlds: Or the Day
Software Puts the Universe Into a Shoebox: How It Will Happen and What
It Will Mean_ (1992), one of the premier visions of cyberspace, and
he’s also the developer of one of the first parallel processing
languages, LINDA. Gelernter’s also the author of _The Muse in the
Machine_.
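
For anyone who hasn't run across Linda: processes coordinate by
depositing tuples into, and withdrawing them from, a shared "tuple
space" rather than by passing messages directly. A minimal
single-process Python sketch of the matching semantics (my own toy
illustration, not Gelernter's implementation - real Linda operations
run across concurrent processes and block until a match arrives):

    # Toy single-threaded simulation of a Linda-style tuple space.
    # Real Linda's in/rd block until a matching tuple appears; this
    # sketch simply returns None instead.

    class TupleSpace:
        def __init__(self):
            self.tuples = []

        def out(self, *tup):
            """Deposit a tuple into the space."""
            self.tuples.append(tup)

        def _match(self, pattern, tup):
            # None serves as a wildcard field in the pattern.
            return len(pattern) == len(tup) and all(
                p is None or p == t for p, t in zip(pattern, tup))

        def rd(self, *pattern):
            """Read, without removing, the first matching tuple."""
            for tup in self.tuples:
                if self._match(pattern, tup):
                    return tup
            return None

        def in_(self, *pattern):
            """Withdraw the first matching tuple."""
            tup = self.rd(*pattern)
            if tup is not None:
                self.tuples.remove(tup)
            return tup

    ts = TupleSpace()
    ts.out("task", 1, "analyze position")
    ts.out("task", 2, "evaluate move")
    print(ts.in_("task", None, None))  # -> ('task', 1, 'analyze position')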

A lot of serious AI and cybernetics researchers have doubts about
whether human-level intelligence can be represented by anything
resembling a traditional von Neumann computer. These doubts don’t
necessarily imply that they’re worried about a soul or something.

Mike, throughout your essay you equate AI with brain-machine
interfacing, two very different subjects.

<Any computer professional has heard of Moore's Law ... This is the
sort of technological progress that has the inflation and productivity
estimators at the Bureau of Labor Statistics in apoplectic fits.>

So what does this have to do with the price of tea in China? (Are you
suggesting we incompetent BLS bureaucrats would actually get
apoplectic over our work? I thought we were all supposed to be
parasitic drones! (-;) I don't doubt that microprocessors are becoming
more and more ubiquitous, but, as I noted in our discussion on this
earlier this year, the biggest chunk of the average person's
expenditures still goes to such mundane necessities as food and shelter.

<Gelernter is right in saying it is silly to ascribe intelligence to
Deep Blue, as that computer has as much processing power as, say, a
housefly or maybe even a mouse.>

Deep Blue was not just a ‘computer’ - there was a muse in the machine,
namely the team of people who programmed it with a knowledge of chess.

<While he has the right to claim some supernatural source of our
being, so far he cannot prove that the human mind is nothing more than
a biochemical computer with a neural net architecture. Given this, we
can duplicate that architecture in other forms, even silicon.>

And you cannot yet prove that the human mind *is* nothing more than a
biochemical computer with a neural net architecture. So your "given
this" assumption is hardly justified. There are serious scientists who
don’t believe in "supernatural sources of our being" yet still have
doubts about whether traditional computer architectures, and even
neural nets, are adequate to duplicate what the human mind does. You
might check the Principia Cybernetica Web for starters.

<Associative thinking is, in fact, largely how we think most of the
time, but this can be duplicated by any relational database.>

Then why is it that hundreds of AI researchers working for decades
have yet to build anything close to a human associative memory? Why
is it that cybernetic systems (real-time process control systems) and
even management decision support systems shun relational DBMS for
object-oriented or multidimensional approaches?
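
To make the contrast concrete: a relational lookup needs an exact
key, while an associative memory reconstructs a whole stored item from
a partial or corrupted cue. A toy Hopfield-style sketch in Python (my
own illustration - the patterns and parameters are assumptions for
demonstration, and no claim about how brains actually do it):

    import numpy as np

    # Toy Hopfield-style associative memory: store bipolar (+1/-1)
    # patterns via Hebbian outer products, then recover a stored
    # pattern from a corrupted cue - content-addressable recall that
    # a relational SELECT on an exact key cannot perform.

    patterns = np.array([
        [ 1, -1,  1, -1,  1, -1,  1, -1],
        [ 1,  1,  1,  1, -1, -1, -1, -1],
    ])

    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)          # no self-connections

    def recall(cue, steps=5):
        state = cue.copy()
        for _ in range(steps):      # synchronous threshold updates
            state = np.where(W @ state >= 0, 1, -1)
        return state

    cue = patterns[0].copy()
    cue[0] = -cue[0]                # flip one bit: a 'noisy' key
    print(recall(cue))              # -> recovers patterns[0] exactly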

<The claim that thinking is dictated by emotion is utterly false. At
best our thinking is muddied or even given greater color by our
emotions. Studies have shown that heightened emotions tend to give
greater reinforcement to certain memories, especially experiences
heavily laden with adrenaline, but this behavior can also be easily
accounted for in programming if needed.>

If there’s no emotion associated with it, you’re unlikely to remember
it. Again, you’re setting up a straw-man: Most cognitive scientists
don’t claim that thinking is *dictated* by emotion, but they do
believe that the ‘emotional’ organs of the brain are an essential part
of the long-term memory system.
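
To be fair, Lorrey's "easily accounted for in programming" claim is at
least plausible at a toy level - tag each memory with an arousal value
and let retention scale with it - though that says nothing about what
the limbic system actually does. A minimal sketch, every name and
number in it my own assumption:

    import random

    # Toy arousal-weighted consolidation: retention probability
    # scales with each memory's arousal tag. This mimics only the
    # behavioral finding that emotion-laden events are remembered
    # better, not the mechanism behind it.

    random.seed(0)

    memories = [
        {"event": "ate breakfast",      "arousal": 0.1},
        {"event": "routine commute",    "arousal": 0.2},
        {"event": "near-miss accident", "arousal": 0.9},
    ]

    def consolidate(memories, base_retention=0.3):
        retained = []
        for m in memories:
            # Higher arousal -> higher chance of surviving consolidation.
            p = min(1.0, base_retention + 0.7 * m["arousal"])
            if random.random() < p:
                retained.append(m["event"])
        return retained

    print(consolidate(memories))  # -> ['near-miss accident'] with this seed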

<He goes on, in the end, to state that even if computers are ever
capable of "simulating human thought", they will never be fully human.
"The gap between human and surrogate is permanent and will never be
closed." This is also false. There have been in the last year,
breakthroughs in electronic interfaces between computer chips and
human neurons.>

You’re mixing apples and oranges again. Gelernter is talking about AI
- simulating a human-level mind on a computer - a completely different
subject from man-machine interfaces. Gelernter is talking about the
simulation vs. fabrication problem in systems science, what I call the
issue of whether you can *program* a synthetic intelligence or whether
you have to *grow* one. (Hardly a distinction that the average TIME
reader would be aware of.)
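
A toy way to see the program-vs-grow distinction (my own example, and
a vastly simplified one): in the sketch below, nobody writes the AND
rule into the program - the weights that implement it are grown from
labeled examples.

    # Toy 'grown' behavior: the AND rule is never coded explicitly;
    # a perceptron's weights converge to it from labeled examples.

    w = [0.0, 0.0, 0.0]   # bias and two input weights, initially blank

    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    def predict(x):
        s = w[0] + w[1] * x[0] + w[2] * x[1]
        return 1 if s > 0 else 0

    for _ in range(10):                # a few passes over the data
        for x, target in examples:
            err = target - predict(x)  # perceptron rule: nudge toward data
            w[0] += 0.1 * err
            w[1] += 0.1 * err * x[0]
            w[2] += 0.1 * err * x[1]

    print([predict(x) for x, _ in examples])  # -> [0, 0, 0, 1]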

Actually, I like your cyborg scenario and think it's probably the most
likely path to becoming post-human; but it has nothing to do with
the 'hard' questions of AI because there's still a 'muse in the
machine'; i.e., an 'organically-grown' human mind.

Don’t get me wrong: I think machine intelligence is possible - we
already have it, with some systems that are much more intelligent in
particular domains than an un-augmented you or I could ever be - but
true AI or SI or A-Life, with creativity and individual will and
responsibility, requires a lot more than just the raw processing power
implied by your Moore's Law reference.

Processing power is only a small part of the story - growing the right
kind of network is far more important. Those oriented toward
mechanical engineering always seem to assume that the software will
just automagically emerge if you build the hardware properly. I was
disappointed that _Nanosystems_ totally ignored software issues
(actually, Drexler assumes backward-chaining, broadcast instructions
from a pre-existing higher level - again, a muse in the machine!).

<Charges of racism and heresy will fly. Children and less
sophisticated adults will call each other "chip-lover" and
"bio-bigot".>

But (reread Bruce Sterling's _Schismatrix_) "bio-bigots" don't
necessarily imply religious types who shun *any* change to their
'God-given' bodies - it could also refer to those who prefer genetic
engineering and other bio-approaches to augmentation.

<To remove the augmentations will hobble these personalities as much
as a lobotomy would to you or me. These transhumans will lead the
way into this future whether neo-Luddites like Gelernter like it or
not.>

Again, what does this have to do with the price of tea in China?
Augmentations are not AI, and that’s what Gelernter was talking about.
If you're going to label as a neo-Luddite anyone who has concerns
about whether traditional computing techniques can be used to
implement AI, well ...

Mark Crosby
