Re: Deep Blue column in TIME, May 19th

Michael Lorrey (
Wed, 25 Jun 1997 13:14:51 -0400

Mark Crosby wrote:
> Michael Lorrey wrote:
> <David Gelernter may be a professor of computer science at Yale, but
> any informed individual can see that he is not only misinformed about
> his own field, but hopelessly behind the times with respect to our
> knowledge of neurophysiology if his column in the May 19th issue is
> any indication of his knowledge of progress in artificial intelligence
> and brain/machine interface.>
> I doubt very much if Gelernter writes a "column" for TIME!

It wasn't an article, because it was filled with the personal opinion and
analysis normally seen in a column or editorial. He isn't an editor, so
he must be a columnist. Maybe an irregular one, maybe a highly irregular
one, but that's the shoe that fits.

> You should
> do a little more research about someone before you flame them based on
> a heavily-edited, popular-magazine sidebar. As Damien Broderick
> pointed out, Gelernter is the author of _Mirror Worlds: Or the Day
> Software Puts the Universe Into a Shoebox: How It Will Happen and What
> it Will Mean_ (1992), one of the premier visions of cyberspace, and
> he’s also the developer of one of the first parallel processing
> languages, LINDA. Gelernter’s also the author of _The Muse in the
> Machine_.

Then why the hell did he wax so spiritual about the impossibility of
machine intelligence? Does this mean that his fiction and futuristic
projections are merely cons targeted at a market?

> A lot of serious AI and cybernetics researchers have doubts about
> whether human-level intelligence can be represented by anything
> resembling a traditional von Neumann computer. These doubts don’t
> necessarily imply that they’re worried about a soul or something.
> Mike, throughout your essay you equate AI with brain-machine
> interfacing, two very different subjects.
> <Any computer professional has heard of Moore's Law ... This is the
> sort of technological progress that has the inflation and productivity
> estimators at the Bureau of Labor Statistics in apoplectic fits.>

It's merely a minor digression that further shows how even the "experts"
have trouble acknowledging where things are really going, due to their
vested opinions.

> So what does this have to do with the price of tea in China? (Are you
> suggesting us incompetent BLS bureaucrats would actually get
> apoplectic over our work? I thought we’re all supposed to be parasitic
> drones! (-;) I don’t doubt that microprocessors are becoming more
> and more ubiquitous, but, as I noted in our discussion on this earlier
> this year, the biggest chunk of the average person’s expenditures
> still go for such mundane necessities as food and shelter.

And what does this have to do with the price of tea, either? Obviously,
since computers are making everything else run and be made more
efficiently, this has a synergistic effect on many other products, a
portion of whose cost is in the technology used to make, store,
transport, and sell them. Cheaper or more efficient technology at all of
these levels leads to lower product costs across the board, and
therefore to lower inflation. It's no mistake that this same bunch
refuses to recognise the validity of dynamic scoring in tax reduction,
entitlement adjustments, etc.
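The argument above can be put in numbers. Here's a toy model (the stage
breakdown, cost fractions, and deflation rate are all my own invented
figures, purely for illustration): if technology accounts for some
fraction of cost at every stage of a product's supply chain, and that
technology gets cheaper every year, the final price falls across the
board even though no single stage changes dramatically.

```python
# Toy model of technology deflation propagating through a supply chain.
# All numbers are hypothetical, chosen only to illustrate the argument.

STAGES = {          # fraction of each stage's cost that is technology-driven
    "manufacture": 0.30,
    "storage":     0.15,
    "transport":   0.20,
    "retail":      0.10,
}
TECH_DEFLATION = 0.25   # assumed yearly drop in technology costs

def stage_cost(base, tech_fraction, years):
    """Cost of one stage after `years` of technology getting cheaper."""
    tech = base * tech_fraction * (1 - TECH_DEFLATION) ** years
    return base * (1 - tech_fraction) + tech

def product_price(years, base_per_stage=25.0):
    """Total product price: sum of all stages after `years` of deflation."""
    return sum(stage_cost(base_per_stage, f, years) for f in STAGES.values())

for year in range(6):
    print(f"year {year}: ${product_price(year):.2f}")
```

The point of the sketch is that the deflationary effect compounds: every
stage's technology share shrinks each year, so the total price ratchets
down without any one supplier doing anything dramatic.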
> <Gelernter is right in saying it is silly to ascribe intelligence to
> Deep Blue, as that computer has as much processing power as say, a
> housefly or a maybe even a mouse.>
> Deep Blue was not just a ‘computer’ - there was a muse in the machine,
> namely the team of people who programmed it with a knowledge of chess.

Obviously, it's a right piece of magic to make a computer with the
processing power of a mouse beat the Chess Champion of the World.
Imagine what they could do with a computer the size of a human brain.

> <While he has the right to claim some supernatural source of our
> being, so far he cannot prove that the human mind is nothing more than
> a biochemical computer with a neural net architecture. Given this, we
> can duplicate that architecture in other forms, even silicon.>
> And you cannot yet prove that the human mind *is* nothing more than a
> biochemical computer with a neural net architecture. So your "given
> this" assumption is hardly justified. There are serious scientists who
> don’t believe in "supernatural sources of our being" yet still have
> doubts about whether traditional computer architectures, and even
> neural nets, are adequate to duplicate what the human mind does. You
> might check the Principia Cybernetica web for starters.
> < Associative thinking is, in fact, largely how we think most of the
> time, but this can be duplicated by any relational database.>
> Then why is it that hundreds of AI researchers working for decades
> have yet to build anything close to a human associative memory. Why
> is it that cybernetic systems (real-time process control systems) and
> even management decision support systems shun relational DBMS for
> object-oriented or multidimensional approaches?

Merely more elaborate approaches to the same concept do not negate the
validity of the concept. If you look at the previous Deep Blue machine,
you will see that it had half the processing power of the one that beat
Kasparov. Obviously, a human-level artificial database will need to
associate aural, visual, and tactile memories, as well as memories from
all sorts of other modes of perception, in many different ways. This is
known. What is lacking is a machine with enough oomph to do the job. In
the interim, we may be limited to research with entities like Vinge's
DON.MAC AI Mailman kernel, needing hours of processing time to simulate
a few moments of intelligence. The breakthrough is a machine that can
simulate at a one-to-one ratio with processing time, or greater.
> <The claim that thinking is dictated by emotion is utterly false. At
> best our thinking is muddied or even given greater color by our
> emotions. Studies have shown that heightened emotions tend to give
> greater reinforcement to certain memories, especially experiences
> heavily laden with adrenaline, but this behavior can also be easily
> accounted for in programming if needed.>
> If there’s no emotion associated with it, you’re unlikely to remember
> it. Again, you’re setting up a straw-man: Most cognitive scientists
> don’t claim that thinking is *dictated* by emotion, but they do
> believe that the ‘emotional’ organs of the brain are an essential part
> of the long-term memory system.

Because the level of emotion, especially when adrenaline-linked,
amplifies the clarity of the memory, though distortion of a very clear
memory into false memories or merely false details (as I already stated
and you cut out of the reply) is possible. This can also be programmed
into a machine as a function of the priority of the memory, based on the
amount of data input per time period (i.e. the amount of "action") at
the time of the experience.
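The mechanism described above can be sketched in a few lines (the
function names, gain values, and example experiences are all invented
for illustration): memory priority is scaled by the rate of incoming
sensory data at the time of the experience, and only the most strongly
reinforced memories survive a capacity cut.

```python
# Sketch of emotion/arousal-weighted memory retention, per the argument
# above. Thresholds, gains, and sample data are hypothetical.

def memory_priority(events_per_second, baseline=1.0, gain=0.5):
    """Higher sensory input rate ("action") -> stronger reinforcement."""
    return baseline + gain * events_per_second

def retained(memories, capacity):
    """Keep only the `capacity` most strongly reinforced memories."""
    ranked = sorted(memories, key=lambda m: memory_priority(m[1]), reverse=True)
    return [name for name, _ in ranked[:capacity]]

experiences = [
    ("quiet afternoon", 0.5),   # (description, sensory events per second)
    ("near car crash", 40.0),
    ("routine commute", 2.0),
    ("surprise party", 15.0),
]
print(retained(experiences, capacity=2))
```

A fuller model would also have to capture the distortion effect noted
above: high arousal sharpens retention but can corrupt the details, so
priority and fidelity are not the same knob.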
> <He goes on, in the end, to state that even if computers are ever
> capable of "simulating human thought", they will never be fully human.
> "The gap between human and surrogate is permanent and will never be
> closed." This is also false. There have been in the last year,
> breakthroughs in electronic interfaces between computer chips and
> human neurons.>
> You’re mixing apples and oranges again. Gelernter is talking about AI
> - simulating a human-level mind on a computer - a completely different
> subject from man-machine interfaces. Gelernter is talking about the
> simulation vs. fabrication problem in systems science, what I call the
> issue of whether you can *program* a synthetic intelligence or whether
> you have to *grow* one. (Hardly a distinction that the average TIME
> reader would be aware of.)

If you can run the software of the human mind on a computer, and
recognise it as a human intelligence, then there is no reason not to
recognise a human-level AI on the same computer as truly aware and
intelligent. If you cannot tell the difference, then there is no
difference. On the issue of whether you can grow an AI on a computer by
itself: I say that you cannot raise a human baby to be anything but a
stunted wolf child if it is not raised with a mother or father or both,
so how do you expect to do so with an intelligence that's never existed
before? It will need a nurturing mind in close communion with it to make
it "human" level.

> Actually, I like your cyborg scenario and think it’s probably the most
> likely path to becoming post-human; but it has nothing to do with
> the ‘hard’ questions of AI because there’s still a ‘muse in the
> machine’; i.e., an ‘organically-grown’ human mind.
> Don’t get me wrong: I think machine intelligence is possible - we
> already have it, with some systems that are much more intelligent in
> particular domains than an un-augmented you or I could ever be - but
> true AI or SI or A-Life, with creativity and individual will and
> responsibility requires a lot more than just the raw processing power
> implied by your Moore’s law reference.

Sorry, I see it as partly good programming, a super-powerful processing
system, and experiences for it to learn how to use that programming.

> Processing power is only a small part of the story - growing the right
> kind of network is far more important. Those oriented toward
> mechanical engineering always seem to assume that the software will
> just automagically emerge if you build the hardware properly. I was
> disappointed that _Nanosystems_ totally ignored software issues
> (actually, Drexler assumes backward-chaining, broadcast instructions
> from a pre-existing higher level - again, a muse in the machine!)
> <Charges of racism and heresy will fly. Children and less
> sophisticated adults will call each other "chip-lover" and
> "bio-bigot".>
> But (reread Bruce Sterling’s _Schismatrix_), "bio-bigots" don’t
> necessarily imply religious types who shun *any* change to their
> ’God-given’ body - it could also refer to those who prefer genetic
> engineering and other bio-approaches to augmentation.
> <To remove the augmentations will hobble these personalities as much
> as a lobotomy would do so to you or I. These transhumans will lead the
> way into this future whether neo-Luddites like Gelernter like it or
> not.>
> Again, what does this have to do with the price of tea in China?
> Augmentations are not AI, and that’s what Gelernter was talking about.
> If you’re going to label anyone who has concerns about whether
> traditional computing techniques can be used to implement AI as a
> neo-Luddite, well ...

I found his spiritual pose to be in line with many of the other most
narrowminded statements made in the past 150 years by intelligent people
who should have known better. Namely:

"Why would anyone need more than 640K RAM?" - Bill Gates
"There is a world market for approximately 2 computers" (or thereabouts)
- IBM, late 1940s
"Heavier than air flight is a physical impossibility that violates the
laws of physics." - head of the Smithsonian Institution, late 19th
century
"The airplane will never sink a capital ship." - assorted Army and Navy
brass, circa 1920, shortly before Billy Mitchell sank a captured German
battleship and cruiser with a squadron of biplanes. (No wonder they
court-martialed him)
"The use of rocket propulsion in space violates the laws of physics."
- The New York Times, in response to Dr. Goddard's paper on using
rockets to travel to the moon, early 1920s
"God does not play dice with the universe." - Einstein

At least Gelernter can brag that he's in good company.

			Michael Lorrey
------------------------------------------------------------
Inventor of the Lorrey Drive

Mikey's Animatronic Factory
My Own Nuclear Espionage Agency (MONEA)
MIKEYMAS(tm): The New Internet Holiday
Transhumans of New Hampshire (>HNH)
------------------------------------------------------------
#!/usr/local/bin/perl-0777---export-a-crypto-system-sig-RC4-3-lines-PERL
@k=unpack('C*',pack('H*',shift));for(@t=@s=0..255){$y=($k[$_%@k]+$s[$x=$_
]+$y)%256;&S}$x=$y=0;for(unpack('C*',<>)){$x++;$y=($s[$x%=256]+$y)%256;
&S;print pack(C,$_^=$s[($s[$x]+$s[$y])%256])}sub S{@s[$x,$y]=@s[$y,$x]}