Deep Blue column in TIME, May 19th

Michael Lorrey (retroman@tpk.net)
Tue, 24 Jun 1997 15:03:07 -0400


I just picked up the May 19th issue of TIME from the coffee table and
read Prof. David Gelernter's column claiming that computers will never
be truly as smart as humans. This is the same issue that covered the
Kasparov match. Gelernter claims to be a prof of computer science at
Yale, fer gawd sakes, yet he expresses ideas that show total ignorance
of Moore's Law, as well as of the past year's breakthroughs in
human/chip interfaces, etc. It got me so ticked off I actually got off
my horse and wrote a rather long letter, possibly enough to be a column
in and of itself, in which I poke holes in his arguments. I'm attaching
it here for you guys and gals to make comments on.

-- 
TANSTAAFL!!!
			Michael Lorrey
------------------------------------------------------------
mailto:retroman@tpk.net		Inventor of the Lorrey Drive
Agent Lorrey@ThePentagon.com
Silo_1013@ThePentagon.com	http://www.tpk.net/~retroman/

Mikey's Animatronic Factory
My Own Nuclear Espionage Agency (MONEA)
MIKEYMAS(tm): The New Internet Holiday
Transhumans of New Hampshire (>HNH)
------------------------------------------------------------
#!/usr/local/bin/perl-0777---export-a-crypto-system-sig-RC4-3-lines-PERL
@k=unpack('C*',pack('H*',shift));for(@t=@s=0..255){$y=($k[$_%@k]+$s[$x=$_
]+$y)%256;&S}$x=$y=0;for(unpack('C*',<>)){$x++;$y=($s[$x%=256]+$y)%256;
&S;print pack(C,$_^=$s[($s[$x]+$s[$y])%256])}sub S{@s[$x,$y]=@s[$y,$x]}

[Attached file: hrdishum.txt]

How Hard is Human? by Michael S. Lorrey

David Gelernter may be a professor of computer science at Yale, but if his column in the May 19th issue is any indication of his knowledge of progress in artificial intelligence and brain/machine interfaces, any informed individual can see that he is not only misinformed about his own field, but hopelessly behind the times with respect to our knowledge of neurophysiology. One hopes that a tenured member of the Yale faculty was merely presenting the facts that bolstered his defensive position that computers will never be as smart as human beings, rather than demonstrating his own obsolescence.

Any computer professional has heard of Moore's Law, first formulated back in 1965 by Gordon Moore, who went on to co-found Intel. Moore's Law, based on the data available then and still holding true today, states that the processing power of a given dollar's worth of computer chips doubles every 18 months, and that the price of any given chip drops by half over the same period. This is the sort of technological progress that has the inflation and productivity estimators at the Bureau of Labor Statistics in apoplectic fits. If such drastic deflation were actually accounted for in our government economic figures, cost of living adjustments for seniors, bond rates, and the deficit would crash into the negative in just a few years. Where this applies to artificial intelligence is in the long term. If you project Moore's Law into the future, you will find that sometime between 2025 and 2035, the average desktop computer will have as much processing power as the human brain, which has somewhere between 10^17 and 10^19 neuron connections.
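
For the skeptics, here is a rough back-of-the-envelope sketch in Perl of how that doubling compounds. The starting figure of 10^9 operations per second for a 1997 desktop and the target of 10^17 as a stand-in for the brain are my own assumptions, and the crossover year moves with whatever figures you prefer to plug in:

#!/usr/bin/perl
# Back-of-the-envelope Moore's Law projection: power per dollar
# doubles every 18 months. Starting and target figures are assumptions.
use strict;

my $year   = 1997;
my $power  = 1e9;     # assumed ops/sec for a 1997 desktop
my $target = 1e17;    # assumed brain-equivalent figure (low end)

while ($power < $target) {
    $power *= 2;      # one doubling...
    $year  += 1.5;    # ...every eighteen months
}
printf "Desktop reaches brain scale around %d\n", $year;

With these particular numbers the crossover lands in the late 2030s; a more generous estimate of today's desktops, or a lower estimate of the brain's capacity, pulls it back toward 2025.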

Gelernter is right in saying it is silly to ascribe intelligence to Deep Blue, as that computer has about as much processing power as, say, a housefly or maybe even a mouse. No one has ever assigned much intelligence to creatures at that level (PETA excluded). However, to say that no computer will ever be truly intelligent, will ever truly think, is hubris of the highest order. While he has the right to claim some supernatural source for our being, so far no one has shown that the human mind is anything more than a biochemical computer with a neural net architecture. Given that, we should be able to duplicate that architecture in other forms, even silicon.

Going on to his further errors: he paraphrases Wordsworth and Coleridge to argue that human thought is largely a process of stringing memories together, and that memories are often linked by emotion. Associative thinking is, in fact, largely how we think most of the time, but it can be duplicated by any relational database, as the sketch below illustrates. Anyone on the internet who uses the more advanced search engines knows that they are getting smarter every day. They are capable of learning from past experience, and of suggesting other search areas that their accumulated experience associates with the parameters the user supplies.
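
To make that concrete, here is a toy associative lookup in Perl, with made-up names and weights purely for illustration: memories are linked with strengths, and a cue recalls whatever it is most strongly linked to.

#!/usr/bin/perl
# Toy associative memory: links between items carry strengths,
# and recalling a cue returns its associations, strongest first.
use strict;

my %assoc;   # $assoc{cue}{memory} = strength of the link

sub link_memories {
    my ($m1, $m2, $strength) = @_;
    $assoc{$m1}{$m2} = ($assoc{$m1}{$m2} || 0) + $strength;
    $assoc{$m2}{$m1} = ($assoc{$m2}{$m1} || 0) + $strength;
}

sub recall {
    my ($cue) = @_;
    my $links = $assoc{$cue} or return ();
    return sort { $links->{$b} <=> $links->{$a} } keys %$links;
}

link_memories('ocean', 'vacation', 3);    # made-up example data
link_memories('ocean', 'salt',     1);
print join(', ', recall('ocean')), "\n";  # prints: vacation, salt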

The claim that thinking is dictated by emotion is utterly false. At best our thinking is muddied, or given greater color, by our emotions. Studies have shown that heightened emotions tend to reinforce certain memories, especially experiences heavily laden with adrenaline, but this behavior can easily be accounted for in programming if needed. Studies of crime scene witnesses have also shown that these "reinforced" memories tend to be very inaccurate, which casts doubt on how useful emotion really is for accurate thinking.
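
Sticking with the toy sketch above, an adrenaline spike need be nothing more mysterious than a multiplier on how strongly a new link gets stored; the factor of 2.5 is an arbitrary pick, not a measurement of anything.

# Emotional reinforcement as a multiplier on the stored link strength,
# reusing link_memories() from the sketch above.
sub link_with_emotion {
    my ($m1, $m2, $strength, $adrenaline) = @_;
    link_memories($m1, $m2, $strength * (1 + 2.5 * $adrenaline));
}

link_with_emotion('car_crash', 'screeching_tires', 1, 0.9);   # stored strongly
link_with_emotion('tuesday_lunch', 'sandwich',     1, 0.0);   # stored weakly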

He goes on, in the end, to state that even if computers are ever capable of "simulating human thought", they will never be fully human: "The gap between human and surrogate is permanent and will never be closed." This is also false. In the last year there have been breakthroughs in electronic interfaces between computer chips and human neurons. At the very least these will restore sight and hearing to millions of people around the world, and they will probably also allow millions of paralysed individuals to move again. Eventually, this will lead to the ability of human beings to plug a network cable into their heads to interface directly with other people and computers all over the world. At that point, if even the most perceptive individual cannot tell the difference between the thoughts of another human on the net and an intelligent computer, will the gap be closed? Probably not, but this is the point that will trigger great debate in our society between progressives and reactionaries. Charges of racism and heresy will fly. Children and less sophisticated adults will call each other "chip-lover" and "bio-bigot".

What will close the gap is the day a human being successfully transfers his or her software template to a computer. Will this new mind have a soul? Will it matter? And what about those who will be in between, much as bisexuals are caught in the middle between today's pro- and anti-gay activists? With bio/chip interfaces, a large part of the population will eventually, with the augmentation of additional artificially made processors and interfaces, become more than just human, but not machines either. To remove the augmentations would hobble these personalities as much as a lobotomy would hobble you or me. These transhumans will lead the way into this future whether neo-Luddites like Gelernter like it or not.