> > The problem is that we don't know how to use that processing power.
> > If, today, we had a magic computer that had an infinite amount of memory,
> > and, oh, let's say was 10^20 times as fast as today's computers, we
> > still wouldn't know how to write a program as smart as a dog, or even a
> > honeybee.
>
> Playing devil's advocate here.
>
> Other potential problems relate to whether the 'mind' is the result of
> exclusively computational processes. The Turing test says in effect that
> if a machine can mimic the outside of a human then it has thereby
> replicated the inside: if it behaves like a human with a mind, it has
> a mind. But some critics of this idea point out that this sounds too much
> like an application of the doctrine of behaviorism. One critic stated,
> "[behaviorism is] the view that minds reduce to bodily motions; and
> behaviorism has long since been abandoned, even by psychologists. Behavior
> is just the evidence for mind in others, not its very nature. This is why
> you can act as if you are in pain and not really be in pain -- you are
> just pretending." (McGinn, "Hello Hal", _New York Times_ (1/3/99) pg. 11,
> col. 1).
Behaviorism is not the view that minds reduce to bodily motions. That would be materialism. Behaviorism is the idea that we should not talk about consciousness, or introspection, or what the mind is doing; we should just put the organism in experimental situations and observe what it does. This idea was appropriate in the 1940s, but is inappropriate now that we have computers and the technological ability to make and test models of the mind.
Behaviorism is also associated (that's a pun, but if you get it you don't need to read this) with the notion that mind is nothing but stimulus-response mechanisms. Rodney Brooks' school of thought on AI -- behavior-based AI -- is essentially behavioristic AI. All the critiques that Noam Chomsky made of behaviorism in 1959 still apply to Brooks' program today.

The stimulus-response school, including Brooks, rests on the notion that thinking, planning, and remembering are modern accidents of evolution, useful only for playing chess, choosing the right wines, and other civilized behavior. This view is possible only because the people who hold it haven't observed animals, or humans in primitive societies, and have the romantic notion that, say, a wolf gets fed by wandering until it finds a deer, then running it down and eating it. Further research into animal and insect behavior shows ever more thoroughly that animals and insects have memories and mental maps, and are not stimulus-response mechanisms.
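To make the distinction concrete, here is a toy sketch (my illustration only; the names are made up, and this is nobody's actual architecture). A pure stimulus-response agent computes its action from the current percept alone; an agent with a memory and a mental map carries internal state from one moment to the next:

    # Python sketch: stimulus-response vs. memory-and-map, in toy form.

    def reactive_wolf(percept):
        """Stimulus-response: the action is a function of the current
        percept only. No memory, no map, no internal state."""
        return "chase" if percept == "deer_visible" else "wander"

    class MappingWolf:
        """Keeps a crude mental map of where food was found, and uses
        it to decide where to go when nothing is in view."""
        def __init__(self):
            self.food_sites = []  # remembered locations -- internal state

        def act(self, percept, location):
            if percept == "deer_visible":
                self.food_sites.append(location)  # remember this site
                return "chase"
            if self.food_sites:
                # Head back to the last place that yielded food,
                # rather than wandering at random.
                return ("travel_to", self.food_sites[-1])
            return "wander"

The behaviorist claim, in these terms, is that reactive_wolf is all there is; the evidence from animal behavior is that something like MappingWolf is closer to the truth.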
The Turing Test is a sort of behaviorism, yes: it is a philosophical assertion that it is the behavior that matters, and that we should treat a computer as a thinking entity if it acts like one, rather than arguing about whether it is conscious or not, since, by the same standard, I could argue that you aren't conscious. But it does not suffer from behaviorism's flaws, because it is not a /scientific/ program; it is a philosophical or semantic argument about what criteria something has to satisfy for us to regard it as intelligent and, presumably, as having rights. If you don't accept the Turing test's position, then you risk legitimizing people like Searle, so that when we produce a race of intelligent robots, Searle's arguments can be used to justify enslaving and abusing them.
I have asked Searle twice how he feels about the potential use of his arguments to abuse intelligent creatures. The first time he said, "I don't say you can't make intelligent robots that don't use symbol processing", which assumes his position is correct and ignores the fact that his argument /does/ say you can abuse anything produced by symbol-processing AI. The second time, he never answered.
> This problem is better articulated by Searle's Chinese Room
> argument, which basically asserts that computers are wondrous at working
> with symbols but do not necessarily have any grasp of the meaning of those
> symbols. Also, we need to nail down what consciousness really is and what
> produces it. Some say it is an emergent property of sufficiently complex
> computational structures.
Searle says consciousness is a product of the physical stuff used to implement the brain: you get consciousness because you build out of consciousness-stuff. The Chinese Room argument is an exercise in sloppy thinking, and Searle defends it against attacks only by blatantly deceptive redefinition of his terms on the fly, from sentence to sentence.
> Others, e.g. Penrose and Hameroff, assert that
> consciousness is a unique feature of the Homo sapiens brain, using QM as a
> basis for this assertion. However, they've yet to develop a Hamiltonian
> for their theory and the QM list is filled with a lot of haggling among
> proponents of the theory.
Penrose, while great in his own field, jumped into a field (AI) that he knows nothing about, without studying the literature or (apparently) vetting his work with people in the field. His book _The Emperor's New Mind_ contains freshman blunders and is not worth reading. I don't know who Hameroff is.
> one another to determine which can best satisfy the Turing test. This
> year's winner was judged to be human 11% of the time. So while it is not
> considered human, it does qualify to run for public office. :)
Would being considered human disqualify it? :)
Phil Goetz <goetz@zoesis.com>