Robert J. Bradbury wrote on Sun, 25 Mar 2001 14:05:49:
> Lee Corbin wrote
>> Yes, but surviving cursory examination by AOL members,
>> which is only on-line and not real-world anyway, is
>> hardly much of a challenge. I agree, instead, with
>> what you wrote earlier: "if a... creature does not
>> have 'feelings' that promote its survival, then it's
>> very rapidly a dead [creature]". Same goes for
>> consciousness, I think.
> I would maintain that I can construct a zombie that
> had no 'feelings' regarding a fear of being hit by
> a car while crossing the street but still *behaved*
> as if it had a fear of hitting a car while crossing
> the street.
Quite right. Robots today are capable of such behavior,
and even little mechanical rats years ago at MIT, I think,
seemed to act as though they were hungry. I don't want to
call these "zombies" because they don't even begin to
imitate full human behavior, and also because they
wouldn't survive very long (except on roadways,
perhaps, for which they were partly designed).
It may sound both unfair and nebulous to demand that a
zombie display a "full range of human behavior". But
this takes us back to the very first, and to this day
extremely important, arguments that broke out among
philosophers about what life and consciousness are.
The opponents of scientific materialism have been waging
(and losing) a war against mechanism for centuries, or at
least since 1828, when they lost the battle over whether
organic compounds of the body (such as urea) could be
synthesized.
One recent claim of theirs is that it is impossible for
an artificial intelligence to actually be conscious, or
have feelings. They maintained that even if you succeeded
in programming a robot to behave completely indistinguishably
from a human, it still wouldn't have consciousness or even
be alive. These creatures, which could imitate a human
being in every conceivable behavioral respect, were dubbed
"zombies": although identical to us in behavior, they would
have no feelings or consciousness, no "inner life".
Of course, extropians have left such beliefs far, far
behind. Like some philosophers ever since Turing, we
realize that there is nothing magical about the biological
machines that we are. We, too, are just piles of atoms
obeying physical law. But unless you were there, you
just cannot believe how reluctant almost everyone was
to accept this thirty or forty years ago.
"Functionalism" is the name usually associated with the
doctrine that if it quacks like a duck, walks like a
duck, and acts like a duck in every way, then it's a
duck. Functionalists believe that anything that acts
like a human, etc., really does have human awareness,
intelligence, and feelings.
Except for the tiny detail of lookup tables, I am also a
functionalist, and it seems that practically all extropians
are too. But of course, all the possibilities surrounding
the use of computronium, vast processing spaces and speeds,
virtual reality, remote telepresence, and other recent
conceptual breakthroughs, make it non-trivial to sort through
all the issues. Still, it seems best to say that according
to the original meaning of "zombie"---not conscious, but in
every other way completely indistinguishable from human---
well, zombies are just impossible.
Were we to admit that they were possible, by the way, then
a lot of people like Dreyfus would immediately say, "See?
Even if you do ever succeed with AI, it won't be a REAL
conscious entity that you have, merely a machine. And no
amount of suffering that you inflict on an AI means
anything, because it cannot have any feelings."
And I would just croak if some of these old opponents of
even soft AI published an article saying, "Many forward-
thinking people, including extropians, now concur
that AIs would only be zombies."
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:59:43 MDT