At 02:04 AM 4/5/99 -0400, Ross A. Finlayson wrote:
>What if it's a replay of a Turing test?
If I'm not conscious, I'm not in a good position to administer the test, that's true. However, on a practical level, I more or less have to assume that I am conscious.
>The difficulty of a machine to pass a Turing test depends on who is testing
>it. Some people are more likely to be able to fool or reveal a machine into a
>state of appearing non-sentient than others, but then again, maybe the machine
>would prefer that, or in general, prefer to remain known as non-sentient.
The Turing test, like all other tests, needs a control. It should therefore be performed against both a computer and a human. If the testers are equally likely to confuse the two, then the AI passes.
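That pass criterion can be sketched in code. This is just an illustrative toy, not anyone's standard protocol: the judge, the AI, and the human control are all hypothetical callables I'm inventing here, and the 5% margin is an arbitrary choice. Each trial blindly assigns a hidden subject, the judge guesses "human" or "machine", and the AI passes if it is judged human about as often as the human control is.

```python
import random

def controlled_turing_test(judge, subjects, n_trials=200, margin=0.05, rng=None):
    """Blinded, controlled Turing test (a sketch under assumed interfaces).

    subjects: dict mapping 'ai' and 'human' to respond(prompt) callables.
    judge:    callable taking a reply and returning 'human' or 'machine'.
    Passes if the AI's judged-human rate is within `margin` of the control's.
    """
    rng = rng or random.Random(0)
    judged_human = {"ai": 0, "human": 0}
    counts = {"ai": 0, "human": 0}
    for _ in range(n_trials):
        label = rng.choice(["ai", "human"])      # blinded assignment
        reply = subjects[label]("How do you feel today?")
        counts[label] += 1
        if judge(reply) == "human":
            judged_human[label] += 1
    rates = {k: judged_human[k] / max(counts[k], 1) for k in counts}
    return abs(rates["ai"] - rates["human"]) <= margin, rates
```

If both subjects produce replies the judge can't tell apart, the confusion rates match and the AI passes; a judge with a reliable tell fails it.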
As far as the AI deceiving us into thinking it's unconscious... we would have exactly the same problem if a person pretended to be severely mentally disabled. So long as they kept the performance consistent and fairly convincing, we would have no way of telling whether the person was faking or not.
What's the point of a distinction when there is no possible way to differentiate? This is my philosophical pragmatism kicking in here.
-IF THE END DOESN'T JUSTIFY THE MEANS- -THEN WHAT DOES-