From: Kevin Freels (megaquark@hotmail.com)
Date: Thu Jun 19 2003 - 15:28:16 MDT
Hold on. My impression of the Turing test was that the human and the AI
would be separated, so that the human would have no way of knowing whether
it was a computer at the other end. I always assumed this would be done in
an "instant messaging" sort of way. In that case, the additional computing
power needed to mimic emotional responses (I am assuming you were referring
to voice inflection, etc.) would not be necessary. I don't think that a
voice or emotional capabilities would be important in this case.
Also, with each individual having different emotional responses to different
stimuli, would you have to have the human converse with a variety of people
to get an accurate result? Say 10, with only 1 being a computer?
Can't the Turing test be manipulated by how "emotional" each human chooses
to act? Couldn't you easily program in random "HOLY COW!"s and "Gosh darn
it!"s? As long as the cognitive abilities were up there, the emotional
responses might seem odd (there are many odd people out there) but not
"inhuman".
Furthermore, I have other problems with the subjective nature of the Turing
test itself. Problems such as the following come up:
1.) Given several groups of 100 people from various backgrounds, age groups,
and educational levels, you will still get a wide range of people who think
it is either a machine or a human they are conversing with. The person's own
emotions and opinions play a part.
2.) Even if it were a human on the other side of the conversation, some
people are bound to think it is a machine.
3.) Different people would expect different responses to the same questions
when they are deciding whether it is an AI or a human,
and finally,
4.) An AI that was nowhere near as capable as a human intellectually may
still pass simply because even an educated human may just think they are
dealing with an ignorant human on the other side.
Am I way off here in thinking that we may never be able to devise a
sufficient test simply because, as human beings, we have a range of
intellectual capabilities that is too broad to nail down? Can we even use
this test to ensure that a human is human?
----- Original Message -----
From: "Rafal Smigrodzki" <rafal@smigrodzki.org>
To: <extropians@extropy.org>
Sent: Thursday, June 19, 2003 5:47 PM
Subject: RE: greatest threats to survival (was: why believe the truth?)
> Hal wrote:
>
> >
> > If you could come up with a definition of AI (or AGI, whatever), that
> > would provide clear criteria for judgement in some time frame, I could
> > introduce it as a claim. Anyone can add claims to the game.
> >
> ### We could go the easy way and register a claim for Turing-capable
machine
> by the year 20xx, specifically, the machine would have to pass repeatedly
> (30 times, 100 times?) the classic Turing test, with a human examiner, and
> one more human participant, conversing on an unrestricted range of
subjects
> without allowing the examiner to differentiate between the machine and the
> human participant. This would be actually a rather stringent test, since
the
> AGI would need to precisely emulate not only the cognitive capacities of
the
> human, but also have convincing emotional responses.
>
> There was a claim for binary Turing test on ideosphere but it was retired.
>
> Rafal
>
>
This archive was generated by hypermail 2.1.5 : Thu Jun 19 2003 - 15:32:16 MDT