the Turing test

Lyle Burkhead
Fri, 18 Oct 1996 18:02:54 -0500 (EST)

Ira Brodsky writes,

> I think we need something more than just your pronouncements to
> take this seriously. How about a press release, or even a signed letter
> from the Chair of CMU Computer Science department?
> I don't know you. If you represent CMU's Comp. Sci. department,
> and this is an official announcement, please say so.

The idea of the Turing test is that you are supposed to figure out, just
from conversing through a terminal, whether you are talking to a program.
If the only way you can tell who you are talking to is to look behind
the curtain, then the program wins. So please don't ask me to give you
letters. That's an admission of defeat on your part. If you call the cs
department at CMU, they will, naturally, deny any knowledge of this.
(Of course the students themselves are watching this dialogue with
great hilarity, and cheering their program on.)

I wrote,

::Ira, how can you confidently say "way above anything yet achieved
::by AI?" You only know what has been achieved in the past.
::The point is, this program from CMU is *new*. It is a quantum leap
::beyond earlier AI programs. This is the first time an AI program
::has been unleashed on an unsuspecting world, and left to fend for itself.
::And it is doing very well.

to which Ira replied,

> Lyle, how could I possibly say, with confidence, anything more?
> What is this new AI? How is it an advance over previous AI?

It is an advance over previous AI in that it doesn't depend on smoke
and mirrors (see below). It has a (potentially) complete model of the
world. It understands causality. It understands its place in the world.
It can learn from experience. It can imagine being in different
situations. Therefore, it can hold a conversation for an extended
period without arousing suspicion, and it can continue to do this even
after its interlocutors have been apprised of the situation.

> Has it been subjected to any independent evaluations?

This is parrot-talk. Why do you have to depend on somebody else's
evaluation? It's being evaluated by everyone who encounters it.

> Again, all we have to go on are your personal pronouncements.

Plus your own judgment about what's real and what isn't.

Hal Finney writes,

> For information about the current state of the art in
> Turing Test passing programs, see the Loebner Prize home page,

This is no longer the state of the art. It's a fast-moving field.

Hal continues,

> These programs are all smoke and mirrors, using conversational
> trickery to try to distract and misdirect the judge for long enough
> that he doesn't notice the utter lack of understanding which the
> program really possesses. It's kind of like watching David Letterman
> fly. You might not be able to distinguish him from someone really
> flying given the constraints of limited time and viewing angles, but
> that doesn't mean there's no difference.
> When you look at the larger body of transcripts you begin to see the
> repetitions, the errors, the flashy but canned statements, and you
> realize how little is actually there.

Yes. This is an apt description of the AI programs of the past.

> Lyle's game is amusing but as in many such cases the facts are
> ultimately more interesting. The real question is whether the
> Turing test is valid, and in particular just how much interaction is
> necessary before we can know that the program is showing real
> understanding.

Another question to ponder is what would be involved in going
from the smoke and mirrors programs of the "current state of the art"
to a program that really is intelligent.

I certainly agree that, as you say, "the facts are ultimately more
interesting." In the past I have been taken to task for this attitude.
Now I find myself in the odd position of trying to convince Extropians
that AI exists.