**From:** *hal@finney.org*

**Date:** Fri Jan 18 2002 - 11:18:43 MST

**Next message:** Harvey Newstrom: "RE: Transgender marriage"
**Previous message:** Steve Nichols: "Traditional AI can never be sentient"
**Maybe in reply to:** Spudboy100@aol.com: "Paper: A.I and Penrose"

> *http://xxx.lanl.gov/PS_cache/physics/pdf/0102/0102024.pdf*
>
> *Abstract*
>
> *It has been commonly argued, on the basis of Godel's theorem and related mathematical results, that true artificial intelligence cannot exist. Penrose has further deduced from the existence of human intelligence that fundamental changes in physical theories are needed. I provide an elementary demonstration that these deductions are mistaken. Is real artificial intelligence possible? Are present-day theories of physics sufficient for a reductionist explanation of consciousness?*

Penrose's argument has been refuted so many ways that it is amazing anyone even bothers any more. However, I don't think this particular counter-argument works. It amounts to a concern over Penrose's reasoning where he proposes to stump the AI by giving it what amounts to its own Godel sentence. One problem with Penrose's plan is that a real-world AI is not self-contained but would interact with the world; Penrose proposes to fix this by providing the AI with a simulated world, making it self-contained and non-interactive, and giving it a well-defined Godel sentence. The author challenges this fix, writing:

> Penrose [2] gives a number of examples, that appear to show that it is easy to construct the requisite non-interactive subroutine using the interactive program as a component.
>
> However, there is a big problem in figuring out how to present the input to the program, to tell it what theorem is to be proved. Now the program, which we can call an artificial mathematician, is in the position of a research scientist whose employer specifies a problem to be worked on. To be effective, such a researcher must be able to question the employer's orders at any point in the project. The researcher's questions will depend on the details of the progress of the research. ("What you suggested didn't quite work out. Did you intend me to look at the properties of XXYZ rather than XYZ?") As every scientist knows, if the researcher does not have the freedom to ask unanticipated questions, the whole research program may fail to achieve its goals.
>
> Therefore to construct the non-interactive program needed by Penrose one must discover the questions the artificial mathematician will ask and attach a device to present the answers in sequence. The combination of the original computer and the answering machine is the entity to which Turing's halting theorem is to be applied.

The author, John Collins, goes on to show that this strategy of providing answers in advance won't work, because each time you change the set of answers you change the system, hence the Godel sentence, and hence you have to start over from the beginning with a different proof challenge to the AI.
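Collins' regress can be made concrete with a toy model (my own illustration, not code from the paper; all names here are hypothetical): treat the "system" as the prover's source text plus its pre-supplied answer tape, and the Godel sentence as some fixed function of the complete system description. A hash stands in for that function, making the dependence on every detail explicit:

```python
import hashlib

def godel_sentence(program: str, answer_tape: list[str]) -> str:
    # Toy stand-in: the Godel sentence of the combined machine is a fixed
    # function of the system's complete description, so it depends on
    # every answer prepared in advance.
    description = program + "\n" + "\n".join(answer_tape)
    return "G_" + hashlib.sha256(description.encode()).hexdigest()[:12]

prover = "def prove(theorem): ..."       # the AI, schematically
tape_v1 = ["look at XYZ"]                # answers prepared in advance
tape_v2 = ["look at XYZ", "try XXYZ"]    # one more prepared answer

# A different answer tape is a different system, hence a different
# sentence, hence a different proof challenge to construct.
assert godel_sentence(prover, tape_v1) != godel_sentence(prover, tape_v2)
```

This is only a cartoon of the dependence Collins describes, of course: the real point is that the Godel sentence is built from the full description of the combined computer-plus-answering-machine, so amending the answers restarts the construction.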

The problem with this reasoning is that the claim that the AI must be able to ask questions seems false. The AI is given a fully self-contained mathematical formula which it is asked to evaluate. There is no scope for ambiguity or confusion in this problem. It is not posed in some loose language like English; it is stated in hard mathematical symbols. This is different from Collins' model, where the AI must ask questions to clarify its problem just like a scientist given a research problem by an employer.
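For concreteness, the kind of closed formula at issue has the standard textbook shape of a Godel sentence for a formal system F (this schematic is my illustration, not drawn from either paper):

```latex
G_F \;\leftrightarrow\; \neg\,\mathrm{Prov}_F\!\left(\ulcorner G_F \urcorner\right)
```

Every symbol here is defined within F itself; there is no employer to interrogate and nothing left to clarify.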

I've worked on many mathematical problems, challenges, and contests over the years, and they are self-contained. You don't get to ask questions; in fact, in national contests where the problems are provided in writing, there is typically no one around who would be remotely qualified to answer any questions about them. And these problems are often described much less formally than the Godel statement which Penrose would propose to provide.

So Collins' claim that the AI would have to be able to ask questions about its problem, and that this would require answers to be prepared, which would change the system in a never-ending cycle, does not seem a strong refutation of Penrose's program.

Hal


*This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 13:37:35 MST*