From: Ken Clements (Ken@Innovation-On-Demand.com)
Date: Thu Jan 17 2002 - 14:54:10 MST
Spudboy100@aol.com wrote:
> http://xxx.lanl.gov/PS_cache/physics/pdf/0102/0102024.pdf
>
Thanks for the link, it was fun to read. Gosh, it sure is good to still have
Penrose to kick around. He seems to be stuck in a kind of "horizon effect": soon
after his latest argument against functionalism gets shot down, he adds more
complexity and hand waving to push the reasons he is wrong back over his own
cognitive horizon. I wonder if he is ever going to halt. I wish I had simulations
of Turing and Gödel around to feed that paper into.
Andrew Clough wrote:
> If the analogous task, a human asked to create a
> proof(n) that they could not create the proof(n), were attempted, the human
> would fail just as completely as the program. Of course, a human would
> give up after a while, as I'd expect an AI to terminate that subroutine.
Just so. That a human can determine whether a simple TM program halts is of no
argumentative value. Just as a TM can be constructed to beat any human at chess,
a TM program may be so large and complex that no human could prove whether it
halts. So what?
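To make that concrete, here is a minimal Python sketch of the standard
diagonalization (my own toy, not anything from Penrose's paper): any claimed
halting decider, human or mechanical, can be handed a program built to
contradict the decider's verdict about itself.

    # would_halt() is a hypothetical oracle; no correct, total implementation
    # can exist, so this stub just stands in for the claim that one does.
    def would_halt(program_source: str, input_data: str) -> bool:
        """Claimed decider: True iff the program halts on the input."""
        raise NotImplementedError("no such decider can exist")

    def contrary(program_source: str) -> None:
        """Do the opposite of whatever the oracle predicts about this program."""
        if would_halt(program_source, program_source):
            while True:   # oracle says "halts", so loop forever
                pass
        return            # oracle says "loops", so halt at once

    # Handing contrary() its own source forces any oracle to be wrong:
    # if it answers True, contrary loops; if it answers False, contrary halts.

Nothing in that argument cares whether the decider is made of silicon or
neurons, which is the point.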
It is not possible for me to know what I am about to think before I think it. If
I could, then I could take that into consideration and think something else,
which would then have to be pre-considered ...
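That is the same diagonal move in miniature. A toy sketch (again mine, under the
assumption of a thinker who takes any prediction of his next thought as input):

    def next_thought(predicted: str) -> str:
        """Take the prediction into consideration and think something else."""
        return predicted + " ... and now something else"

    prediction = "I will think about lunch"
    assert next_thought(prediction) != prediction   # holds for every prediction

No prediction is a fixed point of the thinking it tries to predict.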
-Ken