Re: Free Will

John Clark
Mon, 16 Aug 1999 11:16:16 -0400

Damien Broderick <> Wrote:

>If it [the self] is somehow separate, with its own (perhaps hard-won) canons
>of principles, it might feel swayed but still need to *choose*

Yes, it needs to choose; it needs time for processing, time to think, because it doesn't know what state its mind will end up in regarding the issue in question. Turing proved that problems exist that can never be resolved into a satisfactory mental state (the computation never stops), and, frustratingly, he also proved there is no way to identify such impossible problems in advance. Everybody has problems that bother them; we ignore them for a while and do other things, but they gnaw at us and we go back to them for another try. Sometimes we eventually win and the problem stops bothering us; sometimes it never does.

Some things are futile to work on, but we don't know what they are.
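The Turing result referred to above is the undecidability of the halting problem. A minimal sketch of the diagonal argument (my illustration, not anything from the original post; `halts`, `make_contrary`, and `naive_halts` are hypothetical names):

```python
# Suppose a perfect predictor halts(f) existed, returning True exactly
# when calling f() would eventually stop. Then we can build a program
# that defeats it by doing the opposite of whatever it predicts.

def make_contrary(halts):
    """Build a function that contradicts the given halting predictor."""
    def contrary():
        if halts(contrary):    # predictor says "it stops"...
            while True:        # ...so loop forever
                pass
        # predictor says "it loops forever", so stop immediately
    return contrary

# Any candidate predictor is wrong about its own contrary function.
# Here is a toy predictor that simply guesses "always halts":
def naive_halts(f):
    return True

contrary = make_contrary(naive_halts)
# naive_halts claims contrary halts, but by construction contrary
# would then loop forever, so the prediction is wrong.
print(naive_halts(contrary))  # prints: True (the wrong prediction)
```

The same construction refutes every candidate predictor, which is why no general procedure can sort the futile problems from the solvable ones.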

>Does a totally consistent libertarian, or an enlightened and egoless Zen
>saint, or a totally depraved fiend, feel like a robot?

No, because nobody can escape Turing. Well ... an egoless man might, but I don't believe in zombies.

O'Regan, Emlyn <> Wrote:

>Say you were a super super ... super intelligence (S^NI), modified beyond
>all comparison with the gaussian version of yourself. After a particular new
>modification to jump you up to a new level of intelligence, you find that
>you are so awesomely intelligent that you can predict with 99.99% accuracy
>the outcome of any action that you might consider,

It could never happen. You may be far more intelligent than a human, but the thing you're trying to figure out, yourself, is far more complex still. You'd be no better off than humans are at predicting what you'd do next, although you could do something we can't: you could predict what we puny humans would do next.
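The self-prediction barrier has the same diagonal shape as the halting problem. A toy sketch (my illustration; `agent` and `predictor` are hypothetical names): if a mind can consult a forecast of its own choice before acting, it can simply defy it, so no self-predictor, however accurate elsewhere, can be right about that decision.

```python
# An agent choosing between actions 'A' and 'B'. It first consults
# the predictor's forecast of its own choice, then does the opposite.

def agent(predict):
    """predict(agent) returns the action the predictor expects: 'A' or 'B'."""
    forecast = predict(agent)
    return 'B' if forecast == 'A' else 'A'   # defy the forecast

# Any self-predictor fails on this decision; here is one candidate,
# standing in for the hypothetical 99.99%-accurate model:
def predictor(a):
    return 'A'

print(agent(predictor))  # prints: B -- the forecast 'A' was wrong
```

Predicting another, simpler mind escapes the trap only because that mind never sees the forecast, which matches the point above: the S^NI could predict us, but not itself.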

John K Clark
