On Wed, 25 Feb 1998 Damien Broderick <damien@ariel.ucs.unimelb.edu.au> wrote:
                   
        >Freedom of choice for humans does not mean acting at random like        
        >the Dice Man; that would be psychosis, not freedom. 
                
True, except that it wouldn't even be psychosis, because even insanity has a 
cause; it would just be convulsions. I agree with the previous poster who 
said, in effect, that first we must define what free will is, then we can 
debate whether human beings have this interesting property or not. 
I gave my definition:
 A man, animal or machine has free will if it cannot always predict what it 
 will do in the future, even if the external environment is constant. A third 
 party might be able to make such predictions but that's irrelevant; the 
 important thing is that the person himself cannot know what he will do next 
 until he actually does it.
I maintain that this definition is clear, internally consistent, and produces 
no contradictions in most everyday uses of the word "free". 
I've never heard of another definition that does as well.  
                
        >our consciousness or ego is a module (either executive or         
        >interpretative) with very restricted information about the full         
        >state of the self. 
                
I agree.
                
        >This means that when we opt, we do so from many more `unconscious'         
        >motives 
                
I have no argument with Freud; there are factors in our minds that we aren't  
conscious of. But there are other things that affect our behavior that aren't  
even "unconscious"; they are not in the brain at all, because they have not  
been calculated yet. I think this is intimately related to Alan Turing's 
discovery that a computer program has free will, that is, it can't fully 
understand itself: in general it can't predict (and neither can we) whether 
it will ever stop. 
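Turing's argument can be sketched in a few lines. Suppose, for contradiction, 
that a perfect oracle could answer "will this program ever stop?" A contrary 
program built to do the opposite of whatever the oracle says about it defeats 
the oracle either way (the names below are mine, purely for illustration):

```python
def contradiction(oracle_answer: bool) -> bool:
    # contrary() is defined to loop forever iff the oracle says it halts.
    # If the oracle answers True ("it halts"), contrary loops: oracle wrong.
    # If the oracle answers False ("it loops"), contrary halts: oracle wrong.
    actually_halts = not oracle_answer
    return oracle_answer != actually_halts  # True means the oracle failed

# Both possible answers lead to contradiction, so no such oracle can exist.
assert contradiction(True)
assert contradiction(False)
```

Since the oracle is wrong no matter which answer it gives, no program can 
settle the question in general, not even about itself.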
Let me suggest a thought experiment: a man is walking down a road and spots a 
fork in the road far ahead. He knows of advantages and disadvantages to both 
paths, so he isn't sure whether he will go right or left; he hasn't decided. 
Now imagine a powerful demon able to look into the man's head and quickly 
deduce that he will eventually choose to go to the left. Meanwhile the man, 
whose mind works much more slowly than the demon's, hasn't completed the 
thought process yet. He might be saying to himself, "I haven't decided, I'll 
have to think about it, I'm free to go either way." From his point of view he 
is correct; even a robot does not feel like a robot. But from the demon's 
viewpoint it's a different matter: he simply deduced the outcome of a purely 
mechanical operation that can have only one result.   
But is it really a purely mechanical operation? What about the uncertainty 
principle? I don't see how it affects matters one way or another. It says 
that some things can happen for no cause and thus are truly random, but 
happenstance is the very opposite of intelligence and even emotion. Things 
either happen because of cause and effect or they don't, and if they don't 
then they are by definition random and have nothing to do with volition. 
Those who claim that this is the source of the will must also believe that a 
nickel has free will when you flip it. This topic muddies the question but 
does not change it.
In my example the demon did not tell the man of his prediction, but now let's 
pretend he did. Suppose also that the man, being of an argumentative nature, 
was determined to do the exact opposite of what the demon predicted. Now our 
poor demon would be in a familiar predicament. Because the demon's prediction 
influences the man's actions, the demon must forecast his own behavior as 
well, but he will have no better luck in this regard than the man did, and 
for the same reasons. What we would need in a situation like this is a 
mega-demon able to look into the demon's head. Now the mega-demon would have 
the problem.
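The argumentative man's strategy can be sketched directly: whatever the demon 
announces, the man falsifies it (again, the function and path names are my 
own illustration, not anything from the thought experiment itself):

```python
def man(demon_prediction: str) -> str:
    # The argumentative man always takes the path the demon did NOT predict.
    return "right" if demon_prediction == "left" else "left"

# No prediction the demon can announce comes out correct:
for prediction in ("left", "right"):
    assert man(prediction) != prediction
```

Any demon who must announce his prediction is caught in the same 
self-reference Turing identified, which is why each level of prediction 
demands a mega-demon, then a mega-mega-demon, and so on without end.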
               
        >I think that the more we make our values `our own', the more free we         
        >feel - that is, against your claim that we would feel like robots.
My claim was that self-knowledge would make us feel like robots, but we don't  
have to worry about that because self-knowledge is impossible. I don't see 
what that has to do with deciding we like a value and adopting it.
Let's simplify things to their essentials. Imagine a world in which the 
environment was so simple it could be predicted with complete accuracy. 
Doubtless we would find such a place boring and unpleasant, but I don't think 
we would feel like robots. Thus the origin of the sensation of autonomy 
cannot be external. What does it mean to "feel like a robot"? If you could 
always forecast your own behavior and thoughts with complete accuracy, then 
you would feel like a robot. Subjective (not objective) uncertainty is at 
the root of freedom and choice. 
        >John Clark wrote at random, that is, freely (by his own account):
I like to think it wasn't random, but it was certainly free by my definition. 
When I read your post there were several ways I could have responded to it. 
When I sat down at my computer I had all the information I needed, but I 
didn't know what I would do; I could have written many things, I had to think 
about it, I was free. In fact, I wasn't absolutely sure how it would come out 
until right NOW.
                                              John K Clark    johnkc@well.com