> Michael Bowling said, "I say self awareness in any being presupposes and 
> necessitates legal rights and protection."
> 
> What do you mean by "self awareness"?  What are your criteria for "self 
> awareness"?  Is a dog self aware?  Why or why not?  Are plants self aware?  
> Why or why not?  As you're thinking about this, are you deliberately arranging 
> the definition of "self awareness" so that it applies only to humans?  
> 
> One could argue that a plant is self aware, because it adjusts its internal 
> functioning according to its needs, so the plant is aware of itself (its 
> needs) to some extent.
> 
> Does "awareness" mean 'responding to stimuli'?  If so, then "self awareness" 
> would seem to mean 'responding to self generated stimuli'.  This definition 
> of "self-awareness" would cover many, many types of organisms.  Should these 
> organisms receive legal protection?
"Self awareness" was a poor choice of words.  Upon a little research that 
I should have done before I posted, I see that one could claim that 
animals other than humans are self aware.  
I'm going to take a different approach.  Instead of arbitrarily slapping 
words on something and granting it legal protection b/c of that label, 
I'll ask "What can it do?"  This way I hope to avoid the difficulties of 
assigning properties to "consciousness" and "self aware," two words that 
seem to have very plastic definitions, depending on who wields them.  
Note that I consider this a working definition, and as such it is 
subject to refinement and/or ridicule :-).
This time I am making the assumption that any _artificial_ being that 
fills the following requirement can communicate with whoever (human or 
not) made it.  Of course this assumes that those who create artificial 
intelligence will have no use for AI that cannot communicate with its 
creators.  I apply this test only under the above conditions; it does 
not apply where there is a discrepancy between a creature's apparent 
ability to reason and its inability (for whatever reason) to communicate 
with whoever dispenses justice.  For instance: animals, ET.
Here is my improved criterion for granting legal protection:  If a being 
is able to grasp the concept of legal protection and ask for it, then 
that being should receive it.  
That should be followed with "because..."  If I try to take it any 
further right now, I'll end up saying "because I said so."  
Just like my belief that it is more wicked to force someone to help himself than it 
is to let that person kill himself out of his ignorance.  I have not yet been 
able to connect either of these preferences to concretes (is that 
possible?).  Both issues require more thought...  
Qualifiers aside,
I think this test works b/c it is simply a hoop to jump through, and 
doesn't depend on the current consensus on the meaning of elusive words.  
If an AI is smart enough to ask for it, it deserves protection under the 
law.  However, it is for a special case and does not currently cover 
important cases like the mentally handicapped, small children, and many 
others that escape me right now. 
> 
> Why should legal protection be based on how *conscious* a system is?  Why not 
> on how beautiful a system is or on how much it weighs or some other property? 
>
Only systems that are *conscious* enough to grasp justice have a need for 
it.  If a beautiful or super-heavy system is smart enough to ask for 
protection, then protect it.  Otherwise, let those with a concept of 
beauty, mass, property, and justice pay for the protection of that which 
they value.
> (Typically, it's based on how similar the system is to those granting legal 
> protection.) If a conscious or even sentient system is unable to defend itself 
> from injury or destruction, do other conscious systems have an *obligation* to 
> protect it if they are able to?  Why or why not?
I think to answer this question I would first need to figure out how to 
connect my own preferences to reality, if that can be done.
Exovivo!
Michael Bowling