I would suggest two interpretations of the above questions.
Human ethics evolved to regulate people's relations with other persons
and their possessions in the physical world. Some of the ethical
principles must be generic, and good for regulating relations between
different entities (the degree of identity granulation in an AI world will
be very different from the human one, so I don't really expect "personalities"
there). Other principles may simply be carried over to the new world because
we "feel like it". I think this is the wrong approach. Should we cover
software agents with a decency crypto-veil at the moment of replication,
or tax successful computer programs to provide a decent level of
operation to their underprivileged, inefficient brethren? This can get
quite ridiculous.
We would do better to think about how to efficiently set up new systems,
and what kinds of protocols may provide us - and AIs - with the greatest
mutual benefit.
If there are any ethical controversies with AIs, they will probably arise
only during the short period (10 to 20 years?) when AIs have near-human
intelligence.
After that, AIs will take care of their own interests, and of ours as well. They would
not be concerned about our watching them either, just as we are not concerned
about our pets watching us.
-------------------------------------------------------------------
Alexander Chislenko <http://www.lucifer.com/~sasha/home.html>
Extropy Online <http://www.extropy.org/eo/index.htm>
-------------------------------------------------------------------