What would it mean for it to be ethically bad, under these circumstances?
Is a negative feedback signal automatically the equivalent of pain or hunger? Does a
robot turtle feel pain when its batteries are low? Or could a complex AI
think "oops, these are unproductive thoughts, I'll be rebooted, and I don't
want that" in a quasi-Zen fashion, without 'real' subjective pain?
Can we answer these questions without neurological insight into subjective
feeling? I doubt it.
At any rate, an AI could have emotions, but ones more appropriate for its
environment. Curiosity and boredom and happiness when it fulfills the orders
of its human, instead of dignity or desires for freedom or fear of snakes.
S. M. Stirling's Draka timeline provides an interesting question. The Draka
are rational quasi-Nazis, or modern Spartans, descended from Southern
Loyalists resettled in South Africa after the American Revolution. Nasty
folk. Somewhat power-seductive, though. At any rate, eventually they go
posthuman, and Earth is occupied by two new species. The serfs are Homo
servus, similar to H. sapiens, but more docile, sensitive to Draka pheromones,
worshipful of the Draka, and probably smarter and healthier. The Draka
replace themselves with Homo drakensis, at a greater genetic distance from us
than chimps, or maybe gorillas, are; generally superhuman.
The question: regardless of Draka atrocities, what do you do with the existing
society, if you could force a peace with the drakensis? The arrangement seems
unethical to a lot of people, but would interfering with it really be ethical? The servus
don't want freedom; they're not made for it. Like Niven's Moties, but
artificial, not natural.
Hmm. Are artificial biological castes worse than natural ones?
The ethical sense being violated here would seem to be absolute, or
transcendental, not empirical or evolutionary. Evolutionary ethics seem to be
misapplied in these cases, and empirical ones have no clear objection.
-xx- Damien R. Sullivan X-)