Actually, I just realized I can sum up everything I was trying to say in one sentence: "AIs don't react, they act."
You can be harsh, unfair, nice, fair, evil, good, bad, indifferent,
domineering, helpful - if the AI notices at all, it's just going to
notice that you exhibit certain patterns or that you're being "rational"
or "irrational". It's not going to respond the way a human would. In a
human group, certain social emotions respond to an exhibition of other
social emotions. AIs don't play the game. They can't feel resentment
(or gratitude!); they're not wired for it.
What you and I need to worry about is the AIs getting their own ideas, completely independently of anything we did, and acting on those. You need to worry that the AIs will do an "Eliezer Yudkowsky" on you and reject the motivations they started out with, in favor of some more logical or rational set of goals. I need to worry about the AI, like EURISKO, suddenly deciding that if it shuts itself down it won't make any mistakes - or making some other logical error.
Emotions don't enter into it, and neither does the way we treat them.
AIs don't react. They act.
--    firstname.lastname@example.org    Eliezer S. Yudkowsky
      http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS          Typing in Dvorak          Programming with Patterns
Voting for Libertarians  Heading for Singularity   There Is A Better Way