> Is it ethical to contain an AI in a limited world? This is an especially
> interesting question if one takes the point of view that the most likely
> path to Artificial Intelligence is an approach based on evolutionary
> programming.
>
> Is it ethical to broadcast details of an AI's "life" to other
> researchers or interested parties?
>
> Is it ethical to profit from the actions of an AI?
Since AIs will presumably be built without emotions, or at least with a far
more limited range of emotions than humans have, you don't have to worry
about their "feelings". Also, one of the first things you would ask an AI
to do is develop uploading and computer-neuron interfaces, so that you can
make the AI's intelligence part of your own. That would pretty much
dissolve the whole "rights problem" (which is largely artificial anyway),
since you don't grant rights to specific parts of your own brain. Failing
to integrate with the AIs as soon as possible would undoubtedly result in
AI domination, and human extinction.
P.S.: I'm almost certain that our ethics will become obsolete with the
rise of SI; they are simply shaped too heavily by our specific
evolutionary history.