Robin Hanson wrote:
>
> Well we could do a little more; we might create lots of different AIs
> and observe how they treat each other in contained environments. We might
> then repeatedly select the ones whose behavior we deem "moral." And once
> we have creatures whose behavior seems stably "moral" we could release them
> to participate in the big world.
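[The selection procedure Hanson proposes above can be sketched as a toy loop. Everything here is hypothetical illustration: agents are reduced to a single hidden "disposition" number, `sandbox_score` is a noisy stand-in for observing behavior in a contained environment, and the threshold and mutation parameters are arbitrary.]

```python
import random

def sandbox_score(agent, rng):
    # Toy stand-in for observing an agent in a contained environment:
    # a noisy reading of its hidden "morality" disposition.
    return agent + rng.gauss(0, 0.1)

def select_moral_agents(population, generations, rng, threshold=0.8):
    """Repeatedly keep agents whose observed behavior we deem 'moral',
    refilling the population with mutated copies of the survivors."""
    for _ in range(generations):
        survivors = [a for a in population if sandbox_score(a, rng) >= threshold]
        if not survivors:
            survivors = [max(population)]  # keep the best-looking agent
        # Refill the population with mutated offspring of the survivors,
        # clamped to the [0, 1] disposition range.
        population = [min(1.0, max(0.0, rng.choice(survivors) + rng.gauss(0, 0.05)))
                      for _ in range(len(population))]
    # "Release" only agents whose behavior still looks moral at the end.
    return [a for a in population if sandbox_score(a, rng) >= threshold]

rng = random.Random(0)
population = [rng.random() for _ in range(50)]
released = select_moral_agents(population, generations=10, rng=rng)
```

[Note that the loop only ever selects on *observed* behavior in the sandbox; nothing in it constrains what an agent does once released, which is the gap the reply below attacks.]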
Anything that can safely be stuffed into a contained environment isn't any sort of AI that we need to worry about. Such threat management techniques are useful only against programs that can be filed and forgotten. Remember, we're talking about Culture Minds and Vingean Powers, not your mail filter. Yours is a way to ensure the integrity of the global data network, not to protect the survival of humanity.
As for pulling this trick on genuine SIs:
This would ENSURE that at least one of the SIs went nuts, broke out of your
little sandbox, and stomped on the planet! This multiplies the risk factor by
a hundred times for no conceivable benefit! I would rather have three million
lines of Asimov Laws written in COBOL than run evolutionary simulations! No
matter how badly you screw up ONE mind, there's a good chance it will shake it
off and go sane!
--
sentience@pobox.com         Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.