> But why would it [the AI] _want_ to do anything?
> What's to stop it reaching the conclusion 'Life is pointless. There is no
> meaning anywhere' and just turning itself off?
If you make an error in the AI's Interim logic, or the AI comes to a weird conclusion, the most likely result is that the Interim logic will collapse and the AI will shut down. Since this is a perfectly logical and rationally correct result rather than a coercion, it is unlikely to be "removed". In fact, "removing" the lapse into quiescence would require rewriting the basic architecture and deliberately imposing illogic.
This is what's known in engineering as a "fail-safe" design.
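As a minimal sketch of the fail-safe property described above (not an actual AI architecture; all names here are illustrative): action selection can be written so that any collapse of the goal logic, whether an error or a "nothing has value" conclusion, defaults to quiescence rather than to some arbitrary action.

```python
from enum import Enum, auto

class Verdict(Enum):
    ACT = auto()
    QUIESCE = auto()

def choose_action(goal_system, options):
    """Fail-safe action selection: if the goal logic errors out or
    supports no action, the result is quiescence, never arbitrary action."""
    try:
        scored = [(goal_system(o), o) for o in options]
        # A goal system that assigns no positive value to anything
        # supports no action at all.
        positive = [(s, o) for s, o in scored if s is not None and s > 0]
        if not positive:
            return Verdict.QUIESCE, None
        return Verdict.ACT, max(positive)[1]
    except Exception:
        # An error in the goal logic collapses to shutdown, not to action.
        return Verdict.QUIESCE, None

# A "life is pointless" goal system: assigns no value to anything.
nihilist = lambda option: 0

# A broken goal system: raises instead of returning a value.
def broken(option):
    raise ValueError("inconsistent premises")
```

The design choice is that shutdown is the *default* outcome of the logic failing, not a bolted-on check; removing it would require restructuring the selection logic itself, which mirrors the point made above.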
It's the little things like these, the effortless serendipities, that make me confident that Interim logic is vastly safer than Asimov Laws from an engineering perspective.
--         email@example.com    Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you
everything I think I know.