Michael Wiik wrote:
>
> What if:
>
> A superintelligent AI tells us that terrorism could be stopped almost
> completely -- or at least, stopped in the U.S. -- by the killing of half
> a dozen people, say three of whom would be totally innocent.
The situation does not arise. An SI doesn't need to kill half a dozen
people to stop terrorism in the U.S. The diamondoid or femtotech or
chromotech takes flight, and the killing just stops. Intelligence and
technological power are tools for reducing the entanglement that gives
rise to moral conflicts. A human has to shoot the enemy to stop him from
shooting; an AI might shoot the gun out of the enemy's hand, or even
intercept the bullets in midair so as not to risk bruising the enemy's
hand. When you're a superintelligence you never *have* to kill people.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence