Re: Attacks (was Re: Why would AI want to be friendly?)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Oct 01 2000 - 17:21:16 MDT


Samantha Atkins wrote:
>
> If that was what it was, then I would also find it revolting. But I
> think a more interesting set of possibilities might be present. Today
> we kill or lock away for life those we consider irredeemably criminal.
> Putting the person instead in a VR with karmic consequence mode turned
> on would a) not involve the irreversible destruction of the individual;
> b) give them a chance to learn and grow without harming other people.

That is an unacceptable violation of individual freedoms. If someone *wants*
to walk into an environment with karmic consequences, they have the right to
do so. Nobody has the right to impose that on them. Once the Sysop Scenario
is achieved - once "society" has the technological power to upload a criminal
into a karmic environment - society no longer has any conceivable right to do
so. Under a Sysop, the criminal is no longer a threat to anyone, and there is
no need to discourage future crimes, since the Sysop can simply prevent them.

Pain is *never* an intrinsic good, no matter whom it happens to! Certain
people, by their actions, make themselves more "targetable" than others - if
either a murderer or an innocent human must die, then it might as well be the
murderer. Adolf Hitler, for example, is so targetable that we could shoot him
on the million-to-one off-chance that it might save someone's life. But once
there's no longer any need for *anyone* to suffer, then nobody is targetable -
not even Adolf Hitler.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence

