Re: [2] Freedom or death

Eliezer S. Yudkowsky (sentience@pobox.com)
Mon, 11 Aug 1997 22:54:09 -0500


Advance summary:

1) You cannot override the will of others. Death isn't that important.
2) I can't guarantee the Powers will say the same.
3) But *you* are not a Power.

[den Otter asks...]
> Tell me, what crime is worse: kidnapping or murder? (rhetorical question!)

WRONG!

This question is *not* rhetorical.

As I said:
[http://tezcat.com/~eliezer/algernon_ethics.html]

'Over time, I've learned to value free will over human life. No doubt,
this is and will remain a controversial statement. The truth is that
the intuitive reasons why I believe this are not derived from the
pure ethical logic of goal and subgoal. Life and death are simple
things, a binary duality. There's nothing palpably evil about death.
Certainly, nobody who believes in God or an immortal soul (11)
should, logically, look on death as more than an inconvenience. Six
thousand people die every hour. Out of the one hundred and fifty
thousand people who die every day, how many of them are children?
Ten thousand, I should think, at the very least. To quote Michael
Resnick's apotheosis of evil, Conrad Bland: "If there is a God, then
He has passed a death sentence on every human being from the
moment of conception. I am but a talented amateur." (12)'

To summarize the rest - death is too cheap, and too simple, to really count as
evil. But the enormously complex entity called "The Human Mind" can be
perverted in truly evil ways, rather than simply rendered wholly or partially
nonfunctional. So that, then, is Evil.

Being a cognitive scientist, I see the human mind as having far more causal
density - deep substance - than its biological substrate. To die is simply to
cease functioning; it doesn't really alter - much less *pervert* - the causal
model; it simply erases it. Evil in my book is reserved for perverting the mind,
because that's what my mind can get a grasp on.

These are complex issues; in truth, I don't know how to properly articulate my
intuitions on the subject. I do trust my intuitions. Not because of any
dramatized conflict between Emotion and Rationality, but because my intuitions
are almost always right, and are rarely in "conflict" with anything.
Intuitions are reasons I haven't learned to articulate. Which is why I'm
going to deliver some complex ethical judgements, which - I admit it - are
based largely on "unarticulated reasoning".

My judgements on the cryonics and involuntary-uploading issues are both
ambiguous. Cryonic freezing takes place after the subject is *dead*.
Therefore it does not really constitute kidnapping. You aren't placing
yourself in conflict with anyone's will, you're just picking up a dead body
and sticking it in liquid nitrogen. After revival, the corpsicle/newly
created individual can decide to commit suicide - unless involuntarily
uploaded; see below.

Likewise, for involuntary uploading, a *human* cannot decree it. But - In My
Controversial Opinion - by the time we have uploading, it won't be humans who
are doing it. Posthuman Powers might be able to ethically override our wills
with impunity, because they - to use the time-worn justification that _we_ can
never trust - really *would* be right! The requirements of mutually respected
motives are human protocols, derived from human history as a protection
against human nature. If Powers are never, ever mistaken - which could happen
- they might invent different rules.

I'm just not sure that game theory applies to posthumans. It's based on
rather tenuous assumptions about lack of knowledge and conflicting goals. It
works fine for our cognitive architecture, but you can sort of see how it
breaks down. Take the Prisoner's Dilemma. The famous Hofstadterian
resolution is to assume that the actions of the other are dependent on yours,
regardless of the lack of communication. In a human Prisoner's Dilemma, this,
alas, isn't true - but assuming that it is, pretending that it is, is the way out.

A Power, on the other hand, might actually set up a segregated, logical line of
reasoning that would, as a Turing machine, inevitably be the same as the
reasoning used by the other partner... so that the two would inevitably arrive
at the same decision.

The problem is that this doesn't work for a "human vs. Power" Prisoner's
Dilemma. The Power isn't pretending anything. It isn't acting out of respect
for anyone's motives. It isn't giving slack. It isn't following a
Tit-For-Tat strategy. It *knows*. A Power in a human/Power PD might be able
to work out the human's entire line of logic, deterministically, in advance,
and then - regardless of what the human would do - defect. (Or it might
cooperate. There are no real-life Prisoner's Dilemmas. Defecting always has
continuing repercussions.)
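
To make the payoff logic concrete, here is a minimal sketch in Python - not
anything from the original post, and the payoff numbers are just the standard
textbook values. It contrasts the classic analysis, where defection dominates,
with Hofstadter's resolution, where both players reason identically and only
the symmetric outcomes are reachable - and shows why a Power that can
deterministically predict the human's move is back to picking the best reply.

# A minimal sketch of the Prisoner's Dilemma logic discussed above.
# Payoff numbers are hypothetical (standard textbook values), not taken
# from the original post.

# Payoffs to (row player, column player) for Cooperate ("C") / Defect ("D").
PAYOFFS = {
    ("C", "C"): (3, 3),   # mutual cooperation
    ("C", "D"): (0, 5),   # row player is exploited
    ("D", "C"): (5, 0),   # row player exploits
    ("D", "D"): (1, 1),   # mutual defection
}

def best_reply(opponent_move):
    """Classic analysis: given the other's move, defection always pays more."""
    return max("CD", key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

def superrational_choice():
    """Hofstadter's resolution: assume the other's reasoning mirrors yours,
    so only the symmetric outcomes (C, C) and (D, D) are reachable."""
    return max("CD", key=lambda move: PAYOFFS[(move, move)][0])

# Two humans pretending their reasoning is correlated: both pick "C".
print(superrational_choice())            # C

# Classic game theory: whatever the other does, "D" is the better reply.
print(best_reply("C"), best_reply("D"))  # D D

# A Power that can simulate the human deterministically just plays the best
# reply to the human's (already known) move - symmetry no longer binds it.
predicted_human_move = superrational_choice()
print(best_reply(predicted_human_move))  # D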

But in the case of involuntary uploading, the Power might well disregard our
opinions entirely. It *knows* what is wrong and what is right, in terms of
ultimate meaning. It *knows* we're wrong. Unlike a human, it has no chance
of being wrong - no chance of schizophrenia, no chance of being in an elaborate
hallucinated test, *nothing*. We can never acquire that logic, being unsuited
to it by evolution, however seductive that logic may seem.

Given that all Powers share exactly the same goals with no conflict arising
even as a hypothetical, or that they can use the above
identical-line-of-reasoning logic to ensure that no uncertainty ever arises...
given that there is never a conflict of opinion... then the Powers have no
need for game theory! Even if some of the above conditions are violated,
they'd still have no need for game theory with respect to humans. What are
we going to do? Say, "Bad posthumans! No biscuit!" Does respect for
another's motives apply when you can simulate, and intimately understand, a
neural or atomic-level model of that person's brain?

Anyway, den Otter has, more or less by coincidence, managed to choose
ambiguous cases, those of cryonics and uploading.

If, however, somebody chooses to *destroy* their brain via explosives...
If, however, *you* are faced with the decision of force-uploading someone...
Or in any lesser conflict between free will and *your* principles...

Tough luck. You aren't a Power. You don't know. You are bound by the same
rules of game theory that everyone else accepts to make life livable. And if
you violate those rules, then the rest of us will have to enforce them.
You're just going to have to respect other people's opinions, even the
"obviously wrong", the "obscene", the "indefensible" ones. Even the fatal
ones. Because you and I are mortal and fallible, and - for all we know - the
whole world could be an elaborate computer simulation, designed for the sole
purpose of getting us to make this one wrong decision. For all we know, a
decision to force cryonics on others will provoke a worldwide backlash against
all technology.

My intuitions about this are hard to verbalize. They involve terms, such as
"causal density", which were invented less than a week ago. What I do know is this:

1) You cannot override the will of others. Death isn't that important.
2) I can't guarantee the Powers will say the same.
3) But *you* are not a Power.

-- 
         sentience@pobox.com      Eliezer S. Yudkowsky
          http://tezcat.com/~eliezer/singularity.html
           http://tezcat.com/~eliezer/algernon.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.