A specific refutation of genocide

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Sep 16 2001 - 21:13:25 MDT


"Robert J. Bradbury" wrote:
>
> I know that by raising this issue, I am going to take a large
> amount of flak.

Yes, you will. Let us hope that you are defeated, and that post-defeat
you are not quoted as being representative of Extropianism.

But to explain why genocide is un-Extropian requires more than horror.
Unpopularity does not prove falsity. So I'm going to skip the exercise
period I was about to take, and focus solely on responding to Robert
Bradbury's proposal. I hope that his adherence to the ethical principle
of not concealing his beliefs does indeed turn out to outweigh the
specific, obvious, and predictable consequences of his posting those
beliefs to the Extropian mailing list.

> From my perspective the analysis is relatively simple. If the
                                                          ^^
> population of Afghanistan, or the people supported by them
> delay the onset of an era of advanced technological capabilities
> by 6 months or more, the value of their lives is negative.
                          ^ ^^
                         net will have been

Post-Singularity, some members of humanity may request that their "score"
be totalled up (i.e., what was the net effect of their pre-Singularity
lives). It seems likely that the scores of many individuals will be
negative, including class-action lawyers, many Congressfolk, and of course
terrorists. It may also be that, because of the Taliban, the mean net
score of "all the individuals in Afghanistan" will be negative. It may
even be that if a natural catastrophe were to wipe out the nation of
Afghanistan, superintelligences post-Singularity would view it as a
definite net positive in history, the way some historians now view the
Black Death.

Nonetheless, it is impermissible for a modern-day human to kill people
whom she expects to have a negative score post-Singularity. The action
will probably not accelerate the Singularity, and the reasoning behind it
is flawed.

There are many reasons for this, and I'll probably only be able to touch
on a few, but I'll try.

PART I:

First, let's review the simple reasons - the ones that don't touch upon
that strangest of ethical heuristics, "the end does not justify the
means".

Within the first-order scenarios, we have the big "IF" in the logic, i.e.
"IF the Afghani lives turn out to be negative". Maybe somewhere in
Afghanistan is the next Einstein or Newton. Maybe the Third World
countries, like Gollum, still have some part yet to play. Maybe today's
rogue states will someday act as a haven for Luddite-hunted
technologists instead of terrorists. There are AI researchers today in
Germany and Japan, nations that were themselves once enemy states.
It's easy to construct scenarios in which the first-order action
of wiping out a country goes wrong directly.

Now you could say that these are all possibilities as tenuous and
constructed as Pascal's Wager, and that equal and opposite scenarios in
which Afghanistan contains the next Hitler are just as easy to construct.
And this is true; uncomfortable, but true. Still, there are many possible
outcomes, and if the negative and positive possible outcomes are equally
balanced, then all that remains is the definite and certain short-term
cost of twenty-five million deaths. Stalin killed tens of millions of
people, allegedly in pursuit of benefits that turned out to be simply
nonexistent. So the difficulty of accurately modeling the world does need
to be considered. To the extent you really don't know the effects of a
human's life, the net value converges to the intrinsic valuation of "one".
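
To spell out the structure of that last point (the probabilities and
payoffs below are invented purely for illustration, not estimates of
anything): when the positive and negative instrumental scenarios are
symmetric, the expected instrumental effect cancels out, and what is
left is the intrinsic value of the life itself.

    # Expected net value of one life = intrinsic value + expected
    # instrumental effect.  Scenario numbers are invented solely to
    # show the structure of the argument.
    INTRINSIC = 1.0

    scenarios = [                # (probability, instrumental effect)
        (0.5, +1000.0),          # shelters the next Einstein or Newton
        (0.5, -1000.0),          # shelters the next Hitler
    ]

    expected_instrumental = sum(p * effect for p, effect in scenarios)
    print(INTRINSIC + expected_instrumental)   # 1.0: the intrinsic "one" remains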

Then there are the second-order reasons, having to do with the reaction of
other humans to the action of genocide. Exterminating the Afghani would
probably send an equal or much greater number of states rogue, and result
in the rise of hundreds of new terrorist organizations. Justifiably so!
Using nuclear weapons against Afghanistan might easily result in the use
of nuclear weapons against the United States and would most certainly
render the United States a pariah among nations. International law would
be dead. The global political situation would be destabilized enormously.

Within this list, even the proposal of committing genocide has damaged
your reputation. Many respondents note that you have probably just
damaged the Extropian cause and managed to slow down the Singularity a
little. Maybe this is a necessary cost of ethically required honesty, but
it's a cost nonetheless.

Of course, the second-order reasons, as presented, are strictly
environmental; they note that people react badly to genocide without
considering whether or not genocide is unethical and thus whether the
reactions are rational. It is sometimes ethical to defy the majority
opinion, although that doesn't change the real consequences of doing so.

PART II:

The bad reaction to the proposed genocide of Afghanistan *is* rational,
and the proposal *is* unethical.

In Robert Bradbury's worldview, there is a certain future event such that
the delay of this future event by one additional day results in 150,000
additional unnecessary deaths. (A feature shared by my own worldview as
well.) Under Robert Bradbury's proposal as I understand it, if the
predicted result of 25,000,000 deaths is that the event will be
accelerated by at least six months, this leads to taking the action of
killing those 25,000,000 people.
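
To make the implied arithmetic explicit (these are the figures stated
above, not new estimates): at 150,000 deaths per day of delay, a
six-month acceleration corresponds to roughly 27 million deaths
averted, which is the break-even logic behind weighing the 25,000,000
deaths against it.

    # Break-even arithmetic implied by the proposal as characterized
    # above.  The figures are the ones in this thread, not estimates
    # of my own.
    deaths_per_day_of_delay = 150000
    days_in_six_months = 183

    deaths_averted = deaths_per_day_of_delay * days_in_six_months
    print(deaths_averted)               # 27450000, about 27.5 million
    print(deaths_averted > 25000000)    # True: hence the claimed break-even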

Different worldviews, however, may have different expectations, as well as
different morals, and even different standards for rationality. These
different worldviews may regard Robert Bradbury's actions as undesirable;
may furthermore decide that certain actions are desirable which Robert
Bradbury finds to be undesirable. The key insight is that these different
worldviews can be treated as being analogous to players in the Prisoner's
Dilemma. Refusing to kill even when it seems justified is analogous to
cooperating, and employing "The ends justify the means" logic is analogous
to defecting. Furthermore, and this is the most important point, this
analogy still holds EVEN IF ONE WORLDVIEW IS RIGHT AND THE OTHER ONES ARE
WRONG.

Let's say that we have Robert Bradbury in one corner of the room, and Bill
Joy in another corner. Robert Bradbury, because Bill Joy is spreading
technophobia, decides that it's moral to shoot Bill Joy out of hand. Bill
Joy, because Robert Bradbury wrote an article about Matrioshka Brains,
decides it's okay to shoot Robert Bradbury out of hand. Bang, bang; they
both die. If their negative and positive values cancel out, the net loss
is two human lives. If Robert Bradbury's positive score would have been
greater than Bill Joy's negative score, the game is *very* negative-sum
and the Singularity has been delayed.
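
As a toy piece of bookkeeping for that scenario (the scores are
invented; only the sign of the sum matters): if the two scores cancel,
the world is down exactly the two lives; if the positive score
dominates, the loss is far larger.

    # Toy bookkeeping for the mutual-shooting scenario.  Scores are
    # invented purely to illustrate why the game is negative-sum.
    LIFE = 1                     # intrinsic value of one human life

    def net_loss(bradbury_score, joy_score):
        """What the world loses if both shoot, versus both living."""
        return (bradbury_score + joy_score) + 2 * LIFE

    print(net_loss(+5, -5))      # 2: scores cancel, two lives lost
    print(net_loss(+50, -5))     # 47: *very* negative-sum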

The reason not to shoot people who you think deserve it: If everyone
shot the people who they thought deserved it, the world would be much
worse off - from your perspective, and from everyone's perspective, simply
because so many people would die, once the shooting started. But
especially from the Extropian, scientific perspective, because scientists
and technologists of all stripes would be among the first victims.

This doesn't change even if you assume that one worldview really is right
and that all the other ones really are wrong. If you give all worldviews
equal credence, if you assume symmetry, then your perspective is analogous
to that of an altruist watching an iterated Prisoner's Dilemma being
played out among multiple people, all of whom you care about equally;
you want
them to play positive-sum games and not negative-sum games so that the
global positive sum is maximized. Specifying that one worldview is right
and the others wrong is analogous to taking the perspective of a single
player whose goals are wholly personal; that player will still adopt a
strategy of playing positive-sum games and cooperating, because that is
how you win in the iterated Prisoner's Dilemma.
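
Here is a minimal sketch of that game-theoretic point, using the
textbook iterated Prisoner's Dilemma payoffs (the strategies and
numbers are standard illustrations, not anything from the proposal
under discussion): against an opponent who reciprocates, even a player
who cares only about its own score does better by cooperating than by
defecting.

    # A minimal iterated Prisoner's Dilemma with the standard payoffs.
    # Illustrative only: against a reciprocating opponent, even a
    # purely self-interested player scores higher by cooperating.

    PAYOFF = {                   # (my move, their move) -> my payoff
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def tit_for_tat(history):
        """Cooperate first, then copy the opponent's previous move."""
        return "C" if not history else history[-1][1]

    def always_defect(history):
        return "D"

    def always_cooperate(history):
        return "C"

    def play(strategy_a, strategy_b, rounds=100):
        hist_a, hist_b = [], []  # each entry: (own move, opponent's move)
        score_a = score_b = 0
        for _ in range(rounds):
            a, b = strategy_a(hist_a), strategy_b(hist_b)
            score_a += PAYOFF[(a, b)]
            score_b += PAYOFF[(b, a)]
            hist_a.append((a, b))
            hist_b.append((b, a))
        return score_a, score_b

    # Against a reciprocator, cooperation wins even for a selfish player:
    print(play(always_cooperate, tit_for_tat))   # (300, 300)
    print(play(always_defect, tit_for_tat))      # (104, 99): far lower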

The analogy here is not between a selfish player and a selfish worldview,
but between a purely selfish player and a worldview that gives zero
credence to other worldviews, and between a purely altruistic player and a
worldview that gives equal credence to all other worldviews. It should be
understood that this analogy is STRICTLY MATHEMATICAL and is not in any
sense a moral analogy between selfishness and daring to defy the majority
opinion. (I go along with Reverend Rock on this one: "An open mind is a
great place for other people to dump their garbage." I may value a
Luddite's life and volition but I don't value her opinion.) The point is
that a selfish player will *still* cooperate. You don't have to be a
cultural relativist about the value of technology to refrain from shooting
Ralph Nader.

Not all cases of "the end justifies the means" fail. Not all instances of
military force go wrong. The American Revolution involved killing people
for what seemed like a good reason - and it worked, establishing a lasting
happiness that went on to vastly outweigh the blood spent. But the
twentieth century saw the rise of Stalin and Hitler, who killed millions
under justifications that never materialized. It also saw the rise of
Martin Luther King and Gandhi, who achieved the lasting respect of the
entire world for their espousal of refraining from violence even in
contexts where it appears justified. So today, in the world that World
War II created, we tend to be more cautious.

Tutored by the twentieth century, the moral people of today's world have
agreed among themselves to value all human lives equally. Some, though
not all, say that a murderer becomes targetable and can be killed to
prevent further killing; none say that it is moral to kill arbitrary
targets in an attempt to optimize the system. To get along with the
cooperators of this world, you must agree to adopt cooperation as a
working theory, even if you don't believe that it reflects the ultimate
underlying morality. To do otherwise will be seen as a "defection" by
exactly those players that you most want to interact with in the future.
Including me. I would interpret what you've done so far as "willingness
to take flak as a result of playing Devil's Advocate", rather than "Robert
Bradbury announces his intention to defect", but others may not see it
that way. (Though I sort of wish you'd asked me or Robin Hanson or Nick
Bostrom about this controversial idea of yours before posting it directly
to the Extropians list...)

POST SCRIPTUM:

Students of evolutionary psychology will know that organisms are
adaptation-executors rather than fitness-maximizers; selfishly motivated
cooperation can turn into an impulse toward genuine altruism, either
because the latter is cognitively simpler and easier to evolve, or because
a potential partner will prefer to partner with a genuine altruist rather
than a "fair-weather friend". The mathematical analogy for
worldview/players is an impulse to *genuinely* value all other worldviews
equally, not just as a working theory adopted for the sake of
cooperation. In small doses, this can counter the innate human tendency
toward self-overestimation and contribute to meta-rationality. In large
doses, it turns into cultural relativism.

I do *not* like cultural relativism. Unlike people, worldviews do have
different inherent values; some are right and some are wrong. Science,
for example, can be seen as a set of rules for worldview interaction that
will, over time, favor correct worldviews over incorrect ones. This
involves allowing some initial credence to all worldviews, but the amount
of credence allowed can rapidly go to zero if the worldview fails to make
original correct predictions, or to one if the new worldview
outpredicts a current theory.
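
A toy Bayesian sketch of that "rapidly go to zero or one" dynamic (the
likelihoods are invented; only the direction of movement matters): a
worldview that keeps making correct original predictions has its
credence driven toward one, and its rival's toward zero.

    # Toy Bayesian sketch of credence going rapidly to zero or one.
    # The likelihoods are invented; only the direction matters.

    def update(credence_a, likelihood_a, likelihood_b):
        """Posterior credence in worldview A after one observation."""
        weight_a = credence_a * likelihood_a
        weight_b = (1 - credence_a) * likelihood_b
        return weight_a / (weight_a + weight_b)

    credence_a = 0.5                   # equal initial credence
    for _ in range(10):                # A keeps predicting correctly
        credence_a = update(credence_a, likelihood_a=0.9, likelihood_b=0.3)
    print(round(credence_a, 5))        # 0.99998: driven toward one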

There is no ethical imperative to value someone else's worldview. There
is an ethical imperative to not take actions that value that other
person's worldview at absolute zero - for example, suppressing an idea
that you believe to be incorrect. (What if someone else, believing your
own worldview to be definitely incorrect, censored you?) But that refusal
to censor is not the same as actually changing your own opinions based on
Ralph Nader's.

In the case of the great iterated non-zero-sum game that is human
existence, I think that the genuine altruism that has evolved is more
valid than the game-theoretical selfishly motivated reciprocal altruism
that is evolution's "motive". Modern-day humans have sex with birth
control, pursuing the act apart from its evolutionary purpose, and I
think that's sane; in the same way, I can engage in altruism that I
regard as an end in itself, apart from the reciprocal payoff that was
its evolutionary rationale. (Of course the philosophy is more complex
than this! But anyway...)

Still, I do not believe that cultural relativism, the unconditional
imperative, is more valid than the conditional imperative of agreeing to
not take action against other worldviews for the sole reason of not
wanting other worldviews to take action against you. I think that wanting
all worldviews to play fair with you is the legitimate reason for playing
fair yourself, and that there is *not* a moral imperative to actually
believe those other worldviews, to let them impact your beliefs. Beliefs
should be strictly governed by reality. Meta-rationality (see Robin
Hanson) is a good reason for taking the beliefs of others into account as
additional sensory data which may be useful for finding the truth. The
human tendency toward self-overestimation is a good reason to pay attention to
the opinions of others and not just your own. But the need to get along
with the neighbors is *not* a valid reason to change your beliefs. Your
neighbors may be wrong.

POST POST SCRIPTUM:

The rest of the Universe doesn't necessarily obey the same rules that hold
in our own backyard. The above analysis of ethics is geared to
human-level intelligence, the human emotional complex, observed human
history, the rules of human existence, and the adaptational base created
by the human ancestral environment.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


