Anders Sandberg wrote:
> But this is not about whether we would be justified in killing Bin Laden
> if any opportunity presents itself - your post dealt with the deliberate
> genocide of millions of people!
While I haven't seen anyone else, anywhere, speak in terms of genocide with
respect to the US response to the 9-11 terrorist attacks (other than on this
list), I have seen a large number of people suggest indiscriminate bombing
and other means of killing large numbers of people.
The ethical considerations are the same whether a whole people is wiped out
or only one innocent person is killed. The assumption that it's okay to kill
one innocent person in order to save several other lives is at the root of
the utilitarian responses on this list to Robert's questions about whether
genocide might be a solution to the terrorist problem.
I can think of situations where I would be tempted to say it's okay: one is
where an airliner full of passengers is on a collision course with a
skyscraper. At the point at which it is clear that the passengers will be
dead within a few minutes regardless of any action I might take, I think it
would be acceptable to shoot down the plane.
I have asked myself what I would have done if I were on Flight 93, the one
where the passengers knew the likely destination of their plane, with a
hijacker holding a knife at the throat of another person, threatening to
kill that person if anyone else makes a move. What course of action should I
take? In this case, no matter what I do, innocent lives will be lost. This
is comparable to the example Eliezer brought up, where a murderous
government is attacked at the price of killing some innocent people. Based
on my past behavior in crisis situations, I'm fairly certain I would have
chosen to fight the hijacker, even knowing that a person's throat would be
slit the moment I made a move.
But this sort of exception to the rule against killing innocent people is
dangerous, because it can lead to agreeing that it is okay to kill innocent
people whose lives are going to end tomorrow or next year rather than in
five minutes, or whose lives are deemed worthless because they are unhappy
or live in superstitious ignorance. I think this is what Robert was
suggesting.
My own personal rule is to consider all my options and only to risk harm to
another person (whether or not that person has initiated force against me)
if there is absolutely no other choice that's likely to save my life. In
dangerous situations, my usual strategy is to flee, unless I'm backed into a
corner, in which case I am willing to fight to the death of either myself or
my opponent.
In the present terrorist case, I don't believe the US has been backed into a
corner, although I think its leaders perceive that it has been, for reasons
I've set forth in previous posts. This is the danger of my personal rule:
sometimes, in the heat of danger, fear or a lack of imagination prevents one
from considering all possible solutions.
> But if you don't understand what your values are, what you
> are trying to achieve and how this is valuable, then the calculation
> can't provide an optimal answer.
If one says only that one values human life, then cost-benefit analyses such
as Robert's make perfect sense. To avoid treating people as interchangeable
widgets, you have to value the individual person *as* a unique individual.
You have to state that you value each individual human's life, not just
generic human life.
Viewing people as widgets is the root cause of all sorts of nasty isms,
including racism.
>I get the worrying feeling that you
> have just assumed "minimize number of deaths before singularity" to be
> the value and optimization goal, and completely ignored issues of what
> kinds of *lives* there will be.
What kinds of lives one could expect to live in societies founded on various
philosophies is an important question to address, even if most of us may
feel as though we have no power to change the societies in which we live.
I've observed that people generally have more power than they realize.
> OK, playing along with this game: what about foreign policy? By behaving
> like this, the US would demonstrate to all other nations that it is
> dangerous and willing to use force to achieve its goals even when the
> victim has not attacked it. The logical conclusion for everybody else is
> to view the US as the new rogue nation and start preparing to deal with
> it. Even if China, Russia, the EU and Australia might not share the same
> pre-emptive idea, they would see where the threat lies and act
> accordingly. Hmm, suddenly the concept of MAD begins to rear its ugly
> head, doesn't it? And what happens when nano is being developed in this
> kind of scenario? You can guess - nano-MAD. Say hello to global
> ecophagy.
This is all true, but I think it's important to keep in mind that this
results ultimately from not valuing individual human lives.
> One should be able to voice *anything* to the extropians list, including
> the most vile hate propaganda - but it is our responsibility to rip such
> evil ideas to shreds according to the ideals of transhumanism.
I was not going to post anything on this thread until I read this. You're
right, Anders. Each of us is responsible for speaking up if we wish to
direct our own lives to any extent. Thank you for pointing this out.
Barbara