Eugene Leitl wrote:
>
> Difference in relative evaluation of damage is also what drives violence
> autofeedback loops. "They killed 5 of us, men who were in their prime. The
> widows are grieving, the enemy has torn a hole into our midst. We will
> retaliate, and hurt them as much as they hurt us. This will require at
> least 50 of them.". Iterate.
>
> This is the neolithic algorithm that is still in operation. It arose when
> communities numbered around 100 individuals, and went to war with
> neighbouring tribes. Conflicts petered out naturally then. Allowing this
> algorithm to guide our actions now is suicide.
Axelrod and Hamilton, investigating strategies for the iterated
Prisoner's Dilemma, found that it apparently pays to be nice (never the
first to defect), retaliatory (not insensitive to defections), and
forgiving (returning to niceness after one retaliation). "Tit for Two
Tats" even came close to outcompeting Tit for Tat.
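A toy sketch of that setup in Python, using the standard tournament
payoffs (T=5, R=3, P=1, S=0); the strategy lineup and the 200-round
match length are illustrative assumptions on my part, not a
reconstruction of the actual tournament entries:

C, D = "C", "D"  # cooperate, defect

# PAYOFF[(my_move, their_move)] -> my score; Axelrod's T=5, R=3, P=1, S=0
PAYOFF = {(C, C): 3, (C, D): 0, (D, C): 5, (D, D): 1}

def tit_for_tat(my_history, their_history):
    # Nice, retaliatory, forgiving: open with C, then copy their last move.
    return their_history[-1] if their_history else C

def tit_for_two_tats(my_history, their_history):
    # More forgiving: defect only after two consecutive defections.
    return D if their_history[-2:] == [D, D] else C

def always_defect(my_history, their_history):
    # An unconditional exploiter, included here as a baseline opponent.
    return D

def play_match(strat_a, strat_b, rounds=200):
    # Play one iterated match; return (score_a, score_b).
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

for a in (tit_for_tat, tit_for_two_tats, always_defect):
    for b in (tit_for_tat, tit_for_two_tats, always_defect):
        sa, sb = play_match(a, b)
        print(f"{a.__name__:17} vs {b.__name__:17}: {sa} / {sb}")

The two reciprocators cooperate perfectly with each other; the
difference only shows against strategies that defect now and then,
where the extra round of forgiveness in Tit for Two Tats breaks the
echo of mutual retaliation at the cost of one more exploited round.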
Unfortunately, this fails to take politics into account.
When its actions must be justified in front of a larger audience trying to
enforce justice, an evolved organism will overestimate the harm done to
itself by others, and underestimate the harm its own actions deal to
others. Combine this with the tendency to value only the lives of your
own tribe and to count the enemy as dust, and the stage is indeed set for
suicide. Conflicts of this sort did not "peter out naturally" so much as
they ended in the extinction of one tribe or the other.
> "They killed 5 of us, men who were in their prime. The
> widows are grieving, the enemy has torn a hole into our midst. We will
> retaliate, and hurt them as much as they hurt us. This will require at
> least 50 of them.". Iterate.
This is what I really, really object to about the talk of going to war
over the World Trade Center. It may be worth going to war to prevent the
future nuclear bombing of New York. But killing a hundred thousand Afghan
citizens to prevent ten future terrorist attacks costing an average of
five thousand American lives apiece would be bad mathematics even if it
worked. Killing twenty thousand Afghans as *revenge* for an incident that
took five thousand American lives is bad mathematics, even if you believe
in revenge.
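Spelled out with those numbers: ten attacks at five thousand lives
apiece is 10 x 5,000 = 50,000 lives saved for 100,000 lives spent, a
net loss of 50,000 even if every life counts equally; and 20,000 lives
taken for 5,000 lost is a 4:1 overpayment.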
If you want revenge for the World Trade Center, then drop a grand total of
THREE bombs on Afghanistan, killing no more than FIVE thousand Afghans.
Nobody even suggests this. They want war.
And the part that makes it really appalling is that "my fellow Americans"
are not just talking about a vastly disproportionate revenge - because it
is revenge, as they see it - they are talking about taking this vast
revenge on a small country that is totally unable to retaliate in kind.
Would America even consider invading Afghanistan if Afghanistan could turn
around and invade the US? Of course not. If we ourselves were some
dirt-poor Third World country located next to Afghanistan, we wouldn't
invade because we might get invaded in turn, and the World Trade Center
isn't worth that. We would, as I said before, drop a grand total of three
bombs and kill five thousand Afghans, so that if the Afghans then decided
to retaliate for that, it would be a bearable blow. The rhetoric about
grinding Afghanistan into rubble is not just aggression, it is bullying.
We wouldn't be so disproportionate in our revenge if the target had any
hope of taking revenge in turn.
Like I said, I'm not a pacifist. I do think there are times when war is
necessary and justified. But the sole possible justification for war
against Afghanistan (and Iraq!) is the elimination of rogue states as
breeding grounds for terrorists with weapons of mass destruction. The
sole possible justification for war lies in things that haven't happened
yet, because the World Trade Center is not anywhere near as horrible as a
war. And the part that disgusts me is not that human nature leads
countries to take disproportionate revenge, but that human nature leads
countries to take disproportionate revenge when they think they can get
away with it.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence