From: Hal Finney (hal@finney.org)
Date: Wed May 28 2003 - 15:13:30 MDT

Regarding the question of whether different altruists could have different
goals, Wei wrote:

> But the totalist and the averagist do not have the same goals. One wants
> to maximize total happiness, and the other wants to maximize average
> happiness.

A couple of comments on this. First, I think it was Wei himself who
pointed out to me a few years ago that the "averagist" philosophy is
internally inconsistent. This is the view that we should maximize
the average happiness of human beings, i.e. the sum of all happiness
divided by the number of people. The problem is that half the people
are of below average happiness, hence they are dragging down the curve.
By killing off the less happy half of the human race, we can greatly
increase average happiness, hence this is an acceptable goal for the
averagist. However it does not stop there, for having done so, the
remaining half can once again be divided into the more and less happy,
and we can justify killing off the less happy half of the remainder.
This is repeated until only one person is left, the single happiest
individual of the entire human race, who probably has something wrong
with him.
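
The runaway logic above can be sketched as a toy simulation. The happiness
values are made up, and I use "cull everyone below the current average" as a
close stand-in for "kill off the less happy half":

```python
# Toy illustration of the averagist's runaway logic: repeatedly
# remove everyone whose happiness is below the current average.
# The happiness values here are purely hypothetical.
happiness = [1, 3, 4, 7, 8, 9, 2, 6]

population = sorted(happiness)
while len(population) > 1:
    avg = sum(population) / len(population)
    survivors = [h for h in population if h >= avg]
    if len(survivors) == len(population):
        break  # everyone is exactly at the average; no one left to cull
    population = survivors

# Each pass raises the average, so the process only stops when the
# single happiest individual (or a tie at the top) remains.
print(population)  # -> [9]
```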

I think that totalism is also flawed, as there is no absolute scale on
which to measure happiness. Decision theory shows that utility scales
are arbitrary up to (at least) a positive affine transformation. That means
that we can't tell whether a person has a net positive or a net negative
happiness. If we measure happiness on a scale from 1-10 then it's always
positive. If we measure it from -10 to 10 it might be negative. I would
argue that there is no meaningful answer to the question of whether a
given person's net happiness contributes positively or negatively to
the total for the whole human race. And of course this leaves aside
the whole problem of inter-personal comparisons of utility.
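
The point about rescaling is just arithmetic, and a small sketch makes it
concrete. The map and its constants below are hypothetical, chosen so that a
1-to-10 scale lands roughly on a -10-to-10 scale:

```python
# Two affinely related utility scales encode the same preferences,
# yet disagree on the sign of a person's "net happiness".
def rescale(u, a=2.0, b=-11.0):
    # Positive affine map u' = a*u + b with a > 0; these particular
    # constants send the 1..10 scale onto roughly -9..9.
    return a * u + b

scale_1_to_10 = [3, 5, 8]                       # always positive here
rescaled = [rescale(u) for u in scale_1_to_10]  # [-5.0, -1.0, 5.0]

# The preference ordering is unchanged by the rescaling...
assert sorted(scale_1_to_10) == scale_1_to_10
assert sorted(rescaled) == rescaled
# ...but the sign of each value is not: two of the three people now
# have "negative" happiness, with no fact of the matter either way.
```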

Now, these are rather simplistic analyses. In practice, altruists
probably have different degrees of weighting which they would apply to
the happiness of people and the total number of people. I believe that
virtually all altruists will have a degree of uncertainty about whether
a given increment in the number of people is balanced by a decrement in
average happiness, or vice versa.

Based on this uncertainty, I still think it is possible that altruists
actually do agree on their goals, but disagree on how to get there.
The issues of how to balance different people's levels of happiness,
as well as how to weight the total number of people, are questions of
tactics and not fundamental goals. The goal is to maximize the happiness
of the human race. Philosophical, scientific and pragmatic uncertainty
makes it hard to know how best to achieve that goal. From these areas
of uncertainty arise the disagreements among altruists about the best
course of action.

I'll send another message discussing the "inability to disagree" results.

Hal

This archive was generated by hypermail 2.1.5 : Wed May 28 2003 - 15:28:46 MDT