Re: Status of Superrationality

From: Hal Finney (hal@finney.org)
Date: Sat May 31 2003 - 17:11:04 MDT

    I see both the averagist and totalist positions as too extreme to be
    effective strategies for altruism. The averagist falls into the trap
    of endorsing the elimination of unhappy people, since that makes the
    average happiness level rise. And the totalist falls into the
    corresponding error of increasing the population beyond the carrying
    capacity, until everyone is just one step from committing suicide.
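
    A toy calculation, using purely invented numbers of my own, makes
    both traps concrete:

        # Illustrative sketch (invented numbers): the averagist and
        # totalist failure modes.
        happiness = [9, 7, 2, -3]                 # one unhappy person

        print(sum(happiness) / len(happiness))    # 3.75
        pruned = [h for h in happiness if h > 0]  # "eliminate" the unhappy
        print(sum(pruned) / len(pruned))          # 6.0 -- the average rises

        # The totalist trap: a huge crowd of barely-happy people still
        # beats a small, genuinely happy one on total happiness.
        crowded = [1] * 1000                      # everyone one step from the brink
        print(sum(happiness), sum(crowded))       # 15 vs 1000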

    All else being equal, everyone would agree that more and happier people
    are good. The problem is in how you judge the tradeoffs. At some
    point, adding more people is not good if it makes them less happy.
    And making them happier is not good if it reduces the numbers who can
    enjoy the happiness. We need a balance.

    Then there is the problem of inter-personal comparisons of happiness.
    It is very questionable whether this can be done properly. Some people
    may be internal "dramatists" who see relatively small changes in their
    circumstances as causing great swings in their happiness. Others are on
    a more even keel. Some would say that they are generally happy, others
    that they are generally unhappy. I don't see how we can say that person
    A's happiness ranges from -500 to 1000 while person B's happiness ranges
    from -50 to 20, and somehow combine these values numerically to decide
    that A's welfare is more important than B's.
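
    One way to see the difficulty (this is my own illustrative sketch,
    with made-up numbers): each person's happiness scale can be
    stretched by an arbitrary factor without changing anything about
    that person's own preferences, yet the cross-person sum, and hence
    the verdict on which world is better, turns entirely on those
    arbitrary factors.

        # Illustrative sketch: cross-person sums depend on arbitrary
        # per-person units of happiness.
        state_X = {"A": 10, "B": -5}   # each person's own self-reported scale
        state_Y = {"A": 2,  "B": 4}

        def total(state, scale):
            # The scale factors are exactly the numbers we have no
            # principled way to choose.
            return sum(scale[p] * h for p, h in state.items())

        equal    = {"A": 1, "B": 1}
        shrunk_B = {"A": 1, "B": 0.1}
        print(total(state_X, equal), total(state_Y, equal))        # 5 vs 6: Y wins
        print(total(state_X, shrunk_B), total(state_Y, shrunk_B))  # 9.5 vs 2.4: X wins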

    Economists manage to avoid this problem. Essentially, they assume that
    happiness is not inter-comparable. If you do this, your job becomes
    easier. All you can have as your goal is Pareto-optimality.

    A Pareto-optimal state is one where you can't make anyone happier
    without making someone else less happy. Generally in economics this is
    judged solely in terms of distribution of goods, but I think it could
    be generalized to compare world-states, making it a tool for altruists.
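
    Stated as code, the definition is short. This is my own minimal
    sketch, and it assumes a "world-state" can be summarized as a tuple
    of per-person happiness levels:

        # Illustrative sketch of Pareto dominance over world-states.
        def dominates(x, y):
            """x dominates y: no one worse off, and someone strictly better off."""
            return (all(a >= b for a, b in zip(x, y))
                    and any(a > b for a, b in zip(x, y)))

        def pareto_optimal(state, feasible):
            """Optimal means no feasible state dominates it."""
            return not any(dominates(other, state) for other in feasible)

        states = [(3, 3), (4, 3), (5, 1), (2, 5)]
        print([s for s in states if pareto_optimal(s, states)])
        # [(4, 3), (5, 1), (2, 5)] -- only (3, 3) is dominated, by (4, 3)

    Note that every comparison here is within a single coordinate, that
    is, within a single person; no number belonging to one person is
    ever weighed against a number belonging to another.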

    Now, it might seem at first that rearranging goods to make one person
    happier will inevitably require making someone else less happy. However that
    is not the case; it is often possible for A and B to exchange goods such
    that both sides end up happier. These are what we know as voluntary
    trades. A Pareto-optimal state is one where all voluntary trades have
    been allowed to occur. Once you have reached this state, no one can be
    made happier without hurting someone else.
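
    The textbook illustration, with valuations I have invented purely
    for the example: A holds an apple but prefers oranges, B holds an
    orange but prefers apples, and the swap leaves both strictly
    happier.

        # Illustrative sketch: a voluntary trade as a Pareto improvement.
        value = {"A": {"apple": 1, "orange": 3},
                 "B": {"apple": 3, "orange": 1}}

        before = (value["A"]["apple"],  value["B"]["orange"])   # (1, 1)
        after  = (value["A"]["orange"], value["B"]["apple"])    # (3, 3)
        print(before, after)  # both coordinates rise: 'after' dominates 'before'

    Again each person is only compared with his or her own earlier
    self, so no cross-person scale is needed.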

    From the altruist's perspective, this is not a bad solution. If everyone
    is happier in state X than in state Y (or at least, no less happy),
    then X is a better goal state than Y for the altruist. Totalists,
    averagists, and altruists of all stripes would agree on this, I think.
    The Pareto-optimal states are just those that no other state
    "dominates" in terms of happiness: states for which there is no
    alternative where everyone is at least as happy and someone is happier.

    The problem is that Pareto-optimality is too weak a condition. There are
    too many states that are Pareto-optimal but which don't satisfy our
    instincts about altruism. For example, a state where one guy has
    everything and everyone else has nothing could be Pareto-optimal.
    There's no way to improve anyone's state without taking something from
    the guy who has all the goods. But this won't sound like a very good
    world to most altruists.
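
    To make that concrete with the same kind of toy numbers as before
    (again an invented example of my own):

        # Illustrative sketch: an extremely unequal state can still be
        # Pareto-optimal, because any redistribution hurts the one
        # person who holds everything.
        def dominates(x, y):
            return (all(a >= b for a, b in zip(x, y))
                    and any(a > b for a, b in zip(x, y)))

        unequal  = (100, 0, 0)
        feasible = [(100, 0, 0), (60, 20, 20), (34, 33, 33)]
        print(any(dominates(s, unequal) for s in feasible))   # False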

    Nevertheless, Pareto-optimality seems to be the best you can do if you
    reject inter-personal comparison of happiness. To go beyond it, you
    have to start taking from Peter to pay Paul, and ultimately that means
    you have to decide that Peter's happiness is less important than Paul's
    (where there may be multiple Pauls and just one Peter, for example).
    You have to be able to come up with a number that tells just how much
    each person's happiness is affected by changes in the world, and some
    way of comparing these numbers between people.
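
    Going beyond Pareto-optimality amounts to writing down something
    like a weighted sum of happiness, and the weights are exactly the
    interpersonal judgement in question. Another sketch with invented
    numbers:

        # Illustrative sketch: taking from Peter to pay the Pauls only
        # looks good once you commit to explicit weights on each
        # person's happiness.
        before = {"Peter": 10, "Paul1": 1, "Paul2": 1}
        after  = {"Peter": 4,  "Paul1": 5, "Paul2": 5}  # a transfer Peter would refuse

        def welfare(state, weights):
            return sum(weights[p] * h for p, h in state.items())

        equal     = {"Peter": 1, "Paul1": 1, "Paul2": 1}
        pro_peter = {"Peter": 3, "Paul1": 1, "Paul2": 1}
        print(welfare(before, equal),     welfare(after, equal))      # 12 vs 14
        print(welfare(before, pro_peter), welfare(after, pro_peter))  # 32 vs 22

    Whether the transfer counts as an improvement turns entirely on the
    weights, and the weights are the numbers the altruist has to justify.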

    Making these kinds of judgements looks to me like a very tricky
    philosophical problem. People differ enormously in their attitudes
    towards pleasure and reward; some of their responses are biological,
    and some are due to training. An altruist had better be prepared to
    address these thorny philosophical issues if he is going to compare
    happiness levels between individuals.

    Hal


