RE: Status of Superrationality

From: Rafal Smigrodzki (rafal@smigrodzki.org)
Date: Tue May 27 2003 - 19:04:49 MDT

    Hal Finney wrote:

    >
    > Without analyzing it in detail, I think this level of honesty,
    > in conjunction with the usual game theory assumption of rationality,
    > would be enough to imply the result that the two parties can't
    > disagree. Basically the argument is the same, that since you both
    > have the same goals and (arguably) the same priors, the fact that the
    > other party judges an outcome differently than you must make you no
    > more likely to believe your own estimation than his. Since the game
    > theory matrix makes the estimated utilities for each outcome common
    > knowledge, the two estimates must be equal, for each outcome.

    ### But isn't the main problem an irreconcilable difference in goals
    between the players, that is, a difference in how they weigh outcomes? The
    simplified depiction of the averagist vs. the totalist is just the
    beginning: you could imagine all kinds of global payoff matrices describing
    attitudes towards outcomes affecting all objects of value, and even
    differences in what counts as an object of value. There are those who favor
    an asymmetric relationship between wishes and their fulfillment (meaning
    that total rather than average utility is to be maximized, while at the
    same time a limited list of outcomes must be minimized). There are
    fundamental differences in the lists of subjects whose preferences are to
    be entered into the ethical equation, and in the methods for the relative
    weighing of such preferences.

    I would contend that even perfectly rational altruists could differ
    significantly about their recipes for the perfect world.
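
    As a toy illustration of the averagist/totalist split (the welfare
    numbers below are made up, and the code is just a sketch of the
    arithmetic, not anyone's actual decision procedure):

        # Two candidate worlds described by the same facts: lists of
        # individual welfare levels (hypothetical numbers).
        world_a = [10, 10]             # two people, each quite well off
        world_b = [6, 6, 6, 6, 6]      # five people, each moderately well off

        def total_utility(world):
            return sum(world)

        def average_utility(world):
            return sum(world) / len(world)

        # The totalist ranks B above A (30 > 20); the averagist ranks A
        # above B (10.0 > 6.0), even though both agree on every factual
        # input. The disagreement is in the aggregation rule, not the data.
        print(total_utility(world_a), total_utility(world_b))      # 20 30
        print(average_utility(world_a), average_utility(world_b))  # 10.0 6.0

    The same point applies to the other value differences listed above:
    shared priors and shared factual beliefs still leave the ranking of
    outcomes underdetermined.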

    Rafal


