Re: Status of Superrationality

From: Jef Allbright (jef@jefallbright.net)
Date: Tue May 27 2003 - 20:32:11 MDT

    Lee Corbin wrote:
    > Rafal writes
    >
    >> Hal Finney wrote:
    >>
    >>> Without analyzing it in detail, I think this level of honesty,
    >>> in conjunction with the usual game theory assumption of rationality,
    >>> would be enough to imply the result that the two parties can't
    >>> disagree. Basically the argument is the same, that since you both
    >>> have the same goals and (arguably) the same priors, the fact that
    >>> the other party judges an outcome differently than you must make
    >>> you no more likely to believe your own estimation than his. Since
    >>> the game theory matrix makes the estimated utilities for each
    >>> outcome common knowledge, the two estimates must be equal, for each
    >>> outcome.
    >>
    >> ### But isn't the main problem an irreconcilable difference in the
    >> goals between players, the difference in weighing outcomes? The
    >> simplified depiction of the averagist vs. the totalist is just the
    >> beginning: you could imagine all kinds of global payoff matrices,
    >> describing attitudes towards outcomes affecting all objects of
    >> value, and even differences in what may be considered an object of
    >> value. There are those who favor asymmetric relationships between
    >> wishes and their fulfillment (meaning that while the total rather
    >> than average utility is to be maximized, at the same time a limited
    >> list of outcomes must be minimized). There are fundamental
    >> differences in the lists of subjects whose preferences are to be
    >> entered into the ethical equation, and in the methods for the
    >> relative weighing of such preferences.
    >
    > At this stage, I'm not going to claim that I understand what you
    > have written. But would you care to comment upon
    >
    > http://hanson.gmu.edu/deceive.pdf
    >
    > It mentions the annoying result that "if two or more Bayesians
    > would believe the same thing given the same information (i.e.,
    > have "common priors"), then those individuals cannot knowingly
    > disagree. Merely knowing someone else's opinion provides a
    > powerful summary of everything that person knows, powerful
    > enough to eliminate any differences of opinion due to differing
    > information."
    >
    > I could certainly use a hand in getting to the bottom of this.
    >
    > Lee
    >
    >> I would contend that even perfectly rational altruists could differ
    >> significantly about their recipes for the perfect world.
    >>
    >> Rafal

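    The result Hal and Hanson describe above is Aumann's agreement theorem,
    and the way two Bayesians talk themselves into agreement can be made
    concrete. What follows is a minimal sketch in the spirit of Aumann (1976)
    and the Geanakoplos-Polemarchakis "we can't disagree forever" process;
    the nine-state space, the two partitions, and the event below are toy
    assumptions invented for the demonstration, not anything from Hanson's
    paper:

        # Two agents share a uniform common prior over nine states but hold
        # different private information (partitions). Each round they announce
        # their posterior for an event; each announcement becomes common
        # knowledge and refines what both know, until the posteriors agree.
        from fractions import Fraction

        states = set(range(9))
        prior = {w: Fraction(1, 9) for w in states}  # common prior (uniform)

        # Each agent's private information is a partition of the state space.
        partition_1 = [{0, 1, 2}, {3, 4, 5}, {6, 7, 8}]
        partition_2 = [{0, 3, 6}, {1, 4}, {2, 5, 7, 8}]

        event = {0, 4, 8}  # the proposition whose probability they debate
        true_state = 4

        def cell(partition, w):
            """The partition cell containing state w."""
            return next(c for c in partition if w in c)

        def posterior(info, ev):
            """P(ev | info) under the common prior."""
            return sum(prior[w] for w in info & ev) / sum(prior[w] for w in info)

        public = set(states)  # everything both agents commonly know so far
        while True:
            info_1 = cell(partition_1, true_state) & public
            info_2 = cell(partition_2, true_state) & public
            p1, p2 = posterior(info_1, event), posterior(info_2, event)
            print(f"agent 1 announces {p1}, agent 2 announces {p2}")
            if p1 == p2:
                break
            # Announcing a posterior reveals the set of states in which the
            # announcer would have said exactly that; both refine on it.
            reveal_1 = {w for w in public
                        if posterior(cell(partition_1, w) & public, event) == p1}
            reveal_2 = {w for w in public
                        if posterior(cell(partition_2, w) & public, event) == p2}
            public &= reveal_1 & reveal_2

    Run as written, the announcements go 1/3 vs 1/2, then 1 vs 1/2, then 1 vs
    1: each announcement exposes which private information is consistent with
    it, and once the opinions are common knowledge they are forced to be
    equal, just as the quoted passage says.
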
    To me the problem is simple in concept, but only approximately solvable
    in practice. We can never have absolute agreement between any two
    entities, due to their different knowledge bases (experiences). However,
    two rational beings can approach agreement as precisely as desired by
    analyzing and refining their differences. It's interesting to note that
    all belief systems fit perfectly into the total web of beliefs that
    exists; it couldn't be otherwise if we accept that the universe itself is
    consistent. From this we might infer that superrationality is what you
    get when you extrapolate any more limited concept of rational behavior
    to a timeless setting. This seems particularly apropos to extropians who
    hope and plan to live forever.
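
    To make the "as precisely as desired" point concrete, here is a second
    minimal sketch, this time a toy beta-binomial setup (the coin, its bias,
    the flip counts, and the batch size are all assumptions invented for the
    illustration): two agents share a common prior, hold different private
    evidence, and their estimates converge as they disclose it batch by
    batch.

        # Two Bayesians with a common Beta(1,1) prior over a coin's bias,
        # each holding 50 private flips. As they exchange evidence in
        # batches, their posterior means converge; with everything shared,
        # they agree exactly.
        import random

        random.seed(42)
        TRUE_BIAS = 0.7  # unknown to both agents
        N = 50
        flips_1 = [random.random() < TRUE_BIAS for _ in range(N)]
        flips_2 = [random.random() < TRUE_BIAS for _ in range(N)]

        def posterior_mean(flips):
            """Posterior mean of the bias under the common Beta(1,1) prior."""
            return (1 + sum(flips)) / (2 + len(flips))

        for shared in range(0, N + 1, 10):
            # Each agent conditions on all of its own flips plus whatever
            # the other agent has disclosed so far.
            m1 = posterior_mean(flips_1 + flips_2[:shared])
            m2 = posterior_mean(flips_2 + flips_1[:shared])
            print(f"{shared:2d} flips exchanged each way: {m1:.3f} vs {m2:.3f}")

    Once everything is disclosed, both agents condition on the same evidence
    and the same prior, so the last line prints identical numbers; short of
    full disclosure, each exchanged batch tends to shrink the gap.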

    - Jef


