RE: Status of Superrationality

From: Rafal Smigrodzki (rafal@smigrodzki.org)
Date: Wed May 28 2003 - 17:49:51 MDT

    owner-extropians@extropy.org wrote:
    > Rafal writes
    >
    >> Hal Finney wrote:
    >>
    >>> Without analyzing it in detail, I think this level of honesty,
    >>> in conjunction with the usual game theory assumption of rationality,
    >>> would be enough to imply the result that the two parties can't
    >>> disagree. Basically the argument is the same, that since you both
    >>> have the same goals and (arguably) the same priors, the fact that
    >>> the other party judges an outcome differently than you must make
    >>> you no more likely to believe your own estimation than his. Since
    >>> the game theory matrix makes the estimated utilities for each
    >>> outcome common knowledge, the two estimates must be equal, for each
    >>> outcome.
    >>
    >> ### But isn't the main problem an irreconcilable difference in the
    >> goals between players, the difference in weighing outcomes? The
    >> simplified depiction of the averagist vs. the totalist is just the
    >> beginning: you could imagine all kinds of global payoff matrices,
    >> describing attitudes towards outcomes affecting all objects of
    >> value, and even differences in what may be considered an object of
    >> value. There are those who favor asymmetric relationships between
    >> wishes and their fulfillment (meaning that while the total rather
    >> than average utility is to be maximized, at the same time a limited
    >> list of outcomes must be minimized). There are fundamental
    >> differences in the lists of subjects whose preferences are to be
    >> entered into the ethical equation, and the methods for relative
    >> weighing of such preferences.
    >
    > At this stage, I'm not going to claim that I understand what you
    > have written. But would you care to comment upon
    >
    > http://hanson.gmu.edu/deceive.pdf
    >
    > It mentions the annoying result that "if two or more Bayesians
    > would believe the same thing given the same information (i.e.,
    > have "common priors"), then those individuals cannot knowingly
    > disagree. Merely knowing someone else's opinion provides a
    > powerful summary of everything that person knows, powerful
    > enough to eliminate any differences of opinion due to differing
    > information."
    >
    > I could certainly use a hand in getting to the bottom of this.
    >

    ### Certainly quite a complex article. I think that what you quoted above
    means that a Bayesian would treat the output of another Bayesian as data
    of the same validity as the output of his own reasoning. If you know that
    a fellow Bayesian sincerely believes in flying saucers, you have to
    believe in them too, unless your priors are wildly divergent ("having a
    memory of seeing a flying saucer as clear as my memory of seeing my car
    is sufficient to profess belief in flying saucers" vs. "no amount of
    subjective visual experience is sufficient to profess belief in flying
    saucers"). If the honest Bayesian says he saw a flying saucer, you have
    to believe him, or else assume that he is not a Bayesian at all, or that
    he has a higher rate of visual/cortical malfunction than you (i.e. is
    less Bayesian than you). Barring these doubts, you would become as
    convinced of the existence of flying saucers as the person who actually
    saw them, despite not having the direct sensory input that he had. In
    effect, his beliefs are as valid an input for your future reasoning as
    the outputs of your own sensory and logical subsystems.
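
    A toy numerical sketch of this point, in Python (the numbers and the
    observation model are my own illustrative assumptions, not anything from
    Hanson's paper): two honest Bayesians share a prior and know each other's
    observation model, so hearing the other's sincere posterior is as
    informative as seeing his data.

# Minimal sketch: B updates on A's reported belief and lands on the same
# posterior that A reached from the direct sensory input. Numbers invented.

def posterior(prior, p_obs_given_h, p_obs_given_not_h):
    """Bayes' rule: P(H | observation) for a single binary observation."""
    num = p_obs_given_h * prior
    den = num + p_obs_given_not_h * (1.0 - prior)
    return num / den

PRIOR = 0.01              # shared ("common") prior that saucers exist
P_SIGHT_IF_REAL = 0.90    # chance of a clear sighting if they do
P_SIGHT_IF_NOT = 0.001    # chance of one anyway (malfunction, hoax, ...)

# Agent A actually has the sensory input and updates on it.
a_posterior = posterior(PRIOR, P_SIGHT_IF_REAL, P_SIGHT_IF_NOT)

# Agent B never sees anything, but learns A's sincere posterior. Knowing
# that A shares the prior and the observation model, B can infer that A
# must have had the sighting, and updates on that fact -- the likelihood
# ratio is the same, so B arrives at exactly the same number.
b_posterior = posterior(PRIOR, P_SIGHT_IF_REAL, P_SIGHT_IF_NOT)

print("A (saw it):       %.3f" % a_posterior)
print("B (only heard A): %.3f" % b_posterior)
# Both print ~0.901: B becomes as convinced as A, despite lacking the
# direct sensory input.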

    All this, as I mentioned before, has little bearing on discussions where
    the disagreements are about goals rather than facts; such disagreements
    may well be persistent.
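
    And a similarly toy illustration (again with invented numbers) of why
    agreeing on all the facts need not settle a disagreement about goals: an
    averagist and a totalist can assign identical probabilities to identical
    outcomes and still rank the policies in opposite order.

# Minimal sketch: same facts, different value functions, opposite choices.
# Each outcome is (probability, population, utility per person); both
# agents accept these numbers completely.
OUTCOMES = {
    "expand":   [(1.0, 10000, 2.0)],   # many people, modestly well off
    "restrict": [(1.0, 1000, 5.0)],    # few people, very well off
}

def totalist_value(outcomes):
    # Maximize expected total utility, summed over persons.
    return sum(p * pop * u for p, pop, u in outcomes)

def averagist_value(outcomes):
    # Maximize expected average utility per person.
    return sum(p * u for p, pop, u in outcomes)

for policy, outcomes in OUTCOMES.items():
    print("%-9s total=%8.1f average=%4.1f"
          % (policy, totalist_value(outcomes), averagist_value(outcomes)))
# The totalist prefers "expand" (20000 vs 5000); the averagist prefers
# "restrict" (5.0 vs 2.0). No exchange of information can change this,
# because nothing factual is in dispute.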

    Rafal


