RE: Status of Superrationality

From: Lee Corbin (lcorbin@tsoft.com)
Date: Tue May 20 2003 - 20:06:11 MDT


    Hal writes

    > ...an altruist could play against a selfist if the payoffs were
    > set up something like this:
    >
    >                          Selfist Cooperate            Selfist Defect
    >                      --------------------------------------------------------
    > Altruist Cooperate | Everyone in the world      | Everyone in the world
    >                    | gains $5                   | loses $10 except selfist,
    >                    |                            | who gains $10.
    > -------------------+----------------------------+---------------------------
    > Altruist Defect    | Everyone in the world      | Status quo; no gain, no
    >                    | gains $10 except selfist,  | loss
    >                    | who loses $10              |
    >
    > The question is whether the altruist would defect in this situation.

    Ah, quite ingenious. I don't believe that Eliezer addressed this in
    his reply. Now what does the person playing rows, the altruist, know?
    If he knows that he's playing another altruist, then of course he
    Cooperates. But I take it that he knows he's playing against a Selfist,
    and the Selfist knows that he's playing an Altruist. Now it seems to
    me that the Altruist must "defect" (despite his constitution!). The
    payoff table seems to be clear on that point---unless you have a more
    subtle interpretation than is occurring to me right now. Therefore,
    the Selfist must anticipate this, and cooperate, on the chance... that
    the Altruist will cooperate... but I've just said that won't happen.
    Yes. Very ingenious.
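
    (To make my reading of the table concrete, here is a rough
    back-of-the-envelope calculation of the total world gain in each
    cell; the population figure is just a placeholder, and any large
    number gives the same ranking:)

        # Sketch of the payoff table above from the Altruist's point of
        # view, scored as net dollars gained by the world as a whole.
        N = 6_000_000_000   # placeholder world population

        world_payoff = {
            # (altruist_move, selfist_move): net change in world dollars
            ("C", "C"): 5 * N,                # everyone gains $5
            ("C", "D"): -10 * (N - 1) + 10,   # all lose $10; selfist gains $10
            ("D", "C"): 10 * (N - 1) - 10,    # all gain $10; selfist loses $10
            ("D", "D"): 0,                    # status quo
        }

        for selfist in ("C", "D"):
            best = max(("C", "D"), key=lambda a: world_payoff[(a, selfist)])
            print(f"Selfist plays {selfist}: world-maximizing Altruist move is {best}")

        # Both lines print D: by this table, "defection" dominates even
        # for a pure altruist, which is what makes the setup ingenious.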

    Perhaps this falls under my dictum, then, that in PD you don't *have*
    to defect if you don't know what your adversary is going to do. Thus,
    you *may* perhaps Cooperate. Of course, if the $10 is not symbolic
    of utility (which I have contended is the only possibility that
    renders the game interesting), then the marginal utility of a $10
    gain is, for most of the world's population, far outweighed by the
    marginal disutility of a $10 loss; many people would find losing
    ten U.S. dollars catastrophic.
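
    (A quick illustration of that asymmetry, assuming for the sake of
    argument a logarithmic utility of wealth; the log form and the
    wealth levels are invented purely to show the shape of the effect:)

        import math

        def utility(wealth):
            # Any concave utility of wealth will do; log is the classic choice.
            return math.log(wealth)

        for wealth in (50, 1_000, 100_000):
            gain = utility(wealth + 10) - utility(wealth)
            loss = utility(wealth) - utility(wealth - 10)
            print(f"wealth ${wealth}: a +$10 gain is worth {gain:.4f}, "
                  f"a -$10 loss costs {loss:.4f} (ratio {loss / gain:.2f})")

        # The poorer the player, the more the $10 loss outweighs the
        # $10 gain; for the wealthy the two are nearly symmetric.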

    > I was trying to set up a PD scenario between two rational altruists,
    > but it is hard since they are both motivated the same way, to increase
    > the total happiness of the world. The only thing that could lead to
    > a difference in payoff for a particular outcome is disagreement about
    > how happy it would or does make people. For example, suppose Eliezer
    > believed that average happiness was what mattered, while Lee believed
    > that total happiness was more important. Then an outcome which increased
    > the population while making them more unhappy might be rated highly by
    > Lee but low by Eliezer.
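
    (With numbers invented purely for illustration, here is the sort of
    outcome Hal means, where the total view and the average view rank
    it oppositely:)

        # Toy illustration: the same outcome ranked by total vs. average
        # happiness.  All figures are invented for the example.
        outcomes = {
            "status quo":        {"population": 100, "happiness_each": 10},
            "bigger but sadder": {"population": 300, "happiness_each": 4},
        }

        for name, o in outcomes.items():
            total = o["population"] * o["happiness_each"]
            average = o["happiness_each"]
            print(f"{name:18}: total = {total:5}, average = {average:4}")

        # Total happiness rises (1000 -> 1200) while average happiness
        # falls (10 -> 4), so a total-maximizer and an average-maximizer
        # would assign this outcome opposite payoffs.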

    Well, you're pretty on-target in understanding *my* values!

    > However, these hypotheticals don't really work, because rational people
    > can't disagree! See our previous discussion on this issue.

    Sorry, I've lost track of that. Do you have a pointer?

    Lee


