From: Hal Finney (hal@finney.org)
Date: Sun May 25 2003 - 12:44:38 MDT
Wei writes:
> On Tue, May 20, 2003 at 11:30:14AM -0700, Hal Finney wrote:
> > However, these hypotheticals don't really work, because rational people
> > can't disagree! See our previous discussion on this issue. So I think
> > that for rational altruists, all 2-person games have identical payoffs
> > to both individuals, making them in effect one-person games.
>
> Actually, it's rational "honest truth-seeking agents with common priors"
> who can't disagree. Rational altruists can certainly disagree if they are
> not honest or are not truth seekers, which they need not be, because
> being honest and truth-seeking is not always the best way to increase
> total or average happiness. That is, if you are an altruist belonging to the
> total-happiness school, you might find it useful to lie (or do something
> worse) to someone who believes in the supremacy of average happiness.
Well, I'd suggest that a rational altruist must be truth-seeking, since
his goal of maximizing the happiness of the world requires knowing the
truth about which states of the world achieve that goal.
As for honesty, this points to a problem in the standard formulation of
game theory. We assume that both sides have access to the payoff matrix,
which accurately depicts the utility of each outcome for each person.
This is an unreasonably strong assumption: how could one know the
utility of an outcome for someone else?
And I think it is important to the definition of the Prisoner's Dilemma
that you do know what the payoffs are for the other guy. Knowing only
your own payoffs is not enough to create the dilemma.
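To make that concrete, here is a toy sketch in Python; the payoff
numbers, and everything else in it, are invented for illustration.
Looking only at your own payoffs, defection simply dominates; you need
both players' payoffs in view to see that the dominant strategies
produce an outcome that is worse for everyone:

    # Hypothetical payoffs: (my_move, your_move) -> (my_payoff, your_payoff).
    PAYOFFS = {
        ("C", "C"): (3, 3),
        ("C", "D"): (0, 5),
        ("D", "C"): (5, 0),
        ("D", "D"): (1, 1),
    }

    def best_reply(opponent_move):
        """My best move against a given opponent move, using only MY payoffs."""
        return max("CD", key=lambda m: PAYOFFS[(m, opponent_move)][0])

    # From my payoffs alone, defecting is best whatever you do:
    assert best_reply("C") == "D" and best_reply("D") == "D"

    # But the *dilemma* only appears once both payoffs are in view:
    # mutual defection (1, 1) is worse for each of us than mutual
    # cooperation (3, 3).
    assert PAYOFFS[("D", "D")] < PAYOFFS[("C", "C")]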
So it does seem that in the context of PD analysis, we do have to assume
a certain level of honesty, at least to the point that each party has
accurate knowledge of what the various outcomes are worth to the other side.
Without analyzing it in detail, I think this level of honesty, in
conjunction with the usual game-theoretic assumption of rationality,
would be enough to imply that the two parties can't disagree.
Basically the argument is the same as before: since you both have the
same goals and (arguably) the same priors, the fact that the other
party judges an outcome differently than you do should leave you no
more confident in your own estimate than in his. And since the game
matrix makes the estimated utilities for each outcome common knowledge,
the two estimates must be equal for each outcome, which is just the
structure of Aumann's result that agents with common priors cannot
agree to disagree about posteriors that are common knowledge.
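For what it's worth, this dynamic can be simulated directly. Below is
a minimal Python sketch along the lines of Geanakoplos and
Polemarchakis's version of Aumann's argument; the state space, event,
and information partitions are all made up for the illustration. The
two agents take turns announcing their posterior for the event; each
announcement publicly rules out states, and the posteriors are forced
into agreement:

    from fractions import Fraction

    # States 1..4 under a uniform common prior; the event, the
    # partitions, and the true state are invented for this example.
    STATES = {1, 2, 3, 4}
    EVENT = {1, 4}
    PART = {
        "A": [{1, 2}, {3, 4}],      # A's private information partition
        "B": [{1, 2, 3}, {4}],      # B's private information partition
    }
    TRUE_STATE = 1

    def prob(event, given):
        """P(event | given) under the uniform prior."""
        return Fraction(len(event & given), len(given))

    def cell(agent, state):
        """The block of the agent's partition containing the state."""
        return next(c for c in PART[agent] if state in c)

    public = set(STATES)  # states consistent with all announcements so far
    while True:
        for agent in ("A", "B"):
            # Each agent conditions on private info plus the public record.
            p = prob(EVENT, cell(agent, TRUE_STATE) & public)
            print(agent, "announces", p)
            # The announcement publicly eliminates every state in which
            # this agent would have announced a different posterior.
            public = {s for s in public
                      if prob(EVENT, cell(agent, s) & public) == p}
        posteriors = {prob(EVENT, cell(a, TRUE_STATE) & public)
                      for a in ("A", "B")}
        if len(posteriors) == 1:
            print("agreement at", posteriors.pop())
            break

With these made-up partitions, A starts at 1/2 and B at 1/3; B's first
announcement rules out state 4, A's second announcement then rules out
state 3, and both agents end up at a common posterior of 1/2.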
Hal