From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue May 20 2003 - 13:50:46 MDT
Hal Finney wrote:
>
> A final point; Eliezer points out that the "irrational" action of
> cooperating leads to a greater payoff. I believe the consensus
> among game theorists is that this does not change the fact that it
> is irrational. The reasoning is similar to that in Newcomb's paradox
> (take one box or two, but God has arranged that you will get more money
> if you take only one box). Taking two boxes physically entails getting
> at least as much as in the first box, hence it is the rational action.
> In this paradox, as well, being irrational leads to a greater outcome.
> I copied out some analysis on this issue from a decision theory book at
> http://www.finney.org/~hal/Newcomb.html. The argument doesn't go over
> directly to the PD case, but the flavor is the same: it is possible for
> an irrational action to lead to a greater outcome.
Yes, Newcomb's Paradox is a good example of a situation that is quite
straightforward to resolve rationally, for maximum benefit, using the
Golden Law(*), as discussed on the AGI list. Game theory has not caught up with
this yet, but historically it has often taken game theorists much too long
to realize that the "irrational" action of cooperating under situation
XYZ, which does in fact deliver a higher payoff, is really rational after
all. In this case solving the problem requires a timeless formulation of
decision theory, of which ordinary decision theory is a special case. Be
it noted for the record that Eliezer Yudkowsky disagrees with the
consensus of game theorists about what, mathematically speaking,
constitutes "rationality", not just in the case of the Prisoner's Dilemma,
but also for Newcomb's Paradox and a wide variety of other situations in
which similar or identical decision processes are distantly instantiated.
Be it also noted that the actions Eliezer Yudkowsky computes as formally
rational are the ones that any sane non-game-theorist would take and that
do in fact correlate with maximum payoffs.
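For concreteness, here is a toy expected-payoff calculation (the dollar
amounts and the 99% figure are placeholder assumptions of mine, not anything
formal from the AGI-list discussion). The point is only that when the
predictor instantiates the same decision process you do, your choice and the
prediction move together, and one-boxing comes out ahead:

# Toy Newcomb calculation.  Box A (transparent) holds $1,000; box B
# (opaque) holds $1,000,000 iff the predictor expected you to one-box.
# Assumption: the predictor runs (a copy of) your decision process, so
# its prediction matches your actual choice with probability ACCURACY.

ACCURACY = 0.99  # placeholder correlation between prediction and choice

def expected_payoff(one_box: bool) -> float:
    # Because the same algorithm produces both prediction and choice,
    # deciding to one-box makes "predicted to one-box" the likely case.
    p_predicted_one_box = ACCURACY if one_box else 1.0 - ACCURACY
    box_b = 1_000_000 * p_predicted_one_box
    box_a = 0 if one_box else 1_000
    return box_a + box_b

print("one-box:", expected_payoff(True))   # ~$990,000
print("two-box:", expected_payoff(False))  # ~$11,000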
(*) Not to be confused with the prescriptive Golden Rule in human
morality, the Golden Law states descriptively that identical or
sufficiently similar decision processes, distantly instantiated, return
identical or correlated outputs. The prescriptive formulation of the
Golden Law states that you should make any decision between alternatives
as if your decision controlled the Platonic output of the mathematical
object representing your decision system, rather than acting as if your
decision controlled your physical instantiation alone. Wei Dai's
formulation is even simpler; he says that you should choose between
alternatives A and B by evaluating your utility function on the state of
the multiverse given that your choice is A, versus the state of the
multiverse given that your choice is B.
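A minimal sketch of how Wei Dai's formulation cashes out on a one-shot
Prisoner's Dilemma between two instantiations of the same decision process
(the payoff matrix below is an arbitrary standard one, chosen only for
illustration):

# Wei Dai's formulation, applied to a PD between two copies of one
# decision process: score each alternative by the world-state it implies,
# then output the alternative with the higher utility.

PAYOFF = {  # (my move, other's move) -> my payoff; arbitrary PD numbers
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def golden_law_choice() -> str:
    def utility_given_my_output(move: str) -> int:
        # Descriptive Golden Law: the other copy is the same process,
        # so whatever this function outputs, the other copy outputs too.
        other_move = move
        return PAYOFF[(move, other_move)]
    # Prescriptive form: choose as if selecting the Platonic output of
    # the algorithm, i.e. the output that implies the better world.
    return max(("C", "D"), key=utility_given_my_output)

print(golden_law_choice())  # "C" -- (C, C) pays 3, (D, D) pays 1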
But again, see the discussion on the AGI list. (Google on AGI + "Golden
Law".)
--
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence