From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat May 17 2003 - 11:03:20 MDT
Lee Corbin wrote:
> James writes
>
>>Whatever the cause is, I believe it is a quirk of natural
>>selection that should, at least in principle, yield to
>>superrationality.
>
> Does anyone believe in "superrationality" in game theory
> anymore? I can't tell from a quick reading of
> http://www.mail-archive.com/agi@v2.listbox.com/msg01025.html
> where our friends appear to be debating it, although Wei Dai
> reflects what I read in game theory papers these days:
> Hofstadter was wrong about superrationality. Once, in 1985
> or so, I wrote him about it, and we exchanged a few letters
> (he was always incredibly good that way). What I would like
> to do now---though I won't because it would be "rubbing it
> in"---would be to write him and ask whether he would Cooperate
> or Defect against the 1983 version of himself if he could go
> back in a time machine. *OBVIOUSLY* Doug 2003 should Defect,
> since he knows that the 1983 version is going to cooperate.
Not if Doug-2003 is also an altruist. There's more than one possible
reason to cooperate, after all. From my viewpoint, the only good reason
to *defect* is a Tit-for-Tat retaliation to maintain social order.
Otherwise the calculation is very simple: the total benefit to all sentients
is maximized by choosing C. For a true altruist, the game theory that would
ever lead to choosing D is as surprising and counterintuitive as the game
theory that can lead a selfish agent to choose C.
For that matter, Doug-1983 anticipates being Doug-2003, so it's in his
best selfish interest to cooperate even if he thinks Doug-2003 will
defect. This, of course, makes it harder to threaten subjunctive
defection, which is what you want to do to maintain mutual cooperation,
which maximizes the return to both temporal slices of yourself.
Mixing time travel and the Prisoner's Dilemma sure does make for some
interesting situations. It can be resolved to mutual cooperation using
the Golden Law, but only if Doug-1983 can accurately simulate a future
self who knows his past self's decision. The time-travel one-shot PD can
also be resolved much more straightforwardly by making a binding promise
to yourself, using self-modification or a pre-existing emotion of honor.
--
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence