From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue May 20 2003 - 09:28:28 MDT
Lee Corbin wrote:
> Eliezer writes
>
>
>>>[In the non-iterated Prisoners' Dilemma] *OBVIOUSLY* Doug 2003 should
>>>Defect, since he knows that the 1983 version is going to cooperate.
>>
>>Not if Doug-2003 is also an altruist. There's more than one possible
>>reason to cooperate, after all.
>
> Well hell's bells. If that's the case, then why ever Defect?
As far as I can tell, from an altruist's perspective, the only good reason
for defecting in a Prisoner's Dilemma is that you were just defected
against, and you think that by defecting in retaliation you can discourage
future negative-sum behavior by the other player, thus maximizing the size
of the global pie.
Defection, for altruists, is just as counterintuitive as cooperation for
the selfish, with the same aha! notion - Tit for Tat - being crucial in
both cases.
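To make that concrete, here is a minimal Python sketch of the iterated
game, assuming the standard Axelrod payoffs (T=5, R=3, P=1, S=0); the
strategies and the ten-round horizon are purely illustrative.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_moves):
    # Cooperate first, then echo whatever the other player did last.
    return 'C' if not opponent_moves else opponent_moves[-1]

def always_defect(opponent_moves):
    return 'D'

def play(a, b, rounds=10):
    total_a = total_b = 0
    seen_by_a, seen_by_b = [], []  # each side's record of the *other* player's moves
    for _ in range(rounds):
        move_a, move_b = a(seen_by_a), b(seen_by_b)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        total_a += pay_a
        total_b += pay_b
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return total_a, total_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): full cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): one sucker payoff,
                                         # then retaliation caps the losses

The retaliatory defections are what hold a chronic defector near the
punishment payoff, instead of letting him farm the temptation payoff
every round; that is the altruist's whole case for ever playing D.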
> Listen, the whole idea behind the *entries* in the table is that
> they are the payoffs of the players. That's what the numbers
> *mean*, the utilities of the players.
What the experiments *measure* is monetary amounts.
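For reference, the table with those standard numbers, entries written as
(your payoff, his payoff):

                    He cooperates    He defects
    You cooperate      (3, 3)          (0, 5)
    You defect         (5, 0)          (1, 1)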
> Now, although I say that it means you should definitely defect
> in almost all cases, I am not the kind of extremist that Smullyan
> is, who claimed that he would defect even against his mirror image.
> (Was he just trying to be provocative, or did he slip a bearing?)
He was feeding you lies, to minimize the amount of competition you
presented. Never trust a D player.
> I say that you should Defect (and that is what the entries in the
> table say too) whenever you know what your opponent is going to do.
> If you know he is going to Defect, then you Defect. If you know
> that he is going to Cooperate, then you Defect. It's right there
> in the table.
Not if your opponent determined his behavior by running an accurate
simulation of what you would do if you knew definitely what your opponent
would do. Think timeless: your move and his move are outputs of the same
decision procedure.
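As a sketch of that logic in Python, assuming a perfectly accurate
simulation and a deterministic decision rule shared between the
simulation and the real you (the function names are mine, not any
formal theory):

def outcome(my_rule):
    # The opponent's move is his simulation's output, i.e. my own rule.
    his_move = my_rule()   # what his accurate simulation of me returns
    my_move = my_rule()    # what I actually do
    # With both moves produced by one rule, the off-diagonal cells
    # (5, 0) and (0, 5) are unreachable; only (C, C) and (D, D) remain.
    return {('C', 'C'): 3, ('D', 'D'): 1}[(my_move, his_move)]

print(outcome(lambda: 'C'))  # 3
print(outcome(lambda: 'D'))  # 1; "knowing" his move bought you nothing

Choosing D does not exploit a known cooperator; it just moves both of
you to the (D, D) cell.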
> The highly interesting case is if you don't know what he's going
> to do AND his behavior is probably correlated with yours. For example,
> I would be afraid to Defect against a physical duplicate if he was
> a close (or near) duplicate. After all, whatever thoughts I think
> are likely to course through his head as well. But if it is any entity
> whatsoever, including 1983 versions of myself, then the table says
> "Defect".
And knowing this, your past self will defect against you: he knows that
you know what his behavior will be, and therefore, by your own rule,
that you will defect against him.
By your logic, in fact, you and your past self should play this way even
on the *iterated* Prisoner's Dilemma. After all, you know what his
decision will be on every round. Oddly enough, though, the past you
remember is a string of defections. How sad.
Meanwhile, the two cooperating Eliezers walk away, grinning, pockets
loaded with utility tokens. So who's the rationalist now, wise guy?
(This is not intended as an insult; it is the standard formal rejoinder
that should be delivered to anyone claiming that a behavior which delivers
noticeably poorer results is "rational". It can be very hard to show that
the best solution is a special case of Bayesian reasoning, but it always is.)
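And the score, with the standard numbers over, say, a hundred rounds:
the mutually defecting pair collects 1 point per round, 100 utility
tokens in all, while the two cooperating Eliezers collect 3 per round,
300 in all, with the entries in the table meaning exactly what they
always meant.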
>>Mixing time travel and the Prisoner's Dilemma sure does make for some
>>interesting situations. It can be resolved to mutual cooperation using
>>the Golden Law, but only if Doug-1983 can accurately simulate a future
>>self who knows his past self's decision. The time-travel oneshot PD can
>>also be resolved much more straightforwardly by making a binding promise
>>to yourself, using self-modification or a pre-existing emotion of honor.
>
> Yes, there are all sorts of interesting ways to change the numbers
> in the boxes as written.
All three of the scenarios above work for pure utility boxes, with the
standard numbers.
--
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence