From: Lee Corbin (lcorbin@tsoft.com)
Date: Tue May 27 2003 - 22:50:12 MDT
Jef writes
> > http://hanson.gmu.edu/deceive.pdf
> >
> > It mentions the annoying result that "if two or more Bayesians
> > would believe the same thing given the same information (i.e.,
> > have "common priors"), then those individuals cannot knowingly
> > disagree. Merely knowing someone else's opinion provides a
> > powerful summary of everything that person knows, powerful
> > enough to eliminate any differences of opinion due to differing
> > information."
> >
> > Lee
> >
> >> I would contend that even perfectly rational altruists could differ
> >> significantly about their recipes for the perfect world.
> >>
> >> Rafal
>
> To me the problem is simple in concept, but limited in practice. We can
> never have absolute agreement between any two entities, due to their
> different knowledge bases (experiences).
Yes, although I'm sure that you meant to mention that each rational
being ought to regard the other as a truth-machine of approximately
the same verisimilitude as himself. That is, as has been stressed
in the writings at hand, we can learn to count on others as much as
we count on ourselves. (Indeed, I do rely on at least one of my
friends in this precise way.)
> However, two rational beings can approach agreement as precisely
> as desired by analyzing and refining their differences. It's
> interesting to note that all belief systems fit perfectly into
> the total web of beliefs that exist. It couldn't be otherwise
> if we accept that the universe itself is consistent.
Okay.
> From this we might infer that superrationality is what you get
> when you extrapolate any more limited concept of rational behavior
> to a timeless setting. This seems particularly apropos to extropians
> who hope and plan to live forever.
Yes, I think I know what you are saying. Certainly if, in the non-iterated
Prisoner's Dilemma, I begin to suspect that the other player is myself, either
a freshly minted duplicate or a self from another time, then my incentive
to Defect vanishes. (Because I see Lee Corbin getting the benefit no matter
"who" wins.)
Now then, timelessness also invites one to confuse a present setting with
the members of the set of all equivalent settings. I shall refer to the
atavistic axioms of Kolmogorov (which are all that I believed until
indoctrinated by the present ilk of Bayesians who infest these lists).
My interpretation of the Kolmogorov axioms was just this: say one tosses
a die high into the air and then, just as it hits the table, slaps a hat
on top of it so that one cannot see the value the die has assumed. On the
one hand, we accept that the die has assumed a definite value (and is not
really in any sort of superposition of all six possibilities). On the
other, we are invited to deliberately confuse the present circumstances
with all those other historical cases in which a die has been tossed
(Kolmogorov's "class G"). Then, from this vast sample, the theoretical
fraction of instances in which the "six" is showing is one-sixth!
And thus we have probability.
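A tiny simulation of that last step, assuming nothing beyond a fair
six-sided die (the code is mine, not Kolmogorov's): sample a large
"class G" of comparable tosses and the fraction showing a six settles
near one-sixth.

    # Sketch: the frequency reading of the die-under-the-hat example.
    import random

    trials = 100_000
    sixes = sum(1 for _ in range(trials) if random.randint(1, 6) == 6)
    print(sixes / trials)  # roughly 0.1667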
It is possible that you meant something like this by your "timelessness".
As for superrationality itself (as originally defined by Hofstadter and
others), it's merely the mistaken doctrine that one should Cooperate in
the non-iterated Prisoner's Dilemma under the normal conditions that the
adversary is not a version of oneself (and therefore has behavior completely
uncorrelated with one's own), and that one is not an altruist. The values
in the payoff table say "Defect", and I think that is all there is to it
(given the entirely correct assessment that the PD will not be repeated).
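For the record, a sketch of what "the table says Defect" amounts to, with
the same assumed textbook payoffs as above: when the adversary's move is
independent of mine, Defect is the better reply to either of his possible
moves.

    # Sketch only; same assumed payoffs as before.
    payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    for their_move in ("C", "D"):
        best = max(("C", "D"), key=lambda m: payoff[(m, their_move)])
        print("if they play", their_move, "my best reply is", best)
    # Both lines say "D": Defect dominates in the one-shot game.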
Lee