Re: How You Do Not Tell the Truth

From: Robin Hanson (rhanson@gmu.edu)
Date: Fri May 04 2001 - 12:14:16 MDT


At 03:49 PM 5/3/2001 -0700, you wrote:
>The part about Robin's paper (http://hanson.gmu.edu/deceive.pdf) that
>I have the hardest time understanding is the discussion of common priors.

That's probably because that's the weakest part of our argument :-).

We note that a set of assumptions, taken together, implies a conclusion
at odds with observation. We do not take sides on exactly which
assumption is at fault.

>It's not clear to me whether the question of whether two humans share
>a common prior is an empirical one, or a matter of definitions and
>methodology. Is it a matter of fact whether you and I share common
>priors? Or is it a question of which assumptions are most useful to
>analyze our interactions? (Or something else?)

It is a theoretical claim with empirical implications. If we hold
constant other assumptions, we can test it. Or we can vary other
assumptions to try to preserve it against failed tests. This is the
usual case in modeling.

>I also don't understand the claim [1] above relating non-common priors
>to lack of truth-seeking, unless it is shorthand for argument B.

It is the weakest part of our argument - I hope to do better in the
next version. I want to argue that believing that you are more able
than others, with no evidence to support that belief, is irrational
per se.

Hal Finney wrote:
>This suggests that being a Bayesian truth-seeker is not enough; it
>is also necessary to understand the irrationality of disagreement.

Yes.

>Of course, since Bayesians draw all correct conclusions from existing
>data, they have full, implicit knowledge of all of mathematics and hence
>were completely aware of this result long before Aumann published.

As I mentioned to Curt, this is actually not true - one can describe
non-logically-omniscient Bayesians.
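
For concreteness, here is a minimal sketch of one such agent in
Python (the particular mathematical claim and the numbers are my
own illustrative choices, not anything from our paper):

    # A Bayesian who is not logically omniscient: she assigns an
    # intermediate probability to a fixed mathematical fact, and
    # treats actually running the computation as evidence to
    # condition on, just like an empirical observation.

    def is_prime(n):
        # Trial division; stands in for a deduction she has not
        # yet performed.
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    belief = 0.5  # prior on "1234577 is prime": neither 0 nor 1
    print("before computing:", belief)

    # Performing the computation is decisive evidence, so the
    # posterior collapses to 0 or 1.
    belief = 1.0 if is_prime(1234577) else 0.0
    print("after computing: ", belief)

Such an agent violates logical omniscience and yet updates by
ordinary conditioning.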

>One thing which is unclear in the paper is how you should react if you
>become convinced that you and the other party are both rational
>and fully informed about these results. ... As a
>practical matter, what means do you apply to resolve your disagreement?

If you were sure you were both so rational and informed, you would be
absolutely sure that any apparent disagreement was not actually one -
it was just a random fluctuation that looked like disagreement. If
you are not sure, you can reasonably infer that apparent disagreements
are real, and so you can conclude that you are *not* both so rational
and informed. And unless you have reason to be absolutely sure you
are rational, you should seriously doubt your own rationality.
"Before taking a mote from another's eye, look to the log in your own."

>... Seemingly this would
>mean that if Bayesians John and Mary gradually move closer to an eventual
>compromise that they agree upon, this violates the theorem because they
>could have predicted this path. Yet it seems obvious that something
>like this must be how agreement is reached.

It is far from obvious to me. Formal models of Bayesians exchanging
information, for example, do not behave like this.
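
To illustrate, here is a toy model in Python in the style of
Geanakoplos & Polemarchakis (1982): two agents share a uniform
common prior over nine states but hold different private
information, and they take turns announcing their posterior for an
event and conditioning on each other's announcements (the state
space, partitions, and event are my own illustrative choices):

    from fractions import Fraction

    STATES = list(range(1, 10))
    E = {3, 4}     # the event whose probability they announce
    TRUE = 1       # the actual state of the world

    P1 = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]  # agent 1's partition
    P2 = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]  # agent 2's partition

    def cell(partition, s):
        return next(c for c in partition if s in c)

    def post(known):
        # P(E | known) under the uniform common prior.
        return Fraction(len(E & known), len(known))

    # Each agent's knowledge is tracked as a function of the state,
    # so announcements can be interpreted at every state.
    K1 = {s: cell(P1, s) for s in STATES}
    K2 = {s: cell(P2, s) for s in STATES}

    for rnd in range(1, 5):
        a1 = {s: post(K1[s]) for s in STATES}   # agent 1 announces
        K2 = {s: K2[s] & {t for t in STATES if a1[t] == a1[s]}
              for s in STATES}
        a2 = {s: post(K2[s]) for s in STATES}   # agent 2 announces
        K1 = {s: K1[s] & {t for t in STATES if a2[t] == a2[s]}
              for s in STATES}
        print(f"round {rnd}: agent 1 says {a1[TRUE]},"
              f" agent 2 says {a2[TRUE]}")
        if a1[TRUE] == a2[TRUE]:
            break

With these choices it prints 1/3 vs. 1/2 in round one and 1/3 vs.
1/3 in round two: agent 2 jumps straight to agent 1's estimate,
rather than the two meeting halfway along some predictable path.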

Robin Hanson rhanson@gmu.edu http://hanson.gmu.edu
Asst. Prof. Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326 FAX: 703-993-2323


