Re: How You Do Not Tell the Truth

Date: Fri May 04 2001 - 18:39:13 MDT

Robin writes:
> We note that a list of assumptions, taken together, imply a conclusion
> at odds with observation. We do not take sides regarding which assumption
> exactly is at fault.

I'm surprised you say that; your paper seems to come down pretty hard
against the assumption that people seek the truth, in the title and
in most of the last part.

> >One thing which is unclear in the paper is how you should react if you
> >become convinced that both you and the other party are both rational
> >and fully informed about these results. ... As a
> >practical matter, what means do you apply to resolve your disagreement?
> If you were sure you were both so rational and informed, you would be
> absolutely sure that any apparent disagreement was not actually one -
> it was just a random fluctuation that looked like disagreement.

I've been reading some of the papers in your references, and the way
I see it now is that there is a process which must occur in order for
everyone to be mutually informed about each other's beliefs. If the
parties state their beliefs for the first time and they don't match,
that is fully compatible with the theorem. It just means that they have
different information.

However, they are not yet fully mutually informed after this exchange,
because one or both of them will have updated their internal models in
light of the new information. The paper "We Can't Disagree Forever" (by
Geanakoplos and Polemarchakis) describes contrived scenarios where the
two parties can just shout their differing beliefs at each other N times,
and then suddenly at time N+1 they both agree. This is fully rational,
because each exchange actually updates each party's internal model of
what the other person knows, and the models gradually converge even
though the stated opinions don't change until the end.
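The repeated-announcement process can be sketched in a toy model. The states, partitions, and event below are my own illustrative choices, not an example taken from the paper; the mechanism, though, is the standard one: each agent conditions on his own partition cell plus everything publicly announced so far, and each announcement lets everyone discard the states in which the speaker would have said something different.

```python
from fractions import Fraction

# Toy repeated-announcement protocol in the spirit of Geanakoplos and
# Polemarchakis, "We Can't Disagree Forever".  The numbers below are
# invented for illustration.
states = set(range(1, 10))                   # 9 equally likely states (uniform common prior)
part_a = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]   # agent A's information partition
part_b = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]   # agent B's information partition
event = {3, 4}                               # the event they state beliefs about
true_state = 4                               # the state that actually obtains

def cell(partition, state):
    """The partition cell containing `state`."""
    return next(c for c in partition if state in c)

def posterior(info):
    """P(event | info) under the uniform prior."""
    return Fraction(len(event & info), len(info))

history = []
public = set(states)   # states still consistent with every announcement so far
while True:
    # A conditions on his private cell AND the public information, then announces.
    p_a = posterior(cell(part_a, true_state) & public)
    # Hearing p_a, everyone discards states where A would have announced otherwise.
    public = {w for w in public
              if posterior(cell(part_a, w) & public) == p_a}
    # B does the same.
    p_b = posterior(cell(part_b, true_state) & public)
    public = {w for w in public
              if posterior(cell(part_b, w) & public) == p_b}
    history.append((p_a, p_b))
    if p_a == p_b:
        break

for rnd, (p_a, p_b) in enumerate(history, 1):
    print(f"round {rnd}: A says {p_a}, B says {p_b}")
```

In this particular setup the agents announce 1/3 and 1/2 in the first round, and the announcements themselves carry enough information that both say 1 in the second round and agree. (In the paper's more contrived scenarios the stated opinions can stay fixed for many rounds while the internal models quietly converge.)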

When I was asking about disagreement above, I was talking about the
initial stage of this process, when you first discover that you have
different posterior beliefs. What is the best approach to resolving
the dispute? The methods I have seen so far (as in the paper above)
strike me as quite impractical for human beings to apply, since they
seem to require excessively detailed knowledge about the state of
the world.

I have come up with a more straightforward plan: assuming both parties
are sincere and honest, each party estimates the confidence level he
has in the accuracy of his belief. He would also need to keep statistics
on how often he has been right in the past about other beliefs held at
this same subjective level of confidence. Based on this, the parties
can come up with a probability that each is right, and adopt the belief
with the higher probability. (Or perhaps they should choose the belief
that maximizes their joint probability, something along those lines.)
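Here is a minimal sketch of that plan. The two track records and the helper name are invented for illustration; the point is just that a stated confidence gets translated into a calibrated probability via the speaker's own history at that confidence level, and the parties adopt the belief with the higher calibrated probability.

```python
# Sketch of the confidence-calibration comparison described above.
# All names and track records are hypothetical.

def calibrated_accuracy(record, confidence, tol=0.05):
    """How often past beliefs held at roughly this subjective
    confidence level actually turned out to be right."""
    outcomes = [right for conf, right in record
                if abs(conf - confidence) <= tol]
    # With no history at this level, fall back on the stated confidence.
    if not outcomes:
        return confidence
    return sum(outcomes) / len(outcomes)

# Hypothetical track records: (stated confidence, belief proved correct?)
alice_record = [(0.9, True), (0.9, True), (0.9, False), (0.9, True)]
bob_record = [(0.8, True), (0.8, True), (0.8, True), (0.8, True), (0.8, False)]

# Alice is 90% confident she is right; Bob is 80% confident he is.
alice_p = calibrated_accuracy(alice_record, 0.9)   # 3 of 4 right -> 0.75
bob_p = calibrated_accuracy(bob_record, 0.8)       # 4 of 5 right -> 0.80
winner = "Bob" if bob_p > alice_p else "Alice"
print(f"Alice {alice_p:.2f}, Bob {bob_p:.2f} -> adopt {winner}'s belief")
```

Note that Bob's belief wins here even though Alice's stated confidence is higher, because his track record shows him to be the better-calibrated party at his stated level.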

As a rough rule of thumb, you might just accept the belief of the player
with the higher Idea Futures score, or better, use a subscore based only
on claims similar to the question at hand.

> >... Seemingly this would
> >mean that if Bayesians John and Mary gradually move closer to an eventual
> >compromise that they agree upon, this violates the theorem because they
> >could have predicted this path. Yet it seems obvious that something
> >like this must be how agreement is reached.
> It is far from obvious to me. Any formal model of Bayesians exchanging
> information, for example, is not like this.

Yes, the Geanakoplos paper above doesn't work this way either, despite
its intuitive appeal to me, so I suppose that is not really how things
would go.


This archive was generated by hypermail 2b30 : Mon May 28 2001 - 10:00:02 MDT