**Next message:**Anne Marie Tobias: "Re: Risk vs. Payoff (was Re: Alcor on KRON)"**Previous message:**Eliezer S. Yudkowsky: "Re: Risk vs. Payoff (was Re: Alcor on KRON)"**Maybe in reply to:**Robin Hanson: "Opinions as Evidence: Should Rational Bayesian Agents Commonize Priors"**Next in thread:**Robin Hanson: "Re: Opinions as Evidence: Should Rational Bayesian Agents Commonize Priors"**Reply:**Robin Hanson: "Re: Opinions as Evidence: Should Rational Bayesian Agents Commonize Priors"**Messages sorted by:**[ date ] [ thread ] [ subject ] [ author ]

In a message dated 5/8/01 6:37:53 AM, rhanson@gmu.edu writes:

>You were addressing updating on the fact that someone else received
>the prior they did, not on the fact of yourself receiving the prior
>you did. I again point you to: http://hanson.gmu.edu/prior.pdf or .ps
You make a huge assumption with the consistency assumption: that a prior is obtained by updating the uberprior on the fact of the prior assignment. This assumption is a generalized equivalent of my condition that prior probabilities must be a particular function of the world state. Essentially, you assume that all priors are extremely well founded. An easy example where this fails: the uberprior offers two possible priors (25% for Q vs. 75% for Q) and a 50% chance of Q regardless of the assigned prior. Updating the uberprior on the assigned prior leaves the expectation for world-state probabilities unchanged, and at variance with the obtained prior.
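The arithmetic in that example can be checked directly. A minimal sketch in Python (the variable names and the equal 50/50 split over the two possible priors are my assumptions; the email does not specify the split, but any split gives the same conclusion as long as Q is independent of the assignment):

```python
# Joint uberprior over (assigned prior, world state Q):
# two possible assigned priors, P(Q) = 0.25 or P(Q) = 0.75, each
# assumed equally likely, and Q true with probability 0.5 regardless
# of which prior was assigned.
uberprior = {
    # (assigned_prior, Q_is_true): joint probability
    (0.25, True):  0.5 * 0.5,
    (0.25, False): 0.5 * 0.5,
    (0.75, True):  0.5 * 0.5,
    (0.75, False): 0.5 * 0.5,
}

def posterior_q_given_assignment(assigned):
    """Condition the uberprior on learning which prior was assigned."""
    evidence = sum(p for (a, q), p in uberprior.items() if a == assigned)
    return sum(p for (a, q), p in uberprior.items()
               if a == assigned and q) / evidence

# Updating the uberprior on the assignment leaves P(Q) at 0.5 in both
# cases, at variance with the assigned prior of 0.25 or 0.75.
for assigned in (0.25, 0.75):
    print(assigned, posterior_q_given_assignment(assigned))
```

Because Q is independent of which prior gets assigned, conditioning on the assignment changes nothing, so the agent's obtained prior cannot be recovered by updating the uberprior.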

I haven't proved it yet, but if you apply your condition to my simplified situation, you're going to extract my prior probability requirement. Because your condition is more generally applicable, it's not immediately obvious what an incredibly strong assumption it is.

Another way of stating it: you effectively assume everybody has the same prior (the uberprior), which has only been altered by events in perfect accordance with Bayesian inference, even though the inference process is cloaked, i.e., private information. Given that assumption, yes, it is rational for John and Mary to update their beliefs on each other's priors; the differences between their priors result only from good Bayesian inference on honest data. But that is not my experience of priors at all; as far as I can tell, they're pretty random even where humans should have built-in data, like social interaction, and completely random for, say, chemistry.

Technically, you're right that we agree; we've both shown that a strong restriction on prior assignments is necessary and sufficient to cause Bayesian agents who hold such priors to hold common beliefs on encountering each other. [Technically, I showed it's necessary and you showed it's sufficient, but the inversions are trivial.] Your derivation is more general and complete than mine. However, the restriction is generally false in the real world; this is immediately obvious the way I state it, but not so obvious the way you state it.


*This archive was generated by hypermail 2b30 : Mon May 28 2001 - 10:00:03 MDT*