Re: Opinions as Evidence: Should Rational Bayesian Agents Commonize Priors

From: CurtAdams@aol.com
Date: Mon May 21 2001 - 17:17:36 MDT


In a message dated 5/14/01 10:56:52 AM, rhanson@gmu.edu writes:

>As I said in my last message, "I was making a *normative* argument about
>rational beliefs, not a descriptive model of actual beliefs." If there
>are constraints on what beliefs are rational, then upon discovering
>that your beliefs violate those constraints, you should want to change
>your beliefs to avoid those violations. This sort of change seems
>perfectly rational to me, even if it violates a naive Bayesianism.

Bayesianism is a particular process for updating beliefs based on evidence.
Maintaining your priors isn't a part of "naive" Bayesianism; it's an
essential part of Bayesianism, period. Change your beliefs any other way and
you're Dutch-bookable: someone can offer you a set of bets, each acceptable
by your own lights, that together guarantee you a loss. If your claim is
that Bayesians are naive and you have a better way, fine, but a) that needs
justification and b) be upfront about it.
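
To make the mechanics concrete, here is a minimal sketch (toy numbers of my
own, nothing from Robin's paper) of how the prior enters every update: it is
only ever conditioned on, never revised directly, so two agents who see
identical evidence disagree exactly insofar as their priors differ.

    # Posterior P(H|E) from a prior P(H) and likelihoods P(E|H), P(E|~H).
    def bayes_update(prior, like_h, like_not_h):
        joint_h = prior * like_h
        joint_not_h = (1 - prior) * like_not_h
        return joint_h / (joint_h + joint_not_h)

    # Same evidence, different priors: the posteriors differ, and the
    # difference traces back entirely to the priors.
    print(bayes_update(0.2, 0.9, 0.3))   # ~0.43
    print(bayes_update(0.6, 0.9, 0.3))   # ~0.82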

>>You assume everybody agrees on world_state_function/belief_state and on
>>probability(belief_state). It's trivial to derive a belief-state-independent
>>world state function (i.e. a prior) just by averaging
>>world_state_function/belief_state by probability(belief_state). So you do
>>assume everybody has the same uberprior.
>
>I don't follow you. My short paper http://hanson.gmu.edu/priof.pdf
>describes uberpriors as q_i, where the i subscript allows different
>people to have different uberpriors. I do not require q_i = q_j.
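
To spell out the averaging I had in mind (a toy sketch with made-up numbers;
the mechanics are just the law of total probability):

    # Marginalize the belief-state-conditional world model over belief
    # states to get a belief-state-independent prior:
    #   prior(w) = sum over b of P(w|b) * P(b)
    p_belief = {"b1": 0.3, "b2": 0.7}           # probability(belief_state)
    p_world_given_b = {"b1": 0.9, "b2": 0.4}    # world_state_function/belief_state
    prior = sum(p_belief[b] * p_world_given_b[b] for b in p_belief)
    print(prior)                                # 0.3*0.9 + 0.7*0.4 = 0.55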

I apologize for misreading your assumptions. I knew the assumption of shared
uberpriors was in there and missed that it was implicit rather than
explicit. It's a consequence of the extremely strong consistency assumption.
Your consistency assumption requires that all possible views be consistent
with my view, in the sense that they derive from my uberprior with only the
evidence of their own existence. As I showed, this requires that the belief
probabilities follow the distribution function f(A) = cA/(1-A). Beliefs from
that distribution satisfy your consistency assumption only with an uberprior
of 1/(1+c).
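
One way to check the arithmetic (this is my reading of the condition, so
take it as a sketch rather than Robin's formalism): read f(A) = cA/(1-A) as
the likelihood ratio carried by the evidence "someone holds belief A".
Bayes' rule then hands back the belief A itself exactly when the uberprior
is q = 1/(1+c):

    def posterior(A, c):
        q = 1.0 / (1.0 + c)      # candidate uberprior q = 1/(1+c)
        L = c * A / (1.0 - A)    # f(A) = cA/(1-A), read as a likelihood ratio
        return L * q / (L * q + (1.0 - q))

    for c in (0.5, 1.0, 3.0):
        for A in (0.1, 0.5, 0.9):
            assert abs(posterior(A, c) - A) < 1e-12   # posterior equals belief

Any other q breaks the identity, which is the sense in which that
distribution pins the uberprior down to 1/(1+c).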

Under the more usual assumption that uninformed priors are not highly
informative, the only prior that satisfies the consistency assumption
together with my prior is my prior itself. So your consistency assumption
holds, in effect, that either a) all beliefs result from highly informative
brainstates, reasoning from a shared uberprior, or b) everybody has the same
prior. Neither holds for people. Your consistency assumption could hold only
for interactions between backups of one perfectly rational individual.
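
(Spelling out that step, on the same reading as above: if the evidence of a
belief's mere existence is nearly uninformative, its likelihood ratio L is
close to 1, so the posterior Lq/(Lq + 1 - q) stays close to q itself;
consistency demands the posterior equal my belief, which forces the
uberprior q to equal my prior.)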

If you are maintaining that human beings should commonize priors, your own
result shows this cannot hold if people are Bayesian (given what we know
about people). You've been suggesting that Bayesian humans should coordinate
priors by Bayesian reasoning; but based on your work, that has to be
modified to a recommendation that some currently unspecified groups of
non-Bayesian people should coordinate priors by some currently unspecified
non-Bayesian method. Call these unspecified groups and methods "Hansonists"
and "Hansonism". From what you've said in the past, Hansonists should be
"people who follow Bayesian inference except that they coordinate priors
with other Hansonists." My best guess for Hansonism is "the procedure that
results from participation in an idea futures market," but I'm less sure of
this, since that's not formally defined.


