In a message dated 5/21/01 5:59:22 PM, firstname.lastname@example.org writes:
>You are in essence arguing that there can be no further constraints on
>what beliefs are rational than standard probability coherence and updating
>by conditionalization. Your argument is that if you found that your beliefs
>violated some constraint, you could not change your beliefs to avoid the
>violation because doing so would violate conditionalization.
Not at all! I'm not saying there's never a reason to violate Bayesian
conditionalization (Bc), just that if you do, you're violating Bc and are not
being entirely Bayesian. I think people violate Bc all the time for
computational reasons. I do consider Bayesian inference the ideal, so I want
to see a reason to give it up before I do so. If, for example, maintaining
an independent prior resulted in getting Dutch booked or taken advantage of
in an idea market, then that could be a legitimate reason to ditch Bayesianism.
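The Dutch book worry here can be made concrete with a small sketch of the standard diachronic (Lewis-style) book. All numbers are illustrative, and the setup assumes the agent announces in advance a posterior q for A-given-B that differs from her current conditional credence p:

```python
# Diachronic Dutch book against an agent who predictably deviates from
# conditionalization. All credences here are made-up illustrative numbers.
b = 0.5   # agent's current credence in B
p = 0.8   # agent's current conditional credence in A given B
q = 0.6   # posterior for A the agent predictably plans to adopt if B occurs
          # (q != p: a predictable violation of conditionalization)

def agent_net(B_occurs, A_occurs):
    net = 0.0
    # Bet 1 (today): conditional bet on A given B.
    # Agent pays p; receives 1 if A-and-B, refund of p if not-B.
    # Fair by the agent's lights at conditional credence p.
    net -= p
    # Bet 2 (today): pays (p - q) if B. Fair price is (p - q) * b.
    net -= (p - q) * b
    if B_occurs:
        net += (p - q)   # bet 2 pays off
        # Bet 1 is now a straight bet on A. The agent's new credence is q,
        # so she willingly sells it back to the bookie for q; the bookie,
        # not the agent, keeps the exposure to A.
        net += q
    else:
        net += p         # conditional bet refunded
    return net

outcomes = [agent_net(B, A) for B in (True, False) for A in (True, False)]
# The agent loses (p - q) * b = 0.1 in every state of the world.
```

The key feature is that the bookie needs no information the agent lacks: the predictable gap between p and q alone guarantees the sure loss, which is why unpredictable (zero-mean) deviations escape this book.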
>This flies in the face of a long philosophy literature considering further
>constraints on rational beliefs. And I think it takes a good idea too far.
>Violating conditionalization only opens you to a dutch book if you do it
>in a predictable way. Your expectation of your future belief must equal
>your conditional belief, but a non-zero variance is allowed around this.
You are correct; my statement was too strong. However, adding variability
to post-conditionalization opinions will on average worsen disagreement, so
that won't serve to coordinate priors. You've also neglected the other way
to avoid strict Bayesianism, which is simply to refuse the bets. I think
this would be the only way out of some hypothetical inconsistent/unusable
prior set - the agent must either refuse the actions that expose the
problems with the prior or refuse bets on their responses to having bad
priors.
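Both halves of the point above can be checked numerically. The sketch below uses my own illustrative setup (a single coin flip as evidence, not anything from this exchange): the expected posterior equals the prior (the martingale property you mention), while adding independent zero-mean noise to two agents' post-conditionalization opinions strictly increases their expected squared disagreement:

```python
import random

random.seed(0)

# Illustrative setup: hypothesis H says the coin lands heads with
# probability 0.7; if H is false, heads has probability 0.3.
PRIOR = 0.5
LIKE_H, LIKE_NOT = 0.7, 0.3

def posterior(heads):
    """Bayes' rule for P(H | one flip), starting from PRIOR."""
    lh = LIKE_H if heads else 1 - LIKE_H
    ln = LIKE_NOT if heads else 1 - LIKE_NOT
    return PRIOR * lh / (PRIOR * lh + (1 - PRIOR) * ln)

N = 100_000
mean_post = 0.0   # running mean of the posterior (martingale check)
msd_noisy = 0.0   # mean squared disagreement after noisy updates
SIGMA = 0.05      # std dev of the noise each agent adds to its posterior

for _ in range(N):
    h_true = random.random() < PRIOR
    heads = random.random() < (LIKE_H if h_true else LIKE_NOT)
    p = posterior(heads)          # both agents see the same flip
    mean_post += p / N
    # Exact conditionalizers would agree: (p - p)**2 == 0.
    a = p + random.gauss(0, SIGMA)
    b = p + random.gauss(0, SIGMA)
    msd_noisy += (a - b) ** 2 / N

# mean_post comes out close to PRIOR (expectation of the future belief
# equals the current belief), while msd_noisy comes out close to
# 2 * SIGMA**2: the noise evades the Dutch book but creates disagreement.
```

So unpredictable deviation is Dutch-book safe, as you say, but it manufactures disagreement between agents who started from a common prior and saw the same evidence.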
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 10:00:07 MDT