CurtAdams@aol.com wrote:
>
> My personal experience is that priors and private information are only
> weakly informative; i.e., even given that world-state Q obtains, it isn't
> particularly difficult to find uninformed individuals with strong disbelief
> in Q. Given this, the probability of a given degree of belief A in Q varies
> only mildly with whether Q obtains. Hence the information derived from a
> given person holding degree of belief A in Q is small, and a rational
> Bayesian should make only a small change in belief on learning another's
> opinion. Rational Bayesians, then, generally should maintain differences of
> opinion due to differences in priors. Under most circumstances, for two
> agents to commonize priors requires a violation of Bayesian inference.
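
To make the quantitative claim above concrete, here is a minimal sketch in
Python; the numbers are illustrative assumptions, not measurements from the
original post. If the likelihood of observing degree of belief A is nearly
the same whether or not Q obtains, Bayes' rule yields only a small shift in
the posterior on Q.

    # Bayes' rule: P(Q|A) = P(A|Q)P(Q) / [P(A|Q)P(Q) + P(A|~Q)P(~Q)].
    # With a weakly informative likelihood ratio, the posterior barely moves.

    def posterior(prior_q, p_a_given_q, p_a_given_not_q):
        """Posterior probability of Q after observing degree of belief A."""
        joint_q = prior_q * p_a_given_q
        joint_not_q = (1.0 - prior_q) * p_a_given_not_q
        return joint_q / (joint_q + joint_not_q)

    # Illustrative numbers: belief A is only slightly more likely under Q.
    print(posterior(0.50, 0.55, 0.45))  # ~0.55: a small update from 0.50
    # Contrast with a strongly informative observation:
    print(posterior(0.50, 0.90, 0.10))  # 0.90: a large update
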
What your analysis leaves out (I think) is the possibility of symmetry
between the observers. Of course it isn't rational for a perfect Bayesian
reasoner to adjust vis opinions based on what humans think - at most, vis
opinions should be adjusted when ve encounters a human who could plausibly
be drawing vis conclusions from novel but correct information.
However, a human, encountering another human, must consider the
possibility of an internal mistake as well as an external mistake. That
all humans evaluate themselves as having considerably above-average
meta-rationality - and hence a considerably lower-than-average probability
of underestimating how likely an internal mistake is - is not compatible
with rationality on the part of all observers; it implies an evolved bias
to overestimate one's own rationality or meta-rationality.
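
As a toy illustration of that incompatibility (the score distribution and
the uniform "considerably above average" self-report are assumptions chosen
for illustration): if every agent's self-assessed meta-rationality exceeds
the true population average, the self-assessments cannot all be accurate,
since accurate self-assessments would average out to the population average.

    # Toy consistency check: everyone reporting "considerably above average"
    # meta-rationality is incompatible with everyone reporting accurately.

    import random

    random.seed(0)
    true_scores = [random.gauss(0.5, 0.1) for _ in range(1000)]
    true_mean = sum(true_scores) / len(true_scores)

    # Assumed self-reports: each agent claims to be well above average.
    reports = [true_mean + 0.2 for _ in true_scores]
    report_mean = sum(reports) / len(reports)

    # Accurate reports would satisfy report_mean == true_mean; instead the
    # reported mean exceeds the true mean, so the reports are biased upward.
    print(report_mean - true_mean)  # 0.2: a systematic overestimate
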
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence