Hal F. writes:
>Robin Hanson has an interesting argument that factual disagreements should
>not persist. However, I think it takes as one of its premises that people
>believe that others are basically rational. If that is the case, then
>over time people should modify their beliefs in the face of persistent
>disagreement. Disagreements about factual matters should not be stable.
>
>What if people hold the contrary belief, that there are significant
>numbers of other people who are stubbornly irrational? It would seem
>that we might have a stable outcome similar to what we actually see:
>people collecting into subgroups with shared opinions, where they believe
>other members of their group are rational (at least on this issue!).
>However the existence of significant groups with other opinions does
>not lead them to change their ideas because they simply assume that the
>others are irrational.
First, a clarification: I did not originate the argument that persistent disagreement is irrational; I only extended that argument from Bayesian agents to arbitrarily computationally constrained agents.
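
A toy sketch of the contrast Hal draws (just an illustration, not a formal model from either argument): each agent holds a probability estimate for some factual claim and repeatedly moves it toward the average estimate of everyone it regards as rational.

# Toy sketch, not a formal model: agents update toward the average
# estimate of every agent they regard as rational (including themselves).

def run(opinions, trusts, rounds=50, weight=0.5):
    """opinions[i]: agent i's estimate; trusts[i][j]: does i deem j rational?"""
    ops = list(opinions)
    for _ in range(rounds):
        new = []
        for i, p in enumerate(ops):
            # average over the agents that i considers rational
            peers = [q for j, q in enumerate(ops) if trusts[i][j]]
            new.append((1 - weight) * p + weight * sum(peers) / len(peers))
        ops = new
    return ops

opinions = [0.9, 0.8, 0.2, 0.1]                  # two opposed camps
everyone = [[True] * 4 for _ in range(4)]        # all deemed rational
tribal = [[j // 2 == i // 2 for j in range(4)] for i in range(4)]

print(run(opinions, everyone))   # all four estimates converge near 0.5
print(run(opinions, tribal))     # camps settle at 0.85 and 0.15; gap is stable

With the "everyone" trust matrix all four estimates converge; with the "tribal" matrix each camp converges internally while the gap between camps persists, which is just the stable outcome Hal describes.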
Your theory has a certain plausibility, but I'm bothered when I think about
how these people evaluate whether they are rational or not. Imagine a
rational but dumb mind core that makes use of lots of smart but perhaps
irrational mental modules. If the core evaluates the rationality of its
modules on the basis of private information about how well those modules
have met various rationality tests, we're back in the same situation:
it is irrational for two such mind cores to agree to disagree about the
rationality of either one's modules.
This suggests that the sort of irrationality needed here must be very
deep; there can't be a rational but dumb part of you that considers
whether parts of you are irrational.
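
The same toy dynamic applies one level up (again only a sketch, under the same assumptions as above): if two rational-but-dumb cores exchange their estimates of whether a disputed module is rational, those estimates converge too, so the standoff over who is irrational cannot itself be stable at the core level.

# Same sketch one level up: two cores exchange their estimates that a
# disputed mental module is rational.  If each core treats the other
# core as rational and updates on its announcement, the assessments
# converge rather than remaining in a stable standoff.

def exchange(p, q, rounds=20, weight=0.3):
    """p, q: each core's estimate that the disputed module is rational."""
    for _ in range(rounds):
        p, q = p + weight * (q - p), q + weight * (p - q)
    return p, q

print(exchange(0.9, 0.1))   # both estimates end essentially at 0.5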
Robin Hanson
hanson@econ.berkeley.edu http://hanson.berkeley.edu/
RWJF Health Policy Scholar, Sch. of Public Health 510-643-1884
140 Warren Hall, UC Berkeley, CA 94720-7360 FAX: 510-643-2627