Re: Opinions as Evidence

CurtAdams@aol.com
Thu, 27 Mar 1997 03:01:28 -0500 (EST)


hanson@hss.caltech.edu (Robin Hanson) writes:

>CurtAdams@aol.com writes:
>>>Is opinion divergence on matters of fact, where each side knows of the
>>>other's opinion, really ubiquitous among ordinary people?
>>
>>Well, yes, it is ubiquitous. For starters, a large percentage of the US
>>population believes the Bible is *literally* true, including such patently
>>ridiculous ideas as the world being created only 6,000 years ago.

>This is your best example. Here one is tempted to postulate broken
>cognitive processes.

>>Most people think that a die is less likely to come up 6 if it has
>>come up 6 the last three times. Many people think that the Social
>>Security Trust fund is significant when compared with Social
>>Security's debt. Most people think that when they buy a house they're
>>"building equity", when in reality 98% of what they pay goes to the
>>bank. Most people believe that the greatest danger to their children
>>is posed by strangers (when children are molested, kidnapped or
>>killed, it's almost always friends or family). I could go on forever
>>like this.

>These are cases where the public thinks one thing, and experts think
>another. Here it is not clear that the public knows of the expert's
>opinion. Without that, it is not an example of the phenomenon at issue.

I think it's quite germane. The point is that most people use "broken"
cognitive processes (i.e., they are not acting at all like Bayesian
agents), and that when they consult other people's opinions they are
consulting other people (the public as a whole) who mostly are not acting
like Bayesian agents either. So their opinions are not well formed, and
are not good evidence.

Many of the examples I quoted involve information which is quite easy to
research and which is genuinely relevant to ordinary people. My point is
that people generally do not act like good Bayesian agents in everyday
circumstances. I see no reason why experts in general are any different.
My impression of scientists is that they mostly learn to be passable
Bayesian agents by having their ideas repeatedly disproved, until they
learn to approach even plausible, compelling ideas with scepticism. In the
social sciences and humanities this school of hard knocks really only
applies to a few fairly quantitative subspecialties.
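The mortgage example quoted above is easy to check with ordinary
amortization arithmetic. A sketch, using assumed terms I picked for
illustration (a 30-year fixed loan at 8%, roughly the rates of the
period): in the first year, most of each payment is interest rather than
equity, though the exact share depends on the terms.

```python
# Hedged sketch: fraction of first-year mortgage payments going to
# interest, under assumed terms (30-year fixed at 8% annual rate).
def monthly_payment(principal, annual_rate, years):
    r = annual_rate / 12          # monthly rate
    n = years * 12                # number of payments
    return principal * r / (1 - (1 + r) ** -n)

def interest_share_first_year(principal, annual_rate, years):
    pay = monthly_payment(principal, annual_rate, years)
    balance, interest_paid = principal, 0.0
    for _ in range(12):
        i = balance * annual_rate / 12   # this month's interest
        interest_paid += i
        balance -= pay - i               # the rest reduces principal
    return interest_paid / (pay * 12)

share = interest_share_first_year(100_000, 0.08, 30)
print(f"{share:.0%} of first-year payments is interest")
```

Under these assumed terms the first-year interest share comes out around
90%, not 98%, but the qualitative point stands: early on, very little of
each payment builds equity.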