Re: How You Do Not Tell the Truth

From: CurtAdams@aol.com
Date: Wed May 02 2001 - 01:17:37 MDT


[Robin posts a paper by himself and Tyler Cowen on the implications of
disagreements]
http://hanson.gmu.edu/deceive.pdf or .doc

I have no idea where to begin. I'm having such trouble formulating coherent
stuff these days. I'm not going to quote from the paper as there's too much.
 I *strongly* recommend people read the paper, even if you have no interest
in what I say. http://hanson.gmu.edu/deceive.pdf or .doc

I mean it. Read the damn paper. http://hanson.gmu.edu/deceive.pdf or .doc
(Robin, could you make a Web version?)

I agree with the conclusion of the paper - that people form ideas for their
personal benefit, not in search of truth. If gene a induces carriers to seek
truth and gene b to have more babies, which wins? If meme a induces carriers
to seek truth and meme b to proselytize, which wins? Necessarily, we are
constructed by entities whose goals need not include truth at all, and will
include it only as a means to an end. Science has constructed an elaborate
set of principles to enhance truth-seeking, such as hypothesize-and-test, peer
review, and consideration of as many alternatives as possible. The existence
of the scientific method leads directly to two observations: 1) that it's
needed in the first place and 2) that it is often inadequate. Hence humans
carry strong anti-truth biases.
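
To make the selection point concrete, here is a toy two-type replicator
sketch in Python (my own illustration and numbers, not from the paper): a
"truth-seeker" type with no fitness edge loses frequency to a "breeder"
type with even a modest one, regardless of which type actually tracks
truth.

    # Toy replicator model: the 5% fitness edge is an arbitrary assumption.
    def truth_seeker_share(generations, w_truth=1.00, w_breeder=1.05, start=0.5):
        """Fraction of truth-seekers left after repeated rounds of selection."""
        truth, breeder = start, 1.0 - start
        for _ in range(generations):
            truth *= w_truth          # truth-seekers reproduce at rate w_truth
            breeder *= w_breeder      # breeders reproduce at rate w_breeder
            total = truth + breeder
            truth, breeder = truth / total, breeder / total
        return truth

    for g in (0, 10, 50, 200):
        print(f"after {g:3d} generations, truth-seekers: {truth_seeker_share(g):.3f}")

Run it and the truth-seeking type dwindles toward zero; swap memes in for
genes and the same arithmetic applies to proselytizing.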

I think, though, that the paper overlooks other limitations on human Bayesian
behavior, primarily limited computational ability. I would say most of the
non-Bayesian behavior you outline can be explained by human inability to be a
good Bayesian, as well as by the fact that people aren't trying to be good
Bayesians in the first place.

First, as I've said before, people aren't good Bayesians. Being a good
Bayesian requires logical omniscience. People are very, very far from this,
as exemplified by behavior in psychological cooperator/defector games. People
start off with "obviously" wrong actions - typically too cooperative,
although that varies by game, and shift to proper behavior only with
experience. People are good Bayesians only when the situation is simple and
they can manage the computation.
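
The "logical omniscience" problem isn't just a manner of speaking. Here's
a minimal Python sketch (my example, not the paper's) of what exact
Bayesian bookkeeping costs: updating over an explicitly enumerated
hypothesis space is easy for three binary propositions and physically
impossible for a few hundred.

    from itertools import product

    def exact_update(prior, likelihood_of, datum):
        """One exact Bayesian update over an enumerated hypothesis space."""
        posterior = {h: p * likelihood_of(datum, h) for h, p in prior.items()}
        z = sum(posterior.values())
        return {h: p / z for h, p in posterior.items()}

    # Tractable toy case: 3 binary propositions, 8 joint hypotheses.
    hyps = list(product([0, 1], repeat=3))
    prior = {h: 1 / len(hyps) for h in hyps}
    likelihood = lambda datum, h: 0.9 if h[0] == datum else 0.1  # datum bears on proposition 0
    posterior = exact_update(prior, likelihood, datum=1)
    print(posterior[(1, 0, 0)])   # each hypothesis consistent with the datum: 0.225

    # The same bookkeeping for n propositions needs 2**n entries:
    for n in (3, 30, 300):
        print(f"{n:4d} binary propositions -> {2 ** n:.3e} joint hypotheses")

Nobody carries that table around, which is why behavior only approaches
the Bayesian ideal when the situation is simple enough to compute.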

Bayesian behavior is correct, of course, in the sense of invulnerability to
Dutch book. Humans can't be good Bayesians, and they know it. As a result,
they take a logical route - they refuse Dutch book situations. This is one
of two reasons why groups don't have to exhibit group Bayesian behavior - groups
typically refuse book. If they had to place bets, they'd place them among
themselves because they'd get better deals. The other reason is that each
member is typically interested in his own benefit - group be ****ed, he's
going to make the best bet he can.
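
For anyone who hasn't met the term, "Dutch book" just means a set of bets
that loses money no matter what happens. A standard textbook illustration
in Python (not from the paper): if your odds over mutually exclusive,
exhaustive outcomes sum to more than 1, a bookie can sell you a ticket on
every outcome and pocket the difference with certainty - and the cheap
defense is exactly the one groups use, refusing to make book at all.

    def bookie_guaranteed_profit(probabilities, stake=100.0):
        """Bookie's sure profit from selling a `stake`-payout ticket on each outcome."""
        if sum(probabilities.values()) <= 1.0:
            return 0.0  # coherent (or sub-additive) odds: no sure-thing exploit
        collected = sum(p * stake for p in probabilities.values())
        return collected - stake  # exactly one ticket pays out `stake`

    incoherent = {"rain": 0.6, "no rain": 0.6}           # sums to 1.2
    print(bookie_guaranteed_profit(incoherent))          # 20.0, whatever the weather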

Resolving disputes by commonizing priors is a cost-benefit negative. Changing
priors is horrendously expensive, in the sense that all those mind-expensive
probability calculations must be redone. At the same time there's no benefit
- one prior is as good as another. High cost, no benefit - of course people
don't bother and agree to disagree. The only time people will commonize
priors is if they have to - say, to participate in a market. Non-Bayesian
human behavior feeds into maintaining independent priors, as well. If we
were both good Bayesians I'd be more willing to accept your prior as an
alternative to mine. But I know that you don't want to be a good Bayesian
and couldn't be if you tried. I'd be a fool to swallow your priors, and vice
versa.
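
A hedged sketch of that cost-benefit point (my illustration, not the
paper's): two people with different priors who see the same evidence end
up with different posteriors, and "commonizing" would force each of them
to rerun this kind of calculation for every belief already conditioned on
the old prior.

    def posterior(prior_h, p_e_given_h, p_e_given_not_h):
        """P(H | E) by Bayes' rule for a single binary hypothesis."""
        p_e = prior_h * p_e_given_h + (1 - prior_h) * p_e_given_not_h
        return prior_h * p_e_given_h / p_e

    evidence = (0.8, 0.3)               # same likelihoods for both observers
    print(posterior(0.5, *evidence))    # John's prior 0.5  -> posterior ~0.73
    print(posterior(0.1, *evidence))    # Mary's prior 0.1  -> posterior ~0.23

Same data, persistent disagreement; and since neither posterior is
"wrong" given its prior, the recomputation buys nothing.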

The John and Mary example also assumes John and Mary can efficiently exchange
information. This doesn't match human experience. It's very hard to
communicate experience - language is a low-capacity channel compared to
eyeballs and fingertips. Time is valuable. John hasn't time to get all
Mary's data, or vice versa. There's also a significant risk to exchanging
data. Human memories are not tape recorders, but actively constructed. To a
certain extent, if Mary says she saw something, John will remember seeing it,
even if he didn't. Communication between people involves a certain amount of
data corruption due to this effect. Sequential communication eventually
converts data into urban legends, stuff that's particularly easy for humans
to remember or pleasant to recount, with little or no connection to truth.

Robin and Tyler mention the predilection of scientists to support their own
theories even if evidence is against them. This is logical behavior from the
personal gain POV. The inventor or popularizer of a new theory gains great
benefit in science. Supporters of existing theories gain little benefit.
Hence novel theories *should* be supported by their inventors or
rediscoverers, even if the evidence is against them, as long as evidence
isn't *too* much against them. Robin and Tyler attribute scientific
intransigence to the group benefit of considering multiple alternatives,
which is real, but selection on individuals explains the behavior more
directly than selection on groups does. I propose that the group would
benefit more if people supported
theories developed by other people, due to greater skepticism. Hence I think
the behavior exists primarily for individual, not group, benefit.

Sorry my comments are so negative. I really liked the paper. But, you know,
I gain little from accepting your conclusions and a lot if I get to put out
something successful of my own ... :-)


