From: Robin Hanson (rhanson@gmu.edu)
Date: Wed Jun 18 2003 - 13:28:51 MDT
On 6/18/2003, Hal Finney wrote:
> > we may well evolve to become even *more* self-deceived than we are now,
> > as believing one thing and thinking another becomes even harder than now.
>...
>Wei Dai had made a somewhat similar point ...
>: ... The incentive for self-serving rationalizations becomes much higher
>: when lying is impossible. It's not clear whether this could be prevented
>: by any kind of technology.
>
>I'm not sure though that Robin's and Wei's point(s) are necessarily
>valid; just as people could become economically compelled to tell the
>truth, they might feel equal pressures to seek the truth, that is, to
>not self-deceive. It doesn't do me much good to know that you won't
>lie to me, if I can't tell if you're lying to yourself. ...
>There also might be pressure, along the lines that Robin describes
>for standardization, towards mental structures which are relatively
>transparent and don't allow for self-deception. ...
The key problem is that beliefs seem more local than biases. It should
be easier to show you my current beliefs on a particular matter than to
show you the entire process that led to those beliefs. I could hide my
bias in thousands of different places. So either you need a global
analysis of all my thinking, to verify that the whole is unbiased, or
you have to accept that you can see my beliefs but not my biases.
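To make that locality point concrete, here is a toy sketch in Python
(entirely my own illustration; the running-mean update rule and the
numbers are just assumptions). A belief is the output of a long chain
of update steps, and a small distortion at any one step leaves a final
belief that looks just like an honest one:

    import random

    # Toy model: a belief is the running mean of many evidence updates.
    # A bias can be slipped into any single step; the final belief alone
    # does not reveal whether, or where, that happened.
    def form_belief(evidence, bias_step=None, bias=0.0):
        belief = 0.0
        for i, e in enumerate(evidence):
            if i == bias_step:
                e += bias  # one of thousands of places to hide a bias
            belief += (e - belief) / (i + 1)  # incremental averaging
        return belief

    random.seed(0)
    evidence = [random.gauss(0.0, 1.0) for _ in range(10000)]
    honest = form_belief(evidence)
    shaded = form_belief(evidence, bias_step=137, bias=0.5)
    # The two beliefs differ by only bias/10000 = 0.00005, far below
    # the sampling noise; spotting the bias means auditing every step.
    print(honest, shaded)

Checking the output tells you the belief; only replaying the whole
chain could tell you whether it was formed without bias.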
This suggests a discrete split in our futures. Either agents, or
certain internal modules, become standardized enough for their biases
to be checked, giving truly unbiased agents or modules; or biases stay
hard to see while beliefs do not, and agents self-deceive.
To see that there really is a demand for such self-deception, let's
work through an example. Let us say I know how much I really like my
girlfriend (x), and then I choose my new beliefs (y), under the
expectation that my girlfriend will then see those new beliefs, but not
any memory of this revision process. (Of course it wouldn't really
work like this; analyzing this case is just a simple way to see the
tradeoffs.)
I face a tradeoff. The more confident I become that I like her, the
worse my future decisions will be (due to the difference y - x), but
the more she will be reassured of my loyalty (due to a high y). The
higher my x, the higher a y I am willing to choose in making this
tradeoff. So the higher the y she sees, the higher the x she can
infer. So this is all really costly signaling.
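To see the signaling logic in a worked form, assume (these functional
forms are mine; the tradeoff above does not pin them down) that my
decision loss from a distorted belief is (y - x)^2 and that her
reassurance is worth b*y to me. Maximizing b*y - (y - x)^2 gives
y = x + b/2, so my chosen y rises one-for-one with my true x, and she
can invert the signal:

    # Assumed functional forms (mine, not the post's): decision loss
    # (y - x)**2 from acting on a distorted belief, reassurance worth
    # b*y. The best y maximizes b*y - (y - x)**2.
    def chosen_belief(x, b=1.0):
        # First-order condition: b - 2*(y - x) = 0, so y = x + b/2.
        return x + b / 2.0

    for x in (0.2, 0.5, 0.8):
        y = chosen_belief(x)
        # y rises one-for-one with x, so she can infer x = y - b/2;
        # the decision loss (b/2)**2 is the cost backing the signal.
        print(f"true liking x = {x:.1f} -> chosen belief y = {y:.2f}")

A fuller treatment would let her reassurance depend on the x she
infers from y, but the monotonicity is what makes a high y a credible,
costly signal of a high x.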
Robin Hanson rhanson@gmu.edu http://hanson.gmu.edu
Assistant Professor of Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326 FAX: 703-993-2323