From: Dan Fabulich (dfabulich@warpmail.net)
Date: Mon Jul 14 2003 - 17:17:33 MDT
Robin Hanson wrote:
> One approach to reducing hypocrisy is preaching, i.e., pointing out to
> people how their actions fall short of their ideals. This seems to have
> mostly played itself out; preaching has been long tried and our ability
> to self-deceive seems robust against it. Perhaps such preaching will be
> more effective when aided by more detailed descriptions of how evolution
> helps us to self-deceive, but I see little evidence of this in the
> behavior of those who best understand evolutionary psychology and
> self-deception. The jury is still out here though.
[I think you're a bit hard on preaching here... We have every reason to
think that hypocrisy would only be higher than it is today if it weren't
for the preachers (coupled with journalists who run exposés, and, to a
lesser extent, the public justice system). I have to agree with you that
the "last mile" solution for hypocrisy won't come from this quarter, but,
as the 80/20 rule goes, preaching was an incredibly important first step.]
> Another approach to reducing hypocrisy is social pressure enhanced by
> more transparency. Our tendency to hypocrisy evolved in small tribes
> where transparency is far higher than in our modern society. But
> perhaps a future society will be even more transparent, for example
> letting us see each other's thoughts. However, it is not clear that
> social pressure always reduces self-deception and hypocrisy; on some
> issues it may well increase hypocrisy.
I think, broadly, that the answer we're looking for is transparency
*coupled* with a capacity to predict/identify consequences. You've argued
in the past that a "transparent" society that allowed high access to each
other's beliefs/goals might create selection pressure for extremely
self-deluded people; I think this would be true *if* we weren't also
able to tell much
of the truth about the rest of the world as well.
For example, suppose a party of politicians were to believe that acquiring
power for themselves would promote the good; suppose as well that they
believed this on account of some bogus science in which they also
believed. Well, if everyone were able to establish clearly that their
claim to power was based on bogus science, all else being equal, this
party would have a very hard time accomplishing its goals.
On the other hand, if it were very hard to tell whether their science was
bogus, then there *would* be pressure to join this party; if mind-reading
were possible, then there would be pressure to self-deceive into believing
this bogus science and working with the party to acquire power.
Eliezer has suggested that self-deception would, in turn, be as easy to
detect as lying. I suspect that detecting it would be no easier than
telling whether someone was simply wrong; that requires you to be able
to tell not only what a person is really thinking, but also what the
truth is.
Of course, if we notice that someone has acquired a belief by means of a
blatantly irrational method, that might be grounds for calling them
self-deceptive; but, more to the point, it'd be grounds for calling them
wrong, once we ourselves employ a rational method for identifying the
truth.
I think the mechanisms of transparency have already been well-discussed in
this context. In the short to medium term, ubiquitous surveillance is the
obvious mechanism; in the longer term there may be some kind of
mind-reading.
So, how do we make sure that everyone is good at identifying the truth?
Well, again, falling back on the old 80/20 rule, education is obviously a
crucial first step. Where would we get the last 20%?
Of course, if you think that intelligence enhancement just *is* an
increased capacity to understand and identify true statements, then this
question has, in fact, been considered at length here on the list; I've
got nothing in particular to add to it here. Still, this is probably more
long-term, along the lines of the "mind-reading" solution. (Nootropic
suggestions notwithstanding.)
In the shorter term... well, I've always been a fan of the Idea Futures
markets as mechanisms for maximizing visibility into what can be an
extremely tricky field. But, IMO, Idea Futures markets are really bets
on the results of what Drexler called a "fact forum", often called a
"science court", especially when run by the government. It
is on these systems of fact-finding that any idea futures markets would
hinge; the markets themselves would just be mechanisms for making their
predicted results quantitative and visible. So, in a sense, when I say
that Idea Futures would be a good idea, I mean that publicized fact forums
would be a good idea.
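To make that "quantitative and visible" bit concrete, here is a toy
sketch (in Python) of how a market on a single yes/no claim might be
scored, using a logarithmic market scoring rule of the sort Robin has
proposed. The claim, the liquidity value, and the trades are all made
up for illustration; the only real-world input would be the fact
forum's eventual YES/NO ruling.

    import math

    # Toy idea-futures market for one binary claim, scored with a
    # logarithmic market scoring rule (LMSR).  The numbers here are
    # hypothetical; settlement would come from a fact forum's ruling.
    class BinaryClaimMarket:
        def __init__(self, liquidity=100.0):
            self.b = liquidity    # depth: higher b = prices move more slowly
            self.q = [0.0, 0.0]   # outstanding YES and NO shares

        def _cost(self, q):
            # LMSR cost function: C(q) = b * ln(e^(q_yes/b) + e^(q_no/b))
            return self.b * math.log(sum(math.exp(x / self.b) for x in q))

        def price_yes(self):
            # The YES price doubles as the market's implied probability.
            e = [math.exp(x / self.b) for x in self.q]
            return e[0] / (e[0] + e[1])

        def buy(self, outcome, shares):
            # outcome 0 = YES, 1 = NO; returns what the trade costs.
            before = self._cost(self.q)
            self.q[outcome] += shares
            return self._cost(self.q) - before

    market = BinaryClaimMarket()
    print(market.price_yes())            # 0.5 -- no information yet
    cost = market.buy(0, 50)             # a trader bets on a YES ruling
    print(round(market.price_yes(), 3))  # ~0.622 -- price tracks belief
    # At settlement, each YES share pays 1 if the forum rules YES.

The point is just that the market price is a running, public probability
estimate of what the forum will rule; the forum itself remains the
actual fact-finding mechanism.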
Another way of putting this is that our scientific results should be a
matter of proofs, only proofs which themselves intimately
involve human idea judges. The judges should be as transparent as
possible, making their systems of dispute resolution as clear as possible
so others can follow the steps in their process for themselves. These
judges should also be well-trained, and should be able to prove as much.
But let's suppose a system like this were in place, that it were the
primary mechanism of social pressure acting on us, and that we in turn
were taking correct actions and claiming to do so based on the findings
of fact forums. At that point, do we care about hypocrisy? It seems to
me that the answer must be no, or at least, that we shouldn't. If so,
that suggests to me that ubiquitous surveillance and mind-reading would
be, at best, optional; certainly there is the plausible argument that
transparency itself could lead us to your third approach:
totalitarianism.
> A third approach to reducing hypocrisy is a more totalitarian democracy.
> Democracy seems to induce people to vote their ideals, since there are
> almost no other personal consequences of your vote besides how that vote
> modifies your self-image. So the more kinds of behavior are dictated by
> a totalitarian democracy, the more such behavior might be dictated by
> shared ideals. This tendency might be restrained by international
> competition, if some ideals make nations lose such competition, but
> democratic world government might be less restrained.
I think most people would agree that, even if this would somehow have the
effect specified, it would be a Bad Outcome. (Certainly all extropians
would, but that's neither here nor there.)
We really have no want or need to eliminate "hypocrisy" as such. We just
want to maximize people doing actions that *actually* have good
consequences. This has a great deal more to do with identifying the truth
and with noticing what people DO than it does with what they believe...
knowing what people believe is, on this picture, only useful insofar as
it helps you to know/predict what someone will do.
In summary, to answer your questions:
> 1) To what extent might we expect humans or their descendants to reduce
> their hypocrisy?
It could be possible if people were better able to identify the truth,
through at least some use of transparency and fact-finding forums.
> 2) If they do, in which direction will the resolution be, toward current
> ideals or actions?
If we ONLY get ubiquitous transparency without any increased capacity to
identify the truth, then the direction will resolve away from ideals by
increasing self-deception. To the extent that we can, in fact, identify
the truth better than we could before, we will resolve towards ideals by
changing our actions.
> 3) Will such a reduction in hypocrisy be a good thing?
Probably not, to the extent that ubiquitous surveillance gets involved;
certainly not to the extent that these mechanisms of surveillance become
part of a totalitarian government.
Hypocrisy itself isn't really what we wanted to fix; all we really wanted
was to get people (especially people "in charge") to act well, regardless
of whether they really believe in the principles they uphold (though, of
course, it helps a great deal if they believe in them).
-Dan
-unless you love someone-
-nothing else makes any sense-
e.e. cummings