"Eliezer S. Yudkowsky" wrote:
> The problem, Robin, is that, even under your theory, the observed data is
> consistent with a world in which rational people do exist, but are so
> sparsely distributed that they rarely run into each other.
> ... the three-sigma level of a Foresight
> Gathering is still not enough intelligence to eliminate self-deception.
> I'd pick the third possibility. Foresight Gatherings do show *some* ...
Some perhaps, but not enough to give great hope about a six-sigma group.
> But anyway, let me see if I can predict the general reaction to your
> paper's abstract:
The *general* reaction is probably "huh?"; "what's for lunch?" :-)
So I'll presume you mean some set of especially intellectual folks.
> "... your paper does not ... change to my underlying model, in which *I* am
> one of the sparsely distributed rational people." ... the
> fact of *observing yourself* to think up this particular counterargument
> does not license you to conclude that you are rational. ...
> "My experience in the Korean War enabled me to stop being self-deceptive"
> Of course, ... An external observer, though, is likely
> to abstract away the idea of an extra added underlying cause and see all
> proffered excuses as identical, or belonging to the same class ...
> In this way, we finally arrive at a situation in which some observers may
> reason themselves into a corner from which *no* utterance allows you to
> conclude that a party is not being silly, even if that party is really and
> truly Not Silly. ...
I have hope that the situation is not quite this bad. As I hinted at briefly in
the paper, we do have a number of reasonable candidate objective signs
of self-deception. Let you and me first look at third parties, and find signs
which indicate which of them are more or less self-deceived. *Then* let
us apply those signs to ourselves to see which of us is more self-deceived.
Yes, if your self-deception is flexible and foresighted enough, it might
anticipate which signs will make you look good and bias your evaluation
of third parties. Which is a good reason to defer to other people's
research on self-deception and its signs, rather than favoring your own.
> ... Even an actual genius, if she comes out
> and says "I am a genius", will be plugged into a Bayesian prior that
> estimates a million-to-one chance for genius and a ten-to-one chance for
> self-overestimation, producing an estimated prior of 100,000:1 that the
> speaker is an overconfident fool.
This gives a glimpse of just how different it might be to live in a world
of mostly meta-rational posthumans.
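The odds calculation in the quoted paragraph can be sketched directly; the specific numbers are the ones given in the text, and the simplifying assumption that both geniuses and self-overestimators assert "I am a genius" with certainty is mine:

```python
# Numbers from the quoted text:
# one person in a million is a genius; one in ten is a self-overestimator.
genius_rate_denominator = 1_000_000
fool_rate_denominator = 10

# If both kinds of speaker make the claim with certainty, the likelihoods
# cancel, and the posterior odds (fool : genius) among people who say
# "I am a genius" are just the ratio of the two base rates.
odds_fool_to_genius = genius_rate_denominator // fool_rate_denominator
print(f"{odds_fool_to_genius:,} : 1")  # prints "100,000 : 1"
```

So the claim itself, heard in isolation, shifts an observer toward "overconfident fool" by exactly the gap between the two base rates.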
> ... I also find that Robin Hanson's more recent
> papers bear a remarkable resemblance to concepts that appear in "Friendly
> AI". Since self-deception and stupidity generally allow for arbitrary
> factors to creep in, the fact of convergence probably implies that one or
> both of us is getting smarter and less self-deceptive.
Or just converging on the same path of error. Let us hope that, like
families, every dysfunctional mind is dysfunctional in its own
unique way. :-)
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 10:00:02 MDT