Re: Why believe the truth?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Jun 16 2003 - 21:27:22 MDT

    Robin Hanson wrote:
    > On 6/16/2003, Eliezer S. Yudkowsky wrote:
    >> ...
    >> It's one thing to admit the philosophical possibility that there are
    >> hypothetical scenarios where the "right" thing to do is believe
    >> falsely, just as it is possible to construct thought experiments where
    >> the right thing to do is commit suicide or overwrite your own goal
    >> system. .... The rule "Just Be Rational" is more reliable than you
    >> are. Sticking to the truth is a very simple procedure, and it is more
    >> reliable than any amount of complex verbal philosophization that tries
    >> to calculate the utility of truth in some specific instance.
    >
    > It would be nice if what you said were true, but alas I think it is
    > not. We actually usually follow the rule "Just be Natural". Evolution
    > has equipped us with vast and complex habits of thought, which people
    > mostly just use without understanding them. This isn't a particularly
    > dangerous strategy for the usual evolutionary reasons: your ancestors
    > did pretty well following these habits, so you probably will too.

    That works to the extent that:

    1) Your sole and only goal is to reproduce statistically more frequently
    than your contemporaries, above all other considerations including
    individual survival.
    2) You are in the ancestral environment.
    3) You are playing the ancestral game. If you live forever, for example,
    then developing your personal potential to the fullest may call for quite
    different rules than if you are only granted 40 years or so.

    Even if we consider only personal survival, what are, on the average, the
    greatest probable threats to the survival of any given human living today?

    1) Existential risks of transhuman technologies.
    2) Answering "no" to an FAI who asks "Do you want to live forever?"

    All of a sudden, engaging in group self-deception does not look quite as
    clever. Why not? Because listed above are two major risks, completely
    unexpected, completely unancestral, emerging from left field. You and I
    know that *specific* information. But the only heuristic that could
    prepare someone ignorant of that specific information is: "Just Be
    Rational, Or The Unknown Variables Will Devour Your Soul." The more
    complex things become, the more nonancestral they become, the more all the
    evolved *distortions* of reasoning become anti-useful instead of useful.
    The distorting information that evolution imposes is very specific, very
    adapted, very special-purpose.

    Anyone who anticipates living forever should, I think, Just Be Rational,
    because they are making decisions for the long term.

    Anyone trying to solve a complex nonancestral problem should Just Be Rational.

    Anyone trying for any goal aside from reproduction, especially any
    altruistic goal, should Just Be Rational.

    Anyone with ambitions more complicated than pumping out babies and then
    dying should Just Be Rational.

    Probably anyone in a First World country should Just Be Rational. (I
    don't know enough about Third World countries to say.)

    Anyone with a significant chance of seeing a Singularity in their lifetime
    should Just Be Rational.

    And of course, anyone who wants to retain their self-respect should Just
    Be Rational.

    > ...They are natural. They are safe strategies...

    The two are not synonymous at all. What's the third greatest threat after
    the above two, in the First World? Heart disease. What is the cause of
    heart disease? Taste buds containing obsolete statistical information
    about "the correlation between taste and the satisfaction of nutritional
    demand for environmentally available foods".

    > You might tell your girlfriend that she is average among the girlfriends
    > you have had, and that you think you are likely to stay with her for the
    > average time. You might admit you are an average lover and driver, that
    > the products you sell are average, and that your in-group is no more
    > morally justified than any other group. You might rarely disagree, and
    > accept that others are right as often as you. You might admit that you
    > care almost nothing about poor people in Africa.

    If you plan on living forever and growing up, then admitting to such
    problems is the first step toward actually doing something about them.

    It is rather like the argument for believing in an afterlife because it
    comforts you against the fear of death. It sounds very clever, but y'know,
    that sort of clever-sounding trick rarely if ever works in real life. It
    happens that we have the specific knowledge of how this trick fails. We
    know that, poof, all of a sudden, out of left field, comes the
    possibility of immortality... and suddenly, believing in an afterlife is
    not "comforting"; it is deadly. It can kill you. The net yield of all
    the decisions ever made to believe in an afterlife may turn out to be
    steeply negative, even if we concede that their average value in earlier
    times was positive, which, incidentally, I do not concede. We have that
    specific knowledge... but suppose we didn't? It is very specialized
    knowledge, after all. You have to be able to read far ahead to see the
    *specific* way in which this clever-sounding choice ends up killing you.
    The *general* heuristic which would save us from this mistake: Don't try
    to be clever about your beliefs; just be rational.

    Even if you should successfully sum up all the risk factors for whether or
    not it is wise to deceive yourself in order to keep a girlfriend (editor's
    comment: if you do this, *something* is wrong with your life)... that's
    you, Robin Hanson, summing up all those risk factors. Look at all the
    knowledge you needed to do that. Look at where you got that knowledge
    from. You had to know what the bias was, where it came from, how it
    worked, its evolutionary origin and cognitive mode of action, game theory,
    Bayesian reasoning, evolutionary psychology... by the time you could make
    an informed decision about it, you no longer had the option of deceiving
    yourself; you knew too much. Even if we suppose that it might *turn out*
    to make sense to deceive yourself *in that instance*... how's J. Layman
    Boyfriend supposed to compute that answer? If he is ignorant enough to
    still have the opportunity of self-deception, how can he possibly know
    enough to figure out whether self-deception is safe?

    What makes you think that you and I now finally know enough to see all the
    threats to which ignorance gives rise? You won't ever see the bullet that
    kills you.

    -- 
    Eliezer S. Yudkowsky                          http://singinst.org/
    Research Fellow, Singularity Institute for Artificial Intelligence
    

