Re: Why believe the truth?

From: Dan Fabulich (dfabulich@warpmail.net)
Date: Mon Jun 16 2003 - 20:23:09 MDT

    Robin Hanson wrote:

    > I edited out more of the discussion above because it all comes down to
    > one simple point. Yes, truth is instrumentally useful in various ways,
    > but unless you assign it overwhelming primary importance, it is easy to
    > find situations where you should be willing to give up the instrumental
    > advantages of truth to obtain other things. Such as in marriage,
    > promoting group loyalty, and much else. Can't you imagine any such
    > situations?

    I'd like to add another argument to Eliezer's "Just Be Rational" argument.
    (This is apparently one of those rare occasions when intelligent people
    remind me of Nancy Reagan.)

    Robin, you suggest that you can think of a non-trivial number of
    situations in which we should be willing to give up the instrumental
    advantages of truth to obtain other things, perhaps even in day-to-day
    life. I think that the sort of cases you might be considering fall into
    three distinct classes: the wireheaded, the rare, and the heuristic.

    The first category, the wireheaded, includes cases where you'll be happy
    that your goal was met simply by believing that your goal was met, even
    though it wasn't. I think this class has its defeat laid at its own feet:
    if you are at all rational, it is against your goals, whatever they may
    be, to wirehead. Your goal is an objective in the world, distinct from
    your belief that it has been satisfied... to the extent that you do
    something else instead of satisfying your goals, and especially to the
    extent that you do something that makes it substantially less likely that
    they will be satisfied, you violate your own goals.

    The second category, the rare, is the category Eliezer himself has
    thoroughly maligned: it includes those cases which ethical philosophers
    might consider but which would, in fact, be so rare that considering them
    would have net negative utility. Of those, I think, no more need be said.

    The third and final category, the heuristic, includes cases in which we
    act on approximating models, where the fundamental objects, properties and
    relations in those models may not actually exist in the real world, not
    even as "compositions" of real lower level objects. We use heuristic
    models like these all the time to do our jobs and cope with the world; we
    often act on these models automatically or on reflex. What's more, given
    the actual structure of our minds, it's easy to see why setting these
    behaviors up as reflexes might sometimes be a good idea.

    But in all of the cases of this kind, I take it that if a well-informed
    person (that is, someone who knew of a particular more accurate but less
    usable model) assumed one of these models, even acted on it by reflex,
    that person should not, on the whole, really *believe* in the models as
    fundamentally accurate or true. A well-informed person takes these false
    approximations as tools. He or she need not make a leap of faith and
    *agree* with these models in order to get the job done.

    OK, so let's consider your actual objections now with these categories in
    mind. You suggest, for example, that I might believe that my hypothetical
    spouse is faithful, in order to maintain the happiness of my marriage.
    (Of course, this is just a special case of group loyalty, where the group
    is a small family.)

    Well, there are really *two* ways in which that could be instrumentally
    beneficial; these need to be teased apart in order to properly understand
    the example. First, to the extent that you simply believe that your
    spouse is faithful, you may believe yourself to have a successful
    marriage, and may therefore be happy. Second, to the extent that you
    *act* as if your spouse is faithful, you won't, for example, obviously
    withhold trust from your spouse, which may (on the whole) benefit you and
    your marriage.

    The first instrumental "benefit" is just a special case of wireheading.
    And it doesn't make any more sense here than it does in general. You
    _want_ a successful marriage; you don't want to *believe* that you have a
    successful marriage. So, on this dimension, which is separate from and
    orthogonal to the latter instrumental benefit of action, there is no
    instrumental benefit in believing a falsehood.

    But as we can clearly see by separating out the effects, the latter
    benefit can be acquired simply by *acting as if* the spouse were faithful:
    don't accuse, don't divorce, don't act any differently at all, just go
    about your life knowing what you know. You can argue about whether that
    is the right thing to do, but you can't argue that it's impossible to *do*
    it without going off the deep end and actually *believing a known
    falsehood*.

    Or could you? Certainly it could be the case that, despite your attempts
    to be on your best behavior, your spouse might see through your act,
    realize that you know, and could then ruin the marriage somehow. Sure,
    it's possible, but this is a stereotypical case of the "rare" class that
    Eliezer has already said so much against: what are the odds that, in order
    for your goals to be accomplished, you not only need to *act* on an
    incorrect model, but *believe* in that incorrect model, to get that last
    0.1% of instrumental value? And what are the odds that, by actually
    believing the falsehood, you'd act on it by mistake elsewhere and get
    yourself into even more serious trouble? That's what we mean when we talk
    about net negative utility.

    This is just one example, but I assert that the same applies to any and
    all of the counter-examples you had in mind, especially if they're special
    cases of group loyalty. Either you can get the benefit by "acting-as-if",
    or you've fallen for wireheading, or your case is so rare that it's not
    worth considering.

    Hence, with the use of approximating models, we can get 99% of the value
    of believing a falsehood without actually doing so. This bolsters
    Eliezer's argument about the rarity of those "rare" cases (those in which
    going off the deep end and actually believing a known falsehood would be
    beneficial); furthermore, it supports the claim that the rare cases really
    do have trivially low pay-offs, especially when compared to the odds that
    you might make a mistake, as well as the penalties that you might incur.

    -Dan

          -unless you love someone-
        -nothing else makes any sense-
               e.e. cummings


