Re: Why Does Self-Discovery Require a Journey?

From: Dan Fabulich (dfabulich@warpmail.net)
Date: Mon Jul 14 2003 - 03:11:21 MDT


    Robin Hanson wrote:

    > On 7/13/2003, Dan Fabulich wrote:
    > > > A happiness metric does not require that people be informed about the
    > > > consequences of their choice.
    > >
    > >Of course not. Perhaps I'm misunderstanding your argument, but I thought
    > >the claim was: "We can think of two ways of identifying what you'd
    > >'really' want. First, we can see if you'd be happier if you got it.
    > >Second, we can see if you'd choose to get it when they are briefly and
    > >privately informed. In the case of betraying the tribe, the person would
    > >do this if briefly and privately informed. Therefore, this is what they
    > >really want."
    > >I'm claiming, against this: "But it wouldn't, in fact, make them happy.
    >
    > The question was about the choice between power/status and doing good for
    > the tribe. We have been assuming that typically when faced with this choice
    > people fool themselves into thinking they are actually doing good for the
    > tribe. We have been asking if this is what they "really want", and I
    > proposed considering two standard ways to define what we "want." I claim
    > that people are happier in the situations where they get power/status,
    > relative to the situations where the tribe has been done good to, in part
    > because they can fool themselves into thinking they are doing good by
    > getting power/status.

    Ah, I see. So, would you agree that, if I am right that most people who
    make a briefly informed betrayal would be miserable, your happiness metric
    would, at least, be in contradiction with the "informed choice" metric in
    those cases?

    Of course, the contradiction may also run the other way: even in cases
    where people are actually, obliviously, happier when their tribe is worse
    off, it may still be the case that the "informed choice" *would* lead them
    down an entirely different path, rejecting their earlier programming, were
    it to be applied.

    Having established that such contradictions are possible, there remains the
    question of establishing the real answer: just how common *are* these
    contradiction cases? Eliezer and I can only provide prima facie
    back-of-the-envelope responses; e.g., Eliezer claims that he can't think of
    anybody who has been informed of their programming and hasn't tried to
    shake it off, and he also claims that most people who choose betrayal in an
    informed way end up miserable from guilt. Eliezer's arguments seem right to
    me, but actually testing this requires real research, to which I can only
    give an "intelligent layman's" response.

    Regardless, the latter argument I'm making allows you to win this claim
    without forcing me to conclude that you've identified "real" wants; on
    that point, it's philosophy. [In which, you know, I actually have a
    degree. ;)]

    > >Your argument is that we're self-deceived: we are "really"
    > >untrustworthy, in the sense that we "really" want things that no one would
    > >trust us if they knew we wanted them (and that we would act to get them).
    > >You follow that up with the claim that if individuals become honest to and
    > >with themselves, they'll suffer serious consequences.
    > >
    > >Any plausible normative corollary to this claim would require that either:
    > >1 we should cast off our "self-deception" and be honest to ourselves, come
    > >what may, accepting that we all really want terrible things and that no
    > >one should be trusted in the ways that they've been pressured to claim, or
    > >2 we shouldn't believe the truth, as you argued in an earlier thread.
    >
    > I have so far avoided making any moral claims; I have just been making
    > claims about what people "really want".

    I was actually quite aware of that; I went back and reviewed your postings
    on this matter and was surprised by how little you had to say about the
    practical component of this argument.

    Still, I stand by my claim that there are basically two plausible
    normative implications of your claim: 1 accept it honestly, "come what
    may," or 2 reject it, even though it is true. While you needn't commit
    yourself to either one in particular (e.g., if I utterly destroy 2, you
    may still take refuge in 1, and vice versa), I think you must ultimately
    agree with *one* of those two normative corollaries if you accept the
    arguments you're making about what people "really want."

    And if both of the possible moral implications of accepting your claim are
    bad, while there's another option that doesn't have these bad moral
    implications, I can just say: "Look, accepting this claim would require me
    to act sub-optimally, or perhaps in a way that is deontologically
    forbidden; furthermore, there's an alternative proposition I can accept
    that doesn't lead me down the wrong moral road. Therefore, I should accept
    the alternative proposition instead, and I should morally exhort others to
    do the same." (Which is, of course, what I've actually done.)

    > >As you may recall, I argued that 2 was self-contradictory in the
    > >"Should we believe the truth" thread: the short version of that
    > >argument was that it was contrary to 2's own normative principles to
    > >believe that it was true. So, if it were true, we shouldn't believe
    > >it: it would be its own first casualty. I claimed further that 2 was
    > >in violation of normative logic: that, rather, the truth is what we
    > >should believe. (This is because I shouldn't believe that "there is
    > >some X such that X is true but I shouldn't believe X.")
    >
    > You have generalized your 2 far more than need be. One might instead
    > claim that we shouldn't believe certain particular truths. That claim
    > would not be self-contradictory.

    That doesn't seem right at all... Supposing you did try to argue that
    this was just one particular truth that we shouldn't believe, you'd at
    least fall under the *domain* of the second argument from normative logic,
    since you'd be claiming that "there is some X that's true but I shouldn't
    believe it." You may come to reject my second argument, but I don't think
    you could claim that it "doesn't apply" to your argument merely because my
    argument is general whereas your claim is specific.

    Similarly, I'd argue that any "specific" argument you might make would get
    hit by the first argument as well. By arguing that your claim about our
    "real desires" is true, but that you shouldn't believe it, you'd be forced
    to accept the logical inference that there is, therefore, some truth that
    you shouldn't believe; that you shouldn't always believe the truth. But
    this was precisely the claim I was responding to in the first place.
    (Perhaps you thought I was merely arguing against a case where you should
    disbelieve the truth "most of the time," but I take my argument to be
    general enough that it applies even when you propose that there are just a
    few white lies we should believe.)

    > >... being anti-realistic about the set of desires D, those which
    > >the economist seems to find that we have, allows us to act correctly on
    > >the economic data without believing anything that would lead us to have to
    > >accept 1 or 2. We can say, without contradiction, that the economist has
    > >found a set of things that might be called desires but aren't "really"
    > >what we desire at all.
    >
    > It seems to me you are going through these contortions for the mere purpose
    > of being able at the end to say that we really want to be moral. Sure, you
    > say, in the ordinary sense of want we don't want to be moral, but in this
    > new spiffy philosophical sense of want, that we just invented for this
    > purpose, we do want to be moral. Why not just accept that people don't want
    > to be moral?

    [I'd like to open with an aside on rhetoric here. When two people discuss
    X and ~X, "Why don't you believe that X?" is a perfectly reasonable
    question, whereas "Why don't you accept that X?" is an extremely loaded
    question.

    In the minds of many readers, the latter question asks the interlocutor to
    explain what psychological hang-ups are preventing him from accepting the
    truth, X. The former question asks for rational reasons why one wouldn't
    switch positions.

    I'm calling attention to this not because I think you're trying to load
    the question, but because I think that if you'd noticed this connotation,
    as I did, you probably wouldn't have used it. I'd also like to ask,
    politely, that we avoid that particular phrase in the future.]

    OK, with that aside out of the way, I'll now actually answer your question:
    Why not just believe that people don't want to be moral?

    You answered this question yourself at the opening of the thread:

    On Mon, 07 Jul 2003, Robin Hanson wrote:

    > Real journeys of self-discovery would largely be dark affairs, wherein
    > mounting evidence forced people to believe ignoble things about
    > themselves that they would rather not tell others. And those who do
    > struggle over decades to learn the truth about what people want, and who
    > are willing to tell others, would face largely indifferent or hostile
    > audiences.

    It's easy to see how life would be miserable for anyone who believes and
    espouses this extremely cynical view; I take your depiction here to be a
    pretty gross understatement of how bad life would really be for someone
    who took this philosophy truly seriously. Just imagine how much you'd want
    to associate with someone who was in the habit of saying things like this:
    "I've gone through a terrible self-discovery process, and discovered that
    I don't really like other people, and, what's more, I realized that almost
    no one else does either. I'd rather be famous than help anybody; I believe
    most intimate relationships are based on commonly accepted lies, and that
    most people believe in our common mores on account of comforting
    self-deception."

    But even supposing that it would ONLY be as bad as you depicted earlier,
    well, hey, that's pretty bad. If it turns out that I have the choice
    between living that way and living a much better life, without
    compromising my intellectual honesty, it seems obvious to me which I (or
    anyone else) should prefer. And supposing I DID have to go through
    contortions to get the better life, it seems to me they would be worth it!

    Furthermore, I think you are in no way licensed to conclude that you've
    correctly captured the notion of "want" that agrees with "the ordinary
    sense"; indeed, if anyone is licensed to claim agreement with "ordinary
    common sense" here, it would have to be me: you're the one proposing that
    almost everyone is basically wrong (self-deceived) in the way that they
    use the word "want".

    This brings me back to Davidsonian-style arguments: if you were
    interpreting the "ordinary sense of want" correctly, which is to say
    charitably, you would never accept an error theory like the one you
    currently propose. Interpretive theories should allow for the possibility
    of mistakes, but that's a world away from near-universal self-deception;
    if that's your conclusion, it seems that you failed to apply enough of the
    principle of charity.

    > I don't think I follow you here, but it seems irrelevant to me; if need
    > be I'll just put on my philosopher's hat and claim to be in both roles.

    If you like; I maintain that your economist's hat won't help you much in
    the realism/anti-realism debate, but we're cosmopolitan these days; you
    can wear any hat you like. ;)

    -Dan

          -unless you love someone-
        -nothing else makes any sense-
               e.e. cummings


