Re: Why believe the truth?

From: Dan Fabulich (dfabulich@warpmail.net)
Date: Tue Jun 17 2003 - 22:40:51 MDT


    Eliezer S. Yudkowsky wrote:

    > Robin Hanson wrote:
    > > On 6/17/2003, Eliezer S. Yudkowsky wrote:
    > >>
    > >>> The vast majority of humanity believes they are in this situation.
    > >>
    > >> What of it? ... As it happens the belief is wrong, and isn't that the
    > >> point? ...
    > >
    > > It sounds as if you don't disagree with my claim; you just don't see why
    > > I would bother to make such a claim.
    >
    > That is correct; it seems uncontroversial that the majority of humanity
    > doesn't know what's coming down the line. Of course, neither may we, but
    > that would only seem to bolster the point about rationality; if the future
    > is not as strange as we imagine, it will probably be stranger.

    Uh, I think you agreed with too much here. In particular, I think we all
    agree that the vast majority of humanity believes that they are in a
    situation in which the future will not be that different from the 20th
    century or before.

    But, as Robin said:

    >>>> In this situation, I claim the extra effort to substantially overcome
    >>>> our inherited biases and be more rational than we naturally would be
    >>>> seems usually more effort than it is worth, given the goals
    >>>> specified.

    On this point, it sounds as if you might be in agreement with Robin: that,
    for example, it might have been a good idea (I suppose we oughtn't call it
    "rational") not to be avidly truth-seeking in the mid-19th century, in
    light of the fact that the 20th century, despite all of its myriad
    changes, would not be all THAT different from the 19th, at least in the
    dimensions in which Robin is interested.

    But, isn't there an argument in favor of rationality that doesn't
    crucially depend on the claim that there's a Singularity right round the
    bend? The arguments I've presented thus far would apply just as well to
    Socrates as they do today.

    Is your argument for truth-seeking *merely* a special case of
    Singularitarianism? Can't we formulate an argument for rationality that
    appeals to Singularitarians and others alike? Shouldn't we, after all,
    accept Singularitarianism *under* some ethical/epistemic requirement to
    pursue/believe the truth, rather than accepting Singularitarianism and
    then (thereby) finding a commitment to reason?

    > Deliberately strive for modularity? In a consilient universe?

    Yes, if only so as to prevent your conceptual scheme from becoming an
    ivory tower or walled garden. [Or, skipping ahead just a bit, because we
    need to take P(T|~B) into account.]

    > There is only ever one explanation. In it, all the pieces fit together
    > perfectly, without strain. Any divisions in that explanation are
    > artificial - human biases.

    > I would not take it as a good sign if my theories about one part of
    > cognitive science were consonant with many possible alternatives
    > elsewhere; it would be a sign that the theory was inadequately
    > constrained by the evidence.

    To a Bayesian, this is just nonsense. If your cog-sci theory is T and
    your background theory is B, P(T) is higher, all else being equal, if
    P(T|~B) is higher, not lower, no matter how high you think the odds of B
    are.
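
    To make that concrete, here's a toy calculation (the numbers and the
    helper name are invented purely for illustration). By the law of total
    probability, P(T) = P(T|B)P(B) + P(T|~B)P(~B), so holding P(B) and
    P(T|B) fixed, raising P(T|~B) can only raise P(T):

      # Toy numbers, invented for illustration only.
      def total_prob_T(p_T_given_B, p_T_given_not_B, p_B):
          """P(T) by the law of total probability over B and ~B."""
          return p_T_given_B * p_B + p_T_given_not_B * (1.0 - p_B)

      p_B = 0.95          # very confident in the background theory B
      p_T_given_B = 0.80  # T fits B well

      # A theory "tightly constrained" by B: nearly worthless if B is wrong.
      print(total_prob_T(p_T_given_B, 0.05, p_B))   # 0.7625

      # The same theory, but also consistent with alternatives to B.
      print(total_prob_T(p_T_given_B, 0.50, p_B))   # 0.7850 -- strictly higher

    No matter how confident you are in B, the second theory is more probable.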

    Putting that another way: Remember, all the logical facts (which are
    utterly "unconstrained" by the evidence) are true regardless of
    "alternative" background theories, yet they are the most certain of all of
    our beliefs. They're "bad" science, I suppose, but they make for great
    beliefs!

    Being consistent with alternate explanations makes a belief *better*, not
    *worse*.

    > I certainly wouldn't deliberately alter the theory to make it consistent
    > with more than one alternative elsewhere.

    That's a pity, because, all else being equal, you'd have a more likely
    theory.

    > Nature doesn't work that way in constructing explanations; how could it
    > be a good method for discovering them?

    My first response: "I would least expect to get this kind of argument
    HERE, of all places! Isn't it rather the point that we can do a bit
    better than nature?"

    But let me put that a better way: You say that nature just *picks* its one
    explanation. Fine: so the probability of everything being just-so in
    light of everything being the way it is, from *nature's* point of view, is
    1. That's all fine and dandy, but what you're saying is that *nature
    isn't being a Bayesian*.

    But we know better, don't we? We form our beliefs in a Bayesian way, not
    because the world is Bayesian (as you rightly point out, it just picks
    possibilities and locks their probabilities to 1; it has no priors to
    adjust), but because Bayes is the best way we have of getting *our beliefs*
    to *look* like the world.
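
    To see that asymmetry in a toy form: nature has already "picked," so its
    probabilities are 0 or 1; we haven't seen the answer, so we carry a prior
    and grind it through Bayes' rule. The numbers and helper below are, again,
    invented purely for illustration:

      # Toy numbers, invented for illustration only.
      def bayes_update(prior, p_E_if_H, p_E_if_not_H):
          """Posterior P(H|E) by Bayes' rule."""
          p_E = p_E_if_H * prior + p_E_if_not_H * (1.0 - prior)
          return p_E_if_H * prior / p_E

      belief = 0.5  # our prior in hypothesis H; nature already "knows"
      for _ in range(3):
          # each observation is 4x as likely if H is true as if it's false
          belief = bayes_update(belief, 0.8, 0.2)
          print(round(belief, 3))
      # prints 0.8, 0.941, 0.985 -- our beliefs creep toward what nature
      # fixed at 1 (or 0) all along

    Nature doesn't need the machinery; we do.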

    [In fact, I think you won't like this conclusion at all: if you were to
    give up on anything, I'd expect you to give up on the notion that nature
    is non-Bayesian. But, of course, in that case, it's all the more
    important to ensure that your beliefs rank well, which means taking
    P(T|~B) into account.]

    > Your first priority should be to discover what the real answer is about
    > the usefulness of rationality. When you know the real answer, then
    > worry about how to explain it - constructing an accessible explanation.
    > How can you do both at the same time without biasing your conclusions?

    You can't help but do both at the same time. And our conclusions are
    "biased..." by our priors! It's a feature.

    -Dan

          -unless you love someone-
        -nothing else makes any sense-
               e.e. cummings


