Re: Bayesian constraint by the evidence

From: Dan Fabulich (dfabulich@warpmail.net)
Date: Thu Jun 19 2003 - 03:48:07 MDT


    Eliezer S. Yudkowsky wrote:

    > Dan Fabulich wrote:
    > >
    > > I think you're getting yourself confused with the language of
    > > "unconstrained by territory". I also note that you simply snipped my
    > > Bayesian argument, which I took to be the meat of my point.
    > >
    > > Given a theory T and our background theory B, we regard our theory T to be
    > > *more* likely if P(T|~B) is higher, *regardless* of P(B), all else being
    > > equal. Do you contest this for some reason?
    >
    > It's a Bayesian answer to a most un-Bayesian question. We do not want the
    > probability of our theory P(T) to be as high as possible. Seriously. We
    > don't. There is no "our" theory in rational thinking. We want P(T) to be
    > whatever P(T) really should be. We do not want to argue *for* P(T), we
    > want to equably assemble all the evidence for and against T. This being
    > the case, I as a human find it quite highly suspicious when people try to
    > have their cake and eat it too.

    This completely missed the point. The point is that we want to pick the T
    with the highest probability. I'm not trying to MAKE P(T|~B) higher so as
    to make P(T) turn out better for my favorite T; I'm suggesting that we
    ought to select T over a T' for which P(T'|~B) is low, and that,
    furthermore, if we can find a T* that's even *more* likely, we should
    switch to that instead.
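
    Just to make the arithmetic explicit: by the law of total probability,

      P(T) = P(T|B)P(B) + P(T|~B)P(~B)

    so with P(B) and P(T|B) held fixed, a higher P(T|~B) simply means a higher
    P(T). That's all the claim above amounts to.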

    > It moreover follows from Bayes' Theorem that if B is evidence for T,
    > then ~B *must* be evidence against T - you *cannot* have your cake and
    > eat it too.

    The degree of confirmation that B provides for T is given by the log of the
    likelihood ratio

      lR = log(P(T|B) / P(T|~B)).

    You're right, it can't be both positive and negative. But what of that?
    I'm precisely proposing a case where T should be (largely) independent
    from B, or at least more independent from B than T'. (Note that T is
    fully independent from B only when lR is 0; but we can say that T is
    *more* independent from B than T' if the lR of T is closer to 0 than the
    lR of T'.)
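
    Here's a quick sketch in Python, with the probabilities invented purely for
    illustration, of what "more independent" cashes out to:

      # Toy numbers only: how strongly does B bear on T vs. T'?
      from math import log

      def lr(p_given_b, p_given_not_b):
          """lR = log(P(theory|B) / P(theory|~B)); 0 means B is no evidence either way."""
          return log(p_given_b / p_given_not_b)

      # T does about as well whether or not B holds (nearly independent of B).
      lr_t = lr(0.60, 0.55)        # ~0.09

      # T' leans hard on B: fine given B, hopeless given ~B.
      lr_t_prime = lr(0.60, 0.05)  # ~2.48

      # T is "more independent" of B than T' because its lR is closer to 0.
      print(abs(lr_t) < abs(lr_t_prime))  # True

    Same P(.|B) in both cases; the whole difference is in how each theory fares
    if B turns out false.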

    > > Let's suppose I agree with you that any theory T for which P(T|B) and
    > > P(T|~B) are both high is less "constrained by the territory" than an
    > > alternate theory T', under which P(T'|B) = P(T|B), but P(T'|~B) is very
    > > low. T' is constrained by the territory. T is less constrained by the
    > > territory. Are you trying to tell me, against Bayes, that we should hold
    > > T' to be more likely than T, because T is "mere philosophy" whereas T' is
    > > "constrained by the facts"?
    >
    > Nope! Reality has no preference one way or the other. Therefore
    > neither should our theories. If T and T' start out being mostly equal,
    > then in the situation you name, we should slightly prefer T to T'.

    Well, that's my whole point! You smear T as "mere philosophy", but my
    whole point is that we should slightly *prefer* it for its modularity, not
    reject it.

    > *However* this is because the prior probability of T' starts out lower,
    > and then, if we observe B, this will be evidence about T' which raises
    > it to around the level already occupied by T; however we haven't
    > observed B yet.

    Sure.
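
    To check that with made-up numbers: take P(B) = 0.5, P(T|B) = P(T'|B) = 0.6,
    P(T|~B) = 0.55, and P(T'|~B) = 0.05. Before observing B,

      P(T)  = 0.6*0.5 + 0.55*0.5 = 0.575
      P(T') = 0.6*0.5 + 0.05*0.5 = 0.325

    and after observing B both posteriors are just P(.|B) = 0.6. T' starts out
    lower, and observing B is what closes the gap.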

    > You cannot say that both B and ~B are arguments for T - that is not
    > possible.

    Of course not. I'm saying that T is more independent of B than T', and
    therefore, all else being equal, preferable.

    > The most you can say is that T is about evenly compatible with B and ~B
    > such that B does not interact much with T and is not evidence about T one
    > way or the other. Interpreting P(T|B) and P(T|~B) as "high" does not mean
    > that both B and ~B are strong arguments for T; this is *impossible*. It
    > can only mean that the prior probability of T is high and that B is not
    > much additional evidence one way or the other.

    Yup. That's good ol' modularity for you. You have a problem with that?

    > > But that's not at all the case in my T & T' example, by construction, and
    > > it doesn't apply to our "Why believe the truth" argument either.
    > > Nobody's saying that we shouldn't consider the probability of a
    > > Singularity [P(T|B)] in our calculations. But we ARE saying that a theory
    > > that applies well to Singularitarians and non-Singularitarians alike is a
    > > better theory, more likely to be true, than a theory that applies equally
    > > well to Singularitarians but not at all to non-Singularitarians, ceteris
    > > paribus.
    >
    > The key phrase is "ceteris paribus". It never is. Ceteris paribus,
    > what you are saying is true, but ceteris is only paribus if you just
    > happened to stumble across two theories T and T' and you're wondering
    > vaguely which of them is more accurate. This business of trying to
    > deliberately assemble evidence *for* a theory is never Bayesian. You
    > are just trying to sum up all the support and anti-support you run
    > across. If it happens to support the theory, great, but you can never
    > *try* to make a theory stronger by *trying* to show it is compatible
    > with ~B if that is not a natural way for the theory to work.

    So, here's where I think you've got me confused. You see me as saying:
    "Hmm. I wanna believe claim X. It's certainly probable given S. How
    could I make it more probable given ~S?"

    But I'm not saying/doing this at all. I'm saying: "You've got an argument
    A that entails X. A is certainly sound given S, but it depends on S; if
    ~S, A is probably not sound. We should keep an eye out for some argument
    A' (which might also entail X) that's independent of S." And, indeed, I
    take myself to have offered a few such arguments, arguments which are
    independent of Singularitarianism.

    These independent arguments are better arguments for X; how much better
    simply depends on how likely a near-future Singularity may be.
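
    (To put a rough number on "how much better": if argument A establishes X
    when S holds and tells us nothing about X when it doesn't (an idealization,
    just for the sake of the arithmetic), then relying on A alone gives

      P(X) = P(S) + P(X|~S)P(~S)

    so the support A lends X is capped by P(S). An S-independent argument A'
    has no such cap, and the lower P(S) is, the more A' adds.)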

    -Dan

          -unless you love someone-
        -nothing else makes any sense-
               e.e. cummings


