Re: Fear not Doomsday

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Jun 03 2003 - 14:29:54 MDT

    Robin Hanson wrote:
    > Eliezer S. Yudkowsky wrote:
    >
    >>>> ... You can go forward from the discovery of new evidence; I'm not
    >>>> sure it makes sense to selectively eliminate evidence you were born
    >>>> with and ask what your "priors" were before that. ...
    >>>
    >>> ... Perhaps you were born a much smarter baby than the rest of us,
    >>> but most babies have no idea what their name is, how many humans have
    >>> lived before them, how it is that a universe coughs up a mind, or
    >>> even that they are in fact a mind that a universe coughed up.
    >>
    >> I was born with the evidence. I hadn't yet processed that evidence,
    >> but at birth, I was human. There was never a point at which I started
    >> doing anthropic calculations knowing I was a sentient being, but not
    >> that I was human.
    >
    > But that is the situation with pretty much all the evidence we ever get
    > or ever could get. The universe knew it when we were born, but we did
    > not know it. If you're going to refuse to consider your priors over
    > possibilities that the universe had rejected when you were born, you'll
    > have to refuse to consider pretty much all possibilities other than the
    > actual state of the universe.

    It's not just that the universe knew it; it's that the evidence was
    actually embodied in the shape of my mind at birth - that this was a
    human mind and not a posthuman one. For there to be a prior, evidence,
    and a posterior, there must actually be a mind that holds the prior, an
    encounter with the evidence, and an update to the posterior. In this
    case, you're trying to extrapolate back from before the prior...

    Actually, this argument is now disintegrating because I'm trying to phrase
    it in a language that I have learned is wrong. Let me try again.

    Let's say that we have a doctor, a patient who may or may not have cancer,
    and a mammogram. Before the doctor sees the mammogram, we can calibrate
    the doctor's pointer state directly by figuring the frequency of that
    pointer state next to patients with the environmental fact of cancer.
    After the doctor sees the mammogram, we can, if we like, recalibrate the
    doctor's pointer state directly by figuring the frequency of that pointer
    state next to patients with the environmental fact of cancer. If the
    doctor knows the correct probabilities throughout, the relation between
    the doctor's before-and-after calculated probabilities will obey Bayes'
    Theorem, just as
    the actual correlation between the pointer states and the environment
    obeys the naturalistic version of Bayes' Theorem.
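
    To make this concrete, here is the doctor's update as a toy Python
    calculation. The numbers are illustrative assumptions, not anything
    from the exchange above:

        # Toy Bayes update for the mammogram story. All numbers are
        # illustrative assumptions, invented for this sketch.
        p_cancer = 0.01             # prior frequency of cancer
        p_pos_given_cancer = 0.80   # mammogram sensitivity
        p_pos_given_healthy = 0.10  # false-positive rate

        # Bayes' Theorem: P(cancer | positive mammogram)
        p_pos = (p_pos_given_cancer * p_cancer
                 + p_pos_given_healthy * (1 - p_cancer))
        p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos

        print(f"P(cancer | positive) = {p_cancer_given_pos:.3f}")  # ~0.075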

    In other words, we have the interesting fact that the calibration of
    pointer states, as it changes over time, obeys Bayes' Theorem with respect
    to environmental correlations borne by incoming evidence. This is why
    Bayes' Theorem works. Or to put it another way, this is a naturalistic
    description of the fact that Bayes' Theorem works.
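
    Or, calibrating the pointer state directly instead of doing the
    algebra: simulate a large batch of patients under the same assumed
    numbers, let the mammogram reading stand in for the doctor's pointer
    state, and count how often "positive" sits next to actual cancer:

        import random

        # Direct calibration of the pointer state, reusing the assumed
        # numbers from the sketch above.
        random.seed(0)
        positives = 0
        positives_with_cancer = 0
        for _ in range(100_000):
            has_cancer = random.random() < 0.01
            positive = random.random() < (0.80 if has_cancer else 0.10)
            if positive:  # the pointer state reads "positive"
                positives += 1
                positives_with_cancer += int(has_cancer)

        # Measured calibration; Bayes' Theorem predicts ~0.075.
        print(positives_with_cancer / positives)

    The measured frequency converges on the posterior computed above, which
    is all that "the calibration of pointer states obeys Bayes' Theorem" is
    saying.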

    It looks to me like you're trying to extrapolate back my pointer state to
    a prior that I never actually had, that is poorly defined (what is the set
    of observers?), and where many of the members of the so-called "reference
    class" may not yet exist, or may not exist at all, or may exist or not
    exist depending on present-day choices. Why should I extrapolate back to
    this prior I never had, to this non-naturalistic object, especially if it
    is ill-defined? Why not calibrate the pointer state directly?

    Each time you recalibrate the pointer state from scratch, you should
    arrive at an answer consistent with applying Bayes' Theorem to past actual
    pointer states - otherwise you haven't been a good rationalist, and the
    Fairy of Doubt won't bring you any new evidence for Newtonmas. But this
    doesn't mean you can extrapolate back before there were any pointer
    states, by subtracting evidence that already calibrated your pointer state
    at the moment you booted up, to get a "pre-boot-up" pointer state. Not
    surprisingly, such "What were your priors before you were born?" questions
    are often ill-defined, and so they often give ill-defined answers. Asking
    whether pointer state recalibrations obey Bayes' Theorem when they are
    extrapolated backward in time to before my birth strikes me as an
    non-naturalistic thing to do. Should I really give a hoot whether I was a
    good Bayesian before I actually existed? Yes, if you give a
    *well-defined* reference class of "pointer states that don't point to
    anything yet", you will get the Self-Indication Assumption on that
    reference class, followed by the Doomsday Argument to whatever your
    pointer state actually says. But so what?
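
    For what it's worth, that cancellation is easy to exhibit with toy
    numbers: the Self-Indication Assumption multiplies each world's
    probability by its number of observers, the Doomsday-style update on
    birth rank divides it back out, and you land exactly on the pointer
    state you already had. A minimal sketch, with assumed world sizes and
    an assumed birth rank:

        # Toy numbers throughout; nothing here is from the thread. Two
        # candidate worlds with different total observer counts, a uniform
        # prior, and an assumed birth rank that fits in both worlds.
        worlds = {"small": 1e11, "big": 1e14}
        prior = {w: 0.5 for w in worlds}
        birth_rank = 6e10
        assert all(birth_rank <= n for n in worlds.values())

        # Self-Indication Assumption: weight each world by how many
        # observers it contains.
        sia = {w: prior[w] * n for w, n in worlds.items()}
        total = sum(sia.values())
        sia = {w: v / total for w, v in sia.items()}

        # Doomsday-style update: any particular birth rank has probability
        # 1/N in a world with N observers.
        post = {w: sia[w] / worlds[w] for w in worlds}
        total = sum(post.values())
        post = {w: v / total for w, v in post.items()}

        print(sia)   # ~{'small': 0.001, 'big': 0.999}: SIA favors "big"
        print(post)  # {'small': 0.5, 'big': 0.5}: back where we started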

    -- 
    Eliezer S. Yudkowsky                          http://singinst.org/
    Research Fellow, Singularity Institute for Artificial Intelligence
    

