Re: Parallel Universes

From: Wei Dai (weidai@weidai.com)
Date: Wed Feb 12 2003 - 13:55:25 MST

    On Wed, Feb 12, 2003 at 10:47:11AM -0800, Hal Finney wrote:
    > I have been following the discussion between Wei and Eliezer with great
    > interest. The issue of how probability and decisions should work in the
    > context of parallel universes is quite difficult. Tegmark alludes to
    > this in the last section of his paper. I know Wei and have had several
    > discussions with him on this topic but I still have difficulty fully
    > grasping his approach.

    For people who don't know me, I have been thinking about the issue of how
    probability and decisions should work in a multiverse, ever since
    Tegmark's 1998 paper came out. I've run a mailing list about multiverse
    theories for the past 5 years where this issue has been discussed
    frequently. There's a summary of my current approach posted at
    http://www.mail-archive.com/everything-list@eskimo.com/msg03812.html. The
    current exchange with Eliezer was prompted by a problem I found in my own
    post. Here's a quote from it (in the notation of that post, P is a
    probability assignment and S the set of possibilities it is defined over):

    What should P be? Suppose the multiverse consists of all mathematical
    structures (as proposed by Tegmark). In that case each element in S would
    be a conjunction of mathematical statements and P would assign
    probabilities to these mathematical statements (most of which we have no
    hope of proving or disproving). How should we do that? Of
    course we already do that (e.g. computer scientists bet with each other on
    whether polytime = non-deterministic polytime, and we make use of
    mathematical facts that we don't understand the proofs for), but
    there does not seem to be a known method for doing it optimally. This is
    also where anthropic reasoning would come in.

    The problem is the last sentence. In fact anthropic reasoning should NOT
    be applied to P, because it would cause double counting. P has to come
    from deterministic computation only.

    > My question is, doesn't this apply just as well to all other factual
    > data about the universe? Take for example the first fractional bit
    > of pi. The way binary fractions work, any fraction < 1/2 has its first
    > bit as 0. Since pi's fractional part starts with .14159..., which is
    > less than 1/2, we know its first bit is 0.
    >
    > Or do we? Isn't all reasoning inherently uncertain? We don't know
    > this fact with perfect certainty. There must be some chance of error -
    > one in a trillion, one in 10^100?
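
    To make Hal's arithmetic concrete, here's a short Python sketch of the
    check he describes (math.pi carries far more precision than the one bit
    we need):

        from math import pi

        frac = pi - int(pi)                  # fractional part, 0.14159...
        first_bit = 1 if frac >= 0.5 else 0  # first binary fraction bit
        print(first_bit)                     # -> 0, since 0.14159... < 1/2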

    If you don't have access to a deterministic computer (such as Egan's Qusp)
    then yes, you have to say that the probability of the first bit of pi being
    1 is also 1/2. I think the decision still comes out right in the end
    because you'd also think that if the first bit of pi is 1, then your
    current measure must be tiny and any bet you make on pi being 1 would lead
    to reward for only a tiny measure of observers.
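
    As a toy version of that expected-reward argument (the 1/2 prior and the
    measure eps below are made-up numbers for illustration only):

        # If the first bit were 1, observers seeing that would have only a
        # tiny measure eps, so a winning bet on 1 pays off for almost no one.
        eps = 1e-9
        reward_bet_on_1 = 0.5 * eps                # prior * measure rewarded
        reward_bet_on_0 = 0.5 * (1 - eps)
        print(reward_bet_on_0 > reward_bet_on_1)   # -> True: bet on 0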

    Lack of access to deterministic computers actually makes things even more
    difficult than they already are, and I haven't fully worked out what
    you're supposed to do when your own brain is unreliable hardware. Let's
    assume that we do have deterministic brains to begin with. We can talk
    about non-deterministic brains later when we've worked out the easier
    case. So forget about the previous paragraph for now.

    > Given this, wouldn't Wei's Bayesian globalist have to say that the value
    > of the first fractional bit of pi was still uncertain, that there was
    > a 50-50 chance of it being 0 or 1? And similarly, wouldn't he say the
    > same thing about every bit of pi, therefore that pi's value was completely
    > uncertain? And in fact, wouldn't this reasoning apply to every fact about
    > the universe? Therefore every probability stays stuck at 0.5, or at least
    > at some a priori probability distribution that was uninformed by facts.

    The probabilities wouldn't be stuck because they would depend on the
    results of the deterministic computations you run. If you run a new
    computation, then your probabilities would change as a result.
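
    A minimal sketch of that update rule (the statement string and the
    run_and_update helper are just names for illustration):

        from math import pi

        beliefs = {"first fractional bit of pi is 1": 0.5}   # prior

        def run_and_update(statement, computation):
            # Run the deterministic computation and let the belief
            # collapse to 0 or 1 on its result.
            beliefs[statement] = 1.0 if computation() else 0.0

        run_and_update("first fractional bit of pi is 1",
                       lambda: (pi - int(pi)) >= 0.5)
        print(beliefs)   # {'first fractional bit of pi is 1': 0.0}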


