**Next message:** Terry W. Colvin: "The First Americans - 9,500-year-old mystery"
**Previous message:** Loree Thomas: "Re: Transparency Rant"
**Maybe in reply to:** Robin Hanson: "When Worlds Collide"
**Next in thread:** Robin Hanson: "Re: When Worlds Collide"
**Reply:** Robin Hanson: "Re: When Worlds Collide"
**Messages sorted by:** [ date ] [ thread ] [ subject ] [ author ]

Robin writes:

> http://xxx.lanl.gov/abs/quant-ph/0108070 or
> http://hanson.gmu.edu/worldhit.pdf or .ps
>
> When Worlds Collide: Quantum Probability From Observer Selection?
> by Robin Hanson
>
> Deviations from exact decoherence make little difference for large-amplitude
> quantum states, but can make a big difference for small-amplitude states.
> This may provide a rationale for ignoring small-amplitude states when
> deriving the Born measurement probabilities from the many worlds
> interpretation. Interactions between large and small worlds may erase the
> memory of observers in small amplitude worlds.

A fascinating paper (and a great title!). It raises the tantalizing and
amazing possibility that we might live in one of the numerous bizarre
"small" quantum worlds where quantum probabilities fail to hold, but that,
due to influences from the rare but "large", sane worlds, we are unable
to perceive this. Moreover, we could conceivably construct experiments
shielded from such influence that would allow us to detect this effect.
This would permit direct verification of the existence of parallel worlds,
which is generally considered impossible based on the assumption that
we live in a "large" world.

Exciting as this is, I think the more likely possibility is the suggestion
that the small worlds would be effectively eliminated by this influence,
with the existence of observers there impossible as the laws of physics
essentially go crazy.

The details of the physics were somewhat over my head, as I have never
studied density matrices, but I have several questions.

The paper describes a simple case with one large and one small world.
Does the math still work if there are enormously more small worlds than
large ones, as I believe is expected to be the case? Is it necessary
to consider the ratio of the numbers of small to large worlds?

What about intermediate worlds? It seems that we have "large" worlds
where quantum probabilities hold, and "small" worlds with such drastic
departures from quantum probabilities that the amplitude of such worlds
is minuscule. Presumably there are "medium" worlds where the
probabilities are somewhat different from what they should be: worlds of
low probability (say, one in a billion) but still much higher than the
probabilities relating to environmental coherence effects from neighboring
worlds (which are presumably of astronomically low probability). Can we
understand why we are not in such a "medium" world? (Or perhaps we are,
the laws of probability only being true on average?)
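To put rough numbers on the "medium" versus "small" distinction (a toy
calculation of my own, not from Robin's paper): treat each measurement as a
fair quantum coin, so the total Born weight of a set of worlds is just a
binomial tail probability. Over 1000 measurements, the worlds showing at
least 60% heads have Born weight on the order of the "one in a billion"
mentioned above, while the worlds showing at least 90% heads are
astronomically suppressed:

```python
import math

def tail_weight(n, p, k):
    """Total Born weight of the worlds in which at least k of n
    independent measurements (each with Born probability p of
    "heads") come up heads -- the exact binomial tail probability."""
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(k, n + 1))

n = 1000  # number of repeated measurements (illustrative)

# A "medium" world: at least 60% heads from a fair quantum coin --
# rare, but nowhere near decoherence-level suppression.
medium = tail_weight(n, 0.5, 600)

# A "small" world: at least 90% heads -- astronomically suppressed.
small = tail_weight(n, 0.5, 900)

print(f"Born weight of worlds with >=60% heads: {medium:.2e}")
print(f"Born weight of worlds with >=90% heads: {small:.2e}")
```

The numbers 1000, 60%, and 90% are of course arbitrary; the point is only
the enormous gap between the two levels of suppression.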

Everett's original paper used a continuous rather than discrete
representation of the wave function. Rather than summations over a
finite set of state vectors, he used an integration over an infinite set.
Does the math still work in the infinite-dimensional case?
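For concreteness, the contrast is between the discrete decomposition and the
continuous one; this is standard quantum-mechanical notation, not anything
specific to Robin's paper:

```latex
% discrete: sum over a countable set of basis states
|\psi\rangle = \sum_i c_i\,|i\rangle , \qquad \sum_i |c_i|^2 = 1
% continuous (as in Everett): integral over a continuum of states
|\psi\rangle = \int \psi(x)\,|x\rangle\,dx , \qquad \int |\psi(x)|^2\,dx = 1
```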

In the case of infinite dimensions, the paradox which the paper attempts
to address does not really exist, IMO. The point is that with a continuum
you can no longer count worlds, because there are infinitely many in
every subset. You can't say a priori that there are more bizarre worlds
than normal ones. Rather, in order to evaluate the relative contribution
of any subset of worlds you need to define a probability density function,
which you then multiply by whatever you are measuring in the worlds,
and integrate.
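In symbols (my paraphrase of the point, with rho a hypothetical density over
worlds w): the contribution of a set S of worlds to some quantity A is

```latex
\langle A \rangle_S = \int_S A(w)\,\rho(w)\,dw ,
\qquad \text{e.g. } \rho_{\mathrm{Born}}(w) = |\psi(w)|^2
\ \text{ or } \ \rho_{\mathrm{uniform}}(w) = \mathrm{const} .
```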

The paradox only arises if you assume the uniform density function
(and I'm not even sure whether that can be uniquely defined, since it
depends on the decomposition). But this is an arbitrary choice. If you
choose a density function based on the Born measure, then of course you
get the desired answer that small worlds make a small contribution.
Given the implicit requirement to choose a density function, there is
no more reason to suppose that the uniform density is the "right" answer
than the Born density. Hence there is no reason to expect that small
worlds would make a larger contribution than large ones.
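A toy numerical contrast (my own construction, with made-up numbers, not
from the paper): give one outcome a tiny Born probability, repeat the
measurement, and compare how the "bizarre" worlds look under a uniform
(counting) density versus the Born density.

```python
import math

n = 20          # repeated measurements; each outcome string is one "world"
p_rare = 0.001  # Born probability of the "bizarre" outcome (made up)

# Worlds in which the bizarre outcome happens at least half the time:
bizarre_count = sum(math.comb(n, k) for k in range(n // 2, n + 1))
total_count = 2 ** n

# The same set of worlds, weighted by the Born measure:
bizarre_born = sum(math.comb(n, k) * p_rare**k * (1 - p_rare)**(n - k)
                   for k in range(n // 2, n + 1))

print(f"uniform (counting) density: {bizarre_count / total_count:.3f}")
print(f"Born density:               {bizarre_born:.2e}")
```

Under the counting density the bizarre worlds are actually the majority;
under the Born density their weight is negligible. That is exactly the
sense in which the choice of density decides the answer.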

In this context, Robin's paper implicitly assumes a uniform density and
shows that you still get the right answer in that case. However, that
choice is ultimately just as arbitrary as any other. It might still be
possible to come up with a density function where you get the wrong answer
(say, by over-weighting the "medium" worlds above).

In effect, Robin's paper says that if all worlds make an equal contribution,
then you can still get the Born probabilities (modulo the various caveats
and assumptions). This is a step forward, as it is a simpler assumption
than others which have been made. But I am not very comfortable with
this assumption (that all worlds should be counted equally).

Another reason for my discomfort (besides its lack of naturalness when
this assumption is extrapolated to the continuous case) is that the same
issue arises in wider "all-universe" theories. In these models, by people
like Tegmark and Schmidhuber, we go beyond QM to assume that any universe
which could be represented by a mathematical structure or computer program
actually exists. But again, such models contain bizarre worlds as well as
simple ones, and by simple counting arguments the bizarre worlds should be
far more numerous. Why, then, do we see a simple universe? We need to
invoke a weighting function or measure which controls how much
"contribution to reality" a world makes. And luckily such a measure arises
very naturally: the so-called universal measure, which is determined by
the information content of the program or theory that creates/describes
the universe. Simpler theories produce exponentially more probable
universes.
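A minimal sketch of that weighting, with bit counts invented purely for
illustration: if a theory whose shortest description is L bits long gets
weight 2^-L, then a modestly shorter theory dominates by an enormous
factor.

```python
# Toy universal-measure sketch: weight each theory by 2^(-L), where
# L is its description length in bits.  The theories and bit counts
# below are invented for illustration only.
theories = {
    "simple lawful universe": 100,
    "same laws plus one arbitrary anomaly": 150,
    "bizarre lawless universe": 1000,
}

weights = {name: 2.0 ** -bits for name, bits in theories.items()}
total = sum(weights.values())

for name, w in weights.items():
    print(f"{name}: relative weight {w / total:.2e}")
```

A 50-bit difference already shifts the relative weight by a factor of
2^50, about 10^15, which is the sense in which simpler theories are
"exponentially more probable".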

Given this philosophical precedent, I am very comfortable with invoking the
existence of a weighting function for the universes in QM many-worlds.
However, it is true that the particular weighting function which is used
does not seem completely natural. I would like to see a justification
for it in terms of simplicity. If the worlds where QM laws hold sensibly
and logically could be described more simply than those where bizarre
things happen, that would, IMO, be a good philosophical justification for
why they were more likely to be experienced.

Nevertheless, if an explanation like Robin's can work, it does seem to cut
the knot and eliminate much of the difficulty. If stable observers can
exist only in universes which match the predictions of QM, or if observers
in bizarre universes somehow nevertheless experience only sane events,
then there seems to be nothing left to explain and the issue is solved.

Hal


This archive was generated by hypermail 2b30 : Fri Oct 12 2001 - 14:40:10 MDT