Re: does the state vector collapse?

From: David Blenkinsop (blenl@sk.sympatico.ca)
Date: Sun Sep 24 2000 - 23:14:33 MDT


> Earlier, scerir wrote:
>
> Amara Graps wrote:
> If one adopts a Bayesian approach to probability, then the
> Schroedinger wave equation simply becomes a posterior
> probability describing our incomplete information about
> the quantum system, rather than wave functions that
> collapse in reality upon our observation. It could clear up
> a lot of confusion.
> Good point!

Boy, you folks are making a lot of web references; there's no way I can currently spare the time for all that reading. However, Amara's reference
http://xxx.lanl.gov/abs/quant-ph/9804012, titled "Consistency,
Amplitudes and Probabilities in Quantum Theory", intrigued me. This
paper seems to confirm what I said earlier, that the Schroedinger
equation and other quantum ideas simply *can't* be derived by any sort
of regular probability theory, whether cast in Bayesian logic terms or
not! What the paper *does* say is that these quantum wave functions can
be derived from consistency principles if the basic idea of quantum
state vectors is first *assumed*. This derivation is meant as something
analogous to a similarly neat derivation of Bayesian probability theory,
but that analogy *doesn't* mean that quantum results can be completely
derived *from* ordinary probabilities! As I mentioned before, the assumption that state vectors must be *squared* to get probabilities for the location of a particle is something strange in any regular probability terms, and hard to interpret if you are wondering whether the state vectors, and the resulting waveforms, are real or not.
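Since I keep leaning on that point, here is a minimal sketch of what I mean, in Python (my own toy illustration, nothing taken from the paper): the complex amplitudes in a state vector only turn into probabilities once you square their magnitudes, and the amplitudes themselves don't behave like probabilities at all.

    # Toy illustration (mine, not from the cited paper): a two-outcome state
    # vector described by complex amplitudes. Only the *squared magnitudes*
    # of the amplitudes behave like probabilities.
    state = [0.6, -0.8j]                  # amplitudes for "here" and "there"

    probabilities = [abs(a) ** 2 for a in state]
    print(probabilities)                  # roughly [0.36, 0.64]
    print(sum(probabilities))             # 1.0 (up to rounding), a proper distribution

    # The amplitudes themselves are not probabilities: one of them is not
    # even a real number, and amplitudes can cancel when added, which
    # ordinary probabilities never do.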

> - But Goldstein asks: Does Zeilinger truly believe that ``quantum mechanics
> is about information''? Information is always information about something.
> Therefore, shouldn't quantum mechanics then be regarded as being about that
> something? And does Zeilinger really wish to deny that the change of the
> state vector that occurs during the measurement process is ``a real
> physical process,'' even when it leads to the destruction of the
> possibility of interference? Can quantum interference be genuinely
> understood by invoking a wavefunction that is nothing more than
> ``a representation of our knowledge''?
>

Yes, well, I'd have to agree with Goldstein: quantum interference and superpositions and such aren't really covered by standard methods of updating one's statistical knowledge, right (scerir's quotations from Heisenberg and others notwithstanding)? The question, I suppose, is whether we could regard the handling of quantum vectors as simply a sort of more generalized statistical method, so that "possible positions" of a particle can actually interfere, without the vectors that cause this being locally real at all! In other words, are the quantum state vectors only a particularly advanced kind of knowledge representation, a sort of pure math structure to help us relate to real particles and their tricky quantum ways?

In response to this thought about quantum effects being "only advanced statistics", it certainly occurs to me that the special quantum properties are said to be of use (at least in theory) in performing certain types of computations that would take far too long on any sort of regular, "classical" kind of computer. Apparently, special circuitry can be built to handle a great many digital numbers, all at once, all quantum superposed on the same register! The practical disadvantage here is that, to actually measure one number, you tend to cause all the others to decohere or "collapse". Even so, it's said that this can be handled in such a way as to, say, efficiently do an "all at once" Fast Fourier Transform on some sequence of numbers that might be generated from a number theory formula of some kind. Now, I'm not trying to get into great detail; the reference here is a paper I once downloaded, called "Quantum Physics and Computers" by Adriano Barenco, if anyone wants to look it up.
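Just to make that "all at once" register idea concrete, here is a little Python sketch of my own (simulating the state vector on an ordinary computer, so it gains nothing, it only shows the bookkeeping; the register size and the period are made up for illustration, and nothing here is taken from Barenco's paper): an n-qubit register is a list of 2^n complex amplitudes, and the quantum Fourier transform is, mathematically, just a discrete Fourier transform acting on all of those amplitudes at once.

    import numpy as np

    # My own toy sketch: simulate the state vector of a small quantum
    # register. An n-qubit register is described by 2**n complex amplitudes,
    # one for every n-bit number simultaneously.
    n = 3
    dim = 2 ** n

    # Equal superposition of all 8 numbers 0..7, then imprint a simple
    # periodic phase pattern (period 4) across the register.
    state = np.ones(dim, dtype=complex) / np.sqrt(dim)
    state *= np.exp(2j * np.pi * np.arange(dim) / 4)

    # The quantum Fourier transform amounts to the discrete Fourier
    # transform applied to the whole amplitude vector in one step.
    transformed = np.fft.fft(state) / np.sqrt(dim)

    # A measurement then picks out one number, with probabilities given by
    # the squared magnitudes of the transformed amplitudes; essentially all
    # the weight lands on index 2, the "frequency" matching the period.
    probabilities = np.abs(transformed) ** 2
    print(np.round(probabilities, 3))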

As I understand things, by the above-mentioned methods one ought to be able to efficiently factor very large numbers, and even
break enciphered messages based on the difficulty of such factorization.
Does this sound like the sort of thing that would come out of a mere
"gambler's computer" of sorts, with probability analysis of a randomized
computer circuit allowing great gains over conventional computers?
Surely it sounds a lot more as though these "qubit" superpositions are
going to be a very careful handling of *real* quantum state properties,
with the quantum vectors as a *real* mechanism for doing real
computations? In particular, the tendency of the quantum vectors to interfere at specific locations is very important; this in itself amounts to a kind of computation, and the computation has to be happening locally, where the vectors are located! In comparison, if one
were simply classically uncertain about the position of a particle,
let's say, there is no way that two possible different positions of the
particle would go right out to some spot and partially or totally cancel
one another! I'm being intuitive here, I suppose, much in the manner of Goldstein's comments above -- I just don't see how classical statistical uncertainties could amount to any such manner of local, interference-based computing.
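To put a toy number on that intuition (my own sketch again, a one-dimensional cartoon of a detecting screen, nothing rigorous): if you add the complex amplitudes from two possible paths and *then* square, you get bright and dark bands across the detector positions; if you square each path's contribution first and then add, the way classical uncertainty would have you do, the bands disappear entirely.

    import numpy as np

    # Toy illustration (mine, not from any of the cited papers): two possible
    # "paths" arriving at a row of detector positions, handled first as
    # quantum amplitudes and then as mere classical probabilities.
    x = np.linspace(-5.0, 5.0, 11)                    # detector positions
    amp1 = np.exp(1j * np.pi * x / 2) / np.sqrt(2)    # path 1, phase varies with x
    amp2 = np.exp(-1j * np.pi * x / 2) / np.sqrt(2)   # path 2, the opposite phase

    # Quantum rule: add the complex amplitudes first, square the result last.
    quantum = np.abs(amp1 + amp2) ** 2

    # Classical rule: square first (each path's probability), then add.
    classical = np.abs(amp1) ** 2 + np.abs(amp2) ** 2

    print(np.round(quantum, 2))     # alternates between 2 and 0: bright and dark bands
    print(np.round(classical, 2))   # 1.0 everywhere: no blank spots, no interference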

Classically, one could add all sorts of possible positions to a density
function, but such additions wouldn't ever tend to create relatively
blank spots, or interference bands. Again, remember that in quantum
theory, one must square the magnitude of a complex vector in order to
even begin to get a probability density. So the _idea that these vectors
just represent a probability is wrong, both intuitively and
mathematically_. The quantum vectors seem to be something mechanical or mechanistic in nature; you really have to add all the relevant vectors together *first*, then *square* the final result, to get any sort of probability for finding a particle as such. To put it another way, if
you get more *out* of the quantum computer than you put *in*, that
should surely prove that the real computing isn't all going on in your
personal Bayesian viewpoint on the matter. If the "qubit advantage" is
purely in the analysis of it, then why build a "qubit computer" at all?
Why not just *imagine* the qubits, then, and factor large numbers by
using just the imaginary Bayesian descriptions of this?
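One last bit of toy arithmetic (again just my own made-up numbers), to nail down the order of operations I keep harping on: two contributions of equal size and opposite sign give a probability of zero if you add first and square last, but a perfectly healthy probability if you square each one first, the way a mere "ignorance" reading of the vectors would suggest.

    # Two equal-and-opposite amplitude contributions to the same spot.
    a = 0.5 + 0.0j
    b = -0.5 + 0.0j

    add_first_then_square = abs(a + b) ** 2              # 0.0 -- a genuinely blank spot
    square_first_then_add = abs(a) ** 2 + abs(b) ** 2    # 0.5 -- no blank spot at all

    print(add_first_then_square, square_first_then_add)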

David Blenkinsop <blenl@sk.sympatico.ca>


