Re: Bayes vs. LP

From: Amara Graps (amara@amara.com)
Date: Wed Jun 18 2003 - 01:02:34 MDT


    (ccing extropians too, only because this topic has appeared there before)

    "bzr" <bzr@csd.net>, Sun, 15 Jun 2003:

    >However, perhaps the easiest way to see that the Bayesian framework won't do
    >as a comprehensive framework for science, and why it assuredly can't proxy
    >as the whole of a philosophy of science, is to consider this problem:

    >We have a penny. We toss it. What are the odds that we'll get heads?

    >The answer: 0

    >Zero? Yes. This is very counterintuitive, admittedly, particularly for
    >statisticians. However, the truth is there are no "odds" here at all.
    >Penny tossing is deterministic.

    >That being the case, if we are apprised of all the initial
    >conditions of the toss, and possess a complete knowledge of the laws
    >of physics, then we can predict with certainty what we will get
    >(heads, or tails, or, very rarely, a coin on edge). Even more
    >interestingly (and counterintuitively), without any knowledge of
    >statistics OR knowledge of physics we can be sure that, provided the
    >test surface is flat, we will get heads, tails, or a coin on its
    >edge.

    I think that using determinism in this way puts up a smoke screen,
    in addition to missing the larger picture of how scientists
    intuitively do science. You have a real experiment, so it is
    physical, and all propositions are testable. How do you define
    determinism for this system? Your determinism is based on a model of
    some physics, is it not? No matter how 'deterministic' something may
    be, your prediction for the outcome of the coin toss is based on
    data, a model, and whatever other information you have about that
    system. A Bayes discussion is always in the realm of epistemology,
    i.e. how we know what we know.

    Humans never know how nature _is_. All humans can do is make an
    abstract physical description of nature. Scientific studies are how
    we process information in order to say some things about that
    nature. Bayesian concepts make this process explicit. A Bayesian
    perspective on science says that any theory about reality can have
    no consequences testable by us unless that theory can also describe
    what humans can see and know: models, data, and prior information,
    in other words.

    Note also how causality takes a back seat. A logical relationship
    between events (and their probabilities) does not imply a causal
    (physical) relationship between those events. Treating the logical
    (epistemic) relationship as though it were a physical one is what
    Bayesians sometimes call the Mind Projection Fallacy, which lies
    behind a huge number of misconceptions and 'paradoxes' in
    mathematics (set theory, information theory, Fourier transforms,
    ...), physics (quantum and relativistic physics, potentials, ...),
    and philosophy (Bohr, Einstein, Bohm, Popper, Penrose, ...).

    Bayes' Theorem is only a multiplication rule of probability theory,
    relating a posterior probability to the likelihood of the data
    given a model and to a prior probability:

        P(model | data)  is proportional to  P(data | model) * P(model)

    The prior probability and the posterior probability are not
    necessarily related in time; they simply express different
    relationships to the data being analyzed. Bayesian methodologies
    approach scientific inference from "first principles", treating an
    n-parameter problem directly with an n-dimensional posterior
    probability distribution.
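
    To make that multiplication rule concrete, and to connect it back
    to the penny, here is a minimal sketch in Python. The data (7 heads
    in 10 tosses) and the uniform prior are invented purely for
    illustration:

        # Minimal sketch: posterior ~ prior * likelihood for the
        # unknown heads-probability of a coin. Data are made up.
        n_heads, n_tails = 7, 3

        # Candidate values of p = P(heads) on a grid.
        grid = [i / 100.0 for i in range(101)]

        # Prior: uniform, i.e. initial ignorance about p.
        prior = [1.0 for p in grid]

        # Likelihood of the observed data for each candidate p.
        likelihood = [p**n_heads * (1.0 - p)**n_tails for p in grid]

        # Posterior: multiply and renormalize (Bayes' Theorem).
        unnorm = [pr * lk for pr, lk in zip(prior, likelihood)]
        total = sum(unnorm)
        posterior = [u / total for u in unnorm]

        # One summary of what prior + data say about p:
        mean_p = sum(p * w for p, w in zip(grid, posterior))
        print("posterior mean of P(heads) =", round(mean_p, 3))

    Nothing in that little calculation appeals to hypothetical
    repetitions of the toss; it only combines the prior, the model,
    and the data actually in hand.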

    >The question of why statistical analysis "works" (to the extent that
    >it does, and given an initial state of ignorance), or indeed the
    >question of what conditions must pertain in order for statistical
    >analysis to be appropriate, is not itself answerable by further
    >statistical analysis.

    No.

    Some history. The Bayesian probabilistic ideas have been around
    since the 1700s. Bernoulli, in 1713, recognized the distinction
    between two definitions of probability: (1) probability as a measure
    of the plausibility of an event with incomplete knowledge, and (2)
    probability as the long-run frequency of occurrence of an event in a
    sequence of repeated (sometimes hypothetical) experiments. The
    former (1) is a general definition of probability adopted by the
    Bayesians. The latter (2) is called the "frequentist" view,
    sometimes called the "classical", "orthodox" or "sampling theory"
    view.
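
    (To see the contrast in code: the sketch above treats probability
    as a plausibility, definition (1). The toy simulation below, with
    an assumed fair coin, illustrates definition (2), probability as a
    long-run relative frequency over repeated tosses.)

        import random

        # Definition (2): probability as the long-run relative
        # frequency of heads over many repetitions of the experiment.
        random.seed(1)                 # reproducibility of the toy run
        n_tosses = 100000
        heads = sum(1 for _ in range(n_tosses) if random.random() < 0.5)
        print("relative frequency of heads:", heads / float(n_tosses))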

    Scientists who rely on frequentist definitions when assigning
    uncertainties to their measurements should be careful. The concept
    of sampling theory, or of a statistical ensemble, is often not
    relevant in astronomy, for example. A gamma-ray burst is a unique
    event, observed once, and the astronomer needs to know what
    uncertainty to place on the one data set he/she actually has, not
    on thousands of other hypothetical gamma-ray burst events.
    Similarly, the astronomer who needs to assign uncertainty to the
    large-scale structure of the Universe must base that uncertainty
    on _our_ particular Universe, because there are no comparable
    observations from the "thousands of universes like our own."
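
    As a toy illustration of what "uncertainty for the one data set
    you actually have" can mean: suppose a detector records N photons
    from a single burst. (The Poisson model and the count below are my
    own assumptions for the sketch, not anything tied to real burst
    data.) A Bayesian simply writes down the posterior for the source
    rate given that one N:

        import math

        # Single observed event: N photons from one burst (made up).
        N = 57

        # Model: Poisson counts with unknown mean rate mu; flat prior.
        # The posterior is built from this one data set alone -- no
        # ensemble of hypothetical bursts is invoked anywhere.
        grid = [0.1 * i for i in range(1, 2001)]    # candidate mu values

        # Log-likelihood; a flat prior only adds a constant.
        log_post = [N * math.log(mu) - mu for mu in grid]
        peak = max(log_post)
        post = [math.exp(lp - peak) for lp in log_post]
        total = sum(post)
        post = [p / total for p in post]

        mean_mu = sum(mu * w for mu, w in zip(grid, post))
        var_mu = sum((mu - mean_mu) ** 2 * w for mu, w in zip(grid, post))
        print("rate estimate: %.1f +/- %.1f photons"
              % (mean_mu, math.sqrt(var_mu)))

    The quoted uncertainty comes out of the posterior for the single
    measured N, which is exactly what the astronomer needs.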

    The version of Bayes' Theorem that statisticians use today is
    actually the generalized version due to Laplace. One particularly
    nice example of Laplace's Bayesian work was his estimation of the
    mass of Saturn, given orbital data from various astronomical
    observatories about the mutual perturbations of Jupiter and Saturn,
    and using a physical argument that Saturn's mass cannot be so small
    that it would lose its rings or so large that it would disrupt the
    Solar System. Laplace said, in his conclusion, that the mass of
    Saturn was (1/3512) of the solar mass, and he gave a probability of
    11,000 to 1 that the mass of Saturn lies within 1/100 of that value.
    He should have placed a bet, because over the next 150 years, the
    accumulation of data changed his estimate for the mass of Saturn by
    only 0.63% ...
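
    For the curious, the kind of statement Laplace made is a direct
    readout of a posterior distribution. A minimal sketch, assuming
    (purely for illustration, not as Laplace's actual calculation) a
    Gaussian posterior for Saturn's mass whose relative width is
    chosen to match the story, recovers odds of roughly 11,000 to 1
    that the true mass lies within 1% of the estimate:

        import math

        def gauss_cdf(x):
            # Standard normal CDF via the error function.
            return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

        # Assumed Gaussian posterior for the mass, in units of the
        # estimate: mean 1.0, relative standard deviation sigma.
        sigma = 0.00255      # ~0.26%, chosen to reproduce the story

        # Posterior probability that the true mass lies within 1%
        # of the estimate, and the corresponding betting odds.
        p_within = gauss_cdf(0.01 / sigma) - gauss_cdf(-0.01 / sigma)
        odds = p_within / (1.0 - p_within)
        print("P(within 1%%) = %.6f, odds about %.0f to 1"
              % (p_within, odds))

    The particular sigma is reverse-engineered for the illustration;
    the point is only that posterior odds like Laplace's fall straight
    out of the posterior distribution for the parameter.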

    More references that might be useful:

    General for scientists: (article)
    A.L. Graps, "Probability Offers Link Between Theory and Reality,"
    Scientific Computing World, October 1998.

    Focusing more on epistemology: (book)

    _Scientific Reasoning: The Bayesian Approach_ by Colin Howson and Peter
    Urbach, 1989, Open Court Publishing.

    Focusing on implementation: (books)

    _Bayesian Statistics_ (2nd edition) by Peter M. Lee, Oxford
    University Press, 1997.

    _Data Analysis: A Bayesian Tutorial_ by D.S. Sivia, Clarendon
    Press: Oxford, 1996.

    Martz, Harry and Waller, Ray, chapter: "Bayesian Methods" in
    _Statistical Methods for Physical Science_, Editors: John L.
    Stanford and Stephen Vardeman [Volume 28 of the Methods of
    Experimental Physics], Academic Press, 1994, pp. 403-432.

    Other useful papers on the web:

    Epistemology Probabilized by Richard Jeffrey
    http://www.princeton.edu/~bayesway/

    Edwin Jaynes: Probability
    http://bayes.wustl.edu/

    "Probability in Quantum Theory",
    "Clearing up Mysteries- the Original Goal".

    "Role and Meaning of Subjective Probability: Some Comments
    on Common Misconceptions." by Giulio D'Agostini
    http://zeual1.roma1.infn.it/~agostini/prob+stat.html

    Amara

    -- 
    ********************************************************************
    Amara Graps, PhD          email: amara@amara.com
    Computational Physics     vita:  ftp://ftp.amara.com/pub/resume.txt
    Multiplex Answers         URL:   http://www.amara.com/
    ********************************************************************
    "The understanding of atomic physics is child's play compared with the
    understanding of child's play."  -- David Kresch
    

