Definitions of Probability (was: Re: Clint & Robert on "Faith in

Amara Graps (
Wed, 27 Oct 1999 21:36:55 +0100

James Wetterau ( Tue, 26 Oct 1999 writes:

>So far, the only non-circular definition I have found is a
>mathematical definition claiming certainty in results iff we
>take limits as the number of experiments goes to infinity.

That's the "frequentist" definition of probability. I don't find that definition satisfying either. You may find the Bayesian perspective on probability more natural and intuitive.
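To make the frequentist picture concrete, here is a minimal sketch (mine, not from the original discussion) of what "probability as long-run frequency" means in practice: simulate a biased coin and watch the observed frequency of heads. The definition only promises agreement with the true bias in the limit of infinitely many trials, which is exactly the point being objected to above.

```python
import random

def long_run_frequency(p, n_trials, seed=0):
    """Estimate P(heads) for a coin with true bias p as the
    fraction of heads seen in n_trials simulated flips --
    the frequentist notion of probability."""
    rng = random.Random(seed)
    heads = sum(rng.random() < p for _ in range(n_trials))
    return heads / n_trials

# The estimate only settles down to p "in the limit":
for n in (10, 1_000, 100_000):
    print(n, long_run_frequency(0.3, n))
```

For any finite number of trials the estimate fluctuates; no finite experiment ever pins the probability down exactly.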

(Some portions of the following note I extracted from an article I wrote 1 year ago about the Bayesians and the MaxEnt'98 conference for Scientific Computing World (October 1998), copyright 1998 IOP Publishing, UK.)

Bayesian probabilistic ideas have been around since the 1700s. Bernoulli, in 1713, recognized the distinction between two definitions of probability: (1) probability as a measure of the plausibility of an event given incomplete knowledge, and (2) probability as the long-run frequency of occurrence of an event in a sequence of repeated (sometimes hypothetical) experiments. The former (1) is the general definition of probability adopted by the Bayesians. The latter (2) is called the "frequentist" view, sometimes called the "classical", "orthodox" or "sampling theory" view.

In the sciences, I think that the concept of "sampling theory", or the "statistical ensemble", is goofy, or simply not applicable. For example, a gamma-ray burst is a unique event, observed once, and the astronomer needs to know what uncertainty to place on the one data set he/she actually has, not on thousands of other hypothetical gamma-ray burst events. Similarly, the scientist who needs to assign uncertainty to the large-scale structure of the Universe must assign uncertainties based on _our_ particular Universe, because there are no comparable observations in each of the "thousands of universes like our own."

The Bayesian approach to scientific inference takes into account not only the raw data, but also the prior knowledge that one has to supplement the data. That prior knowledge may be data or results from previous experiments, conservation laws or models, known characteristics of the assumed model, data filters, scientific conjecture, experience, or other objective or subjective data sources. The Bayesian approach assigns probabilities to all possible theories and to all possible evidence. Using a logical framework for prior and current information, the Bayesians infer a probabilistic answer to a well-posed question, using all of the information at one's disposal. And when one acquires new evidence, the Bayesians update their "priors" in the equation, resulting in a modified probabilistic answer that essentially reduces one's hypothesis space. Probability to the Bayesians represents a state of knowledge, conditional on some context. Bayes' Theorem encapsulates the process of learning. Bayesian Probability Theory may be the best formal theory that we have on the relationship between theory and evidence.
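The "updating of priors" described above can be sketched in a few lines. This is a toy illustration of my own (not from any of the references below), using the standard conjugate Beta prior for a coin's unknown bias: Bayes' Theorem then reduces to simply adding the observed counts to the prior's parameters.

```python
def beta_update(prior_a, prior_b, heads, tails):
    """Bayes' Theorem for a coin whose bias has a Beta(a, b) prior:
    after observing the data, the posterior is Beta(a+heads, b+tails)."""
    return prior_a + heads, prior_b + tails

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution -- our 'best estimate' of the bias."""
    return a / (a + b)

# Start from a flat prior, Beta(1, 1): complete ignorance about the bias.
a, b = 1, 1
# Fold in the evidence -- say, 7 heads and 3 tails:
a, b = beta_update(a, b, 7, 3)
print(beta_mean(a, b))  # posterior mean = 8/12, about 0.667
```

New evidence is absorbed by running the same update again with the new counts; the posterior from one experiment becomes the prior for the next, which is the "process of learning" that Bayes' Theorem encapsulates.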

I'm doing my best to spread the word about Bayesian Probability Theory to other astronomers. Some small percentage (say about 10%) have heard about it. I think it would really open the door to a better (i.e., more realistic) way for scientists to formulate their scientific problems.

In contrast to Bayes' Theorem itself, which is a simple, intuitive description of one's state of knowledge, the assignment of the probabilities can be difficult, so this is the usual "hang-up". (I'm at this stage myself, for example, in formulating my own scientific problems in the Bayesian way; I still have a lot to learn.) There is nothing within the theory that prescribes how one should assign probabilities; that must come from outside. Bayesians will tell you that this aspect of thinking more carefully about one's data gives one a deeper understanding of one's scientific problem, and hence the extra thought involved in assigning priors is well worth the effort.
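To show why the prior assignment matters (a sketch of my own, with made-up numbers): with only a little data, two reasonable priors for a coin's bias lead to noticeably different posterior estimates, and only more data makes them converge.

```python
# Same scant data -- 2 heads, 0 tails -- under two Beta priors.
heads, tails = 2, 0

priors = {
    "flat Beta(1,1)":          (1, 1),    # complete ignorance
    "informative Beta(10,10)": (10, 10),  # strong belief the coin is fair
}

for name, (a, b) in priors.items():
    # Posterior mean of Beta(a+heads, b+tails):
    post_mean = (a + heads) / (a + b + heads + tails)
    print(name, round(post_mean, 3))
```

With two observations the flat prior already leans heavily toward "biased" (posterior mean 0.75), while the informative prior barely moves from 0.5. Neither is "wrong"; they encode different prior states of knowledge, which is exactly the choice the theory leaves outside itself.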

There are a handful of scientists using Bayesian Probability Theory to clarify some fundamental scientific theories. Many scientific fields such as information theory, quantum physics, Fourier analysis, relativity, cosmology, and artificial intelligence are full of paradoxes and fallacies. What lies at the source of the confusion? The confusion may arise when one interprets probability as a real, physical property, that is, when one has failed to see a distinction between reality and our _knowledge of_ reality. In the Bayesian view, the probability distributions used for inference do not describe a real property of the world, only a certain state of information about the world. For example, for the Bayesian, the wave functions of the Schroedinger wave equation simply become a posterior probability describing our incomplete information about the quantum system, rather than wave functions that collapse in reality upon our observation. A Bayesian probabilistic approach to these fields offers some new chances to clarify many physical "paradoxes".

>Oddly, probability appears to be a concept which people
>intuitively understand but which can only be non-circularly
>defined in terms of infinite series. I have to wonder if that,
>too, begs the question, as it is an appeal to a non-testable
>hypothesis. (If only we could have infinite trials, you'd
>see!) There's something very fishy about probability.

I hope you can now see that there are better definitions of probability (in my opinion).

Here's where you can learn more.

The following two references are excellent introductions to Bayesian methods:

Loredo, Tom, "From Laplace to Supernova SN 1987A: Bayesian Inference in Astrophysics", published in the Dartmouth proceedings of the International Society of Entropic Research. An excellent tutorial on probability theory; a one-page list of errata and an index are included. You may download the paper from: (This Web site contains many other useful and classic Bayesian papers, including the most important reference: an unfinished book by Edwin Jaynes.)

(And you can find more of Tom Loredo's papers here:

Giulio D'Agostini is a statistician from Rome who teaches statistics to high-energy physicists at CERN. Two summers ago he finished teaching a course on Bayesian statistics, and you can find his 200-page book of detailed lecture material at a CERN site:

(Note: Scroll down the page to
"Bayesian Reasoning in High-Energy Physics - Principles and Applications" by G. D'Agostini, INFN, Italy on 25, 26, 27, 28 & 29 May 1998.)

William Press, of the Harvard-Smithsonian Center for Astrophysics, is one of the authors of the influential work _Numerical Recipes_. I was not aware of Press' interest in Bayesian methods until I saw a reference in one of Tom Loredo's papers (above). Press has a very interesting article, "Understanding Data Better with Bayesian and Global Statistical Methods", that I located on the Los Alamos National Laboratory's astro-ph server:

(and then scroll down to article 9604126). The paper is 14 pages of PostScript with embedded figures, given at the Unsolved Problems in Astrophysics Conference, Princeton, April 1995.

And some Non-Internet reference books:

Sivia, D.S. _Data Analysis: A Bayesian Tutorial_, Clarendon Press: Oxford, 1996.

Martz, Harry and Waller, Ray, chapter "Bayesian Methods" in _Statistical Methods for Physical Science_, Editors: John L. Stanford and Stephen Vardeman [Volume 28 of the Methods of Experimental Physics], Academic Press, 1994, pp. 403-432.


Amara Graps                  email:
Computational Physics        vita:  finger
Multiplex Answers            URL:
     "Trust in the Universe, but tie up your camels first."
               (adaptation of a Sufi proverb)