From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Feb 01 2003 - 09:34:26 MST
I recently considered the question of proving that Bayesian observations
never actually *decrease* your knowledge. To do this, I first had to
define "knowledge". Let's say your prior knowledge is that the
probability of case A is p, and hence that the probability of case ~A is
(1 - p). In that case, in a fraction p of cases you would score p points
and in a fraction (1 - p) of cases you would score (1 - p) points; that
is, in each case you score the probability you assigned to the true
outcome. For example, say that in a sample
group of 100 patients, 20% have HIV - this is your prior knowledge and
it's all the prior knowledge you have. So for any given patient, your
sole knowledge about them is that they have a 20% chance of being infected
with HIV. For 20 patients, or 20% of the group, the truth is that they
have HIV; you score the 20% probability you assigned to this outcome. For
80% of the group, the truth is that they don't have HIV; you score the 80%
probability you assigned to that outcome. And your total knowledge is
0.2*0.2 + 0.8*0.8 = 0.68.
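In code, this "knowledge" score is simply (a minimal Python sketch; the
function name is just my own label for the quantity defined above):

    # "Knowledge" as defined above: the expected probability assigned
    # to the true outcome, when you assign probability p to case A.
    def knowledge(p):
        return p * p + (1 - p) * (1 - p)

    print(knowledge(0.2))  # 0.68, matching the HIV example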
The question I was considering was what a Bayesian observation does to
"knowledge" as thus defined, and whether it's possible to prove that a
Bayesian observation can never decrease your knowledge.
Given Bayes' Theorem, with prior probability p = P(A), and conditional
probabilities m = P(E|A) and n = P(E|~A) for some observation E:
The prior knowledge is p^2 + (1 - p)^2
The posterior knowledge after making the observation, averaged over the
two possible results E and ~E, is:
[(pm)^2 + ((1-p)n)^2]/[pm + (1 - p)n] +
[(p(1 - m))^2 + ((1 - p)(1 - n))^2]/[p(1 - m) + (1 - p)(1 - n)]
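Or, computed directly from the Bayesian update (again a sketch; the
variable names are mine, and knowledge() is the function defined above):

    def posterior_knowledge(p, m, n):
        # Probabilities of observing E and ~E under the prior
        pe = p * m + (1 - p) * n
        pne = p * (1 - m) + (1 - p) * (1 - n)
        # Posterior P(A|E) and P(A|~E) by Bayes' Theorem
        post_e = p * m / pe
        post_ne = p * (1 - m) / pne
        # Expected knowledge, weighted by how likely each result is
        return pe * knowledge(post_e) + pne * knowledge(post_ne)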
And the increase in knowledge is:
2[p(1 - p)(m - n)]^2/[pm + (1 - p)n][p(1 - m) + (1 - p)(1 - n)]
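A quick numerical check that this closed form agrees with the direct
computation (the values of p, m, n below are arbitrary test inputs):

    def increase(p, m, n):
        pe = p * m + (1 - p) * n
        pne = p * (1 - m) + (1 - p) * (1 - n)
        return 2 * (p * (1 - p) * (m - n)) ** 2 / (pe * pne)

    p, m, n = 0.2, 0.9, 0.1
    print(posterior_knowledge(p, m, n) - knowledge(p))  # ~0.1703
    print(increase(p, m, n))                            # same value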
I worked out this result last night and would like to know what
probability theorists call it, and what term they use for "knowledge".
The equation for the change in knowledge is very beautiful, since it
behaves exactly as expected: the numerator is a square and the
denominator is a product of two probabilities, so the change is never
negative, and it is zero exactly when m = n - that is, when the
observation is uninformative about A. I'm posting it here in the hopes
that someone recognizes it immediately.
In other words, I'm looking for someone trained in probability math who
can look at this and say, "Oh, that's just <Google keyword>."
--
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence