**From:** Eliezer S. Yudkowsky (*sentience@pobox.com*)

**Date:** Tue Jul 08 2003 - 20:38:41 MDT

**Previous message:** Eliezer S. Yudkowsky: "Re: More Hard Problems Using Bayes' Theorem, Please" **In reply to:** Lee Corbin: "RE: More Hard Problems Using Bayes' Theorem, Please" **Next in thread:** Dan Fabulich: "Re: More Hard Problems Using Bayes' Theorem, Please" **Reply:** Dan Fabulich: "Re: More Hard Problems Using Bayes' Theorem, Please" **Reply:** Lee Corbin: "RE: More Hard Problems Using Bayes' Theorem, Please"

Oh, wait, apparently Lee *did* post this to the Extropians list, instead of just to me... no wonder I couldn't find it in my inbox. OK, I'll try and straighten this out on-list.

Lee Corbin wrote:

> > Anyway, problem number three is hopefully my long sought
> > Rosetta Stone that does not require talking of distributions
> > to see Bayesianism rear its head. (I do have one smaller
> > and much easier problem that lays bare the differences, but
> > this here might be much hotter stuff.)
>
> Well, now it seems to me that in this particular problem, it's
> easy to think, wrongly, that an answer is possible. So I apologize
> for putting it on the list (at the time, I thought that I just
> wasn't quick enough). I still maintain that there is not enough
> information in it. But now I believe that that goes for
> Bayesians as well as for everyone else.
>
> > > 3. The probability that a newborn will have deformities
> > >    traceable to a sickness of its mother during pregnancy is 1%.
> > >    If a child is born healthy and normal, the probability that
> > >    the mother had rubella during her pregnancy is 10%. If a
> > >    child is born with deformities that can be traced to a
> > >    sickness of the mother, the probability that the mother had
> > >    rubella during her pregnancy is 50%. What is the probability
> > >    that a child will be born with deformities if its mother had
> > >    rubella during her pregnancy?
>
> In this stupid problem, the authors badly misstate at least one
> of the premises, in my opinion. If you draw a picture of the
> problem, perhaps, you may be less likely to misread it than if
> you plug the numbers into a formula.

No, Lee, it's perfectly straightforward to solve the problem from these premises. Observe:

```
p(deformity)          = 0.01
p(~deformity)         = 0.99
p(rubella|~deformity) = 0.1
p(rubella|deformity)  = 0.5

p(rubella & ~deformity) = p(r|~d) p(~d) = 0.1 * 0.99 = 0.099
p(rubella & deformity)  = p(r|d) p(d)   = 0.5 * 0.01 = 0.005
p(rubella) = p(r&d) + p(r&~d) = 0.104

p(deformity|rubella) = p(rubella & deformity) / p(rubella) = 0.005/0.104 = 0.048
```
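The arithmetic can be checked mechanically. A minimal Python sketch (variable names are mine) reproducing the computation above:

```python
# Premises as stated in the problem
p_d = 0.01                    # p(deformity)
p_not_d = 0.99                # p(~deformity)
p_r_given_not_d = 0.1         # p(rubella | ~deformity)
p_r_given_d = 0.5             # p(rubella | deformity)

# Product rule: p(A & B) = p(A | B) * p(B)
p_r_and_not_d = p_r_given_not_d * p_not_d   # 0.099
p_r_and_d = p_r_given_d * p_d               # 0.005

# Marginalize to get p(rubella)
p_r = p_r_and_d + p_r_and_not_d             # 0.104

# Bayes' theorem: p(deformity | rubella)
p_d_given_r = p_r_and_d / p_r
print(round(p_d_given_r, 3))                # prints 0.048
```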

> And the heresy P(AB) is not equal to P(A|B)*P(B) appears in this
> quote from the above paper:
>
> > In Germany, every expectant mother must have an
> > obligatory test for rubella infection because
> > children born to women who have rubella while
> > pregnant are often born with terrible deformities.
> > The following information is at your disposal:
> >
> > The probability that a newborn will have deformities
> > traceable to a sickness of its mother during pregnancy is 1%.
> >
> > If a child is born healthy and normal, the probability
> > that the mother had rubella during her pregnancy is 10%.
> >
> > If a child is born with deformities and it can be traced
> > to some sickness of the mother, the probability that the
> > mother had rubella during her pregnancy is 50%.
> >
> > What is the probability that a child will be born with
> > deformities if its mother had rubella during her pregnancy?
> >
> > The Bayesian solution p(H|D) is .048. But participants
> > who use one of two non-Bayesian algorithms, computing
> > p(H&D) = .005 or picking p(H) = .01, will produce
> > estimates that lie in the interval of ±5 percentage points
> > around the Bayesian solution.
>
> But why shouldn't p(H&D) --- that is, p(defect & rubella) --- be equal
> to .005? What other conclusion can one come to from the first
> statement together with the third statement?

p(H&D) DOES come to .005. It's just that this is the WRONG SOLUTION to the problem. Sedlmeier and Gigerenzer are not saying that p(H&D) != 0.005; they're saying that p(H&D) != p(H|D).
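The distinction is easy to see numerically. A short Python illustration (my own, not from the paper), using the paper's labels H = deformity and D = rubella:

```python
# Premises
p_h = 0.01                  # p(H): prior probability of deformity
p_d_given_h = 0.5           # p(D|H): rubella given deformity
p_d_given_not_h = 0.1       # p(D|~H): rubella given no deformity

# Marginal probability of rubella
p_d = p_d_given_h * p_h + p_d_given_not_h * (1 - p_h)   # p(D) = 0.104

joint = p_d_given_h * p_h   # p(H&D) = 0.005 -- a correct *joint* probability
posterior = joint / p_d     # p(H|D) = 0.048 -- what the problem actually asks

# The "non-Bayesian" errors are reporting the joint (0.005) or the prior
# p_h (0.01) in place of the posterior (0.048): wrong answers, yet both
# within 5 percentage points of the right one.
```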

> Unless, they fail to believe that p(H&D) = p(H|D)*p(D). Sure
> enough, go back a couple of paragraphs in that paper, and find
>
> > The most frequent non-Bayesian algorithms they identified
> > include computing p (H&D) by multiplying p (H) and p (D | H);
>
> No wonder I can't grasp Bayesianism. ;-)

It may not be a good idea to read academic papers about Bayes before having grasped Bayes, since the authors are speaking to an audience that they need not fear confusing. When they speak of non-Bayesian algorithms, they are NOT talking about formal statistical methods employed by self-declared frequentists; they are talking about naive subjects computing a wrong answer using a bad visualization of the problem. That is, "non-Bayesian" is here a synonym for "incorrect", not "frequentist". They are not talking about the frequentist/Bayesian controversy at all.

If you look at the surrounding context, you will see that the problem they are addressing is that some incorrect answers, computed according to common incorrect strategies, are numerically close to the correct answer, and might be confused with a "Bayesian" answer by common experimental techniques that accept answers as "Bayesian" if they fall within a given range of the correct answer.

> Of course, it is a libel to say that Bayesians would be guilty of
> believing that p(H&D) was not equal to p(H|D)*p(D). But I was
> hoping that it would turn out to be the case that in some problems
> they do not---now I simply believe that the authors are full of shit.
> And I think that their study is deeply flawed too, as a predictable
> result of inflicting this problem on a lot of innocent victims.

Lee, Gigerenzer is extremely unlikely (prior probability) to be full of shit. I haven't heard of Sedlmeier, but I've heard of Gigerenzer. If you are confused about a mathematical subject and you find yourself disagreeing with a professional mathematician, it is diamonds to doughnuts that you are the one in the wrong.

I would (as stated more briefly in my first reply) recommend that you read through the "Intuitive Explanation" from start to finish without skipping anything; and then, if you are still interested in understanding the difference between Bayesians and frequentists (which is something the intro does not address), I would recommend reading the E.T. Jaynes lectures given in the "Further Reading" section at the end of the intro. Jaynes gives specific examples of cases where frequentist methods are both more complicated than and inferior to Bayesian methods.

-- Eliezer S. Yudkowsky http://singinst.org/ Research Fellow, Singularity Institute for Artificial Intelligence


*This archive was generated by hypermail 2.1.5 : Tue Jul 08 2003 - 20:49:23 MDT*