Earlier, I wrote:
>
> On a more positive note, my original inspiration for reading about
> Bayesian statistics came from my May 1996 issue of Discover magazine,
> and yes, you *can* get the full text of the article ...
> called "The Mathematics of
> Making Up Your Mind", by Will Hively.
>
Just thought I'd add a brief "erratum" of sorts to my earlier comment, since I appear to have made a small mistake in interpreting the details of the above-mentioned _Discover_ article.
>
> If that seems a bit obscure, read the Discover article that I mentioned,
> with special attention to the real life drug testing scenario described
> there. In this scenario, you have "standard" analysts proclaiming the
> superiority of one drug over another, because the survival rate of
> patients in a trial came out 1 percentage point better on that drug --
> and handily, one percent just happened to be the arbitrary number that
> the analysts picked beforehand as what it would take to cause someone to
> jump from the slightly *worse* survival rate to assuming a "real" or
> predictably *better* survival rate . . . if the original analysts had
> decided on a smaller "hurdle",
> for one drug to beat another, like a tenth of a percent or something,
> they might still have been correct to conclude that the observed
> difference was very significant in those terms. From what I can see,
> this would have been very difficult for them to justify, since it would
> seem to imply an absurdly high degree of precision in such results, such
> as might be represented by figuring out the standard deviation in the
> data, for instance.
OK, saying that the article's much-discussed "clinical superiority level" of 1% of the sample of patients studied would somehow be a measure of precision or standard deviation in the statistics, *that* was my mistake! On further consideration, when the article talks about a "clinical superiority cutoff", it is really talking about an *economics*-inspired benchmark: how much better the more expensive of the two heart drugs in question should test before its use would be justified. In other words, a somewhat arbitrary economic judgment call was made in the initial study, saying that the more expensive drug should give a 1% better survival rate before the greater expense would be justified. For their part, the critics of the report seem willing to go along with this 1% greater survival rate as the clinical benchmark or significance cutoff, since justifying costs in any more depth than this would be relatively complicated, and would open the "can of worms" of direct economic tradeoffs, or maybe sacrificing one life to save another more cheaply, etc.
In these terms, the theory debate in the article is really about how to approach or interpret this clinical "economy" cutoff, especially in the case described, where the actual trial finds that the "better, more costly" drug just *barely* passes the previously suggested cutoff. Anyway, aside from getting the exact nature of this cutoff level wrong, I think the rest of my earlier comments still make sense. OK, then, that's my "erratum"; now everyone, go back to whatever it was you were doing :^) !
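For what it's worth, the Bayesian way of handling a cutoff like this can be sketched in a few lines of Python. The trial counts below are made up for illustration (I don't have the article's actual numbers in front of me); the point is just that instead of asking whether the observed difference passes a fixed significance hurdle, you can directly compute the posterior probability that the expensive drug's survival rate beats the cheaper one's by at least the 1% benchmark:

```python
import random

random.seed(42)

# Hypothetical trial counts -- NOT the article's real data.
survived_a, total_a = 915, 1000   # more expensive drug
survived_b, total_b = 900, 1000   # cheaper drug
cutoff = 0.01                     # the 1% "clinical superiority" benchmark

# With uniform Beta(1, 1) priors, the posterior for each survival rate
# is Beta(survived + 1, died + 1).  Sample both posteriors and count
# how often drug A beats drug B by at least the cutoff.
n_draws = 100_000
wins = 0
for _ in range(n_draws):
    p_a = random.betavariate(survived_a + 1, total_a - survived_a + 1)
    p_b = random.betavariate(survived_b + 1, total_b - survived_b + 1)
    if p_a - p_b >= cutoff:
        wins += 1

posterior_prob = wins / n_draws
print(f"P(drug A beats drug B by >= 1%): about {posterior_prob:.2f}")
```

With these invented counts the observed difference (1.5%) only just clears the 1% cutoff, and the posterior probability comes out somewhere in the middle rather than at a crisp yes-or-no, which is more or less the point the Bayesian critics in the article were making.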
David Blenkinsop <blenl@sk.sympatico.ca>