Re: evolution and diet (was: FITNESS: Diet and Exercise)

From: gts (gts_2000@yahoo.com)
Date: Sat Apr 19 2003 - 11:35:57 MDT

    Concerning the important role of laboratory testing in
    helping us understand proper diet:
     
    "Eliezer S. Yudkowsky" <sentience@pobox.com> wrote:

    > Or rather, it doesn't matter what the null
    > hypothesis or Bayesian prior or whatever
    > *was*, because there are now enough specific
    > cases of modern diets being detrimental because
    > of violating ancestral invariants that I would,
    > indeed, tend to take as the *new* working assumption
    > that the ancestral diet is better until proven
    > otherwise.

    Actually that is what I mean by the paleodiet
    being the null or default hypothesis. To those like
    you and me who see validity in the idea that the
    healthiest foods are paleolithic foods, because we are
    best adapted to those foods, the paleodiet appears to
    be a good working hypothesis, and should be treated as
    the null hypothesis which must be rejected in any
    statistical test about diet. This would mean that the
    "burden of proof" would indeed fall on those
    researchers who would like to formulate and test any
    hypothesis that deviates from the default paleodiet
    working hypothesis.

    This is just basic Stat 101. Perhaps Harvey
    understands the idea also, and is objecting to the
    "burden of proof" concept simply because it might seem
    offensive in a verbal debate with someone who has no
    knowledge of statistics.

    > That's not playing burden-of-proof tennis,

    Burden-of-proof tennis is exactly what scientific
    researchers do. It is what fuels scientific progress.
    Most scientists are not paid well; other than the
    simple pursuit of knowledge, their motivation in life
    is to obtain prestige amongst their peers in academia.
    Prestige comes from publishing papers that establish
    them as researchers who successfully advance
    scientific knowledge by formulating, testing, and
    validating new competing hypotheses that reject old
    working (null) hypotheses. The most successful and
    reputable researchers gladly take the statistical
    burden-of-proof upon their shoulders while also
    remaining *experimentally* unbiased and objective.

    An exception to hypothesis-testing research is
    "exploratory research," which is not an attempt to
    support or reject any particular hypothesis. Much of
    the medical research we see in databases like Medline
    is exploratory research. People who've forgotten or
    who never took Stat 101 commonly misinterpret such
    studies. Probably you and Harvey know the difference,
    but for those who don't I'll make up a simple
    idealized example of exploratory research:

    Researcher Jones wonders what adding large amounts of
    Nutrient A to the diet of rats will do to 20 different
    blood parameters related to health, and decides to do
    some exploratory research. For a period of 30 days he
    adds large amounts of Nutrient A to the diet of 50
    rats and the same amount of a placebo substance to the
    diets of 50 control rats. At the end of 30 days he
    measures 20 blood parameters of interest in all 100
    rats. Parameter #4 is found to have increased in the
    rats on the high Nutrient A diet, with the difference
    versus the control rats statistically significant at
    p < .05 (meaning that, if the means of the parameter
    in the two populations from which the samples were
    taken were actually the same, a difference this large
    or larger would arise from sampling error alone no
    more than 5% of the time; informally, we can be
    reasonably confident that the seemingly large
    difference is not just due to sampling error). He then
    reports this seemingly amazing result in a major
    medical journal.
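
    To make Jones' exploratory scan concrete, here is a
    minimal Python sketch using NumPy and SciPy (my own
    illustration, not part of the original example; all
    numbers and distributions are invented, and Nutrient A
    is simulated as having no real effect on anything):

    # Jones-style exploratory scan: 20 blood parameters, 50 treated rats
    # vs. 50 controls, with Nutrient A having *no* real effect at all.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_rats, n_params = 50, 20

    # Simulated measurements: rows = rats, columns = blood parameters.
    treated = rng.normal(loc=100.0, scale=10.0, size=(n_rats, n_params))
    control = rng.normal(loc=100.0, scale=10.0, size=(n_rats, n_params))

    # One two-sample t-test per parameter: the exploratory "scan".
    for k in range(n_params):
        t, p = stats.ttest_ind(treated[:, k], control[:, k])
        note = "  <== looks 'significant' at p < .05" if p < 0.05 else ""
        print(f"Parameter #{k + 1}: p = {p:.3f}{note}")

    Run with different random seeds, such a scan typically
    flags a parameter or so as "significant" even though
    nothing real is going on.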

    A common mistake is to conclude from Jones' research
    that adding Nutrient A to the diet of rats is likely
    to cause an increase in Parameter #4.

    As any statistician knows, no such conclusion can be
    made, because the exploratory researcher did not FIRST
    explicitly define both 1) a null hypothesis, and 2) a
    competing hypothesis concerning the effects of
    Nutrient A on Parameter #4.

    To draw any conclusions, another study must be
    performed in which these two hypotheses are defined
    explicitly prior to the experiment. For example,
    researcher Smith, a subscriber to that same medical
    journal, might read about researcher Jones' exploratory
    research concerning Nutrient A. Intrigued by Jones'
    results, he might then decide to find grant money to
    perform the critical experiment to determine if
    Nutrient A actually causes an increase in Parameter
    #4. He will perform an experiment similar to Jones'
    experiment, with the critical difference being that he
    will first define precisely the hypothesis he is
    testing. He must define two hypotheses:

    The Null Hypothesis:
    "Nutrient A has no effect on Parameter #4."

    and

    The Competing Hypothesis:
    "Nutrient A causes Parameter #4 to increase."

    His study will be hideously biased unless he defines
    both these hypotheses explicitly *before* conducting
    the experiment.
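
    As a rough sketch of what Smith's pre-specified
    analysis amounts to (again my own illustration, with
    invented placeholder numbers), the confirmatory study
    reduces to a single one-sided two-sample t-test on
    Parameter #4 alone, with both hypotheses fixed before
    any data are collected:

    # Smith's confirmatory test, with the hypotheses fixed in advance:
    #   H0: Nutrient A has no effect on Parameter #4.
    #   H1: Nutrient A causes Parameter #4 to increase.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Placeholder data for the new, independent experiment (invented).
    param4_treated = rng.normal(loc=105.0, scale=10.0, size=50)
    param4_control = rng.normal(loc=100.0, scale=10.0, size=50)

    # Welch two-sample t-test; the two-sided p-value is halved because
    # H1 is one-sided ("increase").
    t, p_two_sided = stats.ttest_ind(param4_treated, param4_control,
                                     equal_var=False)
    p_one_sided = p_two_sided / 2 if t > 0 else 1 - p_two_sided / 2

    if p_one_sided < 0.05:
        print(f"Reject H0 at the 5% level (one-sided p = {p_one_sided:.4f})")
    else:
        print(f"Fail to reject H0 (one-sided p = {p_one_sided:.4f})")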

    The reason we cannot rely on Jones' exploratory
    research is that he measured 20 parameters without
    first defining the null and competing hypotheses for
    any particular one of them. Statistically speaking, if
    the 20 parameters are more or less independent random
    variables and Nutrient A in fact affects none of them,
    then on average about 1 of the 20 parameters (5% of
    them) will still deviate enough to be found
    statistically significant at the 95% confidence level
    by pure random chance alone; the chance that at least
    one parameter is flagged this way is roughly
    1 - 0.95^20, or about 64%.
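
    A quick Monte Carlo check (again only an illustration)
    bears out that arithmetic: with 20 independent
    parameters and no real effect anywhere, an exploratory
    scan flags about one parameter per study on average,
    and flags at least one in roughly 64% of studies.

    # Monte Carlo check: 20 independent parameters, no real effect at all.
    # How often does an exploratory scan still find "significant" results?
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n_studies, n_rats, n_params = 2000, 50, 20

    false_positives = np.zeros(n_studies)
    for s in range(n_studies):
        treated = rng.normal(size=(n_rats, n_params))
        control = rng.normal(size=(n_rats, n_params))
        _, pvals = stats.ttest_ind(treated, control, axis=0)
        false_positives[s] = np.sum(pvals < 0.05)

    print("Mean 'significant' parameters per study:", false_positives.mean())
    print("Studies with at least one false positive:",
          (false_positives > 0).mean())
    print("Theoretical at-least-one rate:", 1 - 0.95 ** 20)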

    -gts


