Questioning the Methods; Questioning the Results

The headlines tend to be sexy, eye-catching, definitive, and nearly always misleading. Some statisticians claim that observational studies are unreliable and not supported by replicable data.

A year ago, a study published in the Proceedings of the Royal Society B surveyed 740 pregnant women on what they ate before and during pregnancy. Of the women who consumed the most calories, 56 percent gave birth to boys; of those who consumed the fewest, 45 percent did. Of the 132 foods included in the survey, breakfast cereal was the one found to be linked to having boys.

Conclusion: Women who eat cereal have boys?

A statistician in North Carolina reanalyzed the data and attributed the study's findings to pure chance. The researchers are standing behind their findings.
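The chance argument is easy to see with a little arithmetic. If none of the 132 foods is truly linked to baby sex, each food's significance test still has (at the conventional 0.05 level) a 5 percent chance of looking like a "link." A minimal simulation sketch, assuming independent tests under the null hypothesis (where p-values are uniform on 0–1):

```python
import random

random.seed(0)

N_FOODS = 132      # foods in the survey
ALPHA = 0.05       # conventional significance threshold
N_STUDIES = 10_000 # simulated "studies"

def chance_hits(n_foods=N_FOODS, alpha=ALPHA):
    """Count foods that look 'significant' purely by chance in one study.

    Under the null hypothesis (no food truly matters), each test's
    p-value is uniformly distributed, so a hit occurs with prob. alpha.
    """
    return sum(random.random() < alpha for _ in range(n_foods))

hits = [chance_hits() for _ in range(N_STUDIES)]
avg = sum(hits) / N_STUDIES
at_least_one = sum(h > 0 for h in hits) / N_STUDIES

print(f"average spurious 'links' per study: {avg:.1f}")
print(f"studies with at least one 'link': {at_least_one:.1%}")
```

With 132 independent questions, roughly 132 × 0.05 ≈ 6.6 spurious "links" are expected per study, and virtually every study turns up at least one.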

Melinda Beck of The Wall Street Journal wrote:

Behind the cereal squabble lies a deep divide between statisticians and epidemiologists about the nature of chance in observational studies, in which researchers track people’s habits and look for associations with their health but don’t intervene at all.

Stan Young of the National Institute of Statistical Sciences has provided a set of questions to help determine whether claims from observational (medical) studies are true.

1. Is the trial a randomized clinical trial? (In general, reliable.)

  • Non-FDA – 80% likely OK. (Ask if the trial is replicated; ask how many questions are in the primary analysis. Are claims made for a secondary analysis?)
  • FDA approved – over 95% OK for the primary claim. (Data is checked. Analysis is checked. The analysis is approved before data collection.)
2. Is the trial an observational study? (In general, very unreliable; of the claims tested, over 90% fail to replicate.)
  • Is the data publicly available?
  • Is the analysis code available?
  • How many questions are at issue?
  • Has the data set been independently re-analyzed?
  • Have the claims been independently replicated?
  • What was the cost of the study and who funded the study?
  • If there is a proposed biological mechanism, is there independent experimental evidence to support it? Was the mechanism proposed after-the-fact?
  • Is “cause and effect” being claimed?
  • What is the authors’ opinion: if someone replicated the study, how confident are they that a very similar result would be found?
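The "how many questions are at issue?" item matters because the more questions a study asks, the stricter each individual test must be. A standard remedy (my illustration, not something Young's checklist prescribes) is the Bonferroni adjustment: divide the significance threshold by the number of questions so the chance of any false positive across the whole study stays at the original level.

```python
# Bonferroni adjustment: with m questions tested, require each
# individual p-value to beat alpha/m to hold the family-wise
# false-positive rate at alpha.
m = 132        # questions at issue (e.g., foods in the cereal survey)
alpha = 0.05   # desired overall false-positive rate
threshold = alpha / m

print(f"per-question threshold: {threshold:.5f}")
```

Against a per-question threshold of about 0.0004 rather than 0.05, a lone "significant" food out of 132 looks far less convincing.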

Young offered “Effect of Selenium and Vitamin E on Risk of Prostate Cancer and Other Cancers” (JAMA. 2009;301(1)) as a study that attempted to replicate claims coming from observational studies.

MP3 download of Young talking about misuse of statistics in epidemiological studies: