Letter
BMJ. 2003 Sep 27;327(7417):752. doi: 10.1136/bmj.327.7417.752-a

Statistical interpretation can also bias research evidence

Lorne Basskin
PMCID: PMC200842  PMID: 14512496

Editor—Kaptchuk discussed the effect of interpretive bias on research evidence.1 Let me add one more example. Studies are designed to determine whether a “statistically significant difference” exists between the outcomes of two alternative treatments. If no difference is discovered, the temptation for authors is to conclude that the treatment under investigation is “just as good” as the gold standard. To justify such a statement, the study needs adequate statistical power, ensuring that the chance of a type II error (incorrectly accepting the null hypothesis when a real difference exists) is sufficiently small.
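
As a hedged illustration (not part of the original letter), the sketch below computes the power of a two-sided two-sample test under a normal approximation. The true difference of 0.5 standard deviations, the common standard deviation, and the 5% significance level are all assumptions of mine, chosen only to make the type II error concrete:

```python
from scipy.stats import norm


def two_sample_power(delta, sigma, n, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test with n patients
    per arm, true mean difference delta, and common standard deviation sigma."""
    se = sigma * (2.0 / n) ** 0.5        # standard error of the difference in means
    z_crit = norm.ppf(1 - alpha / 2)     # two-sided critical value
    # Power = probability the standardised observed difference falls
    # beyond the critical threshold, given the assumed true difference
    return norm.sf(z_crit - abs(delta) / se) + norm.cdf(-z_crit - abs(delta) / se)


# Under these assumed values, a trial with 10 patients per arm has power
# of only about 0.20 -- a type II error rate of roughly 80%.
print(f"power at n=10: {two_sample_power(delta=0.5, sigma=1.0, n=10):.2f}")
```

With so little power, a non-significant result says almost nothing about whether the treatments are equivalent.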

Since power can generally be increased by enlarging the sample size, it has become popular for researchers whose studies lack sufficient power to speculate in a way that makes the actual power meaningless. A typical speculative statement might read: “While the study failed to have sufficient power to confirm the finding that the drugs were not different, had the sample size been increased from 10 to 180, the power would have been sufficient to conclude that no difference exists.” In this way the researcher implies that only a statistical convention prevents him or her from stating that no difference exists between the two drugs. In reality, had the sample size been so increased, there is no guarantee as to what the researchers would have found.
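
A small simulation (again my illustration, not the letter's; the true difference of 0.5 standard deviations and the number of simulated trials are assumptions) makes the point concrete. Even when the drugs genuinely differ, most trials with 10 patients per arm report no significant difference, whereas trials with the letter's hypothetical 180 per arm almost always detect it, so the speculated finding of “no difference” at the larger size is anything but guaranteed:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
true_diff, sigma, n_trials = 0.5, 1.0, 2000    # assumed values for illustration

for n in (10, 180):                            # the letter's two sample sizes
    drug_a = rng.normal(0.0, sigma, size=(n_trials, n))
    drug_b = rng.normal(true_diff, sigma, size=(n_trials, n))
    _, p = ttest_ind(drug_a, drug_b, axis=1)   # one t-test per simulated trial
    detected = np.mean(p < 0.05)
    print(f"n={n:3d} per arm: {detected:.0%} of trials detect the real difference")
```

Under these assumptions roughly one simulated trial in five detects the difference at n=10, against essentially all trials at n=180.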

For those who cannot resist such hypothetical conclusions, may I suggest (tongue in cheek) that you skip the study, examine one patient, report the results, and speculate that whatever you find would be of the greatest statistical significance if only the study had been conducted with more people.

Competing interests: None declared.

References

1. Kaptchuk TJ. Effect of interpretive bias on research evidence. BMJ 2003;326:1453-5.