Type of distortion: Papers citing original RCTs that investigated the effect of SSB reduction on weight exaggerated the extent of the evidence supporting a beneficial effect
Where it occurs: In the scientific peer-reviewed literature
Cope and Allison (14) documented in a quantitative analysis of the literature that this exaggerated reporting was the norm rather than the exception.
This is not a criticism of the original RCTs, but rather of the manner in which subsequent authors cite them.
Type of distortion: Association studies are described using language that indicates a cause-and-effect relationship has been found
Where it occurs: In the scientific peer-reviewed literature [e.g., (70)], in government-sponsored newsletters [e.g., (71)], and in mass media articles (72–74)
“A new study…suggests a key way to reduce childhood obesity could be to limit your child’s salt intake. The study looked at 4,000 children in Australia and found kids who ate more salt also had more cravings for sugary-sweetened drinks like soda and juice.” (73) |
The published study (75) did not mention “cravings” and did not overstate its findings; the exaggeration was introduced in the media coverage. This misleading practice is common in the obesity field overall (76).
Type of distortion: Public statements that contradict available evidence
Where it occurs: Communications from public health agencies
In 2010, the New York Times ran an article showing, through e-mails obtained under the Freedom of Information Act, that the NYC Department of Health was knowingly making exaggerated statements about the amount of weight gain to be expected from drinking SSBs. Even after this exposé (September 2012), the NYC Department of Health made still more exaggerated statements, contradicted by the available evidence, about the amount of weight gain expected from drinking SSBs.
See Figure 11 for details and specific references. |
Type of distortion: Changing what is considered the primary endpoint or analysis in an RCT
Where it occurs: In the scientific peer-reviewed literature
An example (34) of this occurred in an RCT published in the American Journal of Clinical Nutrition (AJCN). In the paper, the authors state “Our main aim was to test the hypothesis that sucrose-sweetened cola increases ectopic fat including VAT, total body fat accumulation, and metabolic risk factors…,” whereas the registration in ClinicalTrials.gov states “Primary Outcome Measures: Body Weight; MR spectroscopy; MRI; DEXA scan.” Similarly, in ClinicalTrials.gov, the title of the trial is “Effect of Carbonated Soft Drinks on the Body Weight,” whereas in the article the title is “Sucrose-sweetened beverages increase fat storage in the liver, muscle, and visceral fat depot: a 6-mo randomized intervention study.” The fact that there was no significant effect on weight was not mentioned in the abstract of the paper.
This does not conform to the CONSORT guidelines for reporting RCTs, to which authors publishing in AJCN are expected to adhere.
Type of distortion: Conclusion statements from a paper do not match the results
Where it occurs: In peer-reviewed papers, press releases, and mass media interviews
An example from the peer-reviewed literature occurred in a paper in AJCN (35) in which the results section of the abstract stated, “Mean (±SEM) weight losses at 6 mo were −2.5 ± 0.45% in the DB group, −2.03 ± 0.40% in the Water group, and −1.76 ± 0.35% in the AC group; there were no significant differences between groups.” Yet the conclusion section of the abstract stated, “Replacement of caloric beverages with non-caloric beverages as a weight-loss strategy resulted in average weight losses of 2% to 2.5%.” Given the nonsignificant result, it does not seem justifiable to state that there was any weight loss as a result of the treatment. Even if point estimates were being provided in a merely descriptive manner, the unbiased estimates of treatment effects in an RCT are the control-subtracted means, not the raw means in the treatment group (a worked calculation follows this entry). Examples from press releases and media interviews can be found in (14, 77) and in (78, 79).
Though a trained scientist carefully reading the original papers will understand the results, journalists, regulators, clinicians, and scientists who merely skim an abstract are likely to be misled.
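To see how the control-subtracted framing changes the picture, here is an illustrative calculation using the point estimates quoted above. It assumes the AC arm served as the comparison (attention control) group and ignores the reported uncertainty, so the figures are purely descriptive:

\[
\begin{aligned}
\hat{\Delta}_{\text{Water vs. AC}} &= -2.03\% - (-1.76\%) = -0.27\ \text{percentage points},\\
\hat{\Delta}_{\text{DB vs. AC}} &= -2.5\% - (-1.76\%) = -0.74\ \text{percentage points}.
\end{aligned}
\]

Framed this way, the descriptive between-group differences are well under one percentage point, far from the “2% to 2.5%” quoted in the conclusion, and the trial itself reported no significant differences between groups.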
Type of distortion: Publication bias
Where it occurs: In the scientific peer-reviewed literature
Cope and Allison (14) showed that in observational epidemiologic studies of the association between SSB consumption and obesity, a standard test of publication bias was significant (one such test is sketched below), suggesting that investigators are more likely to publish positive, statistically significant findings than to publish null findings.
This is why we wrote earlier in this paper that the observed magnitude of association is likely biased upwards. Interestingly, Cope and Allison found that this publication bias seemed to occur among non-industry-funded authors and not among industry-funded authors. |
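The specific test used by Cope and Allison is not named in this summary. As an illustrative sketch only, the code below implements one standard choice, Egger's regression test for funnel-plot asymmetry; the function name and the commented placeholder inputs are ours for illustration and are not taken from the cited analysis.

```python
# Illustrative sketch of Egger's regression test for funnel-plot asymmetry,
# one standard test for small-study effects consistent with publication bias.
# Assumes per-study effect estimates and their standard errors are available.
import numpy as np
import statsmodels.api as sm


def eggers_test(effects, std_errors):
    """Regress standardized effects (effect / SE) on precision (1 / SE).

    An intercept that differs significantly from zero indicates funnel-plot
    asymmetry, which is consistent with (though not proof of) publication
    bias. Returns the intercept estimate and its two-sided p value.
    """
    effects = np.asarray(effects, dtype=float)
    std_errors = np.asarray(std_errors, dtype=float)
    y = effects / std_errors                # standardized effect per study
    X = sm.add_constant(1.0 / std_errors)   # intercept plus precision term
    fit = sm.OLS(y, X).fit()
    return fit.params[0], fit.pvalues[0]


# Hypothetical usage (placeholder numbers, not data from the cited studies):
# intercept, p = eggers_test([0.25, 0.40, 0.10, 0.55], [0.10, 0.20, 0.05, 0.30])
# print(f"Egger intercept = {intercept:.2f}, p = {p:.3f}")
```

A significant nonzero intercept in such a regression points to small-study asymmetry consistent with selective publication of positive findings, the pattern described above.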