Editorial
BMJ 2018 Oct 15;363:k4309. doi: 10.1136/bmj.k4309

Open science prevents mindless science

Marcus R Munafò,1 Gareth J Hollands,2 Theresa M Marteau2

Lessons from a case of academic misconduct


On 19 September 2018, three JAMA journals retracted six articles authored by Brian Wansink, John Dyson professor of marketing at Cornell University.1 This adds to seven previous retractions of his work, making a total of 13 articles retracted as of 10 October 2018;2 others may follow. Many of the problems in the retracted studies came to light when a small group of researchers carried out detailed analysis of several articles coauthored by Wansink.3 A subsequent investigation by Cornell found that Wansink had committed academic misconduct, including “misreporting of research data, problematic statistical techniques, failure to properly document and preserve research results, and inappropriate authorship.”4 This raises concerns of, at the least, endemic sloppiness and, at worst, a disproportionate focus on creating compelling narratives rather than establishing firm observations.

Such academic misconduct damages trust in science and scientists, and this may be particularly marked in the Wansink case given his profile outside academia. His work, encapsulated in the book Mindless Eating, influenced wider scientific, policy, and public discourse about how the food environment can be altered to change our behaviour. He was executive director of the USDA Center for Nutrition Policy and Promotion from 2007 to 2009, where he led revision of the 2010 Dietary Guidelines for Americans, and the Smarter Lunchrooms strategies he devised are currently being used in over 29 000 schools.

His studies have been included in numerous narrative and systematic reviews, raising the question of how far the retraction of 13 of his articles undermines the evidence in this field. For example, two Cochrane reviews for which two of us (GJH and TMM) are authors, one on the effect of portion, package, and tableware size5 and the other on nutritional labelling,6 include studies by Wansink. None of the included studies has been retracted, leaving the reviews’ results and conclusions unchanged. But the retractions raise questions about the veracity of other studies Wansink has authored.

It is tempting to treat this as an isolated case, but retractions are not uncommon and are increasing in frequency, although whether this reflects increased detection or increased prevalence is unknown.7 Unfortunately, the responses of institutions to cases such as that of Wansink tend to focus on allocating blame, avoiding litigation, and ultimately hoping that the whole thing will blow over. What is lacking is serious reflection by institutions—universities, funders, and journal editors—on the systemic aspects of their cultures and practices that might have contributed to the problem in the first place. This is in stark contrast to the aviation industry, which has an enviable safety record: when a disaster occurs, it is forensically examined to identify not only its proximal cause but also the wider system that allowed it to happen.

The NHS is beginning to learn from the aviation industry,8 but universities are lagging behind. Some of the institutional processes that may contribute to questionable research practices, which in extreme cases can extend to outright fraud, are clear. For example, institutions may shape the behaviour of scientists in ways that have unintended consequences by disproportionately rewarding publication and grant income over robust observations.

What can we do? Open research practices provide the research community—including researchers, universities, funders, and journal editors—with a framework for minimising academic misconduct of the kind perpetrated by Wansink.7 Individual researchers can protect themselves against their own enthusiasm, and against the incentives to discover something, by preregistering their study protocols and analysis plans. Journals and funders could also require preregistration of study protocols and mandate data sharing, not just for clinical trials but across research areas and methods. There may be cases where this is not appropriate, but in general, making research workflows transparent and open to scrutiny should serve as a quality control measure, including ensuring that data are thoroughly checked. Institutions could encourage open research practices by including them as criteria in job descriptions, supported by training in these approaches.

Ultimately, what is required is a cultural change. None of Wansink’s retracted articles was single authored. Should the coauthors bear some responsibility for not identifying and highlighting concerns? The recommendations of the International Committee of Medical Journal Editors state that all authors should be “accountable for all aspects of the work.”9 In practice, however, this is complicated by the realities of long-distance international collaborations, the specific technical contributions of individual authors, and, critically, the hierarchies and power structures that can make it difficult for junior researchers to speak out if they have concerns about the work of more senior authors.

There is no “one size fits all” solution that will apply across disciplines, but the culture of many parts of the research ecosystem needs to change if biomedical science is to achieve a record of reliability closer to that of the aviation industry.

Competing interests: We have read and understood BMJ policy on declaration of interests and declare that MRM has prepared reports for Jisc, a provider of digital products for higher education.

Provenance and peer review: Commissioned; not externally peer reviewed.

References


