Abstract
The public often turns to science for accurate health information, which, in an ideal world, would be error free. However, limitations of scientific institutions and scientific processes can sometimes amplify misinformation and disinformation. The current review examines four mechanisms through which this occurs: (1) predatory journals that accept publications for monetary gain but do not engage in rigorous peer review; (2) pseudoscientists who provide scientific-sounding information but whose advice is inaccurate, unfalsifiable, or inconsistent with the scientific method; (3) occasions when legitimate scientists spread misinformation or disinformation; and (4) miscommunication of science by the media and other communicators. We characterize this article as a “call to arms,” given the urgent need for the scientific information ecosystem to improve. Improvements are necessary to maintain the public’s trust in science, foster robust discourse, and encourage a well-educated citizenry.
Keywords: misinformation, predatory journals, pseudoscience, scientific fraud, health
Science is the only institution where public confidence has remained stable since the 1970s (Funk and Kennedy 2020). Even during the COVID-19 pandemic, trust in scientists has remained extremely high (Luna, Bering, and Halberstadt 2021). Given that the public often turns to science for accurate health information, in an ideal world, science would be error free. However, many instances exist where limitations of scientific institutions and processes can amplify misinformation, presenting a major hurdle for the public. We define misinformation as information that is contrary to the current scientific consensus and disinformation as having the added attribute of being spread deliberately to gain money, power, or reputation (Swire-Thompson and Lazer 2020; see Southwell et al., this volume, for a review).
This article examines four mechanisms that foster health misinformation in science. We first focus on predatory journals that accept publications for monetary gain but do not conduct checks for academic merit. We subsequently discuss pseudoscientists, who provide scientific-sounding information but whose advice is inaccurate, unfalsifiable, or inconsistent with the scientific method. Next, we examine occasions when legitimate scientists spread misinformation and disinformation, including scientific misconduct and fraud, and whether retracting articles is a sufficient remedy. Finally, we conclude with a discussion of scientific miscommunication by the media and other communicators to lay audiences.
We characterize this review as a “call to arms” given that legitimate scientists, science educators, scientific journals, technology companies, research databases, universities, funders, and journalists must all assist in making changes to the current information ecosystem (Hopf et al. 2019). Due to the public’s uniquely high level of trust, scientific institutions have a responsibility to ensure that the information they provide is as accurate as possible, not only to maintain trust but also to foster healthy discourse and a well-educated citizenry.
Predatory Journals
Although the open access movement has been positive for science and society, an unfortunate byproduct has been a proliferation of predatory journals (see Bartholomew 2014). Open access journals require a publication fee from authors, allowing the articles to be freely available to the public. Predatory journals exploit the open access model, publish purely for profit, and do not adhere to the peer review process (Beall 2012). Although the peer review process is far from perfect (Smith 2006), having multiple domain-specific experts evaluate manuscripts prior to publication promotes quality control (Parsi and Elster 2018). Predatory journals borrow the credibility of legitimate scientific journals, just as “fake news” domains borrow the established credibility of the real news industry. These dubious outlets might publish true information at times but do not employ the underlying editorial and peer review processes to minimize the publication of misinformation and disinformation. Given that predatory journals focus on accepting as many publications as possible rather than screening for quality, they have become a significant avenue for health misinformation and low-quality content. As a field, we are only just beginning to understand the dangers and implications of predatory journals, with 89 percent of scholarly articles written about them having been published since 2016 (Mertkan, Aliusta, and Suphi 2021).
In recent years, predatory journals have become extremely numerous. In fact, for fields such as anesthesiology, research has estimated that there are twice as many predatory journals as there are legitimate journals (Cortegiani et al. 2019). Unfortunately, most predatory journals appear to focus on health-related topics, with 25 percent in medicine and 37 percent in the life sciences (Seethapathy, Kumar, and Hareesha 2016). Motives for publishing in predatory journals are varied. For instance, legitimate scientists may publish in predatory journals because they are unaware that the journal is predatory; predatory and reputable publishers often have no apparent differences at first glance. Other motives include the pressure to “publish or perish,” a perceived lack of research proficiency, and social identity threat (e.g., scientists from the developing world might fear that legitimate journals would view them as inferior due to their country of origin; Kurt 2018). Indeed, scientists publishing in predatory journals are more likely to be inexperienced researchers from developing countries (Xia et al. 2015; Demir 2018; although see Bagues, Sylos-Labini, and Zinovyeva 2019). Finally, predatory journals have become an avenue for pseudoscientists to appear more legitimate, given that they have little to no peer review. Indeed, Eriksson and Helgesson (2018) call for an end to the term predatory journals, as some individuals take equal advantage of the journals, using them to spread disinformation on behalf of their own agendas.
Content from predatory journals might not be such a concern if the public did not (or could not) engage with this information. However, papers from predatory journals have leaked onto Google Scholar, PubMed, PubMed Central, MEDLINE, SCOPUS, and Web of Science (Duc et al. 2020; Manca et al. 2018). The potential impact of predatory journals on the scientific information ecosystem is massive. Papers containing disinformation on critical topics (such as cancer treatment; Grudniewicz et al. 2019) are being mistaken by lay audiences as legitimate. These papers are also being included in systematic reviews (Ross-White et al. 2019) and inflating important scientific metrics such as the h-index (Cortegiani et al. 2020).
Several steps can be taken to combat predatory journals. First, legitimate journals should continue to become open access; having quality science locked behind a paywall is unhelpful, while predatory journal articles are readily available. Second, universities could decree that no predatory publication will count toward promotions, awards, funding, or degrees (Darbyshire et al. 2020). Third, science educators could inform the public that predatory journals exist and train people to identify them. For instance, predatory journal homepages are more likely to have spelling errors and distorted images, and to endorse a fake impact metric called the “Index Copernicus Value” (Shamseer et al. 2017). One drawback with this strategy, however, is that it once again puts the onus on the user.
Systematic solutions are likely to have a much larger impact. For instance, one of the most effective measures would be for online databases to have better safeguards against predatory journals, prohibiting their articles from appearing in search results. For this to occur, an accurate list of predatory journals, hosted by a reputable source, must exist. Such a resource would also be highly beneficial for scientists when deciding where to submit and for the public to check when reading a suspicious article. Although experts in the field might find it obvious that the Journal of Clinical Oncology is an extremely reputable journal, while Clinics in Oncology is a predatory journal, the lay audience should have a searchable method to tease them apart. The most well-known list of predatory journals, “Beall’s list,” which was maintained on the blog of University of Colorado librarian Jeffrey Beall, ceased to operate in 2017. While the list was not without controversy (see Teixeira da Silva and Kimotho 2021), since its closure the scientific community has not had an easy way to identify predatory journals (Kendall 2021).1
An easily searchable blacklist, maintained by an official body such as the U.S. National Science Foundation or the National Institutes of Health, that uses transparent criteria for inclusion would be highly beneficial. A point system similar to that used by NewsGuard—an organization that ranks online news quality—could be implemented, with a route for appeal if improvements are made. For instance, points could be allocated for whether the journal (1) provides peer review, (2) does not republish content without permission, (3) provides information about the publisher and editors’ identities, (4) clarifies errors, (5) does not systematically publish papers that are flawed or inaccurate, and (6) has an advertised journal location that matches the actual geographic location (e.g., the American Journal of Medical and Dental Sciences should not be hosted in Pakistan; Bohannon 2013). In sum, enabling lay audiences to correctly identify predatory journals, and reducing the likelihood that they engage with them at all, would be a significant improvement. We next consider a subset of individuals who publish in predatory journals: pseudoscientists.
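To make the idea concrete, the transparent point system described above could be encoded as a simple checklist-and-threshold scheme. The following is a minimal sketch under stated assumptions: the criterion names, point weights, and passing threshold are illustrative choices of ours, not NewsGuard’s actual rubric or any official body’s policy.

```python
# Hypothetical journal-credibility point system, loosely modeled on
# NewsGuard-style checklists. Criteria, weights, and the passing
# threshold are illustrative assumptions, not an official rubric.

CRITERIA = {
    "conducts_peer_review": 30,
    "no_unauthorized_republication": 10,
    "discloses_publisher_and_editors": 15,
    "clarifies_errors": 15,
    "screens_out_flawed_papers": 20,
    "location_matches_claims": 10,
}

PASSING_SCORE = 70  # hypothetical threshold for "credible"

def score_journal(checks: dict) -> tuple[int, bool]:
    """Sum the points for each criterion the journal satisfies and
    report whether the total clears the passing threshold."""
    total = sum(points for name, points in CRITERIA.items()
                if checks.get(name, False))
    return total, total >= PASSING_SCORE

# Example: a journal that fails peer review and transparency checks.
suspect = {
    "conducts_peer_review": False,
    "no_unauthorized_republication": True,
    "discloses_publisher_and_editors": False,
    "clarifies_errors": False,
    "screens_out_flawed_papers": False,
    "location_matches_claims": True,
}
total, credible = score_journal(suspect)  # total == 20, credible == False
```

A weighted scheme like this, rather than a binary label, also supports the appeal route mentioned above: a journal that remediates a failing criterion can be rescored transparently.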
Pseudoscientists
Pseudoscientists are individuals who provide scientific-sounding information but whose advice is inaccurate, unfalsifiable, or inconsistent with the scientific method (Lilienfeld, Lynn, and Ammirati 2015), although a clear distinction between pseudoscience and legitimate science can often be difficult to draw (known as the demarcation problem; Laudan 1983). Fraudulent claims of expertise are not new. For example, in 1984 a U.S. House of Representatives subcommittee found the issue pervasive, with one in fifty doctors practicing medicine with fraudulent or questionable medical credentials (US House 1985). However, since the advent of the internet, becoming an “expert” has become far easier. While many individuals buy degrees online—one diploma mill in Pakistan made $51 million in 2018 alone (Clifton, Chapman, and Cox 2018)—fake experts can simply proclaim themselves to be a PhD or medical professional and curate an online presence.
Unfortunately, gaining professional scientific currency without having the necessary expertise appears to be extremely easy. For instance, in 2015, Burkhard Morgenstern submitted the name and credentials of fake applicants to the boards of several journals. He succeeded in having Hoss Cartwright—the fictional character from the TV show Bonanza—appointed to the editorial board of numerous journals, including the Journal of Agricultural and Life Sciences (Morgenstern 2020). As of 2021, Hoss Cartwright is still listed as a member of that journal’s editorial board. Indeed, pseudoscientists appear more legitimate when on the board of a journal, and predatory journals appear more legitimate when they have “scientists” on their board.
A rich literature exists regarding pseudoscience, yet pseudoscientists themselves are difficult to study. This is because people rarely think of themselves as such; to be called a pseudoscientist is more likely to be an accusation made by a third party than an identity (Hect 2018). In other words, many pseudoscientists are not necessarily malicious but genuinely believe that they are doing scientifically valid work (see Hermes [2019] for a first-person account of naturopathy). One further complication is that expertise lies on a continuum. While a PhD candidate may have less expertise than a full professor with 40 years of experience in the field, they may have more knowledge than an established scientist in a different field.
Cases of legitimate scientists or medical professionals making pseudoscientific claims on topics outside of their fields are not unheard of. For instance, Dr. Linus Pauling was a Nobel Prize–winning chemist who started to promote the unsubstantiated claim that vitamin C was an effective treatment for cancer, the common cold, and other maladies (Cameron and Pauling 1979). Responses to his publications from the scientific community were candid. For instance, one colleague’s book review in the Journal of the American Medical Association stated, “The many admirers of Linus Pauling will wish that he had not written this book. Here are found, not the guarded statements of a philosopher or scientist seeking truth, but the clear, incisive sentences of an advertiser with something to sell” (Bing 1971, 1). Regardless, his popular science books sold well, and his work helped to cement the place of vitamin C around the world (Thielking 2015). Thus, scientific expertise is extremely domain specific, and people who appear to have expertise can often do the most harm.
Technological solutions could include creating symbols on social media to indicate domain-specific scientific expertise. This way, if a cardiologist makes recommendations about climate change, the audience can see that this is an opinion rather than expert advice. The Twitter blue-check verification has already been used to symbolize an authentic account (and to signal a COVID-19 expert), and thus it might be possible to extend this to other types of experts. More generally, whether official bodies such as the National Institutes of Health should also host a list of well-known pseudoscientists is an open question. What is clear is that it is all too easy for individuals to have a fake degree, be a professor at a fake online university, publish in predatory journals, serve on the board of predatory journals, have an impressive h-index on Google Scholar, and sell scientific-sounding books on Amazon, with few consequences. We now turn to scientific fraud and misconduct and whether simply retracting academic papers is a sufficient remedy.
Legitimate Scientists
Unfortunately, legitimate scientists can also spread misinformation within their own field of expertise. Larson (2018) posits that the most damaging misinformation comes from those with medical credentials who stoke unfounded fears, those who see disinformation as an opportunity for financial gain, or those who use it as a political opportunity to polarize society. Several steps have been taken to mitigate the spread of misinformation from legitimate sources. For instance, U.S. medical certifying boards have warned doctors that if they spread COVID vaccine misinformation, they risk losing their certification and license (Doshi 2021). Another tool is to retract published articles. Approximately 67 percent of journal retractions in science are attributable to misconduct and fraud (with 21 percent due to error, 14 percent duplicate publications, and 10 percent plagiarism; Fang, Steen, and Casadevall 2012). Setting the record for most retracted papers is the anesthesiologist Yoshitaka Fujii, who fabricated at least 172 scientific papers over a 19-year period. Thankfully, scientific fraud appears to be somewhat concentrated, with 1.6 percent of scientists accounting for 25 percent of journal retractions (Brainard and You 2018). Some argue that scientific fraud should be considered a crime, given the potential for detrimental impact on society (Bhutta and Crane 2014).
The most infamous example of scientific disinformation occurred when The Lancet published an article suggesting a link between the measles, mumps, and rubella (MMR) vaccine and autism (Wakefield et al. 1998). A substantial vested interest existed for the lead author, Andrew Wakefield, whose research was funded by lawyers in legal battles against MMR manufacturers, and who had lodged a patent for a new vaccine (Eggertson 2010). This study had many issues, including substantial inconsistencies between the cases reported and the children’s medical records (e.g., out of the twelve children in the study, five already had autistic symptoms prior to vaccination and three never received a diagnosis of autism at all; Deer 2011). It is also worth discussing the systematic failures that accompanied the paper. First, despite the small sample size, uncontrolled design, and speculative conclusions, the article was not rejected during the peer review process. Second, even after ten of the original twelve authors retracted their support for the study and its interpretation, The Lancet reported being satisfied that there was appropriate ethical scrutiny of the paper and exonerated the authors of scientific misconduct (Murch et al. 2004; Horton 2004). The Lancet did fully retract the paper in 2010, but this was only after the UK General Medical Council ruled that Wakefield’s actions were dishonest and misleading and found him guilty of professional misconduct (General Medical Council 2010).
If journal retractions were effective, harm from scientific disinformation might be ameliorated. Thus, we now turn to the question of retraction efficacy as a means to combat health misinformation and disinformation in science. While the rate of retractions has dramatically increased over recent years (largely due to improved journal oversight; Brainard and You 2018), the process of retracting articles appears to be slow. Trikalinos, Evangelou, and Ioannidis (2008) found that the median time from publication to retraction is 28 months (79 months for senior researchers vs. 22 months for junior researchers). Findings have been mixed as to whether retractions impact citation rate, with some studies reporting up to a 48 percent reduction in citations (Mott, Fairhurst, and Torgerson 2019) but others finding no impact at all (Candal-Pedreira et al. 2020). Unfortunately, when retracted papers are cited, only 5 percent of articles mention the retraction, with the remaining studies either explicitly or implicitly endorsing the findings (Bar-Ilan and Halevi 2017). Indeed, 18 percent of authors themselves self-cite retracted work, with only 10 percent of them mentioning the retraction (Madlock-Brown and Eichmann 2015).
Apart from the authors themselves, papers are likely to be cited post-retraction because other scientists are not aware of the retraction status (Teixeira da Silva and Bornemann-Cimenti 2017). Retracted articles should be algorithmically demoted by databases such that they are not as easily accessible or, at the very least, labelled as being retracted. While some progress has been made with citation management systems labelling retracted articles—for instance, Zotero now collaborates with Retraction Watch to flag retracted papers (Oransky 2019)—online research databases remain highly varied. While some consistently label retracted articles (for instance, PubMed labels 100 percent), others do not (ProQuest PsycINFO labels 4 percent; Suelzer et al. 2021). Furthermore, retracted articles are not currently labelled in Google Scholar. To highlight this issue, as of November 2021, if a lay individual diligently enters the search terms “coronavirus” and “5G” into Google Scholar, a retracted paper called 5G Technology and Induction of Coronavirus in Skin Cells is the top article retrieved (Fioranelli et al. 2020). This paper claims that skin cells act like antennas to absorb 5G waves and produce COVID-19, yet the article has no retraction label. Fraudulent and fabricated science continues to have an impact on the information ecosystem post-retraction. Better safeguards must ensure that the public accesses accurate scientific information when searching research databases.
Although scientific fraud is extremely important to remedy, a final consideration is that questionable research practices conducted by legitimate scientists are far more common than clear cases of misconduct (Artino, Driessen, and Maggio 2019). Bishop (2019) labelled a subset of these practices the “four horsemen of the reproducibility apocalypse.” These included P-hacking (conducting a multitude of different analyses, then only reporting the significant findings), HARKing (hypothesizing after results are known), publication bias (when researchers are less likely to write up null results and journal editors are less likely to publish them), and low statistical power. These issues are greatly detrimental to the scientific information ecosystem, and they only scratch the surface of the approaches needed to improve replicability (e.g., using reliable measures; Swire-Thompson et al. 2021). Potential solutions include journal policies that strongly encourage preregistration and open data practices and specific outlets for the easy posting of null findings.
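The damage P-hacking does can be quantified with a short simulation. Under the null hypothesis, p-values are uniformly distributed, so a researcher who measures many outcomes and reports whichever one reaches significance inflates the false positive rate well beyond the nominal 5 percent (analytically, testing k independent outcomes yields a rate of 1 − 0.95^k). The sketch below is a minimal illustration of that logic, not a model of any particular study:

```python
import random

def false_positive_rate(n_tests: int, n_experiments: int = 100_000,
                        alpha: float = 0.05, seed: int = 42) -> float:
    """Simulate P-hacking: run `n_tests` independent null-hypothesis
    tests per experiment, and count the experiment as 'significant'
    if ANY test has p < alpha. Under the null, each p-value is
    uniform on [0, 1], so it is simulated directly."""
    rng = random.Random(seed)
    hits = sum(
        any(rng.random() < alpha for _ in range(n_tests))
        for _ in range(n_experiments)
    )
    return hits / n_experiments

# One honest, pre-specified test: roughly 5% false positives.
# Twenty outcomes with only the "best" reported:
# roughly 1 - 0.95**20, i.e., about 64%.
```

Preregistration counters exactly this: by fixing the single outcome and analysis in advance, it holds the error rate at the nominal alpha rather than the inflated multiple-comparison rate.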
Inaccurately Communicating Science to the Public
For a well-informed society, it is not sufficient that only the domain-specific experts comprehend the difference between fact and fiction; the general public must also be able to make this distinction. The media thus play an integral intermediary role in the communication of both scientific consensus and recent scientific progress regarding health information. However, this process breaks down in many instances, and inaccuracies are spread or causal links exaggerated. Cooper et al. (2012) examined health advice from the top ten selling newspapers in the UK over the course of a week. They found that misreported health advice was widespread, and 65 percent of health claims made had insufficient evidence. In a similar vein, Haber et al. (2018) examined the fifty most highly shared media articles on Facebook and Twitter and the scientific articles associated with them. The authors found that 48 percent of media articles used causal language that was too strong, with 58 percent inaccurately reporting the question, intervention, population, or results of the study. However, the media are not always the root of the problem. For instance, the exaggeration of causal language can begin in the scientific article itself (Haber et al. 2018), or in the press release, which scientists themselves approve. Sumner et al. (2016) analyzed 534 press releases and found that 23 percent had more direct or explicit advice than the original article. Naturally, these exaggerations in press releases led to exaggerations in the news, with the news 2.4 times more likely to exaggerate the outcomes when the press release did so. An important aspect of this study is that it found no difference in journalistic uptake, regardless of whether the causal language was exaggerated in the press release. In other words, scientists need not exaggerate their claims with the hope of attracting more media attention to their work.
Furthermore, scientists should be explicitly taught that the press release is ultimately under their control, and it is acceptable to communicate to their press office that they wish to decrease the sensationalism of the proposed release.
Journalistic exaggeration is often done in an attempt to simplify complicated health information for the lay audience. However, Gustafson and Rice (2020) found that people are quite capable of interpreting scientific nuance. For instance, when papers communicated uncertainty regarding their findings (due to measurement error, modeling approximations, and statistical assumptions), this increased perceived credibility, belief in the information, and intentions to follow the recommendations. By contrast, the authors found that communicating uncertainty regarding the consensus of the finding, such as disagreement or conflict in the scientific discipline, had the opposite effect, reducing perceived credibility and belief in the information. This highlights the danger of “bothsidesism,” where journalists present an issue as being balanced between two opposing views, when the true balance or scientific consensus tips largely, or even overwhelmingly, in one direction. For example, giving MMR vaccine researchers and antivaccination pseudoscientists equivalent space or airtime in the media and framing it as a balanced argument is not appropriate.
Given that science journalism currently follows journalistic rather than scientific norms, a benefit exists in further blending the two (Dunwoody 2021). For instance, science journalists could always cite the primary article, allowing the audience to easily access and evaluate it themselves. This example highlights why open access science is so important, as the public cannot view the primary articles if they are behind a paywall. Furthermore, science journalists could add more detail about how the research was conducted (which is frequently omitted), assisting the audience in evaluating study design and method, and interpreting the findings within the context of the previous literature (Dimopoulos and Koulaidis 2002). Finally, more continuity is necessary when covering health information; for instance, journalists rarely revisit previously covered material when scientific papers are disconfirmed by meta-analyses (Dumas-Mallet et al. 2017). Health advice should be updated as new evidence emerges, even if the new findings are null effects.
Conclusion
Predatory journals, pseudoscientists, scientific fraud, and journalistic miscommunication are all hurdles for the effective use of science by the public. Simple steps such as demoting and labelling retracted articles in databases, having a well-maintained blacklist of predatory journals hosted by a reputable source, and always citing original sources in media articles could go a long way toward improving the scientific information ecosystem. However, the current review is far from comprehensive. Future literature should consider more nuanced instances that are both beneficial and harmful to the scientific information ecosystem. For instance, preprints or “grey literature” have both substantial advantages (such as dramatically reducing the time before useful research can be read by peers) and substantial drawbacks (such as releasing non-peer-reviewed and potentially inaccurate information into the ecosystem, where it is cited and treated as a terminal publication). In sum, while we may feel as if society is in a “post-truth” era, the public still looks to science for accurate information. If we do not improve the scientific information ecosystem, the public will lose trust in all science, regardless of quality. Thus, this article is a call to arms for all scientific institutions—and those who work alongside them—to strive for the scientific information ecosystem to be as accurate as possible.
Acknowledgments
The current article was funded by a National Institutes of Health Pathway to Independence Award (1K99CA248720-01A1) to Briony Swire-Thompson.
Biographies
Briony Swire-Thompson is a senior research scientist and director of the Psychology of Misinformation Lab at Northeastern University. She is a cognitive psychologist who investigates what drives belief in inaccurate information and how corrections can be designed to maximize impact.
David Lazer is a professor of political science and computer sciences at Northeastern University. His scholarship focuses on computational social science and social networks, with a particular focus on misinformation and political communication. He is colead of the COVID states project, which has charted public opinion in all fifty states through the pandemic.
Footnotes
While there are alternative lists, they are either anonymously managed or require an institutional subscription (e.g., Cabell International; Teixeira da Silva and Kimotho 2021).
Contributor Information
Briony Swire-Thompson, senior research scientist and director of the Psychology of Misinformation Lab at Northeastern University.
David Lazer, professor of political science and computer sciences at Northeastern University.
References
- Artino Anthony R. Jr., Driessen Erik W., and Maggio Lauren A.. 2019. Ethical shades of gray: International frequency of scientific misconduct and questionable research practices in health professions education. Academic Medicine 94 (1): 76–84. [DOI] [PubMed] [Google Scholar]
- Bagues Manuel, Sylos-Labini Mauro, and Zinovyeva Natalia. 2019. A walk on the wild side: Predatory journals and information asymmetries in scientific evaluations. Research Policy 48:462–77. [Google Scholar]
- Bar-Ilan Judit, and Halevi Gali. 2017. Post retraction citations in context: A case study. Scientometrics 113:547–65. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bartholomew Robert E. 2014. Science for sale: The rise of predatory journals. Journal of the Royal Society of Medicine 107 (10): 384–85. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Beall Jeffrey. 2012. Predatory publishers are corrupting open access. Nature 489:179. [DOI] [PubMed] [Google Scholar]
- Bhutta ZA, and Crane J. 2014. Should research fraud be a crime? BMJ 349 (1): 4532. [DOI] [PubMed] [Google Scholar]
- Bing Franklin C. 1971. Vitamin C and the common cold. JAMA 215 (9): 1506. [Google Scholar]
- Bishop Dorothy. 2019. Rein in the four horsemen of irreproducibility. Nature 568 (7753): 435–36. [DOI] [PubMed] [Google Scholar]
- Bohannon John. 2013. Who’s afraid of peer review? Science 342 (6154): 60–65. [DOI] [PubMed] [Google Scholar]
- Brainard J, and You J. 2018. What a massive database of retracted papers reveals about science publishing’s “death penalty.” Science 25 (1): 1–5. [Google Scholar]
- Cameron Ewan, and Pauling Linus Carl. 1979. Cancer and Vitamin C: A discussion of the nature, causes, prevention, and treatment of cancer with special reference to the value of Vitamin C. Corvallis, OR: Linus Pauling Institute of Science and Medicine. [Google Scholar]
- Candal-Pedreira Cristina, Ruano-Ravina Alberto, Fernández Esteve, Ramos Jorge, Campos-Varela Isabel, and Pérez-Ríos Mónica. 2020. Does retraction after misconduct have an impact on citations? A pre–post study. BMJ Global Health 5:1–7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Clifton Helen, Chapman Matthew, and Cox Simon. 16 January 2018. “Staggering” trade in fake degrees revealed. BBC News. [Google Scholar]
- Cooper Benjamin E. J., Lee William E., Goldacre Ben M., and Sanders Thomas A. B.. 2012. The quality of the evidence for dietary advice given in UK national newspapers. Public Understanding of Science 21:664–73. [DOI] [PubMed] [Google Scholar]
- Cortegiani Andrea, Longhini Federico, Sanfilippo Filippo, Raineri Santi Maurizio, Gregoretti Cesare, and Giarratano Antonino. 2019. Predatory open-access publishing in anesthesiology. Anesthesia & Analgesia 128:182–87. [DOI] [PubMed] [Google Scholar]
- Cortegiani Andrea, Manca Andrea, Lalu Manoj, and Moher David. 2020. Inclusion of predatory journals in Scopus is inflating scholars’ metrics and advancing careers. International Journal of Public Health 65:3–4. [DOI] [PubMed] [Google Scholar]
- Darbyshire Philip, Hayter Mark, Frazer Kate, Ion Robin, and Jackson Debra. 2020. Hitting rock bottom: The descent from predatory journals and conferences to the predatory PhD. Journal of Clinical Nursing 29:1425–28. [DOI] [PubMed] [Google Scholar]
- Deer Brian. 2011. How the case against the MMR vaccine was fixed. British Medical Journal 342:77–82. [DOI] [PubMed] [Google Scholar]
- Demir Selcuk Besir. 2018. Predatory journals: Who publishes in them and why? Journal of Informetrics 12:1296–1311. [Google Scholar]
- Dimopoulos Kostas, and Koulaidis Vasilis. 2002. The socio-epistemic constitution of science and technology in the Greek press: An analysis of its presentation. Public Understanding of Science 11:225–41. [Google Scholar]
- Doshi Peter. 2021. Covid-19: Spreading vaccine “misinformation” puts license at risk, US boards tell physicians. BMJ 375:1–2.
- Duc Nguyen Minh, Dang Vinh Hiep, Pham Minh Thong, Zunic Lejla, Zildzic Muharem, Donev Doncho, Jankovic Slobodan M., Hozo Izet, and Masic Izet. 2020. Predatory open access journals are indexed in reputable databases: A revisiting issue or an unsolved problem? Medical Archives 74:318–22.
- Dumas-Mallet Estelle, Smith Andy, Boraud Thomas, and Gonon François. 2017. Poor replication validity of biomedical association studies reported by newspapers. PLOS ONE 12 (2).
- Dunwoody Sharon. 2021. Science journalism: Prospects in the digital age. In Routledge handbook of public communication of science and technology, eds. Bucchi Massimiano and Trench Brian, 14–32. Abingdon: Routledge.
- Eggertson Laura. 2010. Lancet retracts 12-year-old article linking autism to MMR vaccines. Canadian Medical Association Journal 182:199–200.
- Eriksson Stefan, and Helgesson Gert. 2018. Time to stop talking about “predatory journals.” Learned Publishing 31:181–83.
- Fang Ferric C., Steen R. Grant, and Casadevall Arturo. 2012. Misconduct accounts for the majority of retracted scientific publications. Proceedings of the National Academy of Sciences 109:17028–33.
- Fioranelli M, Sepehri A, Roccia MG, Jafferany M, Olisova O Yu, Lomonosov KM, and Lotti T. 2020. 5G technology and induction of coronavirus in skin cells. Journal of Biological Regulators & Homeostatic Agents 34:3–10.
- Funk Cary, and Kennedy Brian. 27 August 2020. Public confidence in scientists has remained stable for decades. Washington, DC: Pew Research Center. Available from pewresearch.org.
- General Medical Council. 2010. Fitness to practice panel hearing. Available from https://cdn.factcheck.org/UploadedFiles/gmc-charge-sheet.pdf.
- Grudniewicz Agnes, Moher David, Cobey Kelly D., Bryson Gregory L., Cukier Samantha, Allen Kristiann, Ardern Clare, Balcom Lesley, Barros Tiago, Berger Monica, et al. 2019. Predatory journals: No definition, no defense. Nature 576:210–12.
- Gustafson Abel, and Rice Ronald E.. 2020. A review of the effects of uncertainty in public science communication. Public Understanding of Science 29:614–33.
- Haber Noah, Smith Emily R., Moscoe Ellen, Andrews Kathryn, Audy Robin, Bell Winnie, Brennan Alana T., et al. 2018. Causal language and strength of inference in academic and media articles shared in social media (CLAIMS): A systematic review. PLOS ONE 13:1–21.
- Hecht David. 2018. Pseudoscience and the search for truth. In Pseudoscience: The conspiracy against science, eds. Kaufman Allison B. and Kaufman James C., 3–20. Cambridge, MA: MIT Press.
- Hermes Britt. 19 December 2019. Why I left naturopathy. Naturopathic Diaries.
- Hopf Henning, Krief Alain, Mehta Goverdhan, and Matlin Stephen A.. 2019. Fake science and the knowledge crisis: Ignorance can be fatal. Royal Society Open Science 6 (5).
- Horton Richard. 2004. A statement by the editors of The Lancet. The Lancet 363:820–21.
- Kendall Graham. 2021. Beall’s legacy in the battle against predatory publishers. Learned Publishing 34:379–88.
- Kurt Serhat. 2018. Why do authors publish in predatory journals? Learned Publishing 31:141–47.
- Larson Heidi J. 2018. The biggest pandemic risk? Viral misinformation. Nature 562:309.
- Laudan Larry. 1983. The demise of the demarcation problem. In Physics, philosophy and psychoanalysis, eds. Salmon Wesley C., Rescher Nicholas, Laudan Larry, Hempel Carl G., and Cohen Robert S., 111–27. New York, NY: Springer.
- Lilienfeld Scott O., Lynn Steven Jay, and Ammirati Rachel J.. 2015. Science versus pseudoscience. In The encyclopedia of clinical psychology, eds. Cautin Robin L. and Lilienfeld Scott O., 1–7. Hoboken, NJ: John Wiley & Sons, Inc.
- Luna Daniel Silva, Bering Jesse M., and Halberstadt Jamin B.. 2021. Public faith in science in the United States through the early months of the COVID-19 pandemic. Public Health in Practice 2:100–103.
- Madlock-Brown Charisse R., and Eichmann David. 2015. The (lack of) impact of retraction on citation networks. Science and Engineering Ethics 21:127–37.
- Manca Andrea, Moher David, Cugusi Lucia, Dvir Zeevi, and Deriu Franca. 2018. How predatory journals leak into PubMed. Canadian Medical Association Journal 190 (35): E1042–45.
- Mertkan Sefika, Aliusta Gulen Onurkan, and Suphi Nilgun. 2021. Knowledge production on predatory publishing: A systematic review. Learned Publishing 34:407–13.
- Morgenstern Burkhard. 2020. Fake scientists on editorial boards can significantly enhance the visibility of junk journals. In Gaming the metrics: Misconduct and manipulation in academic research, eds. Biagioli Mario and Lippman Alexandra, 201–11. Cambridge, MA: MIT Press.
- Mott Andrew, Fairhurst Caroline, and Torgerson David. 2019. Assessing the impact of retraction on the citation of randomized controlled trial reports: An interrupted time-series analysis. Journal of Health Services Research & Policy 24:44–51.
- Murch Simon H., Anthony Andrew, Casson David H., Malik Mohsin, Berelowitz Mark, Dhillon Amar P., Thomson Michael A., Valentine Alan, Davies Susan E., and Walker-Smith John A.. 2004. Retraction of an interpretation. The Lancet 363:750.
- Oransky Ivan. 12 June 2019. Want to check for retractions in your personal library—and get alerts—for free? Now you can. Retraction Watch.
- Parsi Kayhan, and Elster Nanette. 2018. Peering into the future of peer review. American Journal of Bioethics 18 (5): 3–4.
- Ross-White Amanda, Godfrey Christina M., Sears Kimberley A., and Wilson Rosemary. 2019. Predatory publications in evidence syntheses. Journal of the Medical Library Association 107 (1): 57–61.
- Seethapathy Gopalakrishnan Saroja, Santhosh Kumar JU, and Hareesha AS. 2016. India’s scientific publication in predatory journals: Need for regulating quality of Indian science and education. Current Science 111:1759–64.
- Shamseer Larissa, Moher David, Maduekwe Onyi, Turner Lucy, Barbour Virginia, Burch Rebecca, Clark Jocalyn, Galipeau James, Roberts Jason, and Shea Beverley J.. 2017. Potential predatory and legitimate biomedical journals: Can you tell the difference? A cross-sectional comparison. BMC Medicine 15:1–14.
- Smith Richard. 2006. Peer review: A flawed process at the heart of science and journals. Journal of the Royal Society of Medicine 99 (4): 178–82.
- Southwell Brian G., Brennen J. Scott Babwah, Paquin Ryan, Boudewyns Vanessa, and Zeng Jing. 2022. Defining and measuring scientific misinformation. The ANNALS of the American Academy of Political and Social Science (this volume).
- Suelzer Elizabeth M., Deal Jennifer, Hanus Karen, Ruggeri Barbara E., and Witkowski Elizabeth. 2021. Challenges in identifying the retracted status of an article. JAMA Network Open 4:1–4.
- Sumner Petroc, Vivian-Griffiths Solveiga, Boivin Jacky, Williams Andrew, Bott Lewis, Adams Rachel, Venetis Christos A., Whelan Leanne, Hughes Bethan, and Chambers Christopher D.. 2016. Exaggerations and caveats in press releases and health-related science news. PLOS ONE 11:1–15.
- Swire-Thompson Briony, and Lazer David. 2020. Public health and online misinformation: Challenges and recommendations. Annual Review of Public Health 41:433–51.
- Swire-Thompson Briony, Miklaucic Nicholas, Wihbey John, Lazer David, and DeGutis Joseph. 2022. The backfire effect after correcting misinformation is strongly associated with reliability. Journal of Experimental Psychology: General, online ahead of print. doi: 10.1037/xge0001131.
- Teixeira da Silva Jaime A., and Bornemann-Cimenti H. 2017. Why do some retracted papers continue to be cited? Scientometrics 110 (1): 365–70.
- Teixeira da Silva Jaime A., and Kimotho Stephen Gichuhi. 2021. Signs of divisiveness, discrimination and stigmatization caused by Jeffrey Beall’s “predatory” open access publishing blacklists and philosophy. Journal of Academic Librarianship. doi: 10.1016/j.acalib.2021.102418.
- Thielking Megan. February 2015. How Linus Pauling duped America into believing Vitamin C cures colds. Vox.
- Trikalinos Nikolaos A., Evangelou Evangelos, and Ioannidis John P. A.. 2008. Falsified papers in high-impact journals were slow to retract and indistinguishable from nonfraudulent papers. Journal of Clinical Epidemiology 61:464–70.
- U.S. House Select Committee on Aging, Subcommittee on Health and Long-Term Care. 1985. Fraudulent credentials: Joint hearing before the Subcommittee on Health and Long-Term Care, House of Representatives, 99th Congress, 1st Session, December 11, 1985. Washington, DC: Government Printing Office.
- Wakefield AJ, Murch SH, Anthony A, Linnell J, Casson DM, Malik M, Berelowitz M, Dhillon AP, Thomson MA, and Harvey P, et al. 1998. RETRACTED: Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. The Lancet 351:637–41.
- Xia Jingfeng, Harmon Jennifer L., Connolly Kevin G., Donnelly Ryan M., Anderson Mary R., and Howard Heather A.. 2015. Who publishes in “predatory” journals? Journal of the Association for Information Science and Technology 66:1406–17.