Journal of the American Medical Informatics Association (JAMIA). 2023 Jan 27;30(4):752–760. doi: 10.1093/jamia/ocad005

A scoping review of digital health interventions for combating COVID-19 misinformation and disinformation

Katarzyna Czerniak 1, Raji Pillai 2, Abhi Parmar 3, Kavita Ramnath 4, Joseph Krocker 5, Sahiti Myneni 6
PMCID: PMC10018269  PMID: 36707998

Abstract

Objective

We provide a scoping review of Digital Health Interventions (DHIs) that mitigate COVID-19 misinformation and disinformation seeding and spread.

Materials and Methods

We applied our search protocol to PubMed, PsycINFO, and Web of Science to screen 1666 articles. The 17 articles included in this review are experimental and interventional studies that developed and tested public consumer-facing DHIs. We examined these DHIs to understand their digital features, incorporation of theory, the role of healthcare professionals, end-user experience, and implementation issues.

Results

The majority of studies (n = 11) used social media in their DHIs, but platform-agnostic generalizability was lacking. Just over half of the studies (n = 9) specified a theory, framework, or model to guide the DHI. Nine studies involved healthcare professionals as design or implementation contributors. Only one DHI was evaluated for user perceptions and acceptance.

Discussion

The translation of advances in online social computing to interventions is sparse. The limited application of behavioral theory and cognitive models of reasoning has resulted in suboptimal targeting of psychosocial variables and individual factors that may drive resistance to misinformation. This affects large-scale implementation and community outreach efforts. DHIs optimized through community-engaged participatory methods that enable understanding of unique needs of vulnerable communities are urgently needed.

Conclusions

We recommend community engagement and theory-guided engineering of equitable DHIs. It is important to consider the problem of misinformation and disinformation through a multilevel lens that illuminates personal, clinical, cultural, and social pathways to mitigate the negative consequences of misinformation and disinformation on human health and wellness.

Keywords: COVID-19, misinformation, disinformation, technology

INTRODUCTION

Misinformation (and disinformation) is so pervasive that it has been listed by the World Economic Forum as a global risk to human wellness.1,2 As engagement with online platforms for health information has grown,3–5 so has the spread of unfounded information that impedes the adoption of positive health behaviors that protect against disease.6–11 Given the current climate of distrust in scientific and medical institutions as sources of reliable information,11–14 the dissemination of misinformation and disinformation presents a significant public health challenge. As has been apparent in the context of the SARS-CoV-2 (COVID-19) pandemic, inaccurate information, considered a “second pandemic of misinformation,” can lead to harmful health behaviors and impede the effectiveness of population health interventions and legitimate risk-related information.4,8,15 In particular, the use of online platforms such as social media has been increasing, especially in low- and middle-income countries (LMICs).16 Social media has also become a predominant source of scientific and medical news in the United States as well as in European nations.17,18 Although convenient, searching for health information on social media makes it difficult for users to discern facts from opinions.19 Further, online platforms have allowed the rapid spread of questionable information, posing a threat to individuals’ well-being20,21 and creating unexpected challenges for public health agencies.15

To mitigate harm from health misinformation and disinformation during the COVID-19 pandemic, health agencies have called for a global movement to deliver evidence-based COVID-19 information.22 In response, several specialized Digital Health Interventions (DHIs) have been developed around the globe to address such misinformation and disinformation.23 As defined by the World Health Organization (WHO), a DHI provides a discrete functionality of digital technology that is applied to achieve health objectives and is implemented within digital health applications and information systems, including communication channels such as text messages.24 Several other definitions exist for DHIs, and in the context of this paper, DHIs are defined as interventions that utilize digital tools (eg, mobile apps, chatbots, online gaming, websites) to enable an end user to resist and rebut COVID-19 misinformation and disinformation.25,26 To this end, we conduct a scoping review of DHIs to examine their development methods (specifically, the incorporation of theory and stakeholder engagement), digital technology utilization, end-user experience, and implementation barriers and facilitators.

BACKGROUND AND SIGNIFICANCE

Fox (1983) conducted the first formal work on misinformation, defining it as false information; subsequent studies expanded the description to include uncertain, vague, and ambiguous information.27,28 Disinformation is defined as information that “includes all forms of false, inaccurate, or misleading information designed, presented, and promoted to intentionally cause public harm or for profit.”29,30 Notably, misinformation about critical health issues, which can lead to mistrust of the clinical and scientific community, poses a threat to the value of medical advances such as vaccines.1,31–36

Different strategies to address the circulation of misinformation, also known as a misinfodemic, have become urgently needed during the COVID-19 pandemic. A misinfodemic stems from an infodemic, which is defined as an overabundance of information, only some of which is accurate, that makes it difficult for people to find trustworthy sources and reliable guidance.37 During the COVID-19 pandemic, the high virality of the large quantity of misleading information led to a misinfodemic, which WHO noted as a highly significant phenomenon that undermined public health responses to the pandemic around the globe.38,39

Several online tools have been developed in the general domain, ranging from educational resources and bot-detection software to misinformation-detection algorithms, propagation models, and credibility assessments.39–48 Some tools rely on statistical patterns and machine learning algorithms, while others depend on nonautomated methods such as crowdsourcing and manual fact-checking.49 A total of 90 tools have been identified by the RAND Corporation and other organizations,49 98% of which are free of charge and 83% of which are fully operational. The main types of tools are presented in Table 1.

Table 1.

Tools for addressing misinformation and disinformation

Tool | Area(s) of focus | Implementation method | Method
Content verification; standard implementation; information literacy, education, and training | Content | Manual | Fact-checking through manual review
Disinformation tracking; whitelisting | Process | Automated | Machine learning
Bot detection; information and source credibility scoring | Content and process | Manual and automated | Blockchain or crowdsourcing

Building on the general domain tools/strategies shown in Table 1, several specialized DHIs that target COVID-19 misinformation and disinformation have been developed and evaluated.23,50–54 Our understanding of these DHIs, however, is limited in terms of their: (1) methodological underpinnings, (2) facilitators and barriers concerning large-scale implementation efforts, and (3) end-user experience.

OBJECTIVE

The objective of this paper is to provide a scoping review and summary of research studies that develop and evaluate DHIs to mitigate COVID-19 misinformation and disinformation seeding and spread. We examine whether and how interventions: (1) utilize digital technologies; (2) incorporate cognitive frameworks and behavioral theory to guide intervention development; (3) leverage contributions from healthcare professionals (eg, clinicians, community health workers); (4) measure outcomes to establish an intervention’s efficacy; (5) capture end-user experiences; and (6) identify barriers and facilitators to the implementation of DHIs.

MATERIALS AND METHODS

Overview

This scoping review was conducted using the methodological framework for scoping reviews proposed by Arksey and O’Malley and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA).55,56 The 5 stages proposed by Arksey and O’Malley include: (1) identifying the research question(s); (2) identifying relevant studies; (3) selecting studies; (4) charting (or tabulating) the data; and (5) collating, summarizing, and reporting the results.

Literature search

After identifying our objectives, we conducted a comprehensive search of 3 electronic databases: PubMed, PsycINFO, and Web of Science. Numerous keywords and MeSH terms were used to build our queries (see the search strategy in the Supplementary File). All articles retrieved from the search are available in the public domain; they were uploaded into EndNote 20 and consolidated into a single list.

Screening procedure

The complete list of articles was screened by 6 reviewers. The articles were first divided into 3 equal clusters, and each cluster was assigned to one of 3 teams of 2 reviewers. Within each team, the reviewers first independently screened titles and abstracts against the eligibility criteria and then compared the results of their independent screening to resolve any disagreements. Each team then screened the full texts and extracted all relevant information from the final list onto an Excel spreadsheet, after which the teams combined their data into a comprehensive table that was reviewed by all to ensure inter-rater reliability.

Inclusion/exclusion criteria

Certain eligibility criteria were used to determine the final list of articles included in our review. We selected research studies focused on technology-based interventions or tools designed to mitigate COVID-19-related misinformation or disinformation. We included only those articles available in English that were published from 2019 to the present that described or evaluated such an intervention. We excluded all scoping reviews, meta-analyses, and systematic reviews and based our review on experimental studies as well as descriptive studies that mentioned a DHI.

RESULTS

Our search queries generated a total of 1666 articles: 626 from PubMed, 97 from PsycINFO, 937 from Web of Science, and 6 through snowballing (Figure 1). After duplicates were removed, the remaining 1159 titles/abstracts were screened, and the 73 articles that met our criteria were charted in an Excel database and their full texts reviewed by the entire team. During this phase, 54 articles were excluded because they described machine learning techniques for identifying false news, rumors, or misinformation that had been tested for functionality and acceptability but had not been incorporated into interventions for combating COVID-19 misinformation or disinformation. An additional 2 articles were excluded due to a lack of access to the full text. Of the 1159 articles screened and reviewed, 17 were included in the final analysis57–73 (see Supplementary Materials). A summary of the DHIs in these 17 studies and their methodological underpinnings is provided below. Figure 2 provides an overview of the DHIs that employed theoretical constructs and the data foundation used to guide feature development.
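The screening counts reported above can be checked arithmetically. The short sketch below (illustrative only; variable names are ours, not the authors') reconciles the per-database retrieval totals, deduplication, and full-text exclusions against the final inclusion count:

```python
# Sanity check of the screening counts reported in this review.
retrieved = {"PubMed": 626, "PsycINFO": 97, "Web of Science": 937, "snowballing": 6}

total = sum(retrieved.values())            # records identified: 1666
after_dedup = 1159                         # titles/abstracts screened
duplicates_removed = total - after_dedup   # duplicates removed before screening

full_text_reviewed = 73                    # passed title/abstract screening
excluded_ml_only = 54                      # ML detection work not built into a DHI
excluded_no_access = 2                     # full text unavailable
included = full_text_reviewed - excluded_ml_only - excluded_no_access  # final set
```

Running the check confirms that 1666 records reduce to the 17 articles analyzed, with 507 duplicates removed along the way.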

Figure 1.


The PRISMA diagram explains the identification and screening process applied to our literature review. Of the original 1159 articles screened, 17 were included in the final analysis.

Figure 2.


This Venn diagram shows the classification of DHIs included in the final analysis in terms of data and theory. Only 6 of the 17 DHIs included in the final analysis were both data-driven and theory-based. DHIs: Digital Health Interventions.

Summary of digital features

Eleven studies utilized some aspect of social media in their interventions, such as platforms for data collection, group creation, or network modeling.57,60,61,63–67,70,71,73 These studies, however, are limited to 1 or 2 platforms, such as creating Facebook groups. Two studies created groups on a social media platform that allowed 2-way communication between members and moderators to share information,57,63 whereas 2 other studies only pushed information to the group through a 1-way channel.60,61 One study organized town hall meetings about COVID-19 and then disseminated the recordings on Facebook, making it a hybrid that utilized digital and nondigital channels.63 Three studies developed chatbots to help address misinformation and added support for specific languages prevalent in their regions.61,68,72 Three studies examined best practices for debunking misinformation through messaging.65,67,73 One study focused on how to stop rumors from spreading; it was limited to a single community on Sina Weibo and used a 2SI2R epidemic model to examine how to prevent misinformation spread.66 Another 2 studies addressed the effects of misinformation-sharing intentions at the individual level.58,69 One study investigated a market tool built to identify practitioners who spread misinformation and label them as noncompliant.64 One study used a browser game and infographics to enable individuals to identify misinformation and mitigate its spread.59
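The 2SI2R model cited above is one member of a family of epidemic-style rumor models; the review does not reproduce its equations, so the sketch below instead shows the general shape of such analyses using the classical ignorant/spreader/stifler (Daley–Kendall-style) formulation with forward-Euler integration. All parameter values are arbitrary and purely illustrative, not taken from the Liu and Qi study:

```python
def simulate_rumor(beta=0.3, gamma=0.1, i0=0.99, s0=0.01, steps=2000, dt=0.05):
    """Forward-Euler integration of a simple ignorant/spreader/stifler rumor model.

    beta  -- rate at which ignorants become spreaders on contact with a spreader
    gamma -- rate at which spreaders become stiflers on meeting an informed node
    Returns the final (ignorant, spreader, stifler) population fractions.
    """
    I, S, R = i0, s0, 0.0
    for _ in range(steps):
        new_spread = beta * I * S       # ignorant meets spreader -> new spreader
        stifle = gamma * S * (S + R)    # spreader meets informed node -> stifler
        dI = -dt * new_spread
        dR = dt * stifle
        # Update S as the residual so I + S + R is conserved exactly
        I, S, R = I + dI, S - dI - dR, R + dR
    return I, S, R

final_I, final_S, final_R = simulate_rumor()
```

Analyses like the one in the Sina Weibo study fit such compartment models to observed sharing cascades to identify when and where a rebuttal intervention would most effectively suppress spread.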

Incorporation of theory

Table 2 presents the transdisciplinary theoretical frameworks used in the studies. Of the 17 articles included in the final analysis, approximately half (53%; n = 9) specified and described a theory, framework, or model used in the development and implementation of the DHI.57–59,62,65–68,72 The theories, frameworks, and models came from a variety of disciplines, including behavioral science, psychology, sociology, communication, mathematics, epidemiology, computer science, and artificial intelligence. The most commonly used theories and models were psychosocial or computer science-based.

Table 2.

Summary of theoretical underpinnings

Author(s) (year) | Theory/framework/model | Behavioral outcomes measured
Albrecht et al. (2022) | Risk reduction model, interdisciplinary science communication, and a 2-way model of communication | No
Amin et al. (2021) | Visual selective attention theory | Yes
Basol et al. (2021) | Prebunking, inoculation theory | Yes
Ghaleb et al. (2022) | Machine learning, natural language processing | No
König and Breves (2021) | Content-source integration model, theory of epistemic authority | Yes
Liu and Qi (2022) | I2S2R interactive infection model of multiple rumor engagers | No
Mourali and Drake (2022) | Extant theory | Yes
Pandey et al. (2022) | World Health Organization’s identify-simplify-amplify-quantify model | No
Siedlikowski et al. (2021) | Unsupervised machine learning Q&A model | No

Less than half of the studies (47%; n = 8) included in the final analysis measured and evaluated a behavioral construct, and not all of them specified a theory, model, or framework for those constructs.58–60,64,65,67,70,73 The most common behavioral and psychosocial constructs targeted by the DHIs aimed at combating COVID-19 misinformation or disinformation were knowledge, attitudes, risk perceptions, and intentions. Intentions were measured in terms of inclination toward information dissemination (eg, intent to share social media posts) or adoption of preventive behaviors (eg, intent to wear a mask).65,67 Many studies specified behavioral or psychosocial constructs when describing the development of the DHI but did not evaluate the effects of the DHI on users. The most common reason cited for the lack of evaluation was constraints due to the time-sensitive and evolving nature of the COVID-19 pandemic.

Role of healthcare professionals

More than half of the final studies (53%; n =9) utilized the help of healthcare professionals in various roles to combat COVID-19-related misinformation and disinformation.57,61–65,69,71,72 The majority mentioned receiving help from an interdisciplinary team that consisted of medical doctors, nurses, epidemiologists, and public health practitioners to develop the content and features of DHIs. Medical professionals were involved in the development and use of chatbots to disseminate accurate COVID-19-related information to the public.62,72 Their contribution was valuable in selecting relevant information about each topic, testing the accuracy of the content, providing feedback on major upgrades, and evaluating the efficiency and efficacy of the technology. In addition, healthcare professionals were directly involved in communicating COVID-19-related information to lay audiences via social media to thwart misinformation or disinformation spread.57,61,63,71 They actively participated in online question-and-answer sessions,57,61 contributed through town hall meetings,63 managed newsletters and websites,57 and corrected misinformation on social media.71

Summary of study outcomes

The design and study outcomes of the 17 articles are described in Supplementary Table S2. We report 12 descriptive studies, 1 observational study, and 4 experimental studies. Many of the investigations that evaluated online content targeted local populations by tailoring content to the local language. Languages utilized besides English included Spanish,57 Arabic,62 Hindi,68 French,72 Danish,61 and Chinese.66,73 The variety of languages involved reflects a global effort to abrogate the spread of misinformation related to COVID-19. Sixteen studies considered the dissemination of educational information or the identification/rebuttal of misinformation. Five studies provided descriptive information regarding user preferences and/or psychosocial dynamics related to DHI design.65,66,68,72,73

End-user experience

Only one study evaluated user awareness and satisfaction, among a sample of 308 users residing in Saudi Arabia. Ghaleb et al62 found that their chatbot’s accurate responses to inquiries had a significant effect on user satisfaction (B = 0.799, P < .001) and that the chatbot positively and significantly increased users’ awareness of the DHI as a method to fight the infodemic (B = 0.567, P < .001). The remaining 16 studies did not provide any details about user experience and/or perceptions. Four studies were descriptive, summarizing the content and implementation of a DHI.57,61,63,71 Two studies described the restrictions that affected the conduct of rigorous evaluations.68,72 Siedlikowski et al72 stated that, due to the emergency caused by the evolving pandemic, they lacked the time and resources needed to evaluate their chatbot’s implementation and measure its acceptability and impact on users. Pandey et al68 described how they were forced to remove their machine learning-based smartphone application (app) from the Google Play store in the middle of their study. The authors were ordered to revamp the app content to meet new guidelines for COVID-19-related apps put in place by the clearinghouse, which severely cut the study timeframe. Finally, 5 social media-based studies lacked data on user perceptions due to simulated data/settings,65–67,70,74 and 3 due to retrospectively collected data.64,66,73 As such, the studies included in this review are unable to provide insight into user experiences.

Examination of barriers to and facilitators of implementation

After assessing the limitations of the studies included in this review, we observed 3 common themes. First, 8 studies reported bias due to limitations in the study sample, acknowledging that their findings are not necessarily representative of the greater population.58,63,65,67,69–71,73 For example, Pennycook et al70 studied social media COVID-19 misinformation in the United States, and their results could therefore differ considerably from those of other areas of the world. Second, 10 studies analyzed only one social platform, which potentially limits the generalizability of the findings.57,59,60,62,63,65,67,68,71,72 Finally, the most common shortcoming was the lack of methods that facilitate community engagement, which is essential to address the specialized needs of vulnerable groups, including minorities, older adults, and individuals with disabilities or pre-existing conditions (such as Type 2 diabetes) that increase their risk for severe health consequences from COVID-19.

General strengths and facilitators were also assessed. We noted regular real-time updates to the DHIs, which the evolving pandemic demanded as just-in-time informational updates. Given the need for social distancing, connectivity tools such as Zoom teleconferences and Facebook groups were put to effective use, as seen in Jayawardena et al and Furstrand et al, respectively.61,63 Overall, these studies involved a user onboarding phase with a steep learning curve, highlighting the need for DHIs that are naturalistic and capable of leveraging our built-in cognitive awareness.

DISCUSSION

With the digitization of information dissemination and community outreach activities through online social media, we have a unique opportunity to capture, monitor, examine, and evaluate the components operationalized in trending misinformation (including semantics, syntax, framing, and behavioral constructs) that permeate public health risk communications and diffuse into vulnerable communities globally. Several studies have conducted infoveillance of online social media interactions to monitor and examine the etiology of COVID-19 misinformation and disinformation.75–77 The translation of data-based insight from large-scale observational studies into empirical technology evaluations, however, is limited.

Utilization of digital features and technologies

Evaluation of the literature on DHIs to combat COVID-19 misinformation reveals several common themes relevant to public health endeavors. It is clear that modern avenues of communication, including social media posts, websites, and text messages, are capable of impressive dissemination of information even on an international level.57,61,63,68,69,71 Attempts at effective public health outreach should concentrate on leveraging these channels of communication. DHIs that mitigate misinformation and disinformation generally fall into 3 categories: those that attempt to identify incorrect information,59,70,72,78 promote accurate information,57,61–63,65,68,69,71 or counter misinformation directly.60,64,66,67,73,79

A realistic intervention to combat misinformation will likely require the use of automation given the high volume of inaccurate information freely available to the public, a situation that is exacerbated by the willingness to spread information without confirmation of content accuracy.70 Fortunately, early attempts to identify relevant information with automated methods have shown impressive accuracy. Pandey et al68 report that this strategy can double the relevance of information in as little as 45 days. These early results show promise for further large-scale applications. Although identification of misinformation may help users to better evaluate information, Wang et al73 and Kreps and Kriner79 indicate that the more effective approach appears to involve direct rebuttal in the form of accurate information.

Role of theory

Most DHIs being developed are not informed by behavioral theory despite their goal of targeting users’ intentions to share information. For such interventions to change human behavior, future research should aim to integrate evidence-based theories that seek to understand and influence behavior centered around intention (or motivation), such as the integrated behavior model.80 Such integration is key to increasing effectiveness and impact and can be accomplished through an approach such as Intervention Mapping, a planning framework that provides a systematic process for developing behavior change interventions.81

Contributions of healthcare professionals and community engagement

The value of information dissemination directly from clinical experts will likely be the key to ongoing efforts to quell the spread of misinformation and disinformation related to COVID-19, as professional recommendations are considered more trustworthy by the public.65 The United Nations has recommended the appointment of volunteers on social media as “digital first responders” to correct health misinformation.82 This need is currently being addressed in medical education and preliminary analysis suggests that social media training for medical students propagates accurate information on social media.71 This trend can be observed in the attempted monitoring of clinical providers’ social media accounts to ensure accurate postings.64 Further, enabling clinicians to comprehend the terrain of COVID-19 misinformation and disinformation is essential to empower their patients with evidence-based clinical recommendations. Such clinical readiness is imperative for leveraging and implementing shared decision-making during patient encounters to promote COVID-19 testing, vaccines, and other infection prevention strategies. Some of the barriers encountered by health professionals actively involved in the development, implementation, and improvement of various technologies to fight COVID-19-related misinformation and disinformation include lack of time, lack of demonstrability of positive outcomes, avoidant behaviors, harassment and bullying, lack of social media training, and lack of organizational support.83

Community engagement and participatory research methods are lacking in the development and evaluation phases of the majority of the DHIs considered in our review. Without insight into end-user acceptance, it is difficult to design and diffuse community outreach programs that leverage implementation science methodologies to counter health misinformation. In addition, a lack of community engagement leads to representative bias and suboptimal DHIs, which cannot meet the unique needs of vulnerable communities.24

Implications for user engagement and experience

The studies included in our review indicate that the packaging, timing, and personalization of accurate information may be as important as the content. For example, Twitter posts are more likely to be viewed and shared when content is in lower-case letters.65 Alternative formatting, including false-information indicators, is less effective,79 whereas interventions implemented during the period of psychosocial resistance may augment the impact of the intervention.66 Culturally competent DHIs are important to enhance user experience and ensure continued engagement. For example, a study of a Canadian application providing educational information about COVID-19 found that responses to questions were more accurate in English than in French.72 Similarly, user engagement with the Hindi version of an Indian educational application was found to be higher than with the English version.68 This implies that investment in the contextual translation of educational interventions may be required for optimal user engagement. In addition, studies have found greater usage of educational bot software and greater knowledge gains following exposure to educational WhatsApp messages among females than among males.60,62 As such, personalized risk communications may be necessary for successful technology-based public health interventions related to COVID-19. None of the DHIs, however, employed methodologies that enabled examination of the needs of underserved populations who may have limited clinical access, or of minority groups with tight-knit communities, which may be more vulnerable to online and offline misinformation.84–87

Limitations

Our scoping review has certain limitations. We selected studies for review based on experimental design. Evidence on how to support people in interacting with misinformation, spotting misinformation, and stopping its spread, however, may also be generated from questionnaires, behavioral data analysis, observational studies, or qualitative inquiry.74,88–90 Although we used 3 well-established transdisciplinary repositories to collect articles, we may have omitted a few relevant publications not indexed in these databases. We reviewed articles published in peer-reviewed scholarly journals, which means that potentially relevant evidence reported in formats more oriented toward practitioners and policymakers could have been overlooked. Future scoping reviews can present a more comprehensive view of the research area by exploring domain areas, search terms, and DHIs outside the domain of COVID-19. Although critical appraisal and meta-synthesis of articles are not required in scoping reviews, we performed an initial summarization. Future work, however, should consider such analyses so that they can be used to promote novel research questions, identify domain gaps, and improve quality in the literature.

CONCLUSION

This paper contributes a summary of the majority of the latest efforts to combat COVID-19 misinformation and disinformation around the globe. Due to the evolving nature of the pandemic, researchers faced many resource and time constraints; nevertheless, they were able to develop and implement creative technological solutions for mitigating the impact of the misinfodemic. A lack of evidence-based and theory-informed interventions, however, prevents conclusions from being drawn regarding the effectiveness of such interventions. Future research is needed to evaluate the impact of these DHIs on users in real-time situations. Institutions should take the initiative to provide social media training and domain summarization to healthcare professionals to enable them to interact professionally through online platforms, as well as to support their interactions with patients during clinical encounters. The internet has democratized information access; however, the general public within a country or across the globe may not have the same level of awareness, skills, or tools to mitigate their vulnerability to misinformation and disinformation exposure and its negative consequences on preventive behaviors. Although all individuals are exposed and susceptible to the adverse effects of COVID-19 misinformation and disinformation, populations that experience health disparities may face elevated harm, and equitable DHIs can play an important role in addressing this issue.

Supplementary Material

ocad005_Supplementary_Data

Contributor Information

Katarzyna Czerniak, Department of Health Promotion and Behavioral Sciences, School of Public Health, University of Texas Health Science Center at Houston, Houston, Texas, USA.

Raji Pillai, Cizik School of Nursing, University of Texas Health Science Center at Houston, Houston, Texas, USA.

Abhi Parmar, School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, Texas, USA.

Kavita Ramnath, School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, Texas, USA.

Joseph Krocker, Department of Surgery, McGovern Medical School, Center for Translational Injury Research, University of Texas Health Science Center at Houston, Houston, Texas, USA.

Sahiti Myneni, School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, Texas, USA.

FUNDING

The work reported in this publication was supported by the National Library of Medicine of the National Institutes of Health under Award Numbers 1R01LM012974-01A1 and 3R01LM012974-02S1. This work was supported by the National Institute of General Medical Sciences of NIH (5T32GM008792). The content is the sole responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

AUTHOR CONTRIBUTIONS

All authors contributed equally to the study design, literature synthesis, text generation, and review of the manuscript. KC and SM led the development of the manuscript structure and theme identification for analysis.

SUPPLEMENTARY MATERIAL

Supplementary material is available at Journal of the American Medical Informatics Association online.

CONFLICT OF INTEREST STATEMENT

None declared.

DATA AVAILABILITY

The underlying data for this paper are available in the paper and in the online Supplementary Material.

