Abstract
Background
Misleading information about medical products on social media may cause overuse.
Objectives
Explore interventions targeting the problem of misleading medical information and marketing on social media, with a focus on preventing medical overuse including overdiagnosis.
Eligibility criteria
We included peer-reviewed studies with original data on an intervention targeting misleading medical information on social media and governmental/institutional responses with and without evaluation. We excluded responses relating to COVID-19.
Sources of evidence
Four electronic databases: MEDLINE/PubMed, APA PsycINFO, Academic Search Complete and Web of Science, supplemented by searches of grey literature on Google and Google Scholar. Search date: 9 June 2025.
Data charting
We used prespecified data forms populated in duplicate by two reviewers.
Results
We identified 27 peer-reviewed articles and 25 organisational and governmental responses (grey literature). Twenty (74%) of the peer-reviewed interventions targeted the consumer to enhance ‘media literacy’, support decision-making or warn about misinformation trends. Approaches included education, such as videos or information materials, to improve detection of misinformation, as well as correcting misinformation and rebutting claims. Only two (7.4%) of the peer-reviewed approaches were sensitive to the problem of medical overuse: a risk-of-deception tool and an informed decision-making service. The grey literature about government and organisational responses chiefly comprised general advertising regulations and educational resources for consumers to identify and navigate misinformation. The advertising regulations ranged from self-regulatory codes of practice to mandatory regulations requiring pre-approval of social media marketing material. Most regulations stated that advertising should be truthful, presenting both benefits and harms, and should not be misleading. Most of the grey literature (64%) was sensitive to medical overuse, though none referred explicitly to the problem.
Conclusions
Current efforts to address misleading medical marketing on social media often overlook the critical issue of medical overuse and fail to provide sufficient consumer protections against the distinct features of the rapidly evolving digital landscape of social media, such as the speed of dissemination, reach and the role of third-party advertising. These gaps in research, regulation and practice present significant opportunities to strengthen evidence-based policies and public health responses.
Keywords: Evidence-Based Practice, Overdiagnosis, Policy, Public Health
WHAT IS ALREADY KNOWN ON THIS TOPIC
Recent evidence suggests that much of the medical information on social media is misleading and may lead to overuse.
WHAT THIS STUDY ADDS
We identified a range of different, potentially useful, interventions for combating misleading medical marketing on social media, ranging from educational efforts to improve detection of misinformation to advertising legislation.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
The continued presence of misinformation on social media demonstrates that current regulations are inadequate responses to the unique and evolving setting of social media, including its speed of spread, reach and third-party advertising.
We need multidisciplinary collaborations and effective means to enforce and implement interventions that account for the diverse nature of advertising on social media, with specific attention to the issue of overuse.
Introduction
With the rise of social media, medical and health information has become more accessible, yet this accessibility comes with unique challenges.1 2 Social media platforms are designed to maximise user engagement by promoting content that captures attention, often favouring information with sensational or eye-catching features.3 Consequently, medical information that is misleading—whether it includes false, unverified, unbalanced or exaggerated claims—is often more likely to reach larger audiences.4–6 This challenge is compounded by social media’s rapid and widespread dissemination of information without verification or authentication processes.
While enhanced access to information can be seen as a positive and democratising advance for consumers, misleading medical information is particularly concerning because of its potential impact on health decisions.7 Decisions about health and medicine are deeply personal, affected by fear, societal norms and resources. However, commercial interests frequently exploit this dynamic to promote medical products, leveraging platform algorithms to target specific groups with personalised ads.8–11 Health-related content, when inaccurate, may lead consumers to make uninformed choices, delay essential treatments or even pursue harmful actions that can lead to overtesting, overdiagnosis and overtreatment.12 13 A recent study of almost 1000 posts on TikTok and Instagram about medical tests found the overwhelming majority of posts were misleading, failing to mention potential harms of the tests, including overdiagnosis and overuse.14 This strengthens the evidence base about medical misinformation on social media.3 4 15–17 In response, it is time to research and develop effective strategies to address misinformation.18
Researchers have suggested platform-targeted measures to tackle misinformation, including fact-checking, removal of misinformation, correction and warning labels.19 20 A review on combating conspiracy theories identified inoculation messaging and media literacy interventions as promising countermeasures.21 However, the changing and ambiguous nature of misleading information and misinformation is outpacing the evidence, and the WHO is currently campaigning against health-related misinformation.
This scoping review aims to identify interventions and strategies that address misleading medical information on social media. A secondary objective is to focus on how these proposed strategies deal with preventing overuse of medical resources. In the face of the seemingly overwhelming problem of misleading medical information, this review identifies responses to this challenge, for policymakers, health professionals, researchers and citizens interested in safe and sustainable use of medical resources.
Methods
This scoping review was conducted in accordance with Joanna Briggs Institute standards and reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Scoping Review extension.22 23 A protocol was developed and published prospectively on the Open Science Framework.24
Eligibility criteria
We included both peer-reviewed citations and grey literature as these are seen as equally important sources for combating misleading information; for example, grey literature such as regulations can be powerful tools for controlling advertising. As described in detail in our protocol,24 eligibility was assessed based on three criteria outlined below and was not restricted by language.
Responses/solutions to misleading medical information on social media: we included responses to misleading medical information, including misinformation and disinformation.6 Drawing on the work of El Mikati and colleagues,6 for the purposes of this review, we define misinformation as ‘false information that may or may not be intentional’ and disinformation as ‘intentional dissemination of false information’. We use ‘misleading’ to describe medical information giving a false impression—such as exaggerating benefits and/or ignoring harms—whether disseminated intentionally or not. We excluded generic responses not specific to medicine, as medical and health-related misinformation often has distinct financial motives and impacts compared with other topics (eg, political or environmental misinformation). COVID-19-specific responses were also excluded due to extensive existing reviews and the distinct event-related features of a global pandemic.19 25–27
Social media: we included responses to misleading medical information, and related terms, on social media.28
Citations from academic databases had to be peer-reviewed and include original data. From grey literature searches, we included legal reports and institutional or governmental actions, including those without original data. As institutional and government reports and strategies often rely on evidence synthesis, inclusion of original data was not a requirement for grey literature.
Search strategy
We performed the search strategy in the following electronic databases: MEDLINE/PubMed, APA PsycINFO, Academic Search Complete and Web of Science. The search strategy was initially developed for PubMed/MEDLINE and then adapted for other databases, with adjustments to controlled vocabulary and subject headings as needed.24 This strategy was created in collaboration with two information specialists from the University of Copenhagen and Bond University, respectively (online supplemental file 1).
Grey literature searches were performed in Google and Google Scholar, on websites of influential governments and health organisations, and in grey literature databases.24 Key terms were searched in each platform or website, noting the number of hits. At least the first 100 hits were reviewed, relying on the most relevant results appearing at the top of the search; screening continued beyond this point if results remained relevant (in six cases more than 100 hits were screened). The stopping rule was based on relevance: for example, when searching Google, we reviewed 240 hits and continued until at least two consecutive pages provided no hits related to the topic.
The academic database searches were performed on 15 August 2024 and grey literature was searched on 19 August 2024. Searches were updated on 9 June 2025. The original search was performed in Australia and the updated search in Denmark.
Selection process
Two authors independently assessed eligibility at title and abstract level and at full-text level in Covidence. Disagreements were resolved through discussion, with involvement of a third author in case of non-consensus. Prior to formal selection, two authors screened a sample of citations until agreement exceeded 75%.22 Inter-rater agreement was 97.65% at title and abstract level and 81.01% at full-text level.
We screened reference lists of identified relevant reviews and included studies.
We contacted all corresponding authors of included citations for input on relevant organisational or governmental strategies, specifically inquiring about responses from their respective countries.
As per protocol, we did not impose any language restrictions; if non-English papers were identified, the title was translated using online translation services.24 If the title seemed relevant, the full citation was then reviewed by a native speaker. This only became necessary for Chinese citations.
Data extraction
Data charting was performed in duplicate by two reviewers. The calibrated forms had been checked and piloted by four members of the review team before use. Formal data extraction started once pilot extractions reached 75% or greater agreement.29 We extracted data on the type of response and whether it was sensitive to the problem of medical overuse. We judged responses sensitive to overuse if the interventions specifically included components targeted at this problem or if their development was discussed as a response to it. We defined overuse as a service or type of care that causes harm with little to no accompanying benefit, including related terms such as medicalisation, low-value care, overdiagnosis and overtreatment.30
Data synthesis
Data were extracted using a comprehensive list of predetermined items. These items were developed by the author group, whose expertise spans medicine, public and global health, journalism, sociology and consumer insights. The tabulated data were reviewed and discussed within the author group, after which EGG drafted an analysis including reporting of frequencies, which was then discussed within the author group. As per protocol, the responses and key characteristics were mapped based on an inductive content analysis.31 First, data were extracted from all citations; then, based on the extracted data (including description of the intervention, aims and target group), the responses were openly coded, allocating characteristics to overall categories.31 Categorisations are presented in a figure visualising the codes as well as in text.
Since organisational and governmental strategies, including regulation, do not necessarily present an evaluation of the intervention, these were reported in a separate table to allow for different key features of the types of responses. Some organisations presented multiple strategies; if the organisational responses were similar or guidance for actors was based on the same law, these were collated in the analysis. All URLs (uniform resource locators) for collated responses are provided.
Results
Selection of citations
We included 27 peer-reviewed articles and 45 grey literature reports. The selection process is visualised in figure 1.
Figure 1. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow chart.
Most exclusions of peer-reviewed literature at full-text level were due to the lack of original data (figure 1). Grey literature citations were primarily excluded because they did not pertain specifically to health.
Characteristics and results of included peer-reviewed literature
Table 1 presents the characteristics of included peer-reviewed literature. Supporting evidence on definitions, study design, funding and conflicts of interest is presented in online supplemental file 2.
Table 1.
Characteristics of interventions responding to misleading medical information on social media; peer-reviewed literature
| Author, year, country* | Aim of intervention | Description of intervention | Targeted | Target group | Design/development | Target outcomes | Overuse† |
| Alsaad and AlDossary, 2024, Saudi Arabia32 | Help consumers detect misinformation | Educational video intervention based on WHO’s advice on how to navigate and identify misinformation. Concepts included analysing facts, checking links, assessing photos and videos | Health information on WhatsApp | Consumers | Inoculation and the message interpretation process theory | Ability to identify and knowledge about misinformation | No |
| Au et al, 2021, China50 | Prevent spread | An experiment using financial incentives and legislation, punishing misinformation spread | Healthcare misinformation | Consumers | Evidence synthesis and expert Delphi | Intention to share misinformation | No |
| Bode and Vraga, 2017, USA 40 | Correct misinformation | An intervention on correction techniques: peer correction and algorithmic correction through the related stories function on Facebook | Misinformation on Zika virus on Facebook | Consumers | Evidence synthesis | Misperceptions and perceived credibility | No |
| Braga et al, 2025, Portugal97 | Monitor and identify misinformation | An online social listening intervention to identify emerging misinformation trends | Misinformation about non-communicable diseases on Twitter | Health workers, public | Evidence synthesis, expert consultation and analysis of social media data | Performance of keywords to detect misinformation | No |
| Brooks et al, 2023, USA57 | Address health misinformation | A multidisciplinary virtual centre aiming to detect misinformation on social media using platform algorithms, evidence reviews and public communication | Infodemics across a range of platforms and blogs | Consumer, health workers, ministries | Interdisciplinary expert groups and social media analysis | Ability to detect misinformation | No |
| Byrne et al, 2024, Ireland37 | To support informed decision-making | iHealthFacts solicits questions from the public about health claims, conducts evidence reviews and presents the findings in an accessible way | Unreliable health information | Consumers | Evidence synthesis, public involvement and organisation collaboration | Critical thinking, informed-decision making | Yes |
| Di Sotto and Viviani, 2022, Multiple54 | Help platforms detect misinformation | An algorithm for detection of misinformation using machine learning | Health misinformation on Twitter | Technology | Social media analysis and evidence synthesis | N/A | No |
| Elhariry et al, 2024, India and UK33 | Train medical students to create evidence-based information for social media dissemination | Developing evidence-based education videos for social media | Misinformation about PCOS and thyroid disorders across social media | Public | Evidence synthesis and expert and patient panels | Outreach and audience engagement on social media and participant experience | No |
| Erim et al, 2025, Nigeria38 | Train participants and communities in identifying and addressing health information | Establishing a health misinformation fellowship programme training participants to combat health misinformation through community-based approaches | Health misinformation | Public | Stakeholder and community engagement | Reach and impact of the activities and the confidence in identifying and mitigating misinformation | No |
| Fridman et al, 2025, USA55 | Identify misinformation | Developing a framework of linguistic characteristics of misinformation that could be easily observed by users to help them identify and flag misinformation on social media | Misinformation about unproven cancer treatments on Twitter | Consumers, algorithms | Evidence synthesis and social media data mining | Performance of linguistic characteristics/prediction of misinformation labels | No |
| Garrett et al, 2019, Canada53 | Help healthcare professionals educate public on health scams | A risk-of-deception tool to detect potential internet health scams/assess the likelihood of deceptive content | Health scams | Health workers | Evidence synthesis and Delphi approach, with health professionals | Deceptiveness of posts | Yes |
| Gesser-Edelsburg et al, 2018, Israel47 | Correct misinformation on measles vaccine | An intervention on corrective information sensitive to the emotional domain of health information (voicing health concerns) | Health misinformation on Facebook | Consumers | Literature | Self-efficacy, reliability, behavioural intentions | No |
| Ku et al, 2025, China41 | Correct misinformation | Using logic or fact-based corrections combined with hashtags (inclusivity vs health literacy framings) | Misinformation about Mpox on Instagram | Consumers | Amended from Vraga et al 98 | Misperceptions, attitudes, correction sharing likelihood | No |
| Lazard and Queen, 2025, USA42 | Encourage intervening and prevent spread | Using social cue prompts (content warning labels) to encourage flagging of misinformation to reduce sharing and increase post removal by platforms | Misinformation about cancer treatment | Consumers | Evidence synthesis | Willingness to intervene, sharing intentions and perceived responsibility, empathy and acceptability | No |
| Leininger et al, 2022, USA39 | Educate public and infodemic management | A set of core communication principles: listening, empathy, engage, transparency; communication strategies including prioritising graphic design, using examples, analogies and narratives | Health misinformation | Consumers | Expert panels | N/A | No |
| McPhedran et al, 2023, UK34 | Decrease engagement | An educational inoculation intervention with ‘threat’ and ‘pre-bunking’ components and information about dangers of misinformation, resources for identification and a source reliability checklist | Health misinformation | Consumers | Inoculation theory | Engagement with misinformation content | No |
| Mende et al, 2023, USA35 | Warn consumers | A conceptual model that offers a lens through which to evaluate warning labels and disclosures | Misinformation and disinformation | Consumers | Evidence synthesis | Negative impacts of infodemics | No |
| Nazarnia et al, 2023, Iran36 | Improve media health literacy | A mobile-based educational intervention on ‘media health literacy’ across multiple domains | Health misinformation | Consumers | Evidence synthesis, health literacy theory and a team of app designers | Media health literacy | No |
| Ozturk et al, 2015, USA51 | Reduce spread | An intervention on counter statements and warnings accompanying misinformation | Health rumours on Twitter-like platforms | Consumers | Evidence synthesis | Intention to share misinformation | No |
| Sun and Pan, 2025, China43 | Reduce spread | An intervention testing ways of correction (simple rebuttal and factual elaboration) and correction source (AI and human fact-checking) | Health misinformation on Weibo | Consumers | Evidence synthesis and Medical Fact-Checks on Snopes.com | Intention to share misinformation and perceived credibility of source | No |
| Upadhyay et al, 2023, Italy56 | Help platforms detect misinformation | A deep-learning model for automatic detection from structural and content-based characteristics, in context of a multilayer architecture | Health misinformation | Technology | Machine learning using source, content and design features | Genuineness of health information | No |
| Vraga and Bode, 2017, USA44 | Correct misperceptions | An intervention with corrections posted by either social media users or governmental institutions | Misinformation about Zika virus on Twitter | Consumers | Evidence synthesis | Misperceptions, credibility, trustworthiness | No |
| Vraga and Bode, 2018, USA45 | Correct misperceptions | Social correction feedback from peers testing both the type of correction and the platform | Misinformation about Zika virus on Facebook and Twitter | Consumers | Evidence synthesis | Misperceptions, credibility/trustworthiness | No |
| Vraga et al, 2019, USA46 | Correct misinformation | An intervention combining exposure to weak forms of misinformation after exposure to the original misinformation (inoculation) and corrections | Misinformation about HPV‡ vaccine on Twitter | Consumers | Inoculation theory and evidence synthesis | Credibility, misperceptions | No |
| Vraga et al, 2022a, USA48 | Correct misinformation, enhance news literacy | Two solutions pairing news literacy messages with corrective responses to health misinformation | Misinformation on Twitter | Consumers | Evidence synthesis | Credibility, misperceptions, perceived news literacy | No |
| Vraga et al, 2022b, USA52 | Correct misinformation about skin cancer | An intervention combining peer correction and news literacy video messages that warn audiences about misleading content on social media and the need to be sceptical (educational inoculation) | Misinformation about skin cancer on Facebook | Consumers | Evidence synthesis and social media observations | Misperceptions, intention to wear sunscreen | No |
| Vraga and Bode, 2025, USA49 | Mitigating impact of exposure to misinformation | A correction intervention using truth signals (whether the person posting the story says it is true, whether the replies to the story say it is true, or whether the story itself is actually true) | Health misinformation | Consumers | Evidence synthesis and Medical Fact-Checks on Snopes.com | Perceptions of story veracity, attitude alignment with the content and related behavioural intentions | No |
*Country of study conduct.
†Sensitive to the problems of overuse.
§If no social media platform is mentioned, the intervention targets social media generically.
AI, Artificial Intelligence; HPV, Human Papilloma Virus; PCOS, Polycystic Ovary Syndrome.
20 of the 27 (74%) articles presented interventions targeted at the consumer. These consumer-targeted interventions primarily addressed misinformation through educational means such as videos or information material32–36 or through evidence reviews and communication.37–39 These interventions aimed to improve health media literacy, support decision-making or warn consumers about misinformation trends. Other consumer-targeted interventions aimed to correct misinformation, rebutting claims either to lower credibility or to address misperceptions. These interventions included corrections introduced by peers,40–45 pre-exposing consumers to truthful information (inoculation),46 accounting for fear of disease,47 educational messages48 and truth signals.49 Others aimed to reduce the spread of misleading information through financial incentives and punishment50 or warning labels.51 52
One intervention targeted health professionals, presenting a tool to help assess the likelihood that specific social media health content is deceptive.53 There were algorithmic responses using machine learning to detect misinformation for content moderation on platforms.54–56 One multi-targeted, multidisciplinary intervention used automatic artificial intelligence (AI)-generated alerts about misinformation, reviewed and communicated evidence and proposed that these be circulated by government ministries.57 Most responses targeted medical misinformation in general, while a few targeted misinformation specifically about Zika virus,40 44 45 human papillomavirus vaccines,46 skin cancer52 or measles vaccine.47 Identified responses relied primarily on literature reviews and evidence synthesis for developing interventions.
Characteristics and results of included grey literature
Table 2 presents the characteristics of the included grey literature. We identified 45 independent websites which were collated into 25 responses.
Table 2.
Characteristics of interventions responding to misleading medical information on social media; organisational and governmental responses
| Organisation | Country | Year | Description of response | Target group | Overuse* |
| Therapeutic Goods Administration, Department of Health and Aged Care99–101 | Australia | 2023 | Regulation for industry and social media influencers on advertising therapeutic goods | Industry and social media influencers | Yes |
| The Australian Health Practitioner Regulation Agency60 | Australia | 2020 | Industry guidance clarifying legal responsibilities in advertising regulated health services | Health workers | Yes |
| Therapeutic Goods Administration, Department of Health and Aged Care58 | Australia | 2020 | Online resource on how to spot a dodgy health product ad | Consumers | Yes |
| The Digital Industry Group and the Australian Communications and Media Authority66 67 102 | Australia | 2024 | A code of practice and a bill supporting the self-regulatory code on disinformation and misinformation on social media platforms | Platforms | No |
| Health Canada103 104 | Canada | 2002 | Regulation on advertising health products | Industry | Yes |
| Health Canada, Government of Canada65 | Canada | 2020 | Proactively monitoring drug and device marketing to enforce advertising regulation, request cessation of illegal activity, use regulatory sanctions and partnerships to encourage using voluntary code | Industry, health workers, organisations | Yes |
| Health Canada, Government of Canada59 | Canada | 2019 | Online resource with information about misleading marketing and current trends to beware of, industry guidance and reporting tool | Consumer, health workers | Yes |
| Chinese Government63 64 | China | 2015 | Regulation for industry on audio-visual advertising of health foods and medical devices on social media | Industry | Yes |
| Danish Ministry of Health105–107 | Denmark | 2013 | Regulation on advertising healthcare products, medical devices and drugs | Industry/advertisers | Yes |
| European Parliament108–110 | Europe | 2025 | Overview of actions designed to strengthen cooperation between EU member states, including exchanging information on misinformation and orchestrating coordinated responses. Upcoming initiatives also include the Consumer Agenda 2025–2030 and the Digital Fairness Act to strengthen consumer protection against harmful online practices, complementing existing EU digital regulations | All sectors of society | Yes |
| European Commission111 | Europe | 2021 | The EUDAMED database providing an overview of the lifecycle of medical devices available in European Union, to improve access to credible information and enhance coordination across governments | Industry, governments | No |
| European Commission112–114 | Europe | 1992/2006 | A regulatory regime on management and advertising of prescription drugs and medical devices to ensure safety while supporting innovation and competition | Industry | No |
| European Commission68 | Europe | 2018 | Code of practice on disinformation promoting self-regulation | Platforms and industry/advertisers | No |
| Maharashtra Medical Council, Mumbai115 116 | India | 2024 | The Medical Council is taking action against misleading advertising on social media by registered practitioners | Health workers | Yes |
| Dubai Health Authority117 | UAE | 2022 | Regulation on advertising medical products on social media | Health workers | Yes |
| Medicines and Healthcare products Regulatory Agency118 | UK | 2022 | Regulation on advertising medicines and health products | Industry | Yes |
| US Department of Health and Human Services119–121 | USA | 2024 | Regulatory guidance for all sectors of society including the US Surgeon General’s Advisory | All sectors of society | No |
| National Academy of Medicine122 | USA | 2021 | Online resource with guidance for identification of credible health information on social media | Platforms and consumers | No |
| Food and Drug Administration123 | USA | 2023 | Online information about health scams, identification of health scams and how consumers can protect themselves | Consumers | Yes |
| Food and Drug Administration124–126 | USA | 2025 | The Bad Ad Programme reviews prescription drug ads for misleading content, using risk-based surveillance with a growing focus on social media influencers and telehealth companies. A new bill, the ‘Protecting Patients from Deceptive Drug Ads Act’ (introduced February 2025), aims to expand FDA oversight to include telehealth firms, pending its passage into law | Industry | Yes |
| Food and Drug Administration61 | USA | 2014 | Industry guidance clarifying legal responsibilities in third-party misinformation about drugs and medical devices on social media | Industry | Yes |
| Food and Drug Administration62 | USA | 2024 | Industry guidance clarifying legal responsibilities presenting risk and benefit information for drugs and devices on social media | Industry | Yes |
| WHO127–130 | | 2022 (updated 2024) | WHO collaborates with tech companies and expert panels to identify and flag health misinformation, helping prevent its spread, and works with policy teams on guidelines for content providers, including efforts with YouTube, Google, Facebook and NewsGuard to remove misinformation and promote science-based health information | Consumers, technology, industry | No |
| WHO and European Parliament131 | | 2022 | Recommendations on collaborative action across sectors to better protect people from misinformation and mitigate its harms | Public, industry, regulators | No |
| WHO132 | | 2020 | WHO facilitates the Fides network to unite health professionals and help them deliver evidence-based health recommendations targeting prevalent misinformation on social media | Consumer and health workers | No |
*Sensitive to the problems of overuse.
EU, European Union; EUDAMED, European Database on Medical Devices; FDA, Food and Drug Administration.
16/25 (64%) responses were targeted at industry or advertisers, including advertising regulations or codes of practice; 8/25 (32%) were educational resources for consumers, industry and health professionals; the rest were collaborations or programmes to monitor advertising activities (table 2).
Educational responses included online resources for consumers to detect and avoid online health scams58 59 and resources for health professionals on educating the public or complying with regulation.60–62 The identified regulations were employed to protect the public from misinformation and potential downstream harms. Regulations varied across countries, but almost all—except in the USA—forbid the advertising of prescription drugs to the public. The identified legislation did not have separate rules for advertising on social media but encompassed social media along with other forms of media, such as television, radio and billboards. China had regulations specifically for online advertising, including that medical or health advertising should receive prior approval from health authorities.63 64 All included regulations stated that the content of medical advertising should be truthful, should present both benefits and harms and should not be misleading.
Liability for content differs across the included regulations; in Australia, the USA and China, the advertiser is responsible for the accuracy of content, which means that third-party distributors can be liable. In Denmark and the rest of the European Union (EU), it is primarily the company sponsoring the advertiser that is responsible for the compliance of advertisements. In most of the included regulatory regimes, there are no oversight programmes in place besides so-called ‘claim-based sites’, where consumers and the public can flag potentially non-compliant advertising to the government, which then chooses whether to investigate and/or pursue legal enforcement. Health Canada has a monitoring programme that oversees advertising activities on social media.65 The programme targets illegal advertising by enforcing compliance, applying sanctions and recommending charges when necessary, and emphasises education through optional preclearance, accreditation of preclearance agencies and interest-holder training.65 We identified self-regulations and voluntary codes for Canada, Australia and the EU.66–70 These codes involve voluntary preclearance processes for advertisers and commitments by social media platforms to perform content moderation, disclose paid advertising and make use of fact-checking services.
Perspectives on medical overuse
The vast majority of the peer-reviewed responses focused on how to avoid misinformation that may cause delays in seeking appropriate healthcare or forgoing relevant services. In other words, they were framed in some way around preventing information which may drive ‘underuse’. Only 2 of the 27 peer-reviewed responses (7.4%) were considered sensitive in some way to the problems of medical overuse37 53 (table 1), although neither explicitly mentioned overuse. For example, Byrne and colleagues present an educational tool to encourage critical thinking, specifically about unproven and potentially harmful medical treatments,37 and Garrett and colleagues describe a tool for detecting health scams, defined as ‘… products (that) are sold based on exaggerated claims or falsehoods—and the use of mass media to facilitate scams (…)’, to avoid the public being misled into taking up harmful or non-evidence-based services and treatments.53
Of the organisational or governmental responses analysed, none explicitly referred to medical overuse or related terms such as overdiagnosis, but 16 out of 25 (64%) demonstrated some form of sensitivity to these problems (table 2). These responses included contributions from the FDA (Food and Drug Administration, USA), TGA (Therapeutic Goods Administration, Australia) and Health Canada, as well as authorities in Dubai, China and Denmark, all explicitly emphasising the objective of protecting patients and the public by preventing engagement with potentially harmful services. In contrast, the WHO and EU primarily focused on mitigating harm caused by patients forgoing effective treatments due to misinformation and did not address the risk of misinformation leading to the uptake of harmful or unnecessary services or general overuse.
Thematic map of responses
Informed by existing frameworks,71 72 we iteratively categorised the identified responses into four distinct categories: educational, debunking, algorithmic and regulatory. Educational responses were interventions applying information or learning aimed at improving detection, preventing spread and improving awareness, literacy and decision-making (n=18). Debunking responses exposed misinformation through corrections or warning labels with the aim of preventing its spread and correcting misperceptions (n=13). Algorithmic responses included the development or employment of computational instructions for platforms to help detect misinformation (n=3). Regulatory responses were legally binding laws or codes made and maintained by authorities (n=17); these sought to protect citizens through ethical standards, safety requirements, public health protections, guidance on compliant advertising and monitoring. As can be seen in the figure, these categories were not clear-cut, and the characteristics of interventions overlapped.
The categorisation displayed in figure 2 communicates the interconnectedness of public health goals, intervention strategies and their applications at different levels of society.
Figure 2.
Categorisation of responses with accompanying aims and levels they target. AHSSAQA, Australian Health Service Safety and Quality Accreditation; EU, European Union; FDA, Food and Drug Administration; NBER, National Bureau of Economic Research; NHS, National Health Service; SM, Social Media; WHO, World Health Organization.
Discussion
Through a scoping review of peer-reviewed and grey literature, we identified four categories of responses to misleading health information on social media: educational, debunking, algorithmic and regulatory. Solutions proposed in the peer-reviewed literature were primarily educational and debunking responses targeted at consumers, aiming to improve awareness and the ability to detect misinformation through public information material, corrections and warning labels. Among the grey literature, we identified advertising regulations which all required truthful, balanced content while mostly banning public advertising of prescription drugs. However, few regimes had social media-specific regulations or monitoring programmes in place to account for the special features of social media advertising and to oversee the large and growing volume of advertisements. Only 7.4% of peer-reviewed interventions were sensitive to problems related to medical overuse, compared with 64% of organisational responses, with most of these primarily aiming to prevent the uptake of harmful services.
Our review has important limitations. Inter-rater reliability in the screening process was 97.65% at title-abstract level and 81.01% at full-text level, pointing to some degree of selection inconsistency. We employed an inclusive approach, opting for inclusion when in doubt; this, combined with poor reporting (eg, framing as a health/medical problem but using political misinformation examples to develop a strategy), contributed to the lower inter-rater reliability. While some national regulations on health advertising were included, our approach was not exhaustive, as we did not seek out all national laws. Additionally, as per standard scoping review methodology, we did not assess the effectiveness or quality of the included interventions and thus cannot conclude which interventions are more effective. We excluded expert opinions and academic literature without original data, which might offer valuable insights and potential solutions. We did, however, analyse opinion pieces identified in our systematic search, summarised their recommendations and considered their insights in this discussion.
The majority of identified responses focused on consumer-targeted interventions relying on the consumer’s capacities, for example, to be more aware of misinformation, improve detection or change behaviour. This is similar to other reviews mapping countermeasures to fake information or conspiracy theories on social media.19 21 However, other recent reviews73 74 and experts are calling for collaboration, including multi-targeted and interdisciplinary approaches.75–79 Our included studies primarily relied on literature reviews and evidence synthesis to design and develop interventions to combat misinformation and might lack important perspectives from consumers and organisations. Experts suggest that relying on the capacities of the consumer alone to combat misinformation may oversimplify extremely complex problems and is not commensurate with the challenges posed by the highly misleading information and advertising currently flooding social media.73 Further, consumer-targeted interventions such as educational responses might require substantial resources and will likely only reach people with higher socioeconomic status and health literacy. Combined with the increased susceptibility to misinformation among people with lower socioeconomic status, such interventions might be ineffective or produce further inequity in health.80 Finally, spending resources on improving health literacy around the promotion of tests or treatments that lack evidence of benefit, and thus should not be advertised in the first place, seems nonsensical.
Researchers, organisations, editors and grant-providers should work together to develop and communicate efficient, evidence-based strategies that resonate with policymakers and stakeholders.71 81 These should focus on testing larger-scale collaborative, multidisciplinary and system-level interventions to combat misleading information and marketing on social media, rather than on responses that place responsibility solely with individual consumers. Further, experts are pointing to the need for interventions targeting the problems surrounding celebrity endorsement, pervasive use of anecdotal evidence, influencer marketing and peer-to-peer advertising.82–86
We found that almost all of the peer-reviewed interventions focused primarily on addressing the harms of medical ‘underuse’, while being largely insensitive to the challenges posed by overuse and overdiagnosis. This reflects the wider fact that concerns and debates about misleading medical information on social media have to date focused almost exclusively on misinformation which may deter or delay appropriate care, rather than misinformation designed to drive overuse of unnecessary care, often in the context of conflicts of interest. In our view, this has created an evidence gap, with very limited exploration of social media promotion which may be driving overdiagnosis and overuse. The problem of overuse, and the associated harm and waste, is increasingly recognised as a major challenge taking resources away from addressing underuse of appropriate care.87 88 Practitioners struggle to deliver evidence-based and compassionate care to patients who need it within the time available to them.89 Moreover, overuse contributes to environmental harms, including the growing carbon footprint of healthcare.90 Thus, we argue there is a need for more consideration of the harms of overuse when designing and evaluating the impact of interventions aimed at mitigating misleading medical information.
The included national advertising legislation, which is far from an exhaustive list, shows that regulation is in place to target misleading advertising and points towards a greater awareness of the problems associated with overuse and industry conflicts of interest. However, there seems to be less engagement within national regulations with the unique and growing problems posed by social media, such as speed of spread, reach and third-party advertising. Current engagement with these problems might offer inspiration for new regulatory strategies effective in combating the harms from misleading medical advertising on social media. For example, Denmark is currently enforcing an approach where all influencers are considered independent companies and where all claims, sponsored or not, should comply with the national advertising code.91 Concurrently, China has banned celebrities from publicly endorsing or advertising health products,63 64 although this was allegedly enforced to promote socialist values.92 Regulators may benefit from employing regulations that are sensitive to social media, as seen in other areas such as cosmetic surgery.93 Since we did not assess the effectiveness of the included interventions, we are hesitant to give advice, but these examples might provide inspiration for future directions.
Studies show that health misinformation is booming on social media, even in the context of current regulations.4 15–17 Although we did not map the entire regulatory landscape, it is clear that current regulatory frameworks are inadequate and that the dissemination of medical misinformation has outpaced the ability of governments and regulatory bodies to create, develop and enforce effective countermeasures.14 Health Canada has established a dedicated effort to oversee and enforce advertising laws on social media, which might be one example of a positive way forward. There also seems to be a willingness to update current regulatory regimes. For example, New Zealand has long been one of the only nations allowing direct-to-consumer advertising of prescription drugs, but this will be revisited in a new bill in 2025.94 Closely analysing different regulatory regimes would help inform and inspire guidelines and policies and would be valuable for policymakers engaged in designing or evaluating responses. Further, evaluating the effectiveness of, and compliance with, these responses would help direct future resources.95 96
Our searches were performed in Australia and Denmark, which ought to have little to no effect on searches in academic databases; however, geographical filters and algorithms might have affected results from the grey literature searches, limiting broader representativeness. We did not place any language restrictions on the eligibility criteria but performed searches using English words, potentially deprioritising non-English pages, which might explain the lack of perspectives from the Global South.
Conclusion
Current efforts to address misleading medical marketing on social media commonly overlook critical aspects related to the unique challenges posed by social media sharing mechanisms and medical overuse. Future strategies could benefit from interdisciplinary and structural interventions that consider the potential harms of overuse and focus on a comprehensive evaluation of social media advertising practices including celebrity endorsement, influencer marketing and rapid sharing of content. These insights could provide valuable guidance for policymakers aiming to regulate and mitigate the impact of misleading health marketing on social media.
Acknowledgments
We would like to acknowledge Justin Clarke and Bjørn Christian Arleth Viinholt for valuable input on the search strategy.
Footnotes
Contributors: Conceptualisation: EGG, RM, LH, BN. Methodology: EGG, RM, PS, LH, LA, EA, BN. Software: EGG. Validation: RM, CS, BN. Analysis: EGG, RM, PS, TC, EA, BN. Investigation: All authors. Data curation: EGG, RM, CS. Writing: EGG, RM, BN. Writing—Review and Editing: TC, PS, EA, LA, CS, LH. Supervision: RM, BN. Project administration: EGG. Funding acquisition: BN. EGG is the guarantor.
Funding: Australian National Health and Medical Research Council Investigator Grant (1194108).
Competing interests: BN and TCAC are supported by an Australian National Health and Medical Research Council Investigator Grant (1194108 and 2009419 respectively). EGG and BN are members of the scientific committee for Preventing Overdiagnosis.
Patient and public involvement: Patients and/or the public were involved in the design, or conduct, or reporting, or dissemination plans of this research. Refer to the Methods section for further details.
Provenance and peer review: Not commissioned; externally peer reviewed.
Supplemental material: This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
Data availability statement
All data relevant to the study are included in the article or uploaded as supplementary information.
Ethics statements
Patient consent for publication
Not applicable.
Ethics approval
Not applicable.
References
- 1. Househ M, Borycki E, Kushniruk A. Empowering patients through social media: the benefits and challenges. Health Informatics J 2014;20:50–8. 10.1177/1460458213476969 [DOI] [PubMed] [Google Scholar]
- 2. Chou W-Y, Gaysynsky A, Trivedi N, et al. Using Social Media for Health: National Data from HINTS 2019. J Health Commun 2021;26:184–93. 10.1080/10810730.2021.1903627 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3. Allen J. Misinformation amplification analysis and tracking dashboard. Integrity Institute; 2022. [Google Scholar]
- 4. Afful-Dadzie E, Afful-Dadzie A, Egala SB. Social media in health communication: A literature review of information quality. Health Inf Manag 2023;52:3–17. 10.1177/1833358321992683 [DOI] [PubMed] [Google Scholar]
- 5. Trethewey SP. Medical Misinformation on Social Media: Cognitive Bias, Pseudo-Peer Review, and the Good Intentions Hypothesis. Circulation 2019;140:1131–3. 10.1161/CIRCULATIONAHA.119.041719 [DOI] [PubMed] [Google Scholar]
- 6. El Mikati IK, Hoteit R, Harb T, et al. Defining Misinformation and Related Terms in Health-Related Literature: Scoping Review. J Med Internet Res 2023;25:e45731. 10.2196/45731 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7. Powell J, Pring T. The impact of social media influencers on health outcomes: Systematic review. Soc Sci Med 2024;340:116472. 10.1016/j.socscimed.2023.116472 [DOI] [PubMed] [Google Scholar]
- 8. Persaud S, Al Hadidi S, Anderson TS, et al. Industry Payments to Physicians Endorsing Drugs and Devices on a Social Media Platform. JAMA 2024;331:2131–4. 10.1001/jama.2024.7832 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9. The Lancet . Direct-to-consumer medical testing: an industry built on fear. The Lancet 2024;404:991. 10.1016/S0140-6736(24)01924-X [DOI] [PubMed] [Google Scholar]
- 10. Copp T, Pickles K, Smith J, et al. Marketing empowerment: how corporations co-opt feminist narratives to promote non-evidence based health interventions. BMJ 2024;384:e076710. 10.1136/bmj-2023-076710 [DOI] [PubMed] [Google Scholar]
- 11. DeAndrea DC, Vendemia MA. How Affiliation Disclosure and Control Over User-Generated Comments Affects Consumer Health Knowledge and Behavior: A Randomized Controlled Experiment of Pharmaceutical Direct-to-Consumer Advertising on Social Media. J Med Internet Res 2016;18:e189. 10.2196/jmir.5972 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12. Chen J, Wang Y. Social Media Use for Health Purposes: Systematic Review. J Med Internet Res 2021;23:e17917. 10.2196/17917 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13. Gram EG, Copp T, Ransohoff DF, et al. Direct-to-consumer tests: emerging trends are cause for concern. BMJ 2024;387:e080460. 10.1136/bmj-2024-080460 [DOI] [PubMed] [Google Scholar]
- 14. Nickel B, Moynihan R, Gram EG, et al. Social Media Posts About Medical Tests With Potential for Overdiagnosis. JAMA Netw Open 2025;8:e2461940. 10.1001/jamanetworkopen.2024.61940 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15. Melchior C, Oliveira M. Health-related fake news on social media platforms: A systematic literature review. New Media & Society 2022;24:1500–22. 10.1177/14614448211038762 [DOI] [Google Scholar]
- 16. Suarez-Lledo V, Alvarez-Galvez J. Prevalence of Health Misinformation on Social Media: Systematic Review. J Med Internet Res 2021;23:e17187. 10.2196/17187 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17. Wang Y, McKee M, Torbica A, et al. Systematic Literature Review on the Spread of Health-related Misinformation on Social Media. Soc Sci Med 2019;240:112552. 10.1016/j.socscimed.2019.112552 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18. The Lancet . Health in the age of disinformation. The Lancet 2025;405:173. 10.1016/S0140-6736(25)00094-7 [DOI] [PubMed] [Google Scholar]
- 19. Rodrigues F, Newell R, Rathnaiah Babu G, et al. The social media Infodemic of health-related misinformation and technical solutions. Health Policy Technol 2024;13:100846. 10.1016/j.hlpt.2024.100846 [DOI] [Google Scholar]
- 20. Kbaier D, Kane A, McJury M, et al. Prevalence of Health Misinformation on Social Media-Challenges and Mitigation Before, During, and Beyond the COVID-19 Pandemic: Scoping Literature Review. J Med Internet Res 2024;26:e38786. 10.2196/38786 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21. Kisa A, Kisa S. Health conspiracy theories: a scoping review of drivers, impacts, and countermeasures. Int J Equity Health 2025;24:93. 10.1186/s12939-025-02451-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 22. Aromataris E, Munn Z, eds. JBI manual for evidence synthesis . In: JBI. 2020. Available: https://jbi-global-wiki.refined.site/space/MANUAL [Google Scholar]
- 23. Tricco AC, Lillie E, Zarin W, et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann Intern Med 2018;169:467–73. 10.7326/M18-0850 [DOI] [PubMed] [Google Scholar]
- 24. Gram EG, Moynihan R, Copp T, et al. Responses to misleading health marketing on social media - A scoping review protocol. OSF Registries 2024. 10.17605/OSF.IO/2NJSH [DOI] [Google Scholar]
- 25. Joseph AM, Fernandez V, Kritzman S, et al. COVID-19 Misinformation on Social Media: A Scoping Review. Cureus 2022;14:e24601. 10.7759/cureus.24601 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26. Skafle I, Nordahl-Hansen A, Quintana DS, et al. Misinformation About COVID-19 Vaccines on Social Media: Rapid Review. J Med Internet Res 2022;24:e37367. 10.2196/37367 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27. Gabarron E, Oyeyemi SO, Wynn R. COVID-19-related misinformation on social media: a systematic review. Bull World Health Organ 2021;99:455–463A. 10.2471/BLT.20.276782 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28. Aichner T, Grünfelder M, Maurer O, et al. Twenty-Five Years of Social Media: A Review of Social Media Applications and Definitions from 1994 to 2019. Cyberpsychol Behav Soc Netw 2021;24:215–22. 10.1089/cyber.2020.0134 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29. Joanna Briggs Institute . Source of Evidence Selection. JBI GLOBAL WIKI, Available: https://jbi-global-wiki.refined.site/space/MANUAL/355862749/10.2.6+Source+of+evidence+selection [accessed 14 Nov 2024]. [Google Scholar]
- 30. Braithwaite J, Glasziou P, Westbrook J. The three numbers you need to know about healthcare: the 60-30-10 Challenge. BMC Med 2020;18:102. 10.1186/s12916-020-01563-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31. Pollock D, Peters MDJ, Khalil H, et al. Recommendations for the extraction, analysis, and presentation of results in scoping reviews. JBI Evid Synth 2022;520. 10.11124/jbies-22-00123 [DOI] [PubMed] [Google Scholar]
- 32. Alsaad E, AlDossary S. Educational Video Intervention to Improve Health Misinformation Identification on WhatsApp Among Saudi Arabian Population: Pre-Post Intervention Study. JMIR Form Res 2024;8:e50211. 10.2196/50211 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 33. Elhariry M, Malhotra K, Goyal K, et al. A SIMBA CoMICs Initiative to Cocreating and Disseminating Evidence-Based, Peer-Reviewed Short Videos on Social Media: Mixed Methods Prospective Study. JMIR Med Educ 2024;10:e52924. 10.2196/52924 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34. McPhedran R, Ratajczak M, Mawby M, et al. Psychological inoculation protects against the social media infodemic. Sci Rep 2023;13:5780. 10.1038/s41598-023-32962-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 35. Mende M, Ubal VO, Cozac M, et al. Fighting Infodemics: Labels as Antidotes to Mis- and Disinformation?! Journal of Public Policy & Marketing 2024;43:31–52. 10.1177/07439156231184816 [DOI] [Google Scholar]
- 36. Nazarnia M, Zarei F, Roozbahani N. A mobile-based educational intervention on media health literacy: A quasi-experimental study. Health Promot Perspect 2023;13:227–36. 10.34172/hpp.2023.28 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37. Byrne P, Daly A, Mac Loughlin D, et al. iHealthFacts: a health fact-checking website for the public. BMJ Evid Based Med 2024;29:415–8. 10.1136/bmjebm-2023-112611 [DOI] [PubMed] [Google Scholar]
- 38. Erim A, Oko S, Biose S, et al. Tackling infectious disease outbreak and vaccination misinformation: a community-based strategy in Niger State, Nigeria. BMC Health Serv Res 2025;25:513. 10.1186/s12913-025-12683-z [DOI] [PMC free article] [PubMed] [Google Scholar]
- 39. Leininger LJ, Albrecht SS, Buttenheim A, et al. Fight Like a Nerdy Girl: The Dear Pandemic Playbook for Combating Health Misinformation. Am J Health Promot 2022;36:563–7. 10.1177/08901171211070956 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 40. Bode L, Vraga EK. See Something, Say Something: Correction of Global Health Misinformation on Social Media. Health Commun 2018;33:1131–40. 10.1080/10410236.2017.1331312 [DOI] [PubMed] [Google Scholar]
- 41. Ku KYL, Li J, Luo Y, et al. Correction approaches and hashtag framing in addressing Mpox misinformation on Instagram. Health Educ Res 2025;40:cyaf009. 10.1093/her/cyaf009 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 42. Lazard AJ, Queen TL, Pulido M, et al. Social media prompts to encourage intervening with cancer treatment misinformation. Soc Sci Med 2025;372:117950. 10.1016/j.socscimed.2025.117950 [DOI] [PubMed] [Google Scholar]
- 43. Sun J, Pan W. Self-Correction or Other-Correction: The Effects of Source Consistency and Ways of Correction on Sharing Intention of Health Misinformation Correction. Health Commun 2025;40:361–71. 10.1080/10410236.2024.2346674 [DOI] [PubMed] [Google Scholar]
- 44. Vraga EK, Bode L. Using Expert Sources to Correct Health Misinformation in Social Media. Sci Commun 2017;39:621–45. 10.1177/1075547017731776 [DOI] [Google Scholar]
- 45. Vraga EK, Bode L. I do not believe you: how providing a source corrects health misperceptions across social media platforms. Information, Communication & Society 2018;21:1337–53. 10.1080/1369118X.2017.1313883 [DOI] [Google Scholar]
- 46. Vraga EK, Kim SC, Cook J. Testing Logic-based and Humor-based Corrections for Science, Health, and Political Misinformation on Social Media. Journal of Broadcasting & Electronic Media 2019;63:393–414. 10.1080/08838151.2019.1653102 [DOI] [Google Scholar]
- 47. Gesser-Edelsburg A, Diamant A, Hijazi R, et al. Correcting misinformation by health organizations during measles outbreaks: A controlled experiment. PLoS One 2018;13:e0209505. 10.1371/journal.pone.0209505 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 48. Vraga E, Tully M, Bode L. Assessing the relative merits of news literacy and corrections in responding to misinformation on Twitter. New Media & Society 2022;24:2354–71. 10.1177/1461444821998691 [DOI] [Google Scholar]
- 49. Vraga EK, Bode L. Correcting What’s True: Testing Competing Claims About Health Misinformation on Social Media. American Behavioral Scientist 2025;69:187–205. 10.1177/00027642221118252 [DOI] [Google Scholar]
- 50. Au CH, Ho KKW, Chiu DKW. Stopping healthcare misinformation: The effect of financial incentives and legislation. Health Policy 2021;125:627–33. 10.1016/j.healthpol.2021.02.010 [DOI] [PubMed] [Google Scholar]
- 51. Ozturk P, Li H, Sakamoto Y. Combating Rumor Spread on Social Media: The Effectiveness of Refutation and Warning. SSRN Journal 2015. 10.2139/ssrn.2564249 [DOI] [Google Scholar]
- 52. Vraga EK, Bode L, Tully M. The Effects of a News Literacy Video and Real-Time Corrections to Video Misinformation Related to Sunscreen and Skin Cancer. Health Commun 2022;37:1622–30. 10.1080/10410236.2021.1910165 [DOI] [PubMed] [Google Scholar]
- 53. Garrett B, Murphy S, Jamal S, et al. Internet health scams-Developing a taxonomy and risk-of-deception assessment tool. Health Soc Care Community 2019;27:226–40. 10.1111/hsc.12643 [DOI] [PubMed] [Google Scholar]
- 54. Di Sotto S, Viviani M. Health Misinformation Detection in the Social Web: An Overview and a Data Science Approach. Int J Environ Res Public Health 2022;19:2173. 10.3390/ijerph19042173 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 55. Fridman I, Boyles D, Chheda R, et al. Identifying Misinformation About Unproven Cancer Treatments on Social Media Using User-Friendly Linguistic Characteristics: Content Analysis. JMIR Infodemiology 2025;5:e62703. 10.2196/62703 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 56. Upadhyay R, Pasi G, Viviani M. Vec4Cred: a model for health misinformation detection in web pages. Multimed Tools Appl 2023;82:5271–90. 10.1007/s11042-022-13368-z [DOI] [PMC free article] [PubMed] [Google Scholar]
- 57. Brooks I, D’Agostino M, Marti M, et al. An anti-infodemic virtual center for the Americas. Rev Panam Salud Publica 2023;47:e5. 10.26633/RPSP.2023.5 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 58. Therapeutic Goods Administration (TGA) . How to spot a dodgy health product ad, 2024. Available: https://www.tga.gov.au/news/news/how-spot-dodgy-health-product-ad [Accessed 5 Dec 2024].
- 59. Health Canada . Illegal marketing of drugs and devices, 2020. Available: https://www.canada.ca/en/health-canada/services/drugs-health-products/marketing-drugs-devices/illegal-marketing.html [Accessed 5 Dec 2024].
- 60. Australian Health Practitioner Regulation Agency (Ahpra) . Guidelines for advertising a regulated health service, Available: https://www.ahpra.gov.au/Resources/Advertising-hub/Advertising-guidelines-and-other-guidance/Advertising-guidelines.aspx [Accessed 5 Dec 2024].
- 61. U.S. Food and Drug Administration . Guidance for industry internet/social media platforms: correcting independent third-party misinformation about prescription drugs and medical devices. Available: https://www.fda.gov/media/88545/download [Accessed 5 Dec 2024].
- 62. U.S. Food and Drug Administration . Center for drug evaluation, research. about fda guidances. 2024. Available: http://www.fda.gov/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/default.htm [Accessed 5 Dec 2024].
- 63. Paul Tsai China Center . Measures on the Administration of Internet Advertising, 2023. Available: https://www.chinalawtranslate.com/en/internet-advertising/ [Accessed 5 Dec 2024].
- 64. WIPO . Advertising Law of the People’s Republic of China. In: People’s Republic of China, Available: https://wipolex-res.wipo.int/edocs/lexdocs/laws/en/cn/cn393en.html [Accessed 5 Dec 2024].
- 65. Health Canada . Stop illegal marketing of drugs and devices, 2020. Available: https://www.canada.ca/en/health-canada/services/drugs-health-products/marketing-drugs-devices/illegal-marketing/stop.html [Accessed 5 Dec 2024].
- 66. Australian Communications and Media Authority . Second report on digital platforms’ efforts under the Australian Code of Practice on Disinformation and Misinformation, Available: https://www.acma.gov.au/second-report-digital-platforms-efforts-under-australian-code-practice-disinformation-and-misinformation [Accessed 5 Dec 2024].
- 67. The Parliament of the Commonwealth of Australia . Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024. Parliament of Australia. Available: https://www.aph.gov.au/-/media/Senate/committee/Environment_and_Communications/MDI/Combatting_Misinformation_and_Disinformaton_Bill_-_Explanatory_Memorandum.pdf?la=en&hash=B73EB4E758D7B9AD3CEA9897A25590054F98B952 [Accessed 5 Dec 2024].
- 68. European Commission . Shaping Europe’s digital future: the 2022 Code of Practice on Disinformation. Available: https://digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation [Accessed 5 Dec 2024].
- 69. Australian Communications and Media Authority (ACMA) . Misinformation and news quality on digital platforms in Australia - a position paper to guide code development. 2020.
- 70. Health Canada . Regulating advertising of health products, Available: https://www.canada.ca/en/health-canada/services/drugs-health-products/reports-publications/medeffect-canada/regulating-advertising-health-products-health-canada-2011.html [Accessed 15 Jun 2025].
- 71. Armstrong PW, Naylor CD. Counteracting Health Misinformation: A Role for Medical Journals? JAMA 2019;321:1863–4. 10.1001/jama.2019.5168
- 72. Abraham C, Michie S. A taxonomy of behavior change techniques used in interventions. Health Psychol 2008;27:379–87. 10.1037/0278-6133.27.3.379
- 73. Lotto M, Jorge OS, Cruvinel A, et al. Implications of the health information pollution for society, health professionals, and science. J Appl Oral Sci 2024;32:S1678-77572024000100304. 10.1590/1678-7757-2024-0222
- 74. Bagenal J, Crucefix S, Wilson C, et al. To keep health as a unifying force, we must put resources into tackling health misinformation and disinformation. Lancet 2024;404:1792–4. 10.1016/S0140-6736(24)02245-1
- 75. Trethewey SP. Strategies to combat medical misinformation on social media. Postgrad Med J 2020;96:4–6. 10.1136/postgradmedj-2019-137201
- 76. Chan H-Y, Wang C-C, Jeng W, et al. Strengthening scientific credibility in the face of misinformation and disinformation: Viable solutions. J Control Release 2023;360:163–8. 10.1016/j.jconrel.2023.05.036
- 77. Chou W-Y, Oh A, Klein WMP. Addressing Health-Related Misinformation on Social Media. JAMA 2018;320:2417. 10.1001/jama.2018.16865
- 78. Sundelson AE, Jamison AM, Huhn N, et al. Fighting the infodemic: the 4 i Framework for Advancing Communication and Trust. BMC Public Health 2023;23:1662. 10.1186/s12889-023-16612-9
- 79. Micallef J, Maisonneuve H, Muller S, et al. What should be done to combat misinformation about health products? Therapies 2024;79:87–98. 10.1016/j.therap.2023.11.001
- 80. Chandrasekaran R, Sadiq TM, Moustakas E. Racial and Demographic Disparities in Susceptibility to Health Misinformation on Social Media: National Survey-Based Analysis. J Med Internet Res 2024;26:e55086. 10.2196/55086
- 81. Bergstrom CT. Eight rules to combat medical misinformation. Nat Med 2022;28:2468. 10.1038/s41591-022-02118-1
- 82. Gram EG, Moynihan R, Kramer BS, et al. False premises, false promises: celebrity endorsement of non-evidence-based anticancer interventions on social media. Health Promot Int 2025. 10.1093/heapro/daaf116
- 83. Engel E, Gell S, Heiss R, et al. Social media influencers and adolescents’ health: A scoping review of the research field. Soc Sci Med 2024;340:116387. 10.1016/j.socscimed.2023.116387
- 84. Smith-Mady M. Celebrity Drug Endorsements: Are Consumers Protected? Am J Law Med 2017;43:139–60. 10.1177/0098858817707988
- 85. Heiss R, Rudolph L. Patients as health influencers: motivations and consequences of following cancer patients on Instagram. Behav Inf Technol 2022;1–10. 10.1080/0144929x.2022.2045358
- 86. Murphy-Reuter B. Social media is the new public health frontline. Let’s treat it that way, Available: https://harvardpublichealth.org/tech-innovation/to-combat-misinformation-social-influencers-need-the-right-tools/
- 87. Harris RP, Sheridan SL, Lewis CL, et al. The harms of screening: a proposed taxonomy and application to lung cancer screening. JAMA Intern Med 2014;174:281–5. 10.1001/jamainternmed.2013.12745
- 88. Gram EG, Haas R, Niklasson A, et al. Less is More for Patients, Practitioners, Public, and Planet - A Taxonomy for the Harms of Too Much Medicine. BMJ Evid Based Med 2025 (in press). 10.1136/bmjebm-2025-113874
- 89. Johansson M, Guyatt G, Montori V. Guidelines should consider clinicians’ time needed to treat. BMJ 2023;380:e072953. 10.1136/bmj-2022-072953
- 90. Lenzen M, Malik A, Li M, et al. The environmental footprint of health care: a global assessment. Lancet Planet Health 2020;4:e271–9. 10.1016/S2542-5196(20)30121-2
- 91. Forbrugerombudsmanden [Danish Consumer Ombudsman] . Forbrugerombudsmanden griber ind over for influenter [The Consumer Ombudsman takes action against influencers], Available: https://forbrugerombudsmanden.dk/nyheder/forbrugerombudsmanden/pressemeddelelser/2024/20241009-forbrugerombudsmanden-griber-ind-over-for-influenter [Accessed 5 Dec 2024].
- 92. Davidson H. China bans celebrities with “lapsed morals” from endorsing products. The Guardian, 2 November 2022. Available: https://www.theguardian.com/world/2022/nov/02/china-bans-celebrities-lapsed-morals-endorsing-products
- 93. Australian Government, Department of Health, Disability and Ageing . Cosmetic surgery reforms. 17 January 2025. Available: https://www.health.gov.au/our-work/cosmetic-surgery-reforms
- 94. The New Zealand Medical Journal . Time for New Zealand to ban direct-to-consumer advertising of prescription medicines. 2023. Available: https://nzmj.org.nz/journal/vol-136-no-1575/time-for-new-zealand-to-ban-direct-to-consumer-advertising-of-prescription-medicines
- 95. Bak-Coleman JB, Kennedy I, Wack M, et al. Combining interventions to reduce the spread of viral misinformation. Nat Hum Behav 2022;6:1372–80. 10.1038/s41562-022-01388-6
- 96. Southwell BG, Otero Machuca J, Cherry ST, et al. Health Misinformation Exposure and Health Disparities: Observations and Opportunities. Annu Rev Public Health 2023;44:113–30. 10.1146/annurev-publhealth-071321-031118
- 97. Braga D, Silva I, Rosário R, et al. Exploring the potential of online social listening for noncommunicable disease monitoring. PeerJ 2025;13:e19311. 10.7717/peerj.19311
- 98. Vraga EK, Kim SC, Cook J, et al. Testing the Effectiveness of Correction Placement and Type on Instagram. Int J Press Polit 2020;25:632–52. 10.1177/1940161220919082
- 99. Therapeutic Goods Administration (TGA) . Advertising personalised medical devices in Australia, 2024. Available: https://www.tga.gov.au/resources/resource/guidance/advertising-personalised-medical-devices [Accessed 5 Dec 2024].
- 100. Therapeutic Goods Administration (TGA) . Advertising therapeutic goods on social media, 2024. Available: https://www.tga.gov.au/resources/resource/guidance/tga-social-media-advertising-guide [Accessed 5 Dec 2024].
- 101. Therapeutic Goods Administration (TGA) . Advertising legal framework, 2024. Available: https://www.tga.gov.au/how-we-regulate/advertising/legal-framework [Accessed 5 Dec 2024].
- 102. Australian Communications and Media Authority (ACMA) . Online misinformation, Available: https://www.acma.gov.au/online-misinformation [Accessed 18 Jun 2025].
- 103. Health Canada . Policies and Guidance Documents, 2002. Available: https://www.canada.ca/en/health-canada/services/drugs-health-products/regulatory-requirements-advertising/policies-guidance-documents.html [Accessed 5 Dec 2024].
- 104. Health Canada . Regulatory Requirements for Advertising, 2002. Available: https://www.canada.ca/en/health-canada/services/drugs-health-products/regulatory-requirements-advertising.html [Accessed 5 Dec 2024].
- 105. The Danish Government . BEK nr 1155 af 22/10/2014 [Executive Order no. 1155 of 22 October 2014], Indenrigs- og Sundhedsministeriet [Ministry of the Interior and Health]. In: Retsinformation, Available: https://www.retsinformation.dk/eli/lta/2014/1155 [Accessed 5 Dec 2024].
- 106. The Danish Government . BEK nr 1153 af 22/10/2014 [Executive Order no. 1153 of 22 October 2014], Indenrigs- og Sundhedsministeriet [Ministry of the Interior and Health]. In: Retsinformation, Available: https://www.retsinformation.dk/eli/lta/2014/1153 [Accessed 5 Dec 2024].
- 107. The Danish Government . VEJ nr 9319 af 26/06/2013 [Guidance no. 9319 of 26 June 2013], Indenrigs- og Sundhedsministeriet [Ministry of the Interior and Health]. In: Retsinformation, Available: https://www.retsinformation.dk/eli/retsinfo/2013/9319 [Accessed 5 Dec 2024].
- 108. The European Parliament . How to reduce the impact of disinformation on europeans’ health. Available: https://www.europarl.europa.eu/thinktank/en/document/IPOL_STU(2024)754205 [Accessed 5 Dec 2024].
- 109. European Commission . New data shows strong levels of consumer trust, but online threats persist, Available: https://ec.europa.eu/commission/presscorner/detail/en/ip_25_762 [Accessed 18 Jun 2025].
- 110. European Commission . Countering information manipulation. Available: https://commission.europa.eu/topics/countering-information-manipulation_en [Accessed 18 Jun 2025].
- 111. European Database on Medical Devices . EUDAMED - European Database on Medical Devices, Available: https://ec.europa.eu/tools/eudamed/#/screen/home [Accessed 5 Dec 2024].
- 112. European Commission . Directive 2006/114/EC concerning misleading and comparative advertising - EUR-Lex, Available: https://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX%3A32006L0114 [Accessed 5 Dec 2024].
- 113. European Commission . Advertising of medicinal products for human use, Available: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=LEGISSUM%3Al21143 [Accessed 5 Dec 2024].
- 114. European Commission . New Regulations - Medical Devices regulations, Available: https://health.ec.europa.eu/medical-devices-sector/new-regulations_en [Accessed 5 Dec 2024].
- 115. Maharashtra Medical Council . Acts & Rules, Available: https://www.maharashtramedicalcouncil.in/Index.aspx [Accessed 18 Jun 2025].
- 116. Pathare V. MMC to start action against advertising and social media misuse by doctors. Hindustan Times; 2024. Available: https://www.hindustantimes.com/cities/pune-news/mmc-to-start-action-against-advertising-and-social-media-misuse-by-doctors-101734028517791.html
- 117. Dubai Health Authority, Government of Dubai . Standards for Medical Advertisement Content on Social Media, Available: https://www.dha.gov.ae/uploads/042022/Standards%20for%20Medical%20Advertisement%20Content%20in%20Social%20Media2022433965.pdf [Accessed 5 Dec 2024].
- 118. Medicines and Healthcare Products Regulatory Agency (MHRA) . The Blue Guide - Advertising and Promotion of Medicines in the UK, Available: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/956846/BG_2020_Brexit_Final_version.pdf [Accessed 5 Dec 2024].
- 119. U.S. Department of Health and Human Services . Confronting health misinformation: the U.S. Surgeon General’s advisory on building a healthy information environment, Available: https://www.hhs.gov/sites/default/files/surgeon-general-misinformation-advisory.pdf [Accessed 5 Dec 2024].
- 120. Office of the Surgeon General . Health misinformation. U.S. Department of Health and Human Services; 2024. Available: https://www.hhs.gov/surgeongeneral/priorities/health-misinformation/index.html [Accessed 5 Dec 2024].
- 121. Reisman M. The FDA’s Social Media Guidelines Are Here … Were They Worth the Wait? P T 2012;37:105–6. Available: https://pmc.ncbi.nlm.nih.gov/articles/PMC3351868/
- 122. National Academy of Medicine . Identifying credible sources of health information in social media: principles and attributes. Available: https://nam.edu/identifying-credible-sources-of-health-information-in-social-media-principles-and-attributes/ [Accessed 5 Dec 2024].
- 123. U.S. Food and Drug Administration . Medication health fraud questions and answers. FDA. Available: https://www.fda.gov/drugs/medication-health-fraud/medication-health-fraud-questions-and-answers [Accessed 5 Dec 2024].
- 124. U.S. Food and Drug Administration . Center for Drug Evaluation and Research: The Bad Ad Program. FDA. Available: https://www.fda.gov/drugs/office-prescription-drug-promotion/bad-ad-program [Accessed 18 Jun 2025].
- 125. Sen. Durbin RJ . Protecting Patients from Deceptive Drug Ads Online Act, 12 September 2024. Available: https://www.congress.gov/bill/118th-congress/senate-bill/5040 [Accessed 18 Jun 2025].
- 126. U.S. Food and Drug Administration . FDA updates guidance to further empower companies to address the spread of misinformation. 2024. Available: https://www.fda.gov/news-events/press-announcements/fda-updates-guidance-further-empower-companies-address-spread-misinformation [Accessed 18 Jun 2025].
- 127. World Health Organization (WHO) . Combatting misinformation online. 2024. Available: https://www.who.int/teams/digital-health-and-innovation/digital-channels/combatting-misinformation-online [Accessed 18 Jun 2025].
- 128. World Health Organization (WHO) . Global principles for identifying credible sources of health information on social media, Available: https://www.who.int/teams/digital-health-and-innovation/digital-channels/phase-ii--global-principles-for-identifying-credible-sources-of-health-information-on-social-media [Accessed 15 Jun 2025].
- 129. World Health Organization (WHO) . How to report misinformation online, Available: https://www.who.int/campaigns/connecting-the-world-to-combat-coronavirus/how-to-report-misinformation-online [Accessed 18 Jun 2025].
- 130. World Health Organization (WHO) . Disinformation and public health, 2025. Available: https://www.who.int/news-room/questions-and-answers/item/disinformation-and-public-health [Accessed 18 Jun 2025].
- 131. World Health Organization (WHO) . Collaboration is key to countering online misinformation about noncommunicable diseases – new WHO/Europe toolkit shows how, Available: https://www.who.int/europe/news/item/20-10-2022-collaboration-is-key-to-countering-online-misinformation-about-noncommunicable-diseases--new-who-europe-toolkit-shows-how [Accessed 15 Jun 2025].
- 132. World Health Organization (WHO) . A network of healthcare influencers, Available: https://www.who.int/teams/digital-health-and-innovation/digital-channels/fides [Accessed 15 Jun 2025].
Associated Data
Supplementary Materials
bmjebm-30-6-s001.pdf (81.9KB, pdf)
Data Availability Statement
All data relevant to the study are included in the article or uploaded as supplementary information.

