Digital Health. 2024 Nov 15;10:20552076241291682. doi: 10.1177/20552076241291682

An approach to evaluation of digital data in public health campaigns

Alan R Teo 1,2, Sean P M Rice 3, Elizabeth Meyer 4, Elizabeth Karras-Pilato 5,6, Susan Strickland 7, Steven K Dobscha 1,2
PMCID: PMC11565621  PMID: 39553283

Abstract

Mass media campaigns for public health often rely heavily on digital media and advertising tools that are customarily the domain of marketing professionals and primarily used for commercial purposes. Digital campaigns also generate a myriad of metrics, which can pose both a challenge and opportunity for scientists wishing to leverage these data for research and evaluation.

Objective

The aim of this article is to provide practical guidance for the evaluation of paid media campaigns, with a focus on analyzing digital data generated directly by the campaign.

Methods

Building off the Centers for Disease Control framework for program evaluation, we describe a step-by-step process for evaluation tailored to the unique considerations of digital and paid media campaigns. We contextualize our guidance with our experience evaluating a suicide prevention campaign conducted from 2021 to 2023 that focused on firearms safety in U.S. military veterans.

Results

Key terminology, conceptual models, and selected findings from our evaluation are presented alongside our guidance.

Conclusions

We conclude with key lessons learned and offer recommendations that are broadly applicable to evaluation of other digital campaigns.

Keywords: Mass media campaign, veterans, lethal means safety, public health, disease, suicide, psychology

Introduction

Background on public health campaigns and digital media

Decades of research have suggested that mass media content can be linked to health outcomes. For instance, media related to suicide can have a demonstrable effect on subsequent suicides. This can include the Werther effect, in which content is linked to an increase in suicide, as well as the Papageno effect, in which content is followed by a decrease in suicide. 1 Randomized controlled trials investigating the Papageno effect have found that educational suicide prevention websites and media stories featuring individuals with lived experience of suicidal ideation can have a short-term protective effect against suicide, particularly among individuals with recent evidence of suicide risk.2,3 Mass media campaigns for suicide prevention may be thought of as an attempt, at least in part, to harness the Papageno effect.

Mass media campaigns, whether designed for suicide prevention or any other public health goal, often rely on digital media. Digital campaigns that employ paid media (i.e. ads) allow campaign funders and public health practitioners to reach large audiences quickly and efficiently in places where they already consume information, with targeted messages that can be tailored to specific demographics and needs. However, the landscape is also characterized by a dizzying and proliferating array of digital media platforms, which can be broadly categorized as social media (Facebook, Instagram, Twitter/X, etc.), video (YouTube, streaming/connected TV, video game platforms, etc.), programmatic display, digital out-of-home, and search engines (Google, Bing, etc.). Digital media and advertising tools are customarily the domain of marketing professionals who primarily use them for commercial purposes. Successful implementation of campaigns often relies on the critical expertise of these marketing professionals, who may converse about assets, placements, conversions, and other concepts and terms that befuddle uninitiated researchers and public health practitioners. Calls for increased quality and consistency of evaluations of mass media campaigns 4 have become more urgent in light of the burgeoning number of metrics and variation in capability for exporting data across digital advertising platforms.

Given all of these circumstances, confusion among public health practitioners and scientists around how to develop and conduct campaign evaluation is understandable. At the same time, little scientific literature has addressed the question of how to best leverage and analyze the data that can be directly obtained from the campaign itself. For instance, evaluations of large-scale campaigns tend to be descriptive, 5 lacking comparisons of features of the campaign that might provide insights and evidence to guide future campaigns or further research. Experimental designs may be employed in studies of message effectiveness, 6 but these studies do not provide evidence of campaign effectiveness in their natural environment. Finally, surveys or interviews are frequently used to examine campaign recall,7,8 but this approach has been shown to have inaccuracies. 9

Gaps in the literature on public health campaigns

Of the available guidance on digital data in public health campaigns, most has focused on conceptualization of metrics in digital campaigns. One review examined metrics used in digital and traditional tobacco control campaigns. 10 The authors presented a conceptual framework of campaign evaluation metrics and measures. In this framework, evaluation begins with process evaluation of campaign delivery (e.g. impressions). That is followed by proximal digital proxies of impact, such as campaign website visits or other measures of engagement with the campaign. The most distal aspect of evaluation involves measures of the desired behavior change.

A more recent systematic review identified 93 health communication campaigns that utilized social media. 11 The authors concluded that campaign exposure via social media can lead to individual behavior change and improved health outcomes, through either direct or indirect pathways. However, they also identified several key limitations of the extant literature. First, they found that the majority of campaigns (55) collected only process measures, such as reach and impressions (see Table 1 for an overview of terminology common to paid media and digital campaigns). Second, while two-fifths of campaigns (42) did report engagement measures, the authors suggested evaluations need to expand beyond engagement to include other effects in their models. Suggestions included priming steps (e.g. attitudes, knowledge, or belief change); individual-level behavior change; and social, policy, or environmental changes.

Table 1.

Common terms and definitions used in paid media digital campaigns.

Term Definition
Ad serving The process of delivering or presenting ads to viewers or users of a digital platform.
Ad spend The amount of money spent on a paid media campaign.
Bounce rate The percentage of visitors to a website who navigate away from the site after viewing only one page.
Clicks The number of times a user clicks on the presented ad, sending them to the campaign landing page.
Click-through rate The total number of clicks on an ad divided by the total number of impressions.
Conversion rate The percentage of visitors to a website who take a desired action, such as making a purchase or signing up for a newsletter.
Conversions The number of desired actions taken by users who clicked on an ad. This is an indicator of behavioral engagement with the website or landing page after interaction with an ad.
Cost per thousand The amount of money a campaign pays per thousand impressions. Also known as “Cost per mille.”
Creative The visual and textual elements of a paid media ad, including any images and copy (text). Common types of creative include video and display ads. Also known as “Asset.”
Digital proxy An online action that can indicate whether a user is more likely to take an offline action.
Flight A period of time or phase during which a campaign is running.
Impressions The number of times an ad appears on a user's screen.
Line item Groups of individuals identified by various platforms whose platform and other usage suggest a common characteristic, such as age, gender, and interests. Targets are developed by individual platforms, but the campaign team decides which targets to use. Also known as “Audience targeting.”
Reach Number of unique users who see an ad at least once, as distinguished from “Impressions” which can occur multiple times for each unique user.
Session duration The total time a website is being viewed continuously by an individual. Also known as (or very similar to) “Time on Site.”
Views of video ads The number of times a video ad is played, regardless of whether or not the viewer watches the entire ad.
Website traffic The number of visitors to a website.
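To make the definitions in Table 1 concrete, the following Python sketch computes three commonly derived metrics (click-through rate, cost per thousand impressions, and conversion rate) from raw counts. All numbers, function names, and the choice of denominators are illustrative only and are not drawn from the campaign described in this article.

```python
# Illustrative sketch (all numbers hypothetical): deriving common paid-media
# metrics from raw counts, following the definitions in Table 1.

def click_through_rate(clicks: int, impressions: int) -> float:
    """Total clicks divided by total impressions (Table 1: Click-through rate)."""
    return clicks / impressions

def cost_per_thousand(ad_spend: float, impressions: int) -> float:
    """Ad spend per 1,000 impressions (Table 1: Cost per thousand, or CPM)."""
    return ad_spend / (impressions / 1000)

def conversion_rate(conversions: int, website_visitors: int) -> float:
    """Desired actions divided by website visitors (Table 1: Conversion rate)."""
    return conversions / website_visitors

impressions, clicks, visitors, conversions, spend = 1_000_000, 2_500, 2_000, 300, 8_000.00
print(f"CTR: {click_through_rate(clicks, impressions):.2%}")            # 0.25%
print(f"CPM: ${cost_per_thousand(spend, impressions):.2f}")             # $8.00
print(f"Conversion rate: {conversion_rate(conversions, visitors):.2%}") # 15.00%
```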

In this article, we build off this existing base of literature and address the question of how to best leverage and analyze digital data that are part of contemporary public health campaigns. The aim of this manuscript is to provide practical guidance for evaluation of digital campaigns.

Methods

The Office of Research and Development at VA Portland Health Care System determined that this project was not human participant research, so institutional review board approval and participant consent were not required for the collection, analysis, and publication of the data. We use the Centers for Disease Control (CDC) Framework for Program Evaluation in Public Health 12 as a way to sequentially describe our analytic process for evaluation. In brief, this framework contains six interconnected steps: (1) engage stakeholders; (2) describe the program; (3) focus the evaluation design; (4) gather credible evidence; (5) justify conclusions; and (6) ensure use and share lessons learned. We tailor the specifics of each step to be relevant to digital communication campaigns. We also present our experience with evaluation of the U.S. Department of Veterans Affairs (VA) Keep It Secure campaign, to provide illustrative examples of how to merge marketing and scientific approaches to evaluation of digital campaigns. For each of the six steps, we present recommendations that are generalizable to other campaigns; these recommendations are summarized in Table 2.

Table 2.

Summary of recommendations for evaluation of digital campaigns, organized by steps in the Centers for Disease Control (CDC) Framework for Program Evaluation in Public Health.


The Keep It Secure campaign

In 2021, VA developed a campaign to promote lethal means safety (LMS), an approach that aims to reduce the risk of suicide by encouraging secure storage of firearms, medications, or other objects that can be used to inflict self-directed violence. Called Keep It Secure, this national campaign was part of VA's multipronged strategy for suicide prevention in veterans, 13 and it followed other large campaigns VA has previously invested in. 14 In addition to building awareness about LMS, the current campaign also sought to drive traffic to a VA website (https://www.va.gov/reach/lethal-means/). A variety of digital assets were developed for the campaign, including two videos that focused on safe firearm storage. One of these videos is available online, and variations in ad length and in the age and gender of the narrator were also used. The website contained tips on safe storage of firearms and medication, downloadable information sheets, and links to additional VA and non-VA resources on firearms safety and suicide prevention.

VA contracted with marketing professionals to promote these assets in a paid digital media campaign across a variety of platforms. The audience target was military veterans and their families, with specific interests in reaching those between 18 and 34 years old, racial and ethnic minorities, the LGBTQ+ community, women, and geographic areas with high rates of suicide by firearm among veterans. In total, the campaign ran between September 2021 and June 2023. Our evaluation focused on a subset of the campaign that used video marketing, comprising approximately 300 million ad impressions, 141 million completed views of the video assets, and 10 million clicks to the Keep It Secure website, at a cost of about $2.2 million.

Results

Steps in the process of rigorous evaluation of a digital campaign

1. Engage stakeholders: In the classic CDC framework, 12 stakeholders include those in program operations, those impacted by the program, and users of the evaluation. Due to the complexity of suicide, major suicide prevention initiatives ought to involve representatives from different social sectors, such as workplaces, schools, healthcare, and media. Veterans were involved in the development of the Keep It Secure campaign, as they are with all suicide prevention campaigns, providing input on design and message. For a large paid media campaign, primary stakeholders often include the funder of the campaign and the marketing or media company that will run and operate the campaign on a day-to-day basis. A major goal in this stage is to align the sponsor or program office's “campaign strategy” with consideration of the audience(s) being targeted.

It is important for these stakeholders to plan for evaluation from the beginning, even as the campaign concept is being developed, and to build the needs of evaluators into applicable scopes of work. Often, the marketing company will provide its own internal campaign evaluation and reporting. Quantitative findings from these reports often present summary measures, such as impressions, click-through rate (CTR), and cost per thousand impressions. (See Table 1 for an overview of terminology common to paid media campaigns.) Reports from the marketing team can be very important for assessing the cost-efficiency of a campaign, helping stakeholders better understand what level of investment, frequency of serving advertisements, and so on might be necessary for the campaign. However, a key limitation of many of these reports is the lack of statistical methods that allow quantitative examination of associations between different elements of the campaign, which limits the ability to draw public health insights.

To address this limitation, we recommend investment in a third-party evaluation team with training in scientific research methods. This evaluation team, which might consist of subject matter experts, data analysts, and biostatisticians, is best positioned to provide an objective evaluation, as well as to make analytic decisions that can support estimates of effect size (e.g. Cohen's f-squared) and inferences about statistical significance (e.g. p-values). These quantitative analyses can complement descriptive reports prepared by the marketing company. While it is important that the evaluation team work independently, it is still important to establish a close, collaborative relationship between the marketing and evaluation teams. Evaluation teams benefit from including someone intimately familiar with the functionality and available data within paid media platforms. Robust evaluation also relies on partner buy-in and on planning an evaluation that answers questions for key stakeholders while also addressing gaps in the literature.
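Because Cohen's f-squared is mentioned above as an example effect-size estimate, the minimal sketch below shows how it can be computed from model R-squared values. The numbers are hypothetical, and the snippet only illustrates the metric itself, not the authors' actual analysis.

```python
# Minimal sketch: Cohen's f-squared effect size from model R² values
# (hypothetical numbers; illustrates the metric mentioned in the text,
# not the authors' analysis).

def cohens_f2(r2_full: float, r2_reduced: float = 0.0) -> float:
    """f² = (R²_full - R²_reduced) / (1 - R²_full).
    With r2_reduced = 0, this reduces to the global effect size R² / (1 - R²)."""
    return (r2_full - r2_reduced) / (1.0 - r2_full)

# Global effect of a model explaining 26% of variance
print(round(cohens_f2(0.26), 3))        # 0.351
# Incremental effect of a predictor that raises R² from 0.20 to 0.26
print(round(cohens_f2(0.26, 0.20), 3))  # 0.081
```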

In the case of the Keep It Secure campaign, VA's Office of Suicide Prevention was the sponsor and funder and provided oversight of campaign operations. The Office of Suicide Prevention developed a scope of work (SOW) for the campaign, and contracted this SOW to a marketing company. The marketing team developed a communications plan, within the contract budget. This plan included selecting audience targeting parameters and utilizing a range of digital advertising platforms with the goal of reaching veterans online. A research and evaluation liaison with the Office of Suicide Prevention identified an evaluation team consisting of researchers within VA's Health Services Research and Development Service. While this evaluation team was affiliated with VA, they were able to function as a “third-party” because they were uninvolved in asset development and campaign design, and had no financial stake in the paid media campaign. They conducted a needs assessment, asking about the Office of Suicide Prevention's operational goals for the campaign and what kinds of questions they sought answers to. The team submitted an evaluation proposal, which subsequently received approval and funding to evaluate Keep It Secure.

2. Describe the program: The goal of this step is to clarify all the components and intended outcomes of the campaign to help center its evaluation on the most critical questions. Developing a logic model is a common approach that can help systematically describe components of the campaign. Historically, logic models or conceptual frameworks have often been absent from suicide prevention campaigns, 15 which amplified the importance of developing one for this campaign evaluation. The logic model that emerged for the Keep It Secure campaign is shown in Figure 1. This model presents a pathway beginning with the inputs and activities of the campaign and extending to its shorter- and longer-term outcomes and impacts.

Figure 1. Logic model for the Keep It Secure campaign.

When developing a logic model, it is key to consider whether proposed campaign effects are short-term versus long-term and whether effects involve change at a cognitive, emotional, or physiological level. Chan et al. provide a framework that can be useful in developing a logic model for any digital campaign. 10 Here we present their six categories of campaign metrics, with our own elaboration.

  • A. Process. Measures of the size of the audience exposed to campaign content would be considered process metrics. Examples include impressions (total number of times content is delivered), reach (total number of people who see content), and video views.

  • B. Awareness. Awareness is typically evaluated via surveys or interviews that probe campaign recall. Often this is done independently from the campaign itself. However, digital platforms have the ability to run “brand lift” surveys that can measure campaign recall among audiences who have been exposed to the messaging.

  • C. Proximal impact—engagement. Engagement is critical but can be a “nebulous concept” 11 ; generally, it would include metrics evaluating some element of interaction with campaign content, such as clicks, CTR, and varying aspects of website visits such as time on website. For social media platforms, engagement includes likes, shares, comments, and other interactions with campaign content.

  • D. Proximal impact—priming steps. This refers to attitudes, knowledge, or belief change that, like awareness metrics, are usually measured separate from real-world exposure to a campaign.

  • E. Distal impact. This refers to initial behaviors or antecedent behaviors occurring prior to the ultimate desired behavior change. Traditionally, these have not been thought of as something that can be captured directly from a digital campaign. However, so-called digital proxies attempt to bridge the divide between online actions and offline behaviors. A digital proxy can be thought of as an online action that can indicate whether a user is more likely to take an offline action. Examples of digital proxies include visits to noncampaign websites and search query volume.

    In the context of the Keep It Secure campaign, searches for gun locks or online purchase data for gun locks were thought of as potential digital proxies, and the marketing team explored data available from Google for these digital proxies (see the sketch following this list). The evaluation team also discussed providing coupons for gun locks on the campaign website and measuring coupon downloads as an antecedent behavior, but we were unable to implement this measurement during the campaign.

  • F. Outcomes. This refers to the desired behavior change or changes that are ultimately the goal of many public health campaigns. In a systematic review by Chan et al., 10 all campaign evaluations that included outcomes used surveys, interviews, or population-level prevalence data to assess this.
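To make the idea of a digital proxy more concrete, the sketch below pulls search-interest data for a term such as "gun lock" from Google Trends using the third-party pytrends package. This is a hypothetical illustration: the marketing team explored Google data, but the specific tool, query, and time window shown here are assumptions, not a description of what was actually done.

```python
# Hypothetical sketch of one digital proxy: search interest for "gun lock"
# during a campaign flight, pulled from Google Trends via the third-party
# pytrends package (pip install pytrends). Illustrates the digital-proxy
# concept; not necessarily the data source used by the campaign.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(
    kw_list=["gun lock"],
    timeframe="2022-07-26 2022-12-12",  # assumed window matching one flight
    geo="US",
)
interest = pytrends.interest_over_time()  # DataFrame indexed by date
print(interest["gun lock"].describe())
```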

In the Keep It Secure campaign, the evaluation team helped refine campaign aims, trying to add precision and specificity to what would constitute success of the campaign. For instance, we identified the purchase of gun locks or safes as a specific behavior change target for the campaign.

A final critical component at this stage is obtaining a commitment from the marketing company to share data from the campaign with the evaluation team. This is not a trivial ask. It requires the marketing team to conduct data pulls, assist in the creation of data dictionaries, and be responsive to questions and clarification about the nature of the data.

3. Focus the evaluation design: In this step, the evaluation team should consider whether the evaluation will fall under the definition of “research,” in which case institutional review board or similar review is necessary before proceeding with the evaluation. Then, using the logic model, evaluators should collaborate with the sponsor and marketing team to determine the feasibility of collecting and analyzing certain metrics, address key choices made by the sponsor or the marketing team (e.g. platforms selected), and set realistic expectations. We recommend making an effort at the outset of the campaign to identify and track which platforms and which parts of the campaign will have clear and consistent data available to the evaluation team. Selecting events or actions to measure for the purposes of evaluation prior to campaign launch is vital because many of these cannot be tracked retroactively. Finally, trying to evaluate a campaign across multiple platforms makes preparing the data for analysis more challenging due to inconsistencies in how the data can be pulled and exported. We would not generally recommend trying to make comparisons across platforms.

In our case, our local research office offered a process for determining whether a project is classified as quality improvement (QI) or research, and our evaluation plan received a determination of QI. The Keep It Secure campaign included a variety of digital and traditional media placements, although social media was not used. Due to the timing of when this third-party evaluation was initiated, we had to retroactively identify data to use in our evaluation, which limited the digital media platforms that were suitable for data analysis. We also limited our analyses to two flights of the campaign: July 26, 2022 to December 12, 2022, and January 1, 2023 to June 12, 2023. (A “flight” refers to a period of time during which a campaign is running, as campaigns are often divided into multiple phases.)

With respect to our logic model and available campaign metrics, we felt time on the Keep It Secure website was an especially important engagement metric for the campaign because of the campaign's intention to increase veterans’ awareness of firearms safety resources. We did implement tracking technologies, which can offer the benefit of linking a user's activity on the website to a campaign ad, as opposed to organic website traffic. However, we were limited in what could be tracked because this was done after campaign launch.
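One common mechanism for linking website activity to a specific ad rather than to organic traffic is to tag ad destination URLs with query parameters (e.g. UTM parameters) that web analytics tools can read. The sketch below illustrates this general approach; it is an assumption for illustration and not necessarily the tracking technology implemented in the Keep It Secure campaign.

```python
# Illustrative sketch of one common tracking mechanism (UTM query parameters),
# which lets web analytics attribute a site visit to a specific ad placement.
# Shown as a general idea only; not necessarily the technology this campaign used.
from urllib.parse import urlencode

def tag_landing_url(base_url: str, source: str, medium: str,
                    campaign: str, content: str) -> str:
    """Append UTM parameters identifying the platform, ad format,
    campaign, and creative variant to a landing-page URL."""
    params = {
        "utm_source": source,      # e.g. the advertising platform
        "utm_medium": medium,      # e.g. "video" or "display"
        "utm_campaign": campaign,  # campaign identifier
        "utm_content": content,    # e.g. which creative/asset variant
    }
    return f"{base_url}?{urlencode(params)}"

print(tag_landing_url("https://www.va.gov/reach/lethal-means/",
                      "example_platform", "video",
                      "keep_it_secure", "asset_a"))
```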

4. Gather credible evidence: In the CDC framework, the intention of this stage is to promote the collection of valid, reliable, and systematic information that is the foundation of any effective evaluation. It involves consideration of the sources, quality, and quantity of metrics and how they might be interpreted. Many of the unique challenges (and opportunities) related to conducting evaluation of a digital campaign lie in this step.

The list of potential digital campaign metrics is long, which has the potential to lead to problems for evaluators such as “choice overload” or “analysis paralysis” 16 and difficulty in meaningfully interpreting numerous metrics. 10 Thus, proactive consideration of which metrics ought not be included is especially relevant to digital campaigns in order to use time and evaluation resources efficiently.

For engagement metrics (proximal impact in the above framework 10 ) in particular, it is vital to be very clear about which metrics will be analyzed and to establish a process for determining what each metric specifically means in the context of the campaign. An added challenge with the digital advertising landscape is the lack of transparency and consistency in how some metrics are calculated. 10 One example, often employed by social media platforms, is the “engagement score” or similar platform-provided metric that represents a composite or combination of other measures. We recommend against use of platform-provided engagement scores and other composite measures; instead, we recommend evaluators use raw metrics when possible and transform the data themselves as needed.

Alongside the selection of metrics, we recommend building a picture of how variables are related to each other. In most digital campaigns, date is a commonality that can be exploited to construct and organize a sample. Next, it is critical to determine what the unit of analysis will be. We recommend using the most granular unit of analysis that is still practically meaningful to interpret in the context of the campaign. For many campaigns, this unit will be day of the campaign, and accordingly measures of central tendency (e.g. mean, median) will center around daily values. Another important consideration unique to digital campaigns is to adjust the analyses for factors influenced by the advertising platform or choices made by the marketing team that are bound to influence metrics or outcomes. Examples of these factors might include the amount spent in the campaign on different assets, the number of impressions, and the number of days campaign assets are used.
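The following pandas sketch shows one way such a day-level analytic dataset could be assembled: platform exports are aggregated and merged on date, CTR is derived from raw counts, and spend and a days-into-campaign variable are carried along as covariates. File names and column names are hypothetical, not the campaign's actual data exports.

```python
# Minimal sketch (hypothetical file and column names): assembling a day-level
# analytic dataset by merging platform exports on date, as recommended above.
import pandas as pd

# Ad platform export: impressions, clicks, spend by date, creative, line item
ads = pd.read_csv("ad_platform_export.csv", parse_dates=["date"])
# Web analytics export: sessions and session duration by date
web = pd.read_csv("web_analytics_export.csv", parse_dates=["date"])

daily = (
    ads.groupby(["date", "creative", "line_item"], as_index=False)
       .agg(impressions=("impressions", "sum"),
            clicks=("clicks", "sum"),
            spend=("spend", "sum"))
)
daily["ctr"] = daily["clicks"] / daily["impressions"]
daily["days_into_campaign"] = (daily["date"] - daily["date"].min()).dt.days

# Merge in website engagement metrics at the shared unit of analysis (date)
analytic = daily.merge(web, on="date", how="left")
```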

For the Keep It Secure campaign, we utilized date as a key variable that would allow us to merge elements of the dataset, and we were able to construct a path diagram of the relationship between variables and metrics in the campaign (Figure 2). Day of campaign was the most sensible unit of analysis: it was the smallest unit of date available, provided variance across the days of the campaign, and was meaningful for stakeholders to interpret in the context of the flights that each lasted several months. We also found it useful to work with the marketing team to conduct a test data pull during each flight of the campaign to preemptively identify concerns that could impact data analysis and address them. Having the marketing and evaluation teams meet together to review these data pulls was vital to establishing data integrity and quality. We used generalized linear mixed modeling to account for the potential intraday correlation present in the data.

Figure 2. Path diagram illustrating how variables available in the Keep It Secure campaign were related to each other.

In one analysis using these robust statistical methods, we were able to evaluate the interaction effect between creative assets (four versions of the video ad) and line items (five audience targets) on CTR, with a random effect for date and days into campaign. Results (not shown) indicated CTR was high for a line item focusing on users interested in Star Trek and science fiction, but the video asset using an older male narrator performed relatively poorly in this audience segment. These results can help inform and refine which campaign assets are deployed to which audience segments.
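A simplified sketch of this type of model is shown below using statsmodels. The authors report using a generalized linear mixed model; the linear mixed model shown here (daily CTR as the outcome, a creative-by-line-item interaction as fixed effects, and a random intercept for date) should be read only as an approximation of that model structure, built on the hypothetical day-level dataset sketched earlier.

```python
# Simplified sketch of a creative-by-line-item interaction model on daily CTR
# with a random intercept for date. The authors used a generalized linear
# mixed model; MixedLM below is a linear mixed model, so treat this as an
# approximate illustration of the model structure only.
import statsmodels.formula.api as smf

model = smf.mixedlm(
    "ctr ~ C(creative) * C(line_item) + days_into_campaign",
    data=analytic,            # day-level dataset sketched above (hypothetical)
    groups=analytic["date"],  # random intercept for calendar date
)
result = model.fit()
print(result.summary())
```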

5. Justify conclusions: In this step, interpretation of the campaign evaluation findings is placed in a practical context. Here again, collaboration among stakeholders to review and discuss the data (“Do the findings make sense?”) is crucial. This step can include making inferences about the program's merit and proffering recommendations to stakeholders.

For our evaluation of Keep It Secure, we leveraged existing public policy data to help make policy-relevant inferences from our digital campaign data. We first went through a data cleaning process, whereby we restructured our geographic website data from the city level to the state level due to extensive missing data at the city level. This processed dataset contained important metrics such as the number of unique website visitors and time spent on the website for a particular state on a particular day of the campaign. Next, we considered sources of open-source data on firearm legislation, suicide deaths by firearm, and other policy and public health data. State-level data on firearm-related laws are particularly abundant, 17 and in one of our analyses, we merged data on gun law rankings obtained from the Giffords Law Center 18 with campaign website traffic data at the state level.
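The sketch below illustrates the kind of state-level merge described here: campaign website traffic aggregated by state, joined to a gun-law ranking table and a population denominator to support per-capita comparisons. File names, column names, and the per-capita calculation are hypothetical and are shown only to make the data-linkage step concrete.

```python
# Sketch of the state-level merge described above (hypothetical file and
# column names): campaign website traffic aggregated to state level, joined
# with a state gun-law ranking table and state population.
import pandas as pd

traffic = pd.read_csv("state_day_website_traffic.csv")   # state, date, unique_visitors, avg_session_duration
gun_laws = pd.read_csv("giffords_gun_law_rankings.csv")  # state, gun_law_rank
population = pd.read_csv("state_population.csv")         # state, population

state_summary = (
    traffic.groupby("state", as_index=False)
           .agg(unique_visitors=("unique_visitors", "sum"),
                avg_session_duration=("avg_session_duration", "mean"))
           .merge(population, on="state")
           .merge(gun_laws, on="state")
)
state_summary["visitors_per_100k"] = (
    100_000 * state_summary["unique_visitors"] / state_summary["population"]
)
```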

This analysis led to novel insights useful for guiding future campaign efforts. For example, during the campaign relatively few Alaskans per capita accessed the Keep It Secure website, but when they did, they spent more time on the webpage than people in any other state. Texas, in contrast, had a moderate number of people accessing the campaign website, but these visitors spent virtually no time on the site. To us, these findings suggested that campaign assets may have particularly resonated with Alaskans (for any of a host of potential reasons), but the campaign's audience targeting was not effective in reaching them. We also suggested to our sponsor that interviews, surveys, or focus groups with Texans might help identify aspects or content on the campaign website that were unappealing to this audience. Figure 3 presents a heat map summarizing this geographical analysis.

Figure 3. Heat map that integrates state-level campaign data on website usage with the Giffords Law Center's gun law scorecard.*

*Source: https://giffords.org/lawcenter/resources/scorecard/.

6. Ensure use and share lessons learned: This final step is a reminder of the efforts needed to promote dissemination and use of evaluation findings. We inquired with our campaign sponsor as to when and how they preferred to receive findings, and we ultimately provided our evaluation in several ways. We provided multiple interim reports to the VA Office of Suicide Prevention. Reports were intentionally brief (one page or one slide) and included “bottom line up front” statements that are frequently sought by VA program offices to make their review of findings more efficient and practically inform campaign improvements. Throughout the campaign, we participated in monthly group calls led by the marketing team, which included campaign updates and a review of performance to-date; these discussions helped inform iteration and refinement of the evaluation team's analyses. A final report and presentation to the marketing team and our VA sponsor included a compilation of our interim reports and plain-language, forward-looking recommendations.

Discussion

Table 2 identifies recommendations based on our experience with this evaluation. Recommendations are organized by the six steps in the CDC Framework for Program Evaluation in Public Health. Some themes emerge from examination of these recommendations and our campaign evaluation experience. First, planning the evaluation early in the project, with collaboration among the sponsor, marketing team, and evaluation group, is vital. Regular communication among stakeholders is also key. We found that the three groups often “speak different languages” (those of operations, marketing/digital media, and research). Frequent communication helps clarify each group's priorities, builds a shared understanding of the nature of the underlying data and how it can best be leveraged, and facilitates the creation of a consensus-based plan that does not result in unrealistic expectations or disappointments.

There are limitations and caveats to what we have presented here. First, the scope of this paper is limited to public health campaigns. A vast amount of content is circulated on social media every day, and this can contain misinformation about suicide. Such digital content is crucial to consider because it is likely to reach a wider audience and may circulate longer than public health campaigns. Also, we have not considered surveys, interviews or other qualitative data, or mixed methods. These methods warrant their own consideration in a comprehensive approach to campaign evaluation. They are very helpful in understanding how campaign messages are received by key audiences and whether message effectiveness varies by age, gender, social class, geographic area, or other characteristics. As noted already, our evaluation for Keep It Secure began well after campaign launch, and we lacked access to pre- and postcampaign data, which can offer additional context. Our campaign had limited options for implementation of trackable digital proxies that would offer insight into offline behavior related to firearms safety in veterans. Digital proxies are a relatively novel element in the evaluation of digital campaigns and warrant future consideration. Finally, there may be unique considerations in the evaluation of public health campaigns that address sensitive or contentious topics, or campaigns in digital spheres where misinformation is common.

Conclusion

By following these steps, we believe a team that is versed in statistical analysis and committed to collaboration with campaign partners can add substantial value to the evaluation of digital campaigns.

Acknowledgements

The authors thank Kimberly Hubbard for her contributions to project coordination and manuscript preparation and thank Chelsea Radler for her feedback on the manuscript. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Department of Veterans Affairs.

Footnotes

Contributorship: A.T., S.D., and S.S. were responsible for conceptualization of the project. A.T., S.D., and S.R. developed the methodology of the project. S.R. was responsible for software, formal analysis, and data curation components. A.T. and S.D. contributed to the validation of the study. E.M. and S.S. provided the resources for the project. The original manuscript draft was written by A.T., and all authors contributed to the review and editing of the manuscript. A.T. and S.R. were responsible for data visualization. A.T. provided supervision for the project alongside S.S. A.T. and E.M. were responsible for project administration. Funding for the project was acquired by E.K., A.T., and S.D.

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Office of Suicide Prevention in the Department of Veterans Affairs.

Guarantor: AT.

References

1. Niederkrotenthaler T, Voracek M, Herberth A, et al. Role of media reports in completed and prevented suicide: Werther v. Papageno effects. Br J Psychiatry 2010; 197: 234–243.
2. Niederkrotenthaler T, Till B. Effects of suicide awareness materials on individuals with recent suicidal ideation or attempt: online randomised controlled trial. Br J Psychiatry 2020; 217: 693–700.
3. Till B, Tran US, Voracek M, et al. Beneficial and harmful effects of educative suicide prevention websites: randomised controlled trial exploring Papageno v. Werther effects. Br J Psychiatry 2017; 211: 109–115.
4. Torok M, Calear A, Shand F, et al. A systematic review of mass media campaigns for suicide prevention: understanding their efficacy and the mechanisms needed for successful behavioral and literacy change. Suicide Life Threat Behav 2017; 47: 672–687.
5. Hunt IDV, Dunn T, Mahoney M, et al. A social media‒based public health campaign encouraging COVID-19 vaccination across the United States. Am J Public Health 2022; 112: 1253–1256.
6. Rayala H-T, Rebolledo N, Hall MG, et al. Perceived message effectiveness of the Meatless Monday campaign: an experiment with US adults. Am J Public Health 2022; 112: 724–727.
7. Dominguez ME, Macias-Carlos D, Montoya JA, et al. Integrated multicultural media campaign to increase COVID-19 education and vaccination among Californians, 2021. Am J Public Health 2022; 112: 1389–1393.
8. Farley TA, Halper HS, Carlin AM, et al. Mass media campaign to reduce consumption of sugar-sweetened beverages in a rural area of the United States. Am J Public Health 2017; 107: 989–995.
9. Niederdeppe J. Meeting the challenge of measuring communication exposure in the digital age. Commun Methods Meas 2016; 10: 170–172.
10. Chan L, O’Hara B, Phongsavan P, et al. Review of evaluation metrics used in digital and traditional tobacco control campaigns. J Med Internet Res 2020; 22: e17432.
11. Kite J, Chan L, MacKay K, et al. A model of social media effects in public health communication campaigns: systematic review. J Med Internet Res 2023; 25: e46345.
12. Centers for Disease Control and Prevention. Framework for program evaluation in public health.
13. Langford L, Litts D, Pearson JL. Using science to improve communications about suicide among military and veteran populations: looking for a few good messages. Am J Public Health 2013; 103: 31–38.
14. United States Government Accountability Office. Report to the Ranking Member, Committee on Veterans' Affairs, House of Representatives: improvements needed in Suicide Prevention Media Outreach Campaign oversight and evaluation. GAO-19-66.
15. Karras E, Warfield SC, Stokes CM, et al. Lessons from suicide prevention campaigns: considerations for opioid messaging. Am J Prev Med 2018; 55: 125–128.
16. Schwartz B. The paradox of choice: why more is less. New York, NY: HarperCollins Publishers, 2004.
17. Siegel M, Pahn M, Xuan Z, et al. Firearm-related laws in all 50 US states, 1991–2016. Am J Public Health 2017; 107: 1122–1129.
18. Giffords Law Center to Prevent Gun Violence. Annual gun law scorecard 2021. https://giffords.org/lawcenter/resources/scorecard2021/
