2024 Dec 15;45(7):1683–1697. doi: 10.1111/risa.17690

Enhancing risk and crisis communication with computational methods: A systematic literature review

Madison H Munro 1, Ross J Gore 2, Christopher J Lynch 2, Yvette D Hastings 1, Ann Marie Reinhold 1,3
PMCID: PMC12396938  PMID: 39676035

Abstract

Recent developments in risk and crisis communication (RCC) research combine social science theory and data science tools to construct effective risk messages efficiently. However, current systematic literature reviews (SLRs) on RCC primarily focus on computationally assessing message efficacy rather than message efficiency. We conduct an SLR to identify any current computational methods that improve message construction efficacy and efficiency. We found that most RCC research uses theoretical frameworks and computational methods to analyze or classify message elements that improve efficacy. For improving message efficiency, computational and manual methods are used only in message classification. Research specifying the computational methods used in message construction is sparse. We recommend that future RCC research apply computational methods toward improving efficacy and efficiency in message construction. By improving both, RCC messaging would more quickly warn and better inform communities impacted by hazards. Such messaging has the potential to save as many lives as possible.

Keywords: computational methods, efficacy and efficiency, message construction, risk and crisis communication, theoretical framework

1. INTRODUCTION

Risk and crisis communication (RCC) is a powerful tool for improving hazard preparedness. RCC encourages individuals to take protective actions to keep themselves and their community safe (Fischhoff & Downs, 1997; Jones et al., 2014; Reinhold, Munro, et al., 2023; Reynolds & Seeger, 2005). Such communication encompasses both risk messaging deployed before a hazard and crisis messaging deployed during a disaster. Effective RCCs motivate as many individuals as possible in a target population to adopt protective actions.

Efficacy (definition in Table 1) is essential in RCC. Achieving messaging efficacy involves bridging knowledge gaps between hazard domain experts and affected populations (Reinhold, Raile, et al., 2023). Bridging such gaps relies on how the message is developed. This development frames RCC objectives so that affected populations are receptive to and engaged with the messaging. Receptivity and engagement motivate affected populations to take action to protect themselves from a hazard (Heath et al., 2018; Shanahan et al., 2018).

TABLE 1.

Definitions for key terms used in this systematic literature review (SLR), as defined by the authors and other sources.

Term: Efficacy

Definition (authors): The extent to which risk messaging changes individuals’ risk perceptions and mitigation behavior when faced with a hazard

Definitions inferred from other sources:

  - Measuring persuasive outcomes of risk messages, including changes in attitudes, behaviors, intentions, and knowledge of individuals (Vos et al., 2018)

  - Risk communication that incorporates honesty, reassurance, and actionable items in its messaging, usually guided by a risk communication framework (Abrams et al., 2022)

Term: Efficiency

Definition (authors): Combines the speed with which messages can be created with the optimal use of resources

Definitions inferred from other sources:

  - A general set of standards, procedures, guidelines, norms, reference points, or principles that are designed to improve performance (Sellnow et al., 2008)

  - The ability of a computational method to complete its function in a way that is both computationally inexpensive and scalable to large‐scale data (Li et al., 2017)

  - Improvements to work or task performance through automation and aggregation of tasks (Wang et al., 2019)

RCC also relies on the timely delivery of messages (Lwin et al., 2018). Messages disseminated promptly ensure individuals have as much lead time as possible to prepare for a risk or to respond during a crisis. For this reason, the time it takes to develop a message is an important concern. Efficient RCCs are created with optimal use of computational resources and time.

Efficiency (definition in Table 1) is a crucial aspect of RCCs. Achieving efficiency involves constructing RCC messages using semi‐automated or fully automated computational methods (Karinshak et al., 2023). Such methods enable shorter message development timeframes compared to manually constructing messages, thus resulting in timelier distribution of messages to populations impacted by hazards. Successful RCC depends on how researchers create effective and efficient messaging.

Recent developments in RCC research blend the worlds of psychology, policy process, and data science to construct effective risk messages efficiently (Gore et al., 2024; Reinhold, Munro, et al., 2023). Although there is growing emphasis on improving message efficacy using computational methods (Nelson, 2020; Reinhold, Raile, et al., 2023), RCC research prioritizes the social science aspects that inform effective message development. This prioritization centers on the analysis of effective message elements (Bartolucci et al., 2023; Fathollahzadeh et al., 2023; Hannes & Thyssen, 2022).

Computational methods for analyzing message efficacy are well researched (Guetterman et al., 2018; Ogie et al., 2018), but effective computational message construction is largely ignored (Reinhold, Raile, et al., 2023). This is not to say that message efficacy has been overlooked, but that the use of computational methods to improve message efficacy is limited. In addition, little research attention has been paid to efficient message construction, whether using computational or manual methods. One reason for this gap is that computational methods have only become prominent in the last few years (Kalyan, 2024). Overall, research into computational message construction is sparse, a problem that is reflected in systematic literature reviews (SLRs) on RCC research (Bartolucci et al., 2023; Fathollahzadeh et al., 2023; Hannes & Thyssen, 2022; Ogie et al., 2018).

Existing SLRs aggregate research on message efficacy, specifically on manual (Bartolucci et al., 2023; Fathollahzadeh et al., 2023; Hannes & Thyssen, 2022) or computational (Ogie et al., 2018) classification and analysis of message elements. However, none of them focus on message construction. To date, no SLRs investigate methods used in message construction for improving efficacy, nor do any SLRs investigate methods for improving the efficiency of message construction.

To address these critical research gaps, we conduct an SLR to investigate what research, if any, focuses on efficacy and efficiency in message construction. We highlight existing gaps in research on computational message construction, as well as current usages of computational methods in RCC research. The next section details the methodology this SLR follows.

2. METHODS

The methodology for this SLR aligned with the Preferred Reporting Items for Systematic Reviews and Meta‐Analyses (PRISMA) guidelines (Page, McKenzie, et al., 2021; Page, Moher, et al., 2021). We primarily adhered to guideline steps aligning with data synthesis. These steps include developing research questions (RQs), developing a search strategy, assessing eligibility, performing data meta‐analysis, and selecting our final literature to report.

2.1. Guideline selection and research questions

The first step in PRISMA facilitated the development of RQs on computational methods used in message construction. The following RQs drove the direction of this SLR:

  1. What established and emerging computational methods improve efficacy in RCC message construction?

  2. What established and emerging computational methods improve efficiency in RCC message construction?

  3. What are the trade‐offs between efficient and effective computational methods for message construction?

All three RQs reflect our research attention on effective and efficient computational methods used in RCC. After RQ development, we focused on the next step in the PRISMA guidelines: developing a search strategy.

2.2. Selection, search, and screening

2.2.1. Database selection and search

We constructed search strings from both Boolean logical operators and terms derived from the defined RQs. Terms within the strings cover RCC, messaging, and computational methods. Search string formatting varied across selected databases (Table 2).

TABLE 2.

Searched databases and respective strings used to query results.

Database: ACM Digital Library
Search string: [[All: “risk communication”] OR [All: “crisis communication”] OR [All: “hazard communication”]] AND [All: messag*] AND [[All: analysis] OR [All: computational analysis] OR [All: “natural language processing”] OR [All: “nlp”] OR [All: “artificial intelligence”] OR [All: “ai”]] AND [E‐Publication Date: (01/01/2018 TO 12/31/2023)]
Filters applied: Research articles only
Results: 138
Date searched: 2/7/2024

Database: IEEE Xplore
Search string: (“Full Text & Metadata”:“risk communication” OR “Full Text & Metadata”:“crisis communication” OR “Full Text & Metadata”:“hazard communication”) AND (“Full Text & Metadata”: messag*) AND (“Full Text & Metadata”: analysis OR “Full Text & Metadata”: computational analysis OR “Full Text & Metadata”: “natural language processing” OR “Full Text & Metadata”: “nlp” OR “Full Text & Metadata”: “artificial intelligence” OR “Full Text & Metadata”: “ai”)
Filters applied: Published between 2018 and 2023, journals and conferences only
Results: 166
Date searched: 2/7/2024

Database: PubMed
Search string: ((“risk communication” OR “crisis communication” OR “hazard communication”) AND (messag*)) AND (analysis OR computational analysis OR “natural language processing” OR “nlp” OR “artificial intelligence” OR “ai”)
Filters applied: Published between 2018 and 2023
Results: 144
Date searched: 2/7/2024

Database: Web of Science
Search string: ((AB=(“risk communication” OR “crisis communication” OR “hazard communication”)) AND AB=(messag*)) AND AB=(analysis OR computational analysis OR “natural language processing” OR “nlp” OR “artificial intelligence” OR “ai”)
Filters applied: Published between 2018 and 2023, articles only
Results: 192
Date searched: 2/7/2024

Note: The table also lists the filters applied to results, the number of results after filtering, and the search date for each database consulted. The search strings differed from each other because each database uses a different query format. An * in a search string indicates that the preceding term was stemmed.

We utilized the research databases IEEE Xplore, ACM Digital Library, Web of Science, and PubMed to find relevant RCC message literature. IEEE Xplore and ACM Digital Library were chosen for their collections of multidisciplinary computer science research; Web of Science and PubMed were chosen for their collections of multidisciplinary social science research. The database searches uncovered 1385 potentially relevant articles on message construction (548 from IEEE Xplore, 299 from ACM Digital Library, 313 from Web of Science, and 225 from PubMed).

2.2.2. Results screening

Literature identified from each database underwent automatic and manual screening. Filters for publication year and publication type narrowed the number of articles down to 640. Filtered articles had their titles and DOI links web scraped into dataframes corresponding to the database from which each article was identified. The scraping was done both manually and with the R package rvest. R was also used for postprocessing to filter out duplicate and invalid entries in each dataframe, reducing the number of articles from 640 to 627. Articles selected after the screening were subsetted and dispersed to six reviewers recruited for manual evaluation.
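The postprocessing step amounts to dropping invalid records and deduplicating by identifier. The authors performed this cleaning in R; the Python sketch below, with invented record names, is illustrative only.

```python
# Illustrative sketch of the post-scrape cleaning step (the authors used R;
# field names here are hypothetical). Drops invalid entries, then
# deduplicates by DOI.

def clean_records(records):
    """Remove entries missing a title or DOI, then deduplicate by DOI."""
    seen = set()
    cleaned = []
    for rec in records:
        title, doi = rec.get("title"), rec.get("doi")
        if not title or not doi:      # invalid entry
            continue
        if doi.lower() in seen:       # duplicate across databases
            continue
        seen.add(doi.lower())
        cleaned.append(rec)
    return cleaned

scraped = [
    {"title": "Article A", "doi": "10.1/a"},
    {"title": "Article A", "doi": "10.1/a"},   # duplicate
    {"title": "", "doi": "10.1/b"},            # invalid: no title
    {"title": "Article C", "doi": "10.1/c"},
]
print(len(clean_records(scraped)))  # → 2
```

Deduplicating by DOI rather than title avoids treating reformatted titles from different databases as distinct articles.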

2.3. Manual eligibility analysis

2.3.1. Inclusion criteria

We developed inclusion criteria to assess the relevance of the screened literature (Table 3). The criteria specified which topics and characteristics research‐relevant literature needed to include. For example, a relevant literature source needed to discuss computational RCC messaging, be published in English, be published between January 2018 and December 2023, and be a primary literature source. Such characteristics were chosen to ensure that the most recent, state‐of‐the‐art, and high‐impact literature was captured. The criteria assisted reviewers with assessing article relevance based on their abstracts.

TABLE 3.

Inclusion and exclusion criteria developed for the systematic literature review (SLR).

Criteria Status
Study/Research topic is on risk, crisis, or hazard communication Inclusion
Article covers computational or mixed methods used in research on the above topic Inclusion
Article covers computational or mixed methods used specifically in risk message construction/development Inclusion
Article is published between 2018 and 2023 Inclusion
Article is published in English Inclusion
Article is a research article from primary literature Inclusion
Article is not retracted, outdated, or pre‐published Inclusion
Article comes from a journal or conference proceeding Inclusion
Article does not meet all the above criteria Exclusion

2.3.2. Abstract reviews

Distribution of literature to reviewers occurred as follows. The lead author reviewed 317 abstracts, two co‐authors each reviewed 100, and one co‐author reviewed 70. Two additional reviewers each read 20 abstracts to lessen the workload of the main reviewers. Three reviewers with RCC knowledge took on most of the abstract review load (at least 100 abstracts each); the remaining three were doctoral students. Every reviewer had knowledge of the computational sciences. All reviewers worked independently with no overlap, meaning that no ties among reviewers could occur.

Prior to evaluating abstracts, all reviewers received a copy of the inclusion criteria (Table 3). Reviewers read each article's abstract to determine research relevance. If the abstract was unclear, reviewers assessed the introduction for relevance. Reviewers recorded each criterion that each article met in their spreadsheets. Criteria recorded as “YES” indicated that the article was potentially relevant; criteria recorded as “NO” indicated that the article should be excluded from the SLR. Reviewers returned their spreadsheets to the lead author after completion. The lead author did a brief quality check to ensure the spreadsheets were filled in properly, removing any duplicate entries. A total of 124 articles met the inclusion criteria and thus formed our corpus. These articles underwent meta‐analysis to further assess relevance.
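The reviewers' "YES"/"NO" recording scheme amounts to a simple conjunction over the inclusion criteria: an article is retained only if every criterion is met. A minimal sketch (criterion labels abbreviated and hypothetical) is:

```python
# Hypothetical sketch of the spreadsheet logic: an article is retained only
# if every inclusion criterion is recorded as "YES".
CRITERIA = [
    "topic is risk, crisis, or hazard communication",
    "covers computational or mixed methods",
    "published 2018-2023",
    "published in English",
]

def is_included(row):
    """row maps each criterion to 'YES' or 'NO'."""
    return all(row.get(c) == "YES" for c in CRITERIA)

review = {c: "YES" for c in CRITERIA}
print(is_included(review))              # → True
review["published in English"] = "NO"
print(is_included(review))              # → False
```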

2.4. Meta‐analysis

We conducted the meta‐analysis primarily using the R package tidytext. As part of the meta‐analysis, titles and hyperlinks for our corpus of articles were aggregated into one dataframe. Each entry contained full text that was manually scraped from corresponding hyperlinks. The text underwent data preprocessing and cleaning before text analysis and term tokenization. Tokenized terms classified as stopwords (e.g., “and,” “the,” and “2020”) were filtered out before calculating term frequencies.

We calculated the frequency of each term by dividing the number of occurrences of the term in an article by the total number of terms in that article. These term frequency calculations were foundational to calculating term frequency‐inverse document frequency (TF‐IDF) scores for terms. The top 50 TF‐IDF scores corresponded to the terms used most in our corpus (Figure 1). We then manually tagged terms that were relevant to our RQs (e.g., “communication,” “computational,” and “efficacy”; herein, “research‐relevant terms”). Terms with both a high TF‐IDF score and relevance to our RQs determined the next selection of literature. In total, 51 articles from our corpus contained research‐relevant terms and were thus selected to undergo manual quality assessment.
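As a concrete sketch of this calculation: the authors used the R package tidytext, and the Python version below (with a toy two-document corpus and an invented stopword list) mirrors the same term-frequency and TF-IDF definitions.

```python
# Minimal TF-IDF sketch of the meta-analysis step. Illustrative only; the
# study's actual implementation used R's tidytext.
import math
from collections import Counter

STOPWORDS = {"and", "the", "2020"}  # example stopwords from the text

def tf_idf(docs):
    """docs: list of token lists. Returns {(doc_index, term): score}."""
    n_docs = len(docs)
    df = Counter()          # document frequency: articles containing a term
    tokenized = []
    for doc in docs:
        terms = [t.lower() for t in doc if t.lower() not in STOPWORDS]
        tokenized.append(terms)
        df.update(set(terms))
    scores = {}
    for i, terms in enumerate(tokenized):
        counts = Counter(terms)
        total = len(terms)
        for term, c in counts.items():
            tf = c / total                     # occurrences / terms in article
            idf = math.log(n_docs / df[term])  # inverse document frequency
            scores[(i, term)] = tf * idf
    return scores

docs = [["risk", "communication", "risk"],
        ["flood", "warning", "and"]]
scores = tf_idf(docs)
# "risk" appears in 1 of 2 documents: tf = 2/3, idf = ln(2)
print(round(scores[(0, "risk")], 3))  # → 0.462
```

Note that a term appearing in every article gets an IDF of ln(1) = 0, which is why corpus-wide filler words score low even before stopword filtering.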

FIGURE 1.

Top 50 terms based on highest term frequency‐inverse document frequency (TF‐IDF) scores across our corpus of articles. Research‐relevant terms are shaded dark blue. Determining research relevance involved manual tagging of terms in R.

2.5. Article quality assessment

Selected articles underwent manual quality assessment of topical findings and credibility. Regarding topical findings, final inclusion criteria required that each article presented quantitative assessments of computational methods improving RCC message efficacy and/or efficiency. If an article did not present such quantitative assessments, the article was excluded. Regarding credibility, final inclusion criteria required that the presentation of the article was coherent and well‐reasoned. If grammatical errors, misspellings, illogical structure, or vagaries prevented the lead author from understanding the study, the article was excluded. Based on these criteria, 25 articles were excluded from the study. Twenty‐six articles made up the final selection of literature on computational risk message construction (Figure 2 shows how many articles were filtered out for each step in the SLR; also see Table 4 for article titles, authors, and topics addressed in the final selection of literature). Findings are summarized in the next section.

FIGURE 2.

Flowchart detailing each step of the systematic literature review (SLR) process for selecting relevant literature. The figure is based on the flowchart structure specified in the Preferred Reporting Items for Systematic Reviews and Meta‐Analyses (PRISMA) guidelines.

TABLE 4.

Final selection of literature on computational risk message construction identified in this systematic literature review (SLR).

Article title Authors Article topic
Arabic Twitter Corpus for Crisis Response Messages Classification Adel and Wang (2020) Researchers developed a corpus in the Arabic language to classify crisis communication messages/tweets based on what crisis category they fell under
Comparing the Effectiveness of Text‐based and Video‐based Delivery in Motivating Users to Adopt a Password Manager Albayram et al. (2021) Researchers compared the efficacy of text‐ and video‐based risk communication for motivating users to adopt password managers
The Saudi Ministry of Health's Twitter Communication Strategies and Public Engagement During the COVID‐19 Pandemic: Content Analysis Study Alhassan and AlDossary (2021) Researchers aimed to evaluate the Saudi Arabia Ministry of Health's use of Twitter and the public's engagement during different stages of the COVID‐19 pandemic in Saudi Arabia. They classified tweets based on the CERC framework
Hacked Time: Design and Evaluation of a Self‐Efficacy Based Cybersecurity Game Chen et al. (2020) Researchers developed a game that provided an interactive risk communication approach to improve risk perception of cybersecurity threats and self‐efficacy in users affected
Platform Effects on Public Health Communication: A Comparative and National Study of Message Design and Audience Engagement Across Twitter and Facebook DePaula et al. (2022) Researchers analyzed risk communication messages dispersed by government accounts on Facebook and Twitter, specifically looking at how the public engaged with these messages
Emotionality in COVID‐19 crisis communication from authorities and independent experts on Twitter Drescher et al. (2023) Researchers analyzed the sentiment (negative, neutral, or positive) of tweets from German health organizations during the early stages of COVID‐19
Knowing Your Audience: A Typology of Smoke Sense Participants to Inform Wildfire Smoke Health Risk Communication Hano et al. (2020) This study explored perspectives on wildfire smoke as a health risk among participants of Smoke Sense, a citizen science project with an objective to engage affected individuals on wildfire smoke. Researchers then developed effective health risk communication strategies to motivate individual‐level behavior change
Developing a gist‐extraction typology based on journalistic lead writing: A case of food risk news Ju and You (2018) This study aimed to construct a journalistic gist extraction typology to improve the development of risk communication messages. Researchers aimed to translate expert jargon into a format that was easy to read and digest for the public at large
Validation of mobile phone text messages for nicotine and tobacco risk communication among college students: A content analysis Khalil et al. (2018) Researchers constructed text messages for tobacco risk communication based on three main structures: framing (gain‐ or loss‐framed messages), depth (simple or complex messages), and appeal (emotional or rational messages)
Canadian COVID‐19 Crisis Communication on Twitter: Mixed Methods Research Examining Tweets from Government, Politicians, and Public Health for Crisis Communication Guiding Principles and Tweet Engagement MacKay, Cimino et al. (2022) This study described how crisis actors used guiding principles in COVID‐19 tweets and how the use of these guiding principles related to tweet engagement. Researchers classified tweets based on said guiding principles
Examining Social Media Crisis Communication during Early COVID‐19 from Public Health and News Media for Quality, Content, and Corresponding Public Sentiment MacKay et al. (2021) Researchers aimed to evaluate the quality and content of Canadian public health and news media crisis communication during the first wave of the COVID‐19 pandemic on Facebook and the subsequent emotional response to messaging by the public
A content analysis of Canadian influencer crisis messages on Instagram and the public's response during COVID‐19 MacKay, Ford et al. (2022) Researchers examined COVID‐19‐related crisis messages across Canadian influencer accounts on Instagram to examine their efficacy based on message constructs outlined by the Health Belief and Extended Parallel Processing models. Researchers also analyzed audience sentiment
Machine Learning Framework for Analyzing Disaster‐Tweets Manimegalai et al. (2023) This study analyzed the performance of computational classifier models when classifying types of disaster crisis tweets
Build community before the storm: The National Weather Service's social media engagement Olson et al. (2019) This study examined crisis communication on social media by observing how 12 National Weather Service (NWS) offices used Twitter to facilitate engagement with stakeholders during threat and nonthreat periods
Narrative Risk Communication as a Lingua Franca for Environmental Hazard Preparation Raile et al. (2022) Researchers developed a new risk communication framework that guides the construction of risk messages both using narrative structure and invoking narrative transport
Investigating the presentation of uncertainty in an icon array: A randomized trial Recchia et al. (2022) Researchers analyzed the efficacy of visual risk communication about the risks of breast and ovarian cancer for individuals carrying the BRCA1 pathogenic variant
User‐Generated Crisis Communication: Exploring Crisis Frames on Twitter during Hurricane Harvey Riddell and Fenner (2021) Researchers analyzed user‐generated crisis communication—as well as crisis communication distributed by organizations—to get a well‐founded understanding of how the public views risks and crises and what information was sought after
Communicating risk of medication side‐effects: role of communication format on risk perception Sawant and Sansgiry (2018) This study assessed the interaction effects of message format and contextual factors (rate of occurrence and severity) on risk perception of medication side‐effects after considering message format and contextual factors influencing risk perception
Characters matter: How narratives shape affective responses to risk communication Shanahan et al. (2019) Researchers analyzed the use and effectiveness of narrative elements in flood risk messaging. Subsequent messages were constructed, aiming to improve individual affective response and changes in intended behavior and risk perception
A machine learning approach to flood severity classification and alerting Sharma et al. (2021) Researchers leveraged several machine learning models and assessed their performance at classifying flood risk message types (advisory, information, warning, and watch)
Examining Tweet Content and Engagement of Canadian Public Health Agencies and Decision Makers During COVID‐19: Mixed Methods Analysis Slavik et al. (2021) This study examined the content and engagement of COVID‐19 tweets authored by Canadian public health agencies and decision makers, making suggestions on how to improve the efficacy of crisis communication based on the results
Qualitative analysis of visual risk communication on twitter during the Covid‐19 pandemic Sleigh et al. (2021) Researchers investigated how visual risk communication was used on Twitter to promote the World Health Organization's (WHO) recommended preventative behaviors and how this communication changed over time
Story mapping and sea level rise: listening to global risks at street level Stephens and Richards (2020) This study described the development of an interactive tool that juxtaposed coastal residents’ video‐recorded stories about sea level rise and coastal flooding with an interactive map that showed future sea level rise projections
An application of the extended parallel process model to protective behaviors against COVID‐19 in South Korea Yoon et al. (2022) This study applied the EPPM to understand factors that affect an individual's participation in protective behaviors against COVID‐19. Such factors included the effect of public perception of threat, the efficacy of fatalism, and undertaking protective behaviors
Sharing health risk messages on social media: Effects of fear appeal message and image promotion Zhang and Zhou (2020) This study examined how fear appeal and individuals’ image promotion consideration drove users’ intentions to share fear appeal messages on social networking sites
Understanding motivated publics during disasters: Examining message functions, frames, and styles of social media influentials and followers Zhao et al. (2019) Researchers analyzed how different message functions in risk and crisis communication were employed by Twitter users, both general users and popular influencers, using the Ariana Grande concert bombing event as the hazard subject

Note: Included are the 26 article titles, authors, and the topics discussed therein.

Abbreviation: CERC, Crisis and Emergency Risk Communication; EPPM, Extended Parallel Process Model.

3. RESULTS

We report the final selection of literature identified from the SLR. Findings presented within the first three subsections provide answers to our three RQs. These RQs ask what computational methods improve message construction efficacy and efficiency as well as what trade‐offs between them exist. We also present findings from the selected literature discussing other applications of computational methods in RCC.

3.1. Computational methods and message construction efficacy

Risk and crisis message construction utilizing computational methods is rarely discussed in RCC research. This SLR identified seven studies explicitly discussing computational methods used in RCC message construction (Chen et al., 2020; Khalil et al., 2018; Raile et al., 2022; Recchia et al., 2022; Sawant & Sansgiry, 2018; Shanahan et al., 2019; Stephens & Richards, 2020). Furthermore, studies on computational methods used in message construction discussed only efficacy, not efficiency.

Studies on message efficacy focused on combining social science theory and computational tools to construct risk messages (Chen et al., 2020; Raile et al., 2022; Sawant & Sansgiry, 2018; Shanahan et al., 2019), to analyze risk messages (Albayram et al., 2021; Alhassan & AlDossary, 2021; DePaula et al., 2022; Drescher et al., 2023; Ju & You, 2018; MacKay et al., 2021; MacKay, Cimino, et al., 2022; MacKay, Ford, et al., 2022; Olson et al., 2019; Riddell & Fenner, 2021; Slavik et al., 2021; Sleigh et al., 2021; Zhao et al., 2019), or to analyze audience interaction with RCC messaging (Albayram et al., 2021; DePaula et al., 2022; Hano et al., 2020; MacKay et al., 2021; MacKay, Cimino, et al., 2022; MacKay, Ford, et al., 2022; Riddell & Fenner, 2021; Slavik et al., 2021; Yoon et al., 2022; Zhang & Zhou, 2020; Zhao et al., 2019). Natural language processing (NLP) and content analysis were the most prevalent computational methods addressed for message construction, message analysis, and audience response analysis (Table 5).

TABLE 5.

Effective and efficient computational methods used for risk message construction, classification, and analysis across all selected articles.

Message aspect addressed Computational methods used Articles
Message classification efficiency Random forest, Naïve Bayes, support vector machine, logistic regression, extreme gradient boost, decision tree Adel and Wang (2020), Manimegalai et al. (2023), Sharma et al. (2021)
Message classification efficacy Content analysis, natural language processing, random forest, Naïve Bayes, support vector machine, logistic regression, Extreme Gradient Boost, decision tree, cluster analysis, Linguistic Inquiry and Word Count Adel and Wang (2020), Albayram et al. (2021), Alhassan and AlDossary (2021), DePaula et al. (2022), Khalil et al. (2018), MacKay et al. (2021), MacKay, Cimino et al. (2022), MacKay, Ford et al. (2022), Manimegalai et al. (2023), Olson et al. (2019), Sharma et al. (2021), Slavik et al. (2021)
Message construction efficacy Content analysis, natural language processing, transformational game design and programming, icon arrays, interactive story map development, Linguistic Inquiry and Word Count Chen et al. (2020), Khalil et al. (2018), Raile et al. (2022), Recchia et al. (2022), Sawant and Sansgiry (2018), Shanahan et al. (2019), Stephens and Richards (2020)
Message analysis efficacy Content analysis, chi‐squared analysis, natural language processing, cluster analysis, logistic regression, multiple regression, ANOVA/ANCOVA Albayram et al. (2021), Alhassan and AlDossary (2021), DePaula et al. (2022), Drescher et al. (2023), Hano et al. (2020), Ju and You (2018), MacKay et al. (2021), MacKay, Cimino et al. (2022), MacKay, Ford et al. (2022), Olson et al. (2019), Riddell and Fenner (2021), Slavik et al. (2021), Sleigh et al. (2021), Yoon et al. (2022), Zhang and Zhou (2020), Zhao et al. (2019)

Note: Some articles discussed more than one aspect of message development.

Computational methods helped operationalize theoretical frameworks for effective RCC messaging. Theoretical frameworks were embedded in codebook creation and generally had two aims: (1) to improve the efficacy of RCC messaging or (2) to identify effective elements in RCC messages. These codebooks were used for message construction (Chen et al., 2020; Khalil et al., 2018; Raile et al., 2022; Sawant & Sansgiry, 2018; Shanahan et al., 2019) and message analysis (Albayram et al., 2021; Alhassan & AlDossary, 2021; DePaula et al., 2022; Drescher et al., 2023; Hano et al., 2020; Ju & You, 2018; MacKay et al., 2021; MacKay, Cimino, et al., 2022; MacKay, Ford, et al., 2022; Olson et al., 2019; Riddell & Fenner, 2021; Sleigh et al., 2021; Yoon et al., 2022; Zhang & Zhou, 2020; Zhao et al., 2019). The Crisis and Emergency Risk Communication (CERC) model, the Narrative Policy Framework (NPF), the extended parallel process model (EPPM), and protection motivation theory (PMT) were the most prevalent operationalized theoretical frameworks (Table 6).

TABLE 6.

Theoretical frameworks discussed and implemented in risk message development across all final selections of articles.

Theoretical framework Article(s)
Crisis and Emergency Risk Communication model Alhassan and AlDossary (2021), MacKay et al. (2021), MacKay, Cimino et al. (2022)
Extended parallel process model MacKay, Ford et al. (2022), Yoon et al. (2022)
Fuzzy‐trace theory Ju and You (2018)
Hermann's crisis model Riddell and Fenner (2021)
Narrative Policy Framework Raile et al. (2022), Shanahan et al. (2019)
Narrative Risk Communication Framework Raile et al. (2022)
Precaution adoption process model Hano et al. (2020)
Prospect theory Sleigh et al. (2021)
Protection motivation theory Albayram et al. (2021), Chen et al. (2020)
Rhormann's risk communication process model Sawant and Sansgiry (2018)
Self‐efficacy design framework Chen et al. (2020)
Social media analytics framework Drescher et al. (2023)
Social‐mediated crisis communication model Zhao et al. (2019)
Unspecified or combined frameworks DePaula et al. (2022), Khalil et al. (2018), MacKay et al. (2021), MacKay, Cimino et al. (2022), MacKay, Ford et al. (2022), Olson et al. (2019), Riddell and Fenner (2021), Slavik et al. (2021), Zhang and Zhou (2020)

Note: Some articles addressed or combined multiple frameworks; some frameworks were not specified explicitly.
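As a minimal illustration of how a theoretical framework can be operationalized as a codebook, the sketch below encodes NPF character elements as keyword lists and codes a message against them. The keyword lists and element names are hypothetical and purely illustrative; in practice, codebooks are far richer and are developed and validated by trained human coders.

```python
# Hypothetical codebook operationalizing NPF character elements as keyword lists.
# Real codebooks are richer and validated by trained human coders.
NPF_CODEBOOK = {
    "hero": ["rescue", "firefighters", "responders", "volunteers"],
    "victim": ["residents", "families", "displaced", "injured"],
    "villain": ["polluter", "negligence", "storm", "wildfire"],
}

def code_message(message, codebook=NPF_CODEBOOK):
    """Return the framework elements whose keywords appear in the message."""
    tokens = set(message.lower().split())
    return sorted(
        element
        for element, keywords in codebook.items()
        if tokens.intersection(keywords)
    )
```

Coding each message in a corpus this way yields the framework-element counts that the content analyses cited above report.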

3.2. Computational methods and message construction efficiency

No studies in this SLR addressed using computational methods to improve message construction efficiency. What coverage of efficient computational methods existed instead analyzed and compared machine learning models on how well they classified risk messages based on framework or hazard keywords (Adel & Wang, 2020; Manimegalai et al., 2023; Sharma et al., 2021). Support vector machines (SVMs), Extreme Gradient Boosting (XGB), Naïve Bayes, and random forests were commonly used for efficient classification of message elements (Table 5).
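For intuition, the classification task these studies benchmark can be sketched with a from-scratch multinomial Naïve Bayes classifier over bag-of-words features. The hazard labels and training messages below are hypothetical; the published studies applied library implementations of SVMs, XGB, and similar models to much larger corpora.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesMessageClassifier:
    """Multinomial Naive Bayes over bag-of-words message features."""

    def __init__(self):
        self.class_counts = Counter()            # documents per class
        self.word_counts = defaultdict(Counter)  # word counts per class
        self.vocab = set()

    def fit(self, messages, labels):
        for text, label in zip(messages, labels):
            self.class_counts[label] += 1
            for word in text.lower().split():
                self.word_counts[label][word] += 1
                self.vocab.add(word)

    def predict(self, text):
        words = text.lower().split()
        total_docs = sum(self.class_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.class_counts:
            # log prior + log likelihoods with add-one (Laplace) smoothing
            score = math.log(self.class_counts[label] / total_docs)
            total_words = sum(self.word_counts[label].values())
            for word in words:
                count = self.word_counts[label][word] + 1
                score += math.log(count / (total_words + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label
```

Fitting the classifier on a handful of labeled flood and wildfire messages, for example, lets it route new messages to the matching hazard domain.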

3.3. Trade‐offs between method efficacy and efficiency

No articles analyzed trade‐offs between efficacy and efficiency in computational message construction. Limitations of effective and efficient computational methods were likewise rarely discussed. Rather, discussions mainly focused on the lack of NLP term dictionaries for low‐resource languages (Adel & Wang, 2020) and on NLP tools inconsistently analyzing sentiment in text (Drescher et al., 2023).

3.4. Message classification with computational methods

Classification of messages was the predominant application of computational methods in RCC. Computational methods helped identify and classify message elements that were effective in changing individuals’ risk perceptions, mitigation behavior, and self‐efficacy (Adel & Wang, 2020; Albayram et al., 2021; Alhassan & AlDossary, 2021; DePaula et al., 2022; Khalil et al., 2018; MacKay et al., 2021; MacKay, Cimino, et al., 2022; MacKay, Ford, et al., 2022; Manimegalai et al., 2023; Olson et al., 2019; Sharma et al., 2021; Slavik et al., 2021). The most prevalent computational methods used for message and element classification were NLP, content analysis, logistic regression, Naïve Bayes, SVMs, and XGB (Table 5). All the above computational methods contributed toward improving the efficacy and efficiency of message classification.

Operationalized theoretical frameworks were also used to classify messages on their efficacy (Alhassan & AlDossary, 2021; DePaula et al., 2022; Ju & You, 2018; MacKay et al., 2021; MacKay, Cimino, et al., 2022; MacKay, Ford, et al., 2022; Olson et al., 2019; Slavik et al., 2021; Zhao et al., 2019). The CERC model was the most common theoretical framework operationalized for message classification. Combined or unspecified frameworks were more common in message classification studies than in studies that used computational methods to analyze or construct RCC messages. These combined or unspecified frameworks were covered in nine articles (DePaula et al., 2022; Khalil et al., 2018; MacKay et al., 2021; MacKay, Cimino, et al., 2022; MacKay, Ford, et al., 2022; Olson et al., 2019; Riddell & Fenner, 2021; Slavik et al., 2021; Zhang & Zhou, 2020) (Table 6).

4. DISCUSSION

We report current and emerging aspects in RCC research that improve message construction. Discussions on specific methods used to improve message construction efficacy and efficiency, as well as their limitations, are also present in each subsection below. We also share insight into how cultural differences within affected populations can impact RCC message development.

4.1. Theoretical frameworks in risk and crisis communication

Effective RCC depends on the theoretical framework chosen for computational message development. The most commonly referenced frameworks used in message construction and analysis are the NPF (Raile et al., 2022; Shanahan et al., 2018, 2019) and PMT (Albayram et al., 2021; Chen et al., 2020; Maddux & Rogers, 1983), as described in Sections 4.1.1 and 4.1.2. Choosing between the two frameworks depends on how researchers want to motivate individuals to change risk mitigation behavior and risk perception.

4.1.1. Protection motivation theory (PMT)

RCC developed using PMT leverages fear appeals, engaging individuals' threat and coping appraisals when they face a hazard (Boss et al., 2015; Maddux & Rogers, 1983). The fear invoked by PMT messages is surmounted when an individual's response efficacy and self‐efficacy outweigh the perceived costs of taking protective action against the communicated hazard. The most common hazard domain applying PMT in messaging is cybersecurity, specifically RCC that encourages cybersecure behaviors in users (Albayram et al., 2021; Chen et al., 2020). Visual and interactive messaging embedded with PMT tenets improves user self‐efficacy and response efficacy and changes cybersecure behavior (Albayram et al., 2021; Chen et al., 2020).

4.1.2. The narrative policy framework (NPF)

The NPF is another theoretical framework used to improve message efficacy. The NPF asserts that narrative elements such as plot, setting, moral, and characters‐in‐action play an important role in the policy process (Shanahan et al., 2018). Specific characters that appear in narratives are heroes, villains, and victims (Shanahan et al., 2018). Hero characters can improve the efficacy of messages created using the NPF (Raile et al., 2022; Shanahan et al., 2019). Furthermore, communicating risk using narratives invokes narrative transportation (Green & Brock, 2000) and makes the messaging more personable and memorable for individuals (Dahlstrom, 2014; Raile et al., 2022; Shanahan et al., 2019; Stephens & Richards, 2020). Messages created with the NPF can heighten affective response and thereby have a greater impact on intended behavior as compared to strict science messages (Raile et al., 2022; Shanahan et al., 2019).

4.1.3. Trade‐offs between PMT and NPF

Inducing individual affective response differs between messages developed using PMT and the NPF. With PMT, negative affective response is induced through fear appeals (Albayram et al., 2021). With the NPF, positive affective response is induced through character selection (Raile et al., 2022). Risk messages that induce positively valenced affect motivate individual risk mitigation behavior better than messages that induce negatively valenced affect; however, negative affect appears to impact individual risk perceptions as much as positive affect does (Raile et al., 2022). For crisis communications, the magnitude of affective response may matter more than its valence because individuals need to take protective actions promptly (Albayram et al., 2021; Chen et al., 2020). However, in risk communications, we posit that the valence of affect may be more important if messages are deployed frequently.

RCC developed with either framework is effective at inducing affective response. Although messages developed with PMT can induce affective responses, these messages motivate risk mitigation behavior with varying degrees of success (Albayram et al., 2021; Chen et al., 2020). Factors that impact the success of these messages are individual risk perception of a hazard and increased self‐efficacy (Chen et al., 2020). Messages developed using the NPF, on the other hand, motivate risk mitigation behavior consistently when compared to conventional RCC messaging (Raile et al., 2022; Shanahan et al., 2019).

Messaging on a specific hazard, as opposed to a generalized hazard, changes how individuals perceive associated risks. Focusing attention on one hazard at a time limits cognitive overload and is correlated with changes in perception (Albayram et al., 2021; Chen et al., 2020). The medium through which a message is communicated also impacts individuals' risk perception. Visual or interactive elements derived from PMT tenets improve response efficacy and self‐efficacy (Albayram et al., 2021; Chen et al., 2020). Researchers have also developed visual messaging adhering to the NPF, showing it to be as effective at inducing affective responses as text‐based messaging (Guenther & Shanahan, 2021; Shanahan et al., 2023). By visualizing the risks of a hazard, individuals better understand how that hazard affects them, thus improving risk perception.

4.2. Effective computational methods in risk communication

Applications of and discussions on computational message construction are scant in RCC research. Existing research focuses on classifying or analyzing RCC messages, specifically message elements such as calls to action, hazard information, and visual media (Adel & Wang, 2020; Albayram et al., 2021; Alhassan & AlDossary, 2021; DePaula et al., 2022; Drescher et al., 2023; Hano et al., 2020; Ju & You, 2018; MacKay et al., 2021; MacKay, Cimino, et al., 2022; MacKay, Ford, et al., 2022; Manimegalai et al., 2023; Olson et al., 2019; Riddell & Fenner, 2021; Sharma et al., 2021; Slavik et al., 2021; Sleigh et al., 2021; Yoon et al., 2022; Zhang & Zhou, 2020; Zhao et al., 2019). Some articles discuss how the analyzed elements could improve future message construction, but none specify how to construct those messages (Albayram et al., 2021; DePaula et al., 2022; Drescher et al., 2023; Hano et al., 2020; Khalil et al., 2018; MacKay et al., 2021; MacKay, Cimino, et al., 2022; MacKay, Ford, et al., 2022; Manimegalai et al., 2023; Olson et al., 2019; Yoon et al., 2022). This failure to specify how messages are constructed reflects a larger problem in message development as a whole, not just in computational message construction (Reinhold, Raile, et al., 2023).

4.2.1. Natural language processing (NLP)

Little research delves into using computational methods to improve text‐based messaging (Reinhold, Raile, et al., 2023). When message construction methods are discussed, messages are constructed either with visual or video elements and compared against textual risk messaging (Chen et al., 2020; Recchia et al., 2022; Sawant & Sansgiry, 2018; Stephens & Richards, 2020). From the limited body of work, computational methods used in text‐based message construction are linguistic computer science tools such as Linguistic Inquiry and Word Count (Khalil et al., 2018) or NLP (Raile et al., 2022; Shanahan et al., 2019).

NLP has proven effective at improving message development. NLP tools have been applied to operationalize theoretical frameworks (Raile et al., 2022; Reinhold, Raile, et al., 2023; Shanahan et al., 2019) and to analyze message efficacy through audience engagement (Drescher et al., 2023; MacKay et al., 2021; MacKay, Ford, et al., 2022). NLP can automate the content analysis behind message development (Nelson, 2020; Raile et al., 2022; Shanahan et al., 2019) and help select terms associated with framework elements (Raile et al., 2022; Reinhold, Raile, et al., 2023; Shanahan et al., 2019). Additionally, NLP tools such as sentiment analysis can help determine how individuals respond to RCC messaging by examining the sentiment expressed in the messages (Drescher et al., 2023; MacKay et al., 2021; MacKay, Ford, et al., 2022).

4.2.2. Limitations with NLP

Although NLP is a powerful tool for textual analysis, limitations emerge with text classification, sentiment analysis, and topic modeling techniques. All three techniques struggle to assess the sentiment, classification, and topic relevance of context‐specific words or sentences (Drescher et al., 2023; Guetterman et al., 2018; Reinhold, Raile, et al., 2023; Silberztein, 2024). In contrast, manual classification and sentiment analysis can contextualize words better than equivalent NLP tools because humans intuit situational context better than computers (Guetterman et al., 2018; Silberztein, 2024). These difficulties in contextualizing terms can be attributed to the generalized word dictionaries that NLP tools rely on.

Sole reliance on generalized word dictionaries contributes to the disparity between manual text analysis and NLP tools. For example, sentiment analysis performed on RCC messages sometimes calculates a negative polarity score (i.e., the message is interpreted to have a negative sentiment) for a whole message even when the terms used are "neutral" in the given context (Drescher et al., 2023). Additionally, Part of Speech (POS) tagging with generalized word dictionaries often results in poor accuracy. Typical problems include distinguishing different noun types, ignoring multiword units, and assigning wrong POS tags (Silberztein, 2024). The nuances of human language make it challenging for computational methods to fully analyze or classify terms, a problem further exacerbated when applying NLP to low‐resource languages (Farghaly & Shaalan, 2009).
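The dictionary-mismatch problem can be made concrete with a toy lexicon-based polarity scorer. The lexicon entries and scores below are hypothetical, and real tools use far larger dictionaries, but the failure mode is the same: a term such as "risk" carries a fixed negative weight even inside a factually neutral advisory.

```python
# Hypothetical generalized sentiment lexicon (scores are illustrative only).
GENERAL_LEXICON = {
    "risk": -1.0, "danger": -2.0, "warning": -1.0,
    "crisis": -2.0, "safe": 2.0, "good": 1.0,
}

def polarity(message, lexicon=GENERAL_LEXICON):
    """Mean lexicon score over matched tokens; 0.0 if no token matches."""
    scores = [lexicon[w] for w in message.lower().split() if w in lexicon]
    return sum(scores) / len(scores) if scores else 0.0
```

Under the generalized lexicon, a neutral advisory mentioning "risk" scores negatively, whereas a domain-tuned lexicon that treats "risk" as neutral in a hazard-advisory context would not, which mirrors the disparity between automated and manual sentiment coding described above.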

NLP lacks substantial support for low‐resource languages (Ghafoor et al., 2021). Typical strategies take messages written in a low‐resource language (e.g., Arabic) and translate them into a high‐resource language (e.g., English) before applying word classification, term frequency analysis, or sentiment analysis (Adel & Wang, 2020; Farghaly & Shaalan, 2009; Ghafoor et al., 2021). Translating the original text often loses the context or meaning of the message (Ghafoor et al., 2021). However, this is not the only challenge present with NLP for low‐resource languages. These languages can contain linguistic and semantic ambiguities, and some do not adhere to punctuation or capitalization rules present in high‐resource languages (Farghaly & Shaalan, 2009; Ghafoor et al., 2021). Messages deployed lose their effectiveness if the wrong words are chosen or if the language structure is incoherent for message recipients.

4.3. Efficient computational methods in risk and crisis communication

RCC research largely overlooks applications of efficient computational methods in message development. Current applications of computational message construction, while improving construction and message efficacy, are time‐ and resource‐consuming (Reinhold, Munro, et al., 2023). Hence, researchers are turning their attention toward large language models (LLMs) for efficient message construction (Karinshak et al., 2023; Lynch et al., 2023; Reinhold, Munro, et al., 2023). Current work on constructing messages with LLMs combines zero‐shot learning and prompt engineering to develop accurate, high‐quality, and impactful messaging (Filippi, 2023; Lim & Schmälzle, 2023; Lynch et al., 2023, 2024). Additionally, advancements in generating non‐textual, multimedia forms of communication through LLMs are ongoing (Meskó, 2023; Moor et al., 2023).
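A minimal sketch of the zero-shot prompt-engineering step might assemble framework elements into a single instruction for an LLM. The function name, template wording, and framework elements below are hypothetical; they only illustrate how explicit guidance can be embedded in a prompt before any model is called.

```python
def build_rcc_prompt(hazard, audience, framework_elements):
    """Assemble a zero-shot prompt that embeds theoretical-framework guidance."""
    guidance = "; ".join(f"{k}: {v}" for k, v in framework_elements.items())
    return (
        f"You are a public risk communicator. Write a short {hazard} warning "
        f"for {audience}. Follow these narrative elements -- {guidance}. "
        f"Keep the message under 280 characters and include one concrete "
        f"protective action."
    )
```

The resulting string would then be submitted to an LLM; human review of the generated message remains essential.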

4.3.1. Limitations with efficiency

LLMs have become popular and powerful tools for text generation and communication research (Kalyan, 2024; Lynch et al., 2023). However, LLMs have technical and ethical limitations with respect to accountability, responsibility, safety, and honest use (Lynch et al., 2023; Sallam, 2023; Stokel‐Walker & van Noorden, 2023; van Dis et al., 2023). LLMs perform well when fed multimodal (Thirunavukarasu et al., 2023) and validated information (Gilbert et al., 2023; Karabacak & Margetis, 2023), and LLM prompts created with explicit guidance can result in consistent, well‐structured outputs (Filippi, 2023). Although these methods are implemented to mitigate concerns over validity, uncertainty, bias, and accountability when generating LLM outputs, challenges still present themselves. On the technical side, generating multiple messages utilizing the same prompt can result in temporal mismatches within the message content across a set of messages (Lynch et al., 2023). Issues also arise when irrelevant contextual or personal information is introduced into the prompt (Lynch et al., 2024; Shi et al., 2023). On the ethical side, LLMs can generate messages embedded with social, scientific, and psychological biases (Hagendorff & Danks, 2023; Zhang et al., 2020) or provide morally inconsistent advice (Krügel et al., 2023). In addition, responses given by popular LLM ChatBots, such as OpenAI's ChatGPT,10 can contain inaccurate or overgeneralized information, stemming from limited or biased training sets (van Dis et al., 2023). However, solutions are being explored to mitigate the impacts of position, verbosity, and self‐enhancement biases (Zheng et al., 2024).

These limitations with LLM message generation threaten the objectivity and accuracy of RCC messaging. Therefore, it is important that researchers take full consideration of the trade‐offs between timely message deployment and precise message content. With these concerns in mind, our position is that message construction should not be fully reliant on LLMs. Rather, LLM‐generated messages should involve human validation to ensure message content is accurate (Lynch et al., 2024) and to judge whether linguistic nuances are properly reflected (Nasution & Onan, 2024).

4.4. Impact of culture on risk and crisis communication messaging

Computational tools and theoretical frameworks are instrumental for improving RCC message construction efficacy and efficiency. However, variation in cultural contexts impacts the efficacy of computational tools. A significant cultural barrier is the predominance of the English language in computational linguistic tools and dictionaries (Adel & Wang, 2020; Farghaly & Shaalan, 2009; Ghafoor et al., 2021). For all other languages, the tools’ reliance on English requires that text be translated. Because translation can ignore or introduce semantic ambiguities, we posit that RCC messages will be less effective if they require translation. Consequently, we expect that linguistic barriers reduce the efficacy of messages when dispersed to non‐English‐speaking populations. Therefore, language is an important cultural consideration.

Language is not the only cultural barrier that impacts RCC messaging. Culture is inextricably linked to geolocation. For example, Asian countries tend to be more collectivist than Western countries like the United States (Yoon et al., 2022; Zhang, 2021). Collectivism is one cultural factor that can impact message receptivity. Message receptivity is also impacted by intercultural differences such as race, ethnicity, political affiliation and ideology, cultural norms, and prevailing personal beliefs and attitudes (e.g., religious and philosophical) (Chen et al., 2020; Raile et al., 2022; Shanahan et al., 2023; Zhang, 2021). Therefore, cultural influences on message receptivity cannot be tackled with computational methods alone because linguistic and other cultural considerations influence message efficacy.

5. THREATS TO VALIDITY

5.1. Construct validity

We identified construct validity as a potential threat. Construct validity refers to the extent to which an instrument or test reflects the construct being investigated (Reinhold, Raile, et al., 2023). Our investigation into computational message construction methods involved both manual and automatic filtering of the literature. The use of manual filtering methods can threaten construct validity. Both the abstract reviews and quality assessments relied on manual filtering: reading the key findings of a given article and recording criteria for SLR inclusion. The inclusion criteria served as a guide for assessing article relevance. Yet, differences in how reviewers assessed relevance could have introduced inconsistencies for articles that met the inclusion criteria in Table 3 by only a slim margin.

We selected reviewers who had a solid knowledge base in the computational sciences to ensure that literature on computational RCC was included in the SLR. Of the six reviewers, all were familiar with RCC, but only half were RCC experts. Although it would have been better if all reviewers had been RCC experts, our stringent and well‐defined inclusion criteria mitigated threats to construct validity resulting from some reviewers' limited expertise in RCC.

In addition, we did not conduct overlapping abstract reviews; distributed articles were unique to each reviewer. Distributing articles with reviewer overlap would have improved the reliability and construct validity of the study. However, the large volume of articles (Figure 2) made overlapping review a significant undertaking for our team; hence our decision not to distribute overlapping articles. We perceive the resulting threats to be limited to articles that were included or excluded by a narrow margin, as mentioned earlier in this section.

Computational filtering methods may also have introduced threats to construct validity. The TF‐IDF analysis potentially threatens construct validity by assuming that frequently occurring terms correlate with an article's relevance to computational risk message construction. The most frequently occurring terms across all vetted articles were not deemed research‐relevant, so it is possible that research‐relevant articles were falsely excluded or irrelevant articles falsely included.
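The weighting underlying such a TF-IDF filter can be sketched as follows; the toy abstracts and query terms are hypothetical and serve only to illustrate how term weight is treated as a proxy for topical relevance.

```python
import math
from collections import Counter

def tfidf_weights(docs):
    """Per-document TF-IDF weights over whitespace tokens.

    A term appearing in every document gets idf = log(1) = 0, so ubiquitous
    terms contribute nothing to any document's relevance score.
    """
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()  # document frequency per term
    for tokens in tokenized:
        df.update(set(tokens))
    n = len(docs)
    return [
        {term: (count / len(tokens)) * math.log(n / df[term])
         for term, count in Counter(tokens).items()}
        for tokens in tokenized
    ]

def relevance(weights, query_terms):
    """Sum of one document's TF-IDF weights over the query terms."""
    return sum(weights.get(t, 0.0) for t in query_terms)
```

Ranking abstracts by their summed weight for terms such as "risk" and "message" embodies exactly the assumption discussed above: high term weight stands in for topical relevance, which is why falsely included or excluded articles are possible.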

Another potential threat to construct validity stems from the database searches, specifically with the search string construction. The strings went through a month‐long refinement process to best capture relevant literature on RCC messaging. However, it is possible that the search strings did not capture all relevant literature on the topic. For example, the studies of Reinhold, Raile et al. (2023) and Lynch et al. (2023) were sources not captured in our SLR despite their discussion of computational RCC message construction. The search strings also yielded some results irrelevant to RCC messaging, which we manually filtered out both in the database search and results screening steps of this SLR (Figure 2).

5.2. External validity

External validity refers to whether the results of this study can be generalized beyond the specific research context (Bryman, 2016). We searched for literature on RCC messaging across four distinct databases. For each database, we limited our search to literature published between 2018 and 2023. This range was chosen to best capture RCC research covering LLMs or NLP. However, results from this research scope may be too specific to generalize to the broader body of RCC research.

6. CONCLUSION

The primary application of computational methods in RCC is message classification. Computational methods help classify effective RCC message elements, and similar methods efficiently classify RCC messages by hazard domain. However, computational methods are seldom used to improve the efficacy and efficiency of message construction. Although some RCC research leverages computational methods to improve message construction efficacy, improving construction efficiency is a nascent but rapidly growing area of research, accelerating with the maturation of LLMs. By improving message construction efficacy and efficiency, RCC messaging would have greater potential to quickly warn and better inform communities impacted by hazards. Thus, we recommend that future RCC research focus on developing computational methods that improve both efficacy and efficiency in message construction.

CONFLICT OF INTEREST STATEMENT

The authors declare no conflicts of interest.

ACKNOWLEDGMENTS

The authors thank A. Redempta Manzi Muneza and Tom McElroy from the MSU Software Engineering and Cybersecurity Lab (SECL) for their assistance with supplemental abstract reviews. We also thank Seyedmojtaba Mohasel from the MSU Department of Mechanical Engineering and Garrett Perkins and Zach Wadhams from the MSU SECL for providing insightful and constructive feedback on earlier drafts of this article.

Munro, M. H , Gore, R. J. , Lynch, C. J. , Hastings, Y. D. , & Reinhold, A. M. (2025). Enhancing risk and crisis communication with computational methods: A systematic literature review. Risk Analysis, 45, 1683–1697. 10.1111/risa.17690

Footnotes

6

R Core Team (2023). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R‐project.org/

7

Wickham H (2024). rvest: Easily Harvest (Scrape) Web Pages. R package version 1.0.4, https://CRAN.R‐project.org/package=rvest

8

Silge, J & Robinson, D (2024). Text Mining using ‘dplyr’, ‘ggplot2’, and Other Tidy Tools. R version 0.4.2, https://cran.r‐project.org/web/packages/tidytext/tidytext.pdf

REFERENCES

  1. Abrams, E. M. , Shaker, M. , & Greenhawt, M. (2022). COVID‐19 and the importance of effective risk communication with children. Paediatrics & Child Health, 27(Suppl.1), S1–S3. 10.1093/pch/pxab101 [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Adel, G. , & Wang, Y. (2020). Arabic Twitter corpus for crisis response messages classification. In Proceedings of the 2019 2nd International Conference on Algorithms, Computing and Artificial Intelligence (ACAI '19) (pp. 498–503). ACM. 10.1145/3377713.3377799 [DOI] [Google Scholar]
  3. Albayram, Y. , Liu, J. , & Cangonj, S. (2021). Comparing the effectiveness of text‐based and video‐based delivery in motivating users to adopt a password manager. In Proceedings of the 2021 European Symposium on Usable Security (EuroUSEC '21) (pp. 89–104). ACM. 10.1145/3481357.3481519 [DOI] [Google Scholar]
  4. Alhassan, F. M. , & AlDossary, S. A. (2021). The Saudi Ministry of Health's Twitter communication strategies and public engagement during the COVID‐19 pandemic: Content analysis study. JMIR Public Health Surveillance, 7(7), e27942. 10.2196/27942 [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Bartolucci, A. , Aquilino, M. C. , Bril, L. , Duncan, J. , & van Steen, T. (2023). Effectiveness of audience segmentation in instructional risk communication: A systematic literature review. International Journal of Disaster Risk Reduction, 95, 103872. 10.1016/j.ijdrr.2023.103872 [DOI] [Google Scholar]
  6. Boss, S. R. , Galletta, D. F. , Lowry, P. B. , Moody, G. D. , & Polak, P. (2015). What do systems users have to fear? Using fear appeals to engender threats and fear that motivate protective security behaviors. MIS Quarterly, 39(4), 837–864. https://www.jstor.org/stable/26628654 [Google Scholar]
  7. Bryman, A. (2016). Social research methods. Oxford University Press. [Google Scholar]
  8. Chen, T. , Stewart, M. , Bai, Z. , Chen, E. , Dabbish, L. , & Hammer, J. (2020). Hacked time: Design and evaluation of a self‐efficacy based cybersecurity game. In Proceedings of the 2020 ACM Designing Interactive Systems Conference (DIS '20) (pp. 1737–1749). ACM. 10.1145/3357236.3395522 [DOI] [Google Scholar]
  9. Dahlstrom, M. F. (2014). Using narratives and storytelling to communicate science with nonexpert audiences. Proceedings of the National Academy of Sciences of the United States of America, 111(Suppl.4), 13614–13620. 10.1073/pnas.1320645111 [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. DePaula, N. , Hagen, L. , Roytman, S. , & Alnahass, D. (2022). Platform effects on public health communication: A comparative and national study of message design and audience engagement across Twitter and Facebook. JMIR Infodemiology, 2(2), e40198. 10.2196/40198 [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Drescher, L. S. , Roosen, J. , Aue, K. , Dressel, K. , Schär, W. , & Götz, A. (2023). Emotionality in COVID‐19 crisis communication from authorities and independent experts on Twitter. Federal Health Gazette—Health Research—Health Protection, 66, 689–699. 10.1007/s00103-023-03699-z [DOI] [Google Scholar]
  12. Farghaly, A. , & Shaalan, K. (2009). Arabic natural language processing: Challenges and solutions. ACM Transactions on Asian Language Information Processing, 8(4), 1–22. 10.1145/1644879.1644881 [DOI] [Google Scholar]
  13. Fathollahzadeh, A. , Salmani, I. , Morowatisharifabad, M. A. , Khajehaminian, M. R. , Babaie, J. , & Fallahzadeh, H. (2023). Models and components in disaster risk communication: A systematic literature review. Journal of Education and Health Promotion, 12, 87. 10.4103/jehp.jehp_277_22 [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Filippi, S. (2023). Measuring the impact of ChatGPT on fostering concept generation in innovative product design. Electronics, 12(16), 3535. https://www.mdpi.com/2079‐9292/12/16/3535 [Google Scholar]
  15. Fischhoff, B. , & Downs, J. S. (1997). Communicating foodborne disease risk. Emerging Infectious Diseases, 3(4), 489–495. 10.3201/eid0304.970412 [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Ghafoor, A. , Imran, A. S. , Daudpota, S. M. , Kastrati, Z. , Abdullah, B. R. , & Wani, M. A. (2021). The impact of translating resource‐rich datasets to low‐resource languages through multi‐lingual text processing. IEEE Access, 9, 124478–124490. 10.1109/ACCESS.2021.3110285 [DOI] [Google Scholar]
  17. Gilbert, S. , Harvey, H. , Melvin, T. , Vollebregt, E. , & Wicks, P. (2023). Large language model AI chatbots require approval as medical devices. Nature Medicine, 29(10), 2396–2398. https://www.nature.com/articles/s41591‐023‐02412‐6 [DOI] [PubMed] [Google Scholar]
  18. Gore, R. , Ezell, B. , Lynch, C. J. , O'Brien, J. , Zamponi, V. , Jensen, E. , Reinhold, A. M. , Izurieta, C. , Munro, M. , & Shanahan, E. (2024). Building a domain agnostic framework for efficient and effective risk communication messages. MODSIM World 2024, 24, 1–11. https://modsimworld.org/papers/2024/MODSIM_2024_paper_26.pdf [Google Scholar]
  19. Green, M. C. , & Brock, T. C. (2000). The role of transportation in the persuasiveness of public narratives. Journal of Personality and Social Psychology, 79(5), 701. 10.1037/0022-3514.79.5.701 [DOI] [PubMed] [Google Scholar]
  20. Guenther, S. K. , & Shanahan, E. A. (2021). Communicating risk in human‐wildlife interactions: How stories and images move minds. PLoS ONE, 15(12), e0244440. 10.1371/journal.pone.0244440 [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Guetterman, T. C. , Chang, T. , DeJonckheere, M. , Basu, T. , Scruggs, E. , & Vydiswaran, V. V. (2018). Augmenting qualitative text analysis with natural language processing: Methodological study [Original Paper]. Journal of Medical Internet Research, 20(6), e231. 10.2196/jmir.9702 [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Hagendorff, T. , & Danks, D. (2023). Ethical and methodological challenges in building morally informed AI systems. AI and Ethics, 3(2), 553–566. 10.1007/s43681-022-00188-y [DOI] [Google Scholar]
  23. Hannes, K. , & Thyssen, P. (2022). Towards an inclusive Covid‐19 crisis communication policy in Belgium: The development and validation of strategies for multilingual and media accessible crisis communication. Deliverable 1: Scientific evidence feeding into the guideline development process—Rapid systematic literature review (pp. 1–53). Sciensano. [Google Scholar]
  24. Hano, M. C. , Prince, S. E. , Wei, L. , Hubbell, B. J. , & Rappold, A. G. (2020). Knowing your audience: A typology of smoke sense participants to inform wildfire smoke health risk communication. Frontiers in Public Health, 8, 2296–2565. 10.3389/fpubh.2020.00143 [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Heath, R. L. , Lee, J. , Palenchar, M. J. , & Lemon, L. L. (2018). Risk communication emergency response preparedness: Contextual assessment of the protective action decision model. Risk Analysis, 38(2), 333–344. 10.1111/risa.12845 [DOI] [PubMed] [Google Scholar]
  26. Jones, M. D. , Shanahan, E. A. , & McBeth, M. K. (2014). The science of stories: Applications of the narrative policy framework in public policy analysis. Springer. 10.1057/9781137485861 [DOI] [Google Scholar]
  27. Ju, Y. , & You, M. (2018). Developing a gist‐extraction typology based on journalistic lead writing: A case of food risk news. Heliyon, 4(8), e00738. 10.1016/j.heliyon.2018.e00738 [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Kalyan, K. S. (2024). A survey of GPT‐3 family large language models including ChatGPT and GPT‐4. Natural Language Processing Journal, 6, 100048. 10.1016/j.nlp.2023.100048 [DOI] [Google Scholar]
  29. Karabacak, M. , & Margetis, K. (2023). Embracing large language models for medical applications: Opportunities and challenges. Cureus, 15(5), e39305. [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Karinshak, E. , Liu, S. X. , Park, J. S. , & Hancock, J. T. (2023). Working with AI to persuade: Examining a large language model's ability to generate pro‐vaccination messages. Proceedings of the ACM on Human‐Computer Interaction, 7(CSCW1), 1–29. 10.1145/3579592 [DOI] [Google Scholar]
  31. Khalil, G. E. , Calabro, K. S. , Crook, B. , Machado, T. C. , Perry, C. L. , & Prokhorov, A. V. (2018). Validation of mobile phone text messages for nicotine and tobacco risk communication among college students: A content analysis. Tobacco Prevention & Cessation, 4, 7. 10.18332/tpc/84866 [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Krügel, S. , Ostermaier, A. , & Uhl, M. (2023). ChatGPT's inconsistent moral advice influences users’ judgment. Scientific Reports, 13(1), 4569. 10.1038/s41598-023-31341-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Li, J. , Cheng, K. , Wang, D. , Morstatter, F. , Trevino, R. P. , Tang, J. , & Liu, H. (2017). Feature selection: A data perspective. ACM Computing Surveys, 50(6), 1–45. 10.1145/3136625 [DOI] [Google Scholar]
  34. Lim, S. , & Schmälzle, R. (2023). Artificial intelligence for health message generation: An empirical study using a large language model (LLM) and prompt engineering [Original Research]. Frontiers in Communication, 8, 1129082. https://www.frontiersin.org/articles/10.3389/fcomm.2023.1129082 [Google Scholar]
  35. Lwin, M. O. , Lu, J. , Sheldenkar, A. , & Schulz, P. J. (2018). Strategic uses of Facebook in Zika outbreak communication: Implications for the crisis and emergency risk communication model. International Journal of Environmental Research and Public Health, 15(9), 1974. https://www.mdpi.com/1660‐4601/15/9/1974 [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Lynch, C. J. , Jensen, E. , Munro, M. H. , Zamponi, V. , Martinez, J. , O'Brien, K. , Feldhaus, B. , Smith, K. , Reinhold, A. M. , & Gore, R. (2024). GPT‐4 generated narratives of life events using a structured narrative prompt: A validation study. arXiv. 10.48550/arXiv.2402.05435 [DOI] [Google Scholar]
  37. Lynch, C. J. , Jensen, E. J. , Zamponi, V. , O'Brien, K. , Frydenlund, E. , & Gore, R. (2023). A structured narrative prompt for prompting narratives from large language models: Sentiment assessment of ChatGPT‐generated narratives and real tweets. Future Internet, 15(12), 375. https://www.mdpi.com/1999‐5903/15/12/375 [Google Scholar]
  38. MacKay, M. , Cimino, A. , Yousefinaghani, S. , McWhirter, J. E. , Dara, R. , & Papadopoulos, A. (2022). Canadian COVID‐19 crisis communication on Twitter: Mixed methods research examining tweets from government, politicians, and public health for crisis communication guiding principles and tweet engagement. International Journal of Environmental Research and Public Health, 19(11), 6954. 10.3390/ijerph19116954 [DOI] [PMC free article] [PubMed] [Google Scholar]
  39. MacKay, M. , Colangeli, T. , Gillis, D. , McWhirter, J. , & Papadopoulos, A. (2021). Examining social media crisis communication during early COVID‐19 from public health and news media for quality, content, and corresponding public sentiment. International Journal of Environmental Research and Public Health, 18(15), 7986. 10.3390/ijerph18157986 [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. MacKay, M. , Ford, C. , Colangeli, T. , Gillis, D. , McWhirter, J. E. , & Papadopoulos, A. (2022). A content analysis of Canadian influencer crisis messages on Instagram and the public's response during COVID‐19. BMC Public Health, 22(1), 763. 10.1186/s12889-022-13129-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Maddux, J. E. , & Rogers, R. W. (1983). Protection motivation and self‐efficacy: A revised theory of fear appeals and attitude change. Journal of Experimental Social Psychology, 19(5), 469–479. 10.1016/0022-1031(83)90023-9 [DOI] [Google Scholar]
  42. Manimegalai, R. , Kavisri, S. , Vasundhra, M. , & Grace, R. K. (2023). Machine learning framework for analyzing disaster‐tweets. In 2023 International Conference on Intelligent Systems for Communication, IoT and Security (ICISCoIS) (pp. 55–60). IEEE. https://ieeexplore.ieee.org/document/10100450 [Google Scholar]
  43. Meskó, B. (2023). The impact of multimodal large language models on health care's future. Journal of Medical Internet Research, 25, e52865. 10.2196/52865 [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. Moor, M. , Banerjee, O. , Abad, Z. S. H. , Krumholz, H. M. , Leskovec, J. , Topol, E. J. , & Rajpurkar, P. (2023). Foundation models for generalist medical artificial intelligence. Nature, 616(7956), 259–265. 10.1038/s41586-023-05881-4 [DOI] [PubMed] [Google Scholar]
  45. Nasution, A. H. , & Onan, A. (2024). ChatGPT label: Comparing the quality of human‐generated and LLM‐generated annotations in low‐resource language NLP tasks. IEEE Access, 12, 71876–71900. 10.1109/ACCESS.2024.3402809 [DOI] [Google Scholar]
  46. Nelson, L. K. (2020). Computational grounded theory: A methodological framework. Sociological Methods & Research, 49(1), 3–42. 10.1177/0049124117729703 [DOI] [Google Scholar]
  47. Ogie, R. I. , Rho, J. C. , & Clarke, R. J. (2018). Artificial intelligence in disaster risk communication: A systematic literature review. In 2018 5th International Conference on Information and Communication Technologies for Disaster Management (ICT‐DM) (pp. 1–8). IEEE. https://ieeexplore.ieee.org/document/8636380 [Google Scholar]
  48. Olson, M. K. , Sutton, J. , Vos, S. C. , Prestley, R. , Renshaw, S. L. , & Butts, C. T. (2019). Build community before the storm: The National Weather Service's social media engagement. Journal of Contingencies and Crisis Management, 27(4), 359–373. 10.1111/1468-5973.12267 [DOI] [Google Scholar]
  49. Page, M. J. , McKenzie, J. E. , Bossuyt, P. M. , Boutron, I. , Hoffmann, T. C. , Mulrow, C. D. , Shamseer, L. , Tetzlaff, J. M. , Akl, E. A. , Brennan, S. E. , Chou, R. , Glanville, J. , Grimshaw, J. M. , Hróbjartsson, A. , Lalu, M. M. , Li, T. , Loder, E. W. , Mayo‐Wilson, E. , McDonald, S. , … Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, n71. 10.1136/bmj.n71 [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. Page, M. J. , Moher, D. , Bossuyt, P. M. , Boutron, I. , Hoffmann, T. C. , Mulrow, C. D. , Shamseer, L. , Tetzlaff, J. M. , Akl, E. A. , Brennan, S. E. , Chou, R. , Glanville, J. , Grimshaw, J. M. , Hróbjartsson, A. , Lalu, M. M. , Li, T. , Loder, E. W. , Mayo‐Wilson, E. , McDonald, S. , … McKenzie, J. E. (2021). PRISMA 2020 explanation and elaboration: Updated guidance and exemplars for reporting systematic reviews. BMJ, 372, n160. 10.1136/bmj.n160 [DOI] [PMC free article] [PubMed] [Google Scholar]
  51. Raile, E. D. , Shanahan, E. A. , Ready, R. C. , McEvoy, J. , Izurieta, C. , Reinhold, A. M. , Poole, G. C. , Bergmann, N. T. , & King, H. (2022). Narrative risk communication as a lingua franca for environmental hazard preparation. Environmental Communication, 16(1), 108–124. 10.1080/17524032.2021.1966818 [DOI] [Google Scholar]
  52. Recchia, G. , Lawrence, A. C. E. , & Freeman, A. L. J. (2022). Investigating the presentation of uncertainty in an icon array: A randomized trial. PEC Innovation, 1, 100003. 10.1016/j.pecinn.2021.100003 [DOI] [PMC free article] [PubMed] [Google Scholar]
  53. Reinhold, A. M. , Munro, M. H. , Shanahan, E. A. , Gore, R. J. , Ezell, B. C. , & Izurieta, C. (2023). Embedding software engineering in mixed methods: Computationally enhanced risk communication. International Journal of Multiple Research Approaches, 15, 67–72. 10.29034/ijmra.v15n2a2 [DOI] [Google Scholar]
  54. Reinhold, A. M. , Raile, E. D. , Izurieta, C. , McEvoy, J. , King, H. W. , Poole, G. C. , Ready, R. C. , Bergmann, N. T. , & Shanahan, E. A. (2023). Persuasion with precision: Using natural language processing to improve instrument fidelity for risk communication experimental treatments. Journal of Mixed Methods Research, 17(4), 373–395. 10.1177/15586898221096934 [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. Reynolds, B. , & Seeger, M. W. (2005). Crisis and emergency risk communication as an integrative model. Journal of Health Communication, 10(1), 43–55. 10.1080/10810730590904571 [DOI] [PubMed] [Google Scholar]
  56. Riddell, H. , & Fenner, C. (2021). User‐generated crisis communication: Exploring crisis frames on Twitter during Hurricane Harvey. Southern Communication Journal, 86(1), 31–45. 10.1080/1041794X.2020.1853803 [DOI] [Google Scholar]
  57. Sallam, M. (2023). ChatGPT utility in healthcare education, research, and practice: Systematic review on the promising perspectives and valid concerns. Healthcare, 11(6), 887. [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Sawant, R. , & Sansgiry, S. (2018). Communicating risk of medication side‐effects: Role of communication format on risk perception. Pharmacy Practice (Granada), 16(2), 1174. 10.18549/PharmPract.2018.02.1174 [DOI] [PMC free article] [PubMed] [Google Scholar]
  59. Sellnow, T. L. , Ulmer, R. R. , Seeger, M. W. , & Littlefield, R. (2008). Effective risk communication: A message‐centered approach. Springer Science & Business Media. [Google Scholar]
  60. Shanahan, E. A. , DeLeo, R. A. , Albright, E. A. , Li, M. , Koebele, E. A. , Taylor, K. , Crow, D. A. , Dickinson, K. L. , Minkowitz, H. , Birkland, T. A. , & Zhang, M. (2023). Visual policy narrative messaging improves COVID‐19 vaccine uptake. PNAS Nexus, 2(4), pgad080. 10.1093/pnasnexus/pgad080 [DOI] [PMC free article] [PubMed] [Google Scholar]
  61. Shanahan, E. A. , Jones, M. D. , McBeth, M. K. , & Radaelli, C. M. (2018). The narrative policy framework. In Weible C. M. & Sabatier P. A. (Eds.), Theories of the policy process (4th ed., pp. 173–213). Westview Press. 10.4324/9780429494284-6 [DOI] [Google Scholar]
  62. Shanahan, E. A. , Reinhold, A. M. , Raile, E. D. , Poole, G. C. , Ready, R. C. , Izurieta, C. , McEvoy, J. , Bergmann, N. T. , & King, H. (2019). Characters matter: How narratives shape affective responses to risk communication. PLoS ONE, 14(12), 1–24. 10.1371/journal.pone.0225968 [DOI] [PMC free article] [PubMed] [Google Scholar]
  63. Sharma, P. , Kar, B. , Wang, J. , & Bausch, D. (2021). A machine learning approach to flood severity classification and alerting. In Proceedings of the 4th ACM SIGSPATIAL International Workshop on Advances in Resilient and Intelligent Cities (ARIC '21) (pp. 42–47). ACM. 10.1145/3486626.3493432 [DOI] [Google Scholar]
  64. Shi, F. , Chen, X. , Misra, K. , Scales, N. , Dohan, D. , Chi, E. H. , Schärli, N. , & Zhou, D. (2023). Large language models can be easily distracted by irrelevant context. In Proceedings of the 40th International Conference on Machine Learning, Proceedings of Machine Learning Research (ICML ’23) (pp. 31210–31227). ACM. https://proceedings.mlr.press/v202/shi23a.html [Google Scholar]
  65. Silberztein, M. (2024). The limitations of corpus‐based methods in NLP. In Silberztein M. (Ed.), Linguistic resources for natural language processing: On the necessity of using linguistic methods to develop NLP software (pp. 3–24). Springer Nature Switzerland. 10.1007/978-3-031-43811-0_1 [DOI] [Google Scholar]
  66. Slavik, C. E. , Buttle, C. , Sturrock, S. L. , Darlington, J. C. , & Yiannakoulias, N. (2021). Examining Tweet content and engagement of Canadian public health agencies and decision makers during COVID‐19: Mixed methods analysis. Journal of Medical Internet Research, 23(3), e24883. 10.2196/24883 [DOI] [PMC free article] [PubMed] [Google Scholar]
  67. Sleigh, J. , Amann, J. , Schneider, M. , & Vayena, E. (2021). Qualitative analysis of visual risk communication on Twitter during the Covid‐19 pandemic. BMC Public Health, 21, 1–12. 10.1186/s12889-021-10851-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  68. Stephens, S. H. , & Richards, D. P. (2020). Story mapping and sea level rise: Listening to global risks at street level. Communication Design Quarterly Review, 8(1), 5–18. 10.1145/3375134.3375135 [DOI] [Google Scholar]
  69. Stokel‐Walker, C. , & Van Noorden, R. (2023). The promise and peril of generative AI. Nature, 614, 214–216. 10.1038/d41586-023-00340-6 [DOI] [PubMed] [Google Scholar]
  70. Thirunavukarasu, A. J. , Ting, D. S. J. , Elangovan, K. , Gutierrez, L. , Tan, T. F. , & Ting, D. S. W. (2023). Large language models in medicine. Nature Medicine, 29(8), 1930–1940. [DOI] [PubMed] [Google Scholar]
  71. van Dis, E. A. M. , Bollen, J. , Zuidema, W. , van Rooij, R. , & Bockting, C. L. (2023). ChatGPT: Five priorities for research. Nature, 614, 224–226. 10.1038/d41586-023-00288-7 [DOI] [PubMed] [Google Scholar]
  72. Vos, S. C. , Sutton, J. , Yu, Y. , Renshaw, S. L. , Olson, M. K. , Gibson, C. B. , & Butts, C. T. (2018). Retweeting risk communication: The role of threat and efficacy. Risk Analysis, 38(12), 2580–2598. 10.1111/risa.13140 [DOI] [PubMed] [Google Scholar]
  73. Wang, D. , Weisz, J. D. , Muller, M. , Ram, P. , Geyer, W. , Dugan, C. , Tausczik, Y. , Samulowitz, H. , & Gray, A. (2019). Human‐AI collaboration in data science: Exploring data scientists' perceptions of automated AI. Proceedings of the ACM on Human‐Computer Interaction, 3(CSCW), 211. 10.1145/3359313 [DOI] [Google Scholar]
  74. Yoon, H. , You, M. , & Shon, C. (2022). An application of the extended parallel process model to protective behaviors against COVID‐19 in South Korea. PLoS ONE, 17(3), 1–15. 10.1371/journal.pone.0261132 [DOI] [PMC free article] [PubMed] [Google Scholar]
  75. Zhang, H. , Lu, A. X. , Abdalla, M. , McDermott, M. , & Ghassemi, M. (2020). Hurtful words: Quantifying biases in clinical contextual word embeddings. In Proceedings of the ACM Conference on Health, Inference, and Learning (CHIL '20) (pp. 110–120). ACM. 10.1145/3368555.3384448 [DOI] [Google Scholar]
  76. Zhang, X. , & Zhou, S. (2020). Sharing health risk messages on social media: Effects of fear appeal message and image promotion. Cyberpsychology: Journal of Psychosocial Research on Cyberspace, 14(2), 4. 10.5817/CP2020-2-4 [DOI] [Google Scholar]
  77. Zhang, X. A. (2021). Understanding the cultural orientations of fear appeal variables: A cross‐cultural comparison of pandemic risk perceptions, efficacy perceptions, and behaviors. Journal of Risk Research, 24(3–4), 432–448. 10.1080/13669877.2021.1887326 [DOI] [Google Scholar]
  78. Zhao, X. , Zhan, M. M. , & Liu, B. F. (2019). Understanding motivated publics during disasters: Examining message functions, frames, and styles of social media influentials and followers. Journal of Contingencies and Crisis Management, 27(4), 387–399. 10.1111/1468-5973.12279 [DOI] [Google Scholar]
  79. Zheng, L. , Chiang, W. L. , Sheng, Y. , Zhuang, S. , Wu, Z. , Zhuang, Y. , … Stoica, I. (2024). Judging LLM‐as‐a‐judge with MT‐bench and chatbot arena. Advances in Neural Information Processing Systems, 36, 1–12. [Google Scholar]

Articles from Risk Analysis are provided here courtesy of Wiley