. 2023 Apr 3;14(6):7827–7843. doi: 10.1007/s12652-023-04594-w

The role of explainable Artificial Intelligence in high-stakes decision-making systems: a systematic review

Bukhoree Sahoh 1,, Anant Choksuriwong 2
PMCID: PMC10069719  PMID: 37228699

Abstract

A high-stakes event is an extreme risk with a low probability of occurring but severe consequences (e.g., life-threatening conditions or economic collapse). The accompanying lack of information is a source of intense pressure and anxiety for emergency medical services authorities. Deciding on the best proactive plan and action in this environment is a complicated process, which calls for intelligent agents to automatically produce knowledge in the manner of human-like intelligence. Research in high-stakes decision-making systems has increasingly focused on eXplainable Artificial Intelligence (XAI), but recent developments in prediction systems give little prominence to explanations based on human-like intelligence. This work investigates XAI based on cause-and-effect interpretations for supporting high-stakes decisions. We review recent applications in the first aid and medical emergency fields from three perspectives: available data, desirable knowledge, and the use of intelligence. We identify the limitations of recent AI, and discuss the potential of XAI for dealing with such limitations. We propose an architecture for high-stakes decision-making driven by XAI, and highlight likely future trends and directions.

Keywords: Cause and effect, Machine learning, Causal discovery, Bayesian networks, Deep learning, Causal inference

Introduction

A complex situation with ambiguous factors, such as an emergency event, may cause a decision-maker (e.g., first responders, search and rescue personnel, or medical technicians) to choose erroneous solutions which may have catastrophic consequences. Such choices need practical strategies to guide effective planning (Bernardi et al. 2020). The decision-making process under these particular circumstances is called high-stakes decision-making (Kahn and Baron 1995). The goal is to understand the possible scenarios, and choose suitable solutions to prevent undesirable situations from becoming worse (Dornschneider 2019). There is a low probability of such situations occurring, but a high cost (e.g., loss of life) if they do.

A proactive plan must be produced that supports decision-makers, but this is difficult since high-stakes situations are by their nature insecure, unknowable, and unique (Oroszi 2018). The proactive plan requires observations and advanced knowledge to understand events, which needs human-like interpretation to speculate about catastrophic consequences and adaptive solutions. While observations can be obtained from big data (Xu et al. 2016), such data arrives in real time and is unstructured and diverse. Moreover, knowledge is only available implicitly through experts and practitioners, so modelling observations and their human-like interpretation is a challenge.

Artificial Intelligence (AI) is a principal technology for handling such problems, and plays an important role in decision-making systems (Chaudhuri and Bose 2020). Recent work has focused on curve-fitting models that match observations with deep learning (Liu et al. 2016). Zhou et al. (2018) and Sun et al. (2019) reviewed AI for supporting decision-making when a situation is insecure and high risk. They focused on AI-based first aid strategies for natural disasters, and claimed that curve-fitting models can address such complex patterns. Seo et al. (2017) and Stylianou et al. (2019) reviewed big data-driven approaches using AI for decision-making systems that spotlighted situations caused by man-made disasters. AI can summarize high-stakes data for first aid and medical emergency analysis when a situation is uncertain, complex, and highly sensitive. However, AI data summarization does not supply interpretations of the data to help understand the motivations behind high-stakes situations. Software agents without this interpretable ability cannot provide reasons for why predicted outcomes occur, and how they can be related to high-stakes situations. These are significant limitations, since decision-makers need predicted outcomes to have rational interpretations, and explanations which offer reasons for outcomes.

As a consequence, this paper focuses on data interpretation based on eXplainable Artificial Intelligence (XAI) for high-stakes decision making for first aid and medical emergencies. We investigate the potential of XAI for generating common-sense reasons based on cause-and-effect interpretation underlying data summarization, particularly the details for individual situations. Our contributions are as follows:

  • A review of high-stakes decision-making research using three perspectives: (1) available data that is a source of (2) desirable knowledge generated by (3) an intelligent approach;

  • A discussion of the limitations of traditional AI applications for this problem, and how these problems can be addressed by XAI;

  • The proposal of an architecture for high-stakes decision-making based on XAI, and the identification of new research trends and future challenges.

Section 2 presents background knowledge. Review methodologies of causal computing and related applications are examined in Sect. 3, and the results of these applications are discussed in Sect. 4. Section 5 concentrates on the limitation of traditional AI and introduces causal science, with Sect. 6 describing XAI-based architectures. Section 7 investigates future directions and open challenges, and our conclusions are given in Sect. 8.

Background knowledge

This section discusses high-stakes decision-making and the current problems that need to be solved in first aid and medical emergencies. We then describe a review methodology and protocols that identify the most relevant papers contributing to this problem-solving agenda.

High-stakes decision-making

Decision-making chooses the policies and actions that best respond to the environment in order to solve a problem. Human-cognitive understanding is needed to interpret a problem with complex factors in a dynamic environment, and to provide intuitive reasons for addressing the problem in a particular way.

High-stakes decision-making builds upon traditional decision-making by emphasizing human safety and security in medical emergencies. Possible catastrophic consequences have a low chance of occurring but can cause serious harm and high costs if they do. In general, a medical emergency is both uncertain and unpredictable due to poor evidence, limited information, and intense time pressures. High-stakes decision-making requires comprehensive information observed in a dynamic environment, and knowledge from human experience at multiple levels of care to help decision-makers understand how matters will develop. However, the large number of factors arising from modern frameworks [e.g., smart cities (Tonmoy et al. 2020), big data (Correia et al. 2020), and IoT (Mohan and Mittal 2020)] makes the discovery of such knowledge both time-consuming and labor-intensive. An intelligent system that can generate this information is urgently needed for high-stakes decision-making.

Jung et al. (2020) utilized AI-based deep learning in an emergency management system to predict high-stakes events. The system helped decision-makers identify events and take actions to protect human lives. Kaveh et al. (2020) and Chikaraishi et al. (2020) reviewed systems that aided high-stakes decision-making during disastrous events. They focused on a prediction approach that classified the output of high-stakes events that might cause poor decisions.

None of these AI systems considered the need for human-like interpretation, which would raise high-stakes event prediction towards cognitive understanding for interpreting how and why events are predicted. In other words, current AI systems can produce information that helps decision-makers answer "What is it?" but are unable to answer "How did it happen?" and "Why not something different?". For these questions, decision-makers need deep knowledge supplied by a human-like intelligence. The challenge is how to make AI systems exhibit such human-like intelligence to help decision-makers choose first aid that minimizes the loss of life and prevents significant, long-term adverse effects on physical health.

The need for XAI

Decision-makers need specialized knowledge from AI systems to ensure that their decisions are rational and comprehensive, but high-stakes situations are time sensitive and knowledge-poor, which makes it easy to make poor decisions. XAI responds to the limitations of data interpretation by employing the concept of human-like intelligence (Gunning et al. 2019; Lecue 2020). It mimics human-like intelligence to produce event interpretations based on three levels of understanding: evidence, actions, and alternative actions (Hagras 2018). XAI is relevant to various fields, but can be categorized by its types and methods to produce the interrelated taxonomy shown in Fig. 1, along with examples of the taxonomy.

Fig. 1 XAI Taxonomy classified by methods

XAI goes beyond the success of predicted accuracy with the ability to interpret events in human-understandable ways (Vilone and Longo 2020; Barredo Arrieta et al. 2020). In recent years, XAI has been reviewed in terms of different levels of explanation, such as observation, action, and contrastive action. Recent XAI review papers, classified by method and by the capability level of explanation, are shown in Table 1.

Table 1.

Summary of XAI review research by method and capability level of explanation

References Research objective Method Capability level of explanation (Observation / Action / Contrast)
Machlev et al. (2022) and Islam et al. (2022) Trends and challenges of XAI applications Explainable Black-Box
Nimmy et al. (2022) and Patrício et al. (2022) XAI methods for operational risk management Explainable Black-Box
Alicioglu and Sun (2022) Trends and challenges of XAI visual analytics Feature representation
Zhou et al. (2022) and Xiao et al. (2022) XAI methods for events representation Feature representation
Warren et al. (2022) XAI methods for contrastive action events Counterfactuals
Glymour et al. (2019) XAI methods for cause-effect discovery Structured-based learning

Each capability level offers different ways to help decision-makers understand high-stakes events, but the levels are dependent on each other. For example, a system cannot explain the action level if the observation level is unknown. It is therefore important to understand how each level is utilized in high-stakes decision-making systems and how the levels are interconnected.

Observation-based XAI utilizes evidential events in the environment to explain how two events might co-occur without influencing each other. For instance, the number of seriously ill in-patients was higher during the COVID-19 pandemic, so the two events were highly correlated, but neither directly caused the other. Evidence-level explanation employs human-like intelligence to monitor which events tend to co-occur frequently. It lets XAI develop advanced knowledge of how and why events occur, and explain how current phenomena happen. For instance, returning to the previous example, older people and diabetics are susceptible to COVID-19, which caused the number of severely ill in-patients to increase during the pandemic.
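This correlation-without-causation pattern can be sketched with a small simulation. All probabilities and variable names below are hypothetical, chosen only to illustrate a hidden common cause (membership in a vulnerable group) driving two observed events:

```python
import random

random.seed(0)

# Hypothetical illustration: a hidden confounder (being in a vulnerable
# group, e.g. older people or diabetics) raises the chance of both a
# severe case and an ICU admission, so the two observed events correlate
# strongly even though neither causes the other in this simulation.
n = 10_000
severe_cases, icu_admissions = [], []
for _ in range(n):
    vulnerable = random.random() < 0.3            # hidden common cause
    severe = vulnerable and random.random() < 0.6
    icu = vulnerable and random.random() < 0.5
    severe_cases.append(severe)
    icu_admissions.append(icu)

def prob(xs):
    return sum(xs) / len(xs)

p_icu = prob(icu_admissions)
p_icu_given_severe = prob(
    [icu for sev, icu in zip(severe_cases, icu_admissions) if sev]
)

# Observation-level explanation: the events co-occur far more often than
# independence would predict, yet the causal link runs through the
# confounder, not between the events themselves.
print(f"P(icu)          = {p_icu:.2f}")
print(f"P(icu | severe) = {p_icu_given_severe:.2f}")
```

Conditioning on the severe cases roughly triples the ICU probability here, which is exactly the co-occurrence signal that observation-level XAI monitors before any causal claim is made.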

Action-based XAI explains how controlling one event may prevent or monitor another event. The control in our example is based on the hypothesis, "If the illness of older people and diabetics is controlled by vaccination, then the number of severely ill in-patients should decrease". Such a statement can be generated when XAI sees enough observable events and understands what causes them. In other words, the action level must employ cause-and-effect understanding to produce explainable statements.
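A minimal sketch of such an action-level query, using a hypothetical three-node chain (vaccination causes less serious illness, which causes lower in-patient load) with made-up probabilities, simply propagates the intervention forward through the structural equations:

```python
# Hypothetical conditional probabilities, chosen for illustration only.
# Chain: vaccinate -> serious_illness -> high_inpatient_load
p_ill_given_vax = {True: 0.10, False: 0.45}    # P(illness | vaccination)
p_load_given_ill = {True: 0.80, False: 0.20}   # P(high load | illness)

def p_high_load(do_vaccinate: bool) -> float:
    """P(high in-patient load | do(vaccination = do_vaccinate)).

    Because vaccination is SET by an intervention rather than observed,
    we propagate the structural equations forward through the chain.
    """
    p_ill = p_ill_given_vax[do_vaccinate]
    return (p_ill * p_load_given_ill[True]
            + (1 - p_ill) * p_load_given_ill[False])

print(f"P(high load | do(vaccinate))     = {p_high_load(True):.2f}")
print(f"P(high load | do(not vaccinate)) = {p_high_load(False):.2f}")
```

The gap between the two printed probabilities is what licenses the explainable statement "vaccination should decrease the number of severely ill in-patients".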

Contrastive action-based XAI represents alternative events that have not happened because an action did not occur. However, the system can reason with counterfactuals such as, "If the action had been taken, then the situation would have been different". For example, "If a COVID-19 vaccine had not been given to older people and diabetics, then the number of severely ill in-patients would have grown exponentially". XAI can intelligently explain what might have happened even though no such action was carried out, giving decision-makers information that reveals a situation's hidden knowledge.
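The standard three-step recipe for such counterfactuals (abduction, action, prediction) can be sketched on a toy structural causal model. The equation and the observed values below are invented for illustration:

```python
# Toy structural causal model (illustrative numbers, not from the paper):
#   load = 0.5 * (1 - vaccinated) + noise
# Counterfactual query: we OBSERVED vaccinated = 1 and load = 0.2; what
# would the load have been HAD the vaccine not been given?

observed_vaccinated = 1
observed_load = 0.2

def load_equation(vaccinated, noise):
    return 0.5 * (1 - vaccinated) + noise

# Step 1 (abduction): recover the noise consistent with the observation.
noise = observed_load - 0.5 * (1 - observed_vaccinated)   # -> 0.2

# Step 2 (action): replace the vaccination value by its contrastive
# alternative. Step 3 (prediction): re-run the structural equation with
# the SAME recovered noise, keeping everything else about the world fixed.
counterfactual_load = load_equation(0, noise)

print(f"actual load:         {observed_load:.2f}")
print(f"counterfactual load: {counterfactual_load:.2f}")   # 0.70
```

Reusing the abduced noise is what makes this a counterfactual about the same situation, rather than a prediction about a fresh one.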

These three capability levels offer human-like explanations that can drive high-stakes decision-making systems. However, recent XAI studies have not examined the trends and directions of these levels, or how they are connected to each other to achieve human-like intelligence. Therefore, our study reviews the fundamental principles of intelligence that XAI must possess to help decision-makers deal with high-stakes situations: observations, actions, and counterfactuals (also known as contrastive actions). As such, our work falls into the causal AI category included in Fig. 1.

Review methodology of high-stakes decision-making

XAI needs causal thinking to choose actions based on ground truth, and to adjust the ground truth based on the success or failure of those actions. It models human-like agent intelligence to make sense of observational data from the dynamic environment (Rohmer 2020). This may allow XAI to imitate the human ability to understand high-stakes situations and decide on the best actions to take. The research questions are:

  1. What are the available observations that will help explain high-stakes decision-making?

  2. What desirable knowledge produces human-like explanations?

  3. What mechanisms express observations and desirable knowledge?

The papers which address these questions are filtered using criteria based on search protocol, inclusion and exclusion criteria, and data extraction and analysis.

Search protocol

Our investigation considers literature on high-stakes decision-making published in English between 2010 and 2020. The literature on causal computing attempts to understand why such events affect human beings and how to prevent those events. Our study focuses on high-stakes events and medical emergencies in high-cost environments, such as road accidents, epidemics, and disasters, which may have life-or-death consequences. High-stakes decision-making in first aid and medical emergencies is interdisciplinary, covering topics such as decision science, emergency medical services, disaster management, risk reduction, and safety and security science.

These terms (or their synonyms) for each field may appear in the papers' titles, abstracts, or keywords, and searches were carried out over the standard academic databases, including Elsevier, Springer, Nature, ACM, IEEE, Taylor and Francis, Wiley, MDPI, and IGI-Global. Applying the keywords criterion resulted in the retrieval of 3237 papers.

Inclusion and exclusion criteria

Paper selection followed the guidelines in the literature review (Guillemin et al. 1993; Randolph 2009). A "quality" criterion selected 1,264 papers whose research methods were both sound and repeatable, based on a study of titles, abstracts, keywords, and conclusions. A "domain" criterion focused on whether the papers examined high-cost environments or life-and-death situations, which reduced the number of relevant papers to 701. An "AI" criterion looked for papers on available observations, human-like intelligence, and knowledge for supporting high-stakes decision-making, which further reduced the number of papers to 516. The selection process is shown in simplified form in Fig. 2.

Fig. 2 Procedures for paper inclusion

The distribution of keywords concerned with high-stakes decision-making in the papers is shown in Fig. 3.

Fig. 3 Relevant keywords concerned with high-stakes decision-making

The most common keywords are high-risk situations, risk management, and emergency management. Also relevant are disaster management, natural disasters, disaster events, critical events, and event crisis management. The use of direct keywords, such as medical emergency and high-stakes decisions, is still in its infancy in human-like intelligence research, which makes it challenging to discover work on dealing with high-stakes events.

Data extraction and analysis

Our aim is to highlight how recent causal computing applications deal with excessive observational data, uncertain environments, and unknown patterns of knowledge. We do this by considering applications that focus on high-stakes decision-making, aligned with: (1) the investigation of available observations, (2) the categorization of intelligent approaches, and (3) the characterization of desirable knowledge. Relevant applications were found in two ways: (1) by studying the top-ten most cited papers in these fields over the past ten years, and (2) by examining work published during 2020 that reflects the domain's current concerns.

Systematic review results

Our results are categorized according to (1) available observations, (2) the characterization of desirable knowledge, and (3) how the intelligent approach could contribute to our research.

Available observations for XAI

Observational data allows people to understand how and why events have happened; decision-making on policies and actions occurs after seeing such data. Observational data acts as human-like sensory input to the XAI, with several varieties producing desirable knowledge. Table 2 lists the sources used by the applications described in the top-ten cited papers, while Table 3 shows the applications reported in recent papers.

Table 2.

Top-ten cited research papers' use of observational data

Contribution Research proposal Data sources Citations
Reddy et al. (2010) Event Classification System for Abnormal Activity in Transportation GPS, Accelerometer 912
Yin et al. (2012) Situation Awareness Classification System for Impact Assessment in Emergency Social Media 711
Muro-de-la-Herran et al. (2014) Human Gait Classification System for Patient Monitoring and Early Diagnosis Physical Sensors, Images 708
Kim and Hastak (2018) Event Pattern Recognition for Natural Disaster Analysis Social Media 578
Crooks et al. (2013) Spatial and Temporal Detection System for Impact Assessment in Emergency Social Media 449
Wang et al. (2013) Event Pattern Recognition System for Abnormal Activity in Transportation GPS 322
Kryvasheyeu et al. (2016) Spatial and Temporal Detection System for Impact Assessment in Emergency Social Media 315
Middleton et al. (2014) Spatial and Temporal Detection System for Impact Assessment in Emergency Social Media 306
Takahashi et al. (2015) Situation Awareness Classification System for Natural Disaster Analysis Social Media 271
Latonero and Shklovski (2011) Situation Awareness Classification System for Natural Disaster Analysis Social Media 236

Table 3.

Latest research papers' use of observational data

Contribution Research proposal Data source Available online
Kersten and Klan (2020) Spatial and Temporal Detection System for Impact Assessment in Emergency Social media 29 Sep 2020
Madichetty (2020) Event Detection and Pattern Recognition System for Natural Disaster Analysis Social media 8 Aug 2020
Loynes et al. (2020) Spatial and Temporal Detection System for Impact Assessment in Emergency Social media 10 Jul 2020
Sahoh and Choksuriwong (2020) Event Detection System for Man-made Disaster Analysis Social media 9 Jun 2020
Yu et al. (2020) Situation Awareness Classification System for Natural Disaster Analysis Meteorological record 15 Apr 2020
Kuo et al. (2020) Waiting Time Prediction System for Medical First Aid Visited patient record 12 Apr 2020
Wang et al. (2020) Event Detection System for Man-made Disaster Analysis Scenario generation 6 Mar 2020
Kavota et al. (2020) Spatial and Temporal Detection System for Impact Assessment in Emergency Social media 14 Jan 2020
Formosa et al. (2020) Event Detection System for Abnormal Activity in Transportation Traffic data 10 Jan 2020
Ghafarian and Yazdi (2020) Event Detection System for Man-made and Natural Disaster Analysis Social media 2 Nov 2019

Table 2 shows that social media is the primary source of observational data in high-stakes applications that assess impacts in an emergency or natural disaster. Data extracted from physical sensors, such as GPS units, accelerometers, and surveillance cameras, is mainly employed for pattern recognition and classification.

Table 3 shows that social media remains the most popular observational data source: it reflects social movement in real time, its infrastructure is free to use, and its data types are flexible. Meteorological records, visited-patient records, scenario generation, and traffic data are not real-time sources, but are highly credible because they were prepared with statistical and data-analytic methods. This makes them suitable for high-quality prior-knowledge modeling, but such data may not be applicable to the real-time processing required by high-stakes decision systems.

Although various observational data sources are utilized in high-stakes decision-making, a recent trend is the use of a single data source, either real-time or statistical. Statistical data (Marcot and Penman 2019) and real-time data (Song et al. 2020) play an essential role in designing causal models, which suggests that multi-source fusion at an earlier stage of a serious event would supply more knowledge to high-stakes decision-making (Xiao 2019).

Characterization of desirable knowledge for XAI

Desirable knowledge is the set of conclusions drawn by XAI or people after seeing observational data. It is employed to better understand the real-world problem, and lets XAI and people act effectively. The most common types are characterized by the questions What, Where, When, Who, Why, and How (Lawless et al. 2020). Indeed, interpreting situations by answering such questions is a fundamental skill of human intelligence, and XAI must mimic this faculty to make sense of observational data.

Table 4 lists the most cited papers published between 2010 and 2020 that examine these types of questions.

Table 4.

The top-ten cited papers that consider critical questions

Contribution Research proposal Critical question Citations
Yates and Paquette (2011) Critical Factor Study for Natural Disaster Analysis How, Why 1079
Zhou et al. (2011) Critical Management System for Natural Disaster Analysis What, How 357
McGuire and Silvia (2010) Critical Factor Study for Emergency Management Strategies How, Why 297
Koliba et al. (2011) Critical Factor Study for Emergency Management Strategies How, Why 195
Frazier et al. (2013) Spatial and Temporal Study for Emergency Management Strategies Where, When, How 161
Tan et al. (2015) Spatial and Temporal Detection System for Impact Assessment in Emergency Where, What 146
Brugger et al. (2013) Critical Factor Study for Emergency Management Strategies How, What, Who 139
Wang et al. (2016) Spatial and Temporal Detection System for Impact Assessment in Emergency Where, What, When 119
Seppänen et al. (2013) Management System for Emergency Management Strategies Where, How, When 118
Tashakkori et al. (2015) Spatial Detection System for Impact Assessment in Emergency Where 110

The "How and Why" group of questions is the most influential in the domain of disaster management, where decision-makers need to explain the reasons for high-stakes events. However, the "Where, When, and What" group has been most intensively researched, since decision-makers must also assess the impact of emergency management. Table 5 lists the top-ten most cited papers on critical questions from 2020.

Table 5.

2020 research papers on critical questions

Contribution Research proposal Critical question Available online
Huang et al. (2020) Spatial Detection System for Natural Disaster Analysis What, When, Where 28 Nov 2020
Löchner et al. (2020) Situation Awareness System for Emergency Management Strategies What 26 Nov 2020
Frazier et al. (2020) Critical Factor Study for Natural Disaster Analysis What, How 19 Nov 2020
Kroll et al. (2020) Situation Awareness System for Impact Assessment in Emergency Where, When 20 Jun 2020
Kankanamge et al. (2020) Critical Factor Study for Impact Assessment in Emergency How, Why 26 Mar 2020
Fekete (2020) Spatial Detection System for Emergency Management Strategies Where 23 Feb 2020
Albrecht et al. (2020) Spatial and Temporal Detection System for Natural Disaster Analysis Where, When 31 Jan 2020
Valenzuela et al. (2020) Situation Awareness Study for Natural Disaster Analysis How, Why 29 Jan 2020
Cheikhrouhou et al. (2020) Spatial and Temporal Detection System for Impact Assessment in Emergency Where, What, When 13 Jan 2020
Bruni et al. (2020) Temporal Detection System for Impact Assessment in Emergency When 12 Nov 2019

Table 5 shows that the trends of Table 4 are continuing, with the "Where, When, and What" group still receiving more attention than the "How and Why" group. In addition, recent research aims to predict, detect, and classify high-stakes events in terms of location and time. "How and Why" questions are also uncovering high-stakes factors based on statistical descriptions in order to explain the motivation behind events. Unfortunately, researchers have not yet considered how to compute this information automatically using XAI.

The "Where, When, and What" group focuses on object identification but does not interpret how some objects are detected while others are not. The "How and Why" group might produce reasons for object identification, but system-based object interpretation is still at an early stage of development. The challenge is to design intelligent approaches that provide "Where, When, and What" answers while also generating contextual reasons in answer to "How and Why" questions.

Intelligent approach for XAI

An intelligent approach employs an automatic mechanism to express the knowledge that XAI and people use to understand physical reality based on causal assumptions. It relies on tractable algorithms that are transparent, testable, and understandable by both humans and machines, because the information must be exchanged and transferred, and the assumptions may need to be modified if their conclusions conflict with real-life events (Pearl et al. 2016).

High-stakes decision-making systems convert observational data into knowledge based on human-like understanding that XAI and ordinary people can interpret in the same manner. Machine learning (ML) can model experience as knowledge, automatically learning new tasks and improving its performance from observational data to produce desirable knowledge that supports the decision-making process (Al-Asadi and Tasdemir 2022). Such a model can speed up and simplify time-consuming and labor-intensive tasks.

Table 6 lists the top-ten cited papers that utilize machine learning in high-stakes decision-making applications.

Table 6.

The top-ten cited research papers using intelligent approaches

Contribution Research proposal Intelligent approach Citations
Khandani et al. (2010) Event Detection System for Risk Analysis Supervised ML 411
Marjanović et al. (2011) Spatial Detection System for Impact Assessment in Emergency Supervised ML 287
de Albuquerque et al. (2015) Spatial Detection System for Impact Assessment in Emergency Generalized Linear Model 269
Barboza et al. (2017) Event Detection System for Risk Analysis in Business Supervised ML 258
Gonzalez et al. (2016) Event Detection System for Environmental Monitoring Analysis Template Matching Model 255
Taylor et al. (2016) Event Detection System for Medical Emergency Management Supervised ML 229
Peng et al. (2011) Event Clustering System for Risk Analysis in Emergency Unsupervised ML 176
Delir Haghighi et al. (2013) Event Explaining System for Medical Emergency Management Unsupervised ML 166
Ghorbanzadeh et al. (2019) Spatial Detection System for Natural Disaster Analysis Supervised ML 123
Ragini et al. (2018) Spatial Detection System for Natural Disaster Analysis Unsupervised ML 110

The most cited papers in the last decade describe event and location detection systems. Supervised machine learning is the most popular approach for the critical analysis of high-cost and life-and-death environments. Table 7 lists the top-ten papers using intelligent approaches published in 2020.

Table 7.

The latest research papers using intelligent approaches

Contribution Research proposal Intelligent approach Available online
Bai et al. (2020) Situation Awareness System for Real-time Assessment Supervised ML 21 Nov 2020
Eligüzel et al. (2020) Event Detection System for Natural Disaster Analysis Supervised ML 10 Sep 2020
Ferner et al. (2020) Event and Location Detection System for Impact Assessment Unsupervised ML 25 Jul 2020
Devaraj et al. (2020) Disaster Management System for Emergency Strategies Supervised ML 20 Jul 2020
Raza et al. (2020) Situation Awareness System for Impact Assessment Supervised ML 27 Jun 2020
Kumar et al. (2020) Event Detection System for Natural Disaster Analysis Supervised ML 16 Jan 2020
Fan et al. (2020) Spatial Detection System for Natural Disaster Analysis Supervised ML 10 Jan 2020
Chen et al. (2020) Spatial Detection System for Impact Assessment Supervised ML 19 Dec 2019
Anbarasan et al. (2020) Event Detection System for Natural Disaster Analysis Supervised ML 19 Nov 2019
Zahra et al. (2020) Event Detection System for Natural Disaster Analysis Semi-Supervised ML 27 Sep 2019

Table 7 shows that the use of intelligence in event and location detection is growing, and that supervised machine learning is mostly applied in the domain of natural disasters.

Although these tables confirm that intelligence has been intensively studied in high-stakes decision-making, they also indicate that "how and why" event interpretation and argumentation still go largely unnoticed.

Supervised machine learning utilizes curve-fitting technologies such as Support Vector Machines (SVM), K-Nearest Neighbors (KNN), and Artificial Neural Networks (ANN). It models enormous training datasets and directly matches them to their labels, giving an intelligent approach that answers Where, When, and What questions effectively. However, it cannot give reasons for How such answers are discovered or Why another explanation was not proposed. Curve-fitting technologies are black-box models which produce parameters that fit the data well, but they cannot explain how the results are computed because their functions are too complex for people or XAI to understand. This lack becomes troublesome when the model produces incorrect answers that conflict with physical reality, because the system cannot explain why and how a situation has happened; the responsibility then shifts to the decision-makers to manually determine the How and Why of the situation. Fitting observational data with blind computing algorithms that ignore how nature works in physical reality is the cause of these problems.
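As a sketch of why such curve-fitting answers "What?" without "Why?", consider a minimal hand-rolled KNN classifier. The feature vectors, labels, and the two-dimensional feature space are all hypothetical, invented for illustration:

```python
import math
from collections import Counter

# Hypothetical training set: each event is (feature vector, label),
# where the label is the answer to "What is it?".
training = [
    ((2.0, 8.0), "natural_disaster"),
    ((2.5, 7.5), "natural_disaster"),
    ((8.0, 1.0), "man_made_disaster"),
    ((7.5, 2.0), "man_made_disaster"),
]

def knn_predict(x, k=3):
    """Label x by majority vote among its k nearest training points."""
    dists = sorted((math.dist(x, feats), label) for feats, label in training)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# The model answers "What?" purely by distance to stored examples ...
print(knn_predict((2.2, 7.8)))   # natural_disaster

# ... but it holds no representation of WHY that answer follows, or why
# the alternative label was rejected: the decision is geometry over
# observations, not a cause-and-effect account of the event.
```

Even this transparent toy exposes the limitation the text describes: the "explanation" it could offer is only "these stored points were closer", which is a statement about the data, not about the situation.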

The shift from data science towards causal science

Black-box models are fine in a low-cost environment such as weather prediction, but are insufficient for high-cost environments involving life-or-death decisions. Such situations need more than data summarization: they need an interpretation of How and Why such data was generated. For example, the report, "There is a 97% prediction that COVID-19 patients will respond to medicine and be curable", will naturally cause people to ask why there is a 3% fatality chance, and how to avoid that possibility (Worldometer 2022). These "causal questions" cannot be answered by black-box model-based agents (Pearl 2019).

Miller (2019) has discussed this problem, and believes that current XAI-based agents lack a cognitive understanding of reality, relying instead on enormous data and complex algorithms such as curve-fitting technologies. This means that decision-makers cannot understand the motivation behind the resulting model parameters, nor how they should be prioritized and used. Rudin (2019) has argued that when XAI is employed as part of a high-stakes decision-making system, curve-fitting technologies should not be used to aid decision-makers. She suggests that only cognitive agents aligned with human-like intelligence, which can rationally communicate with decision-makers, should be employed. XAI-based agents must therefore shift away from the paradigm of data science towards causal science in high-stakes environments.

Causal science is a computing paradigm concerned with knowledge discovery that cannot be naively derived from pure data since it must understand how and why such data is generated (Pearl and Mackenzie 2018). It lets XAI support high-stakes decision-making because it can interpret evidence in the manner of human-like intelligence (Sahoh and Choksuriwong 2021a).

Bengio (2017) and Schölkopf et al. (2021) believe that data summarization is not enough to help humans explain the reasons behind events because it does not represent physical reality as human-like understanding. The causal model generalizes the black-box model by encoding scientific assumptions of human knowledge behind the how and why of events, which lets XAI interpret the causal effects of an event even when environmental conditions change. They suggest that the causal model is likely to be the future of XAI since it offers machine learning technologies close to human-like intelligence.

To construct a causal model for XAI, three principal ingredients must be considered: (1) an understanding of how nature works, (2) desirable answers to causal questions, and (3) available observations from the environment. These can be employed to infer knowledge for interpreting catastrophic consequences in the present, or to understand unchosen actions from the past. Causal inference has emerged to utilize these ingredients, which lets XAI perform actions close to human-like intelligence.
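Under stated assumptions, the three ingredients can be bundled as data structures and queried; the toy graph, variable names, and `is_cause` helper below are purely illustrative, not a validated clinical model:

```python
# Three ingredients of a causal model, sketched as plain Python objects:
# (1) assumptions about how nature works, encoded as a directed graph,
# (2) a causal question of interest, and
# (3) observations from the environment (which would drive parameter
#     estimation in a full system; listed here only as an ingredient).
causal_graph = {                       # edge: cause -> list of effects
    "diabetes": ["blood_sugar"],
    "blood_sugar": ["immune_failure"],
    "immune_failure": ["mortality"],
}
question = ("mortality", "diabetes")   # is `diabetes` a cause of `mortality`?
observations = [
    {"diabetes": 1, "mortality": 1},
    {"diabetes": 0, "mortality": 0},
]

def is_cause(graph, effect, candidate):
    """Check whether `candidate` can reach `effect` along directed edges."""
    frontier, seen = [candidate], set()
    while frontier:
        node = frontier.pop()
        if node == effect:
            return True
        if node not in seen:
            seen.add(node)
            frontier.extend(graph.get(node, []))
    return False

print(is_cause(causal_graph, *question))  # → True: a directed path exists
```

The point of the sketch is that the graph, the question, and the data are separate inputs: the same observations support different answers depending on the encoded assumptions.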

An XAI-based architecture for high-stakes decision making

Causal knowledge is a key requirement for answering causal questions in high-stakes decision-making. Section 3 described how recent research divides into three aspects: (1) gathering available data from the environment, (2) approaches for modeling intelligence, and (3) how to obtain desirable knowledge, but without considering the intelligence mechanism that produces the causal knowledge to integrate these aspects. In this respect, XAI needs a new engine to convert available data into causal knowledge automatically.

Recent advances in XAI based on the cause-and-effect interpretation provide a technology to manufacture such knowledge (Korb and Nicholson 2010). Consequently, we supplement the three aspects using a cause-and-effect interpretation that lets XAI produce knowledge based on human-like intelligence. Our architecture for high-stakes decision-making behavior is shown in Fig. 4.

Fig. 4.

Fig. 4

An overview of our XAI system for high-stakes decision-making

Figure 4 consists of two layers: the environment and the XAI-based engine. The environment represents the physical reality in which high-stakes events may occur, and the XAI-based engine understands that environment.

The environment consists of two components: (1) data sensing, which collects available evidence as signals from IoT devices (Martinez-Hernandez and Dehghani-Sanij 2019), online news (Wang and Stewart 2015), social media (Zhou et al. 2021), global positioning systems (GPSs) (Dolejš et al. 2020), or traffic reports (Mujalli et al. 2016), and (2) decision-making, which plans and responds to a situation (Planella Conrado et al. 2016). The decision-makers understand the situation by examining knowledge extracted from the evidence. Such evidence cannot be used directly, so the engine converts it into knowledge first.

The XAI-based engine consists of three components: (1) evidence identification, (2) cause-effect determination, and (3) knowledge interpretation. Evidence identification converts evidence into information (Zhu et al. 2019), acting analogously to the human senses of hearing, sight, smell, and touch. Cause-effect determination relates the information using past likelihoods (Sahoh and Choksuriwong 2021b), which express how likely two events are to co-occur and predict what will happen in the next phase. This models how the causal details of an event can be influenced by other events (Chen et al. 2019; Duan et al. 2019). Cause-effect determination infers potential effects by changing the causes, but cannot offer counterfactual knowledge that helps decision-makers ask about events that never occurred. Knowledge interpretation fills that gap by imagining that something could have happened, for example, "What would have been if the current evidence had not occurred, or some other event had been observed?". Knowledge interpretation based on a causal inference engine can potentially generate useful answers.

Future directions in first aid and medical emergencies

Causal inference lets XAI imitate human-like intelligence by interpreting the motivations behind actions in the environment. Its goal is to produce answers to causal questions by relating evidence to prior knowledge (Schölkopf 2019).

A causal inference engine is a human-like intelligence-driven mechanism that connects a high-stakes decision system to physical reality. It converts evidence into causal knowledge to figure out how and why an event occurs.

The XAI driven by this engine must possess the three abilities listed in Pearl's hierarchy: association, intervention, and counterfactuals (Bareinboim et al. 2020). Each level plays a different role in helping a decision-maker deeply understand a high-stakes situation. To discuss the challenges aligned with the system architecture, we employ the example from Sect. 5, "There is a 97% prediction that COVID-19 patients will respond to medicine and be curable", to examine how causal inference can intuitively interpret high-stakes situations.

Observation-based XAI (association)

Which factors are associated with the mortality of COVID-19 patients? This question requires people to observe correlations between events during an outbreak. People see which events change most with varying COVID-19 mortality, and convert the evidence into statistical data. For example, statistics for the covariate causes of death can be broken down into hypertension (68.7%), diabetes (47.6%), age (47.1%), pulmonary disease (36.6%), and cerebrovascular problems (23.3%) (Cho et al. 2021). This is a principal ability of humans and of some classification and detection systems: modeling what they see in nature and how frequently events co-occur. It also enables XAI to observe how the world works, and helps decision-makers answer the statistical questions shown in Table 8.

Table 8.

Questions based on statistical descriptions at the association level

Questions of association Algebraic functions for the association questions
What do the statistics summarize about the covariate causes of COVID-19 death rates? P(Y_patient = death | X = diabetes, …, cerebrovascular)
Does diabetes correlate with COVID-19 patient mortality? P(Y_patient = death | X = diabetes)
Can patients survive if they are old? P(Y_patient = ¬death | X = old adult)

The association level fulfills evidence identification in our proposed system architecture by offering relevant evidence as initial input to the causal inference engine. Supervised machine learning (e.g., SVM, KNN, and ANN) can then fit a correlation between two events effectively.
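A minimal sketch of this association level, assuming invented toy patient records and our own `conditional` helper, estimates a conditional probability such as P(Y_patient = death | X = diabetes) by counting co-occurrences:

```python
# Association-level sketch: estimate P(death | diabetes) as a relative
# frequency over toy patient records (illustrative numbers only).
records = [
    {"diabetes": 1, "death": 1}, {"diabetes": 1, "death": 1},
    {"diabetes": 1, "death": 0}, {"diabetes": 0, "death": 0},
    {"diabetes": 0, "death": 1}, {"diabetes": 0, "death": 0},
]

def conditional(records, outcome, given):
    """P(outcome | given) as a relative frequency over the records."""
    match = [r for r in records if all(r[k] == v for k, v in given.items())]
    if not match:
        return 0.0
    hits = [r for r in match if all(r[k] == v for k, v in outcome.items())]
    return len(hits) / len(match)

p = conditional(records, {"death": 1}, {"diabetes": 1})
print(round(p, 2))  # → 0.67: a correlation, not yet a causal claim
```

As the next subsection argues, such a frequency answers What questions but says nothing about what would happen if diabetes were treated.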

However, the association level can only offer correlation between events, but cannot answer causal questions posed by the decision-maker, such as “What will happen to COVID-19 patients if they are suffering from a pulmonary disease and the doctor connects them to an oxygen respirator?”. Causal-effect interpretation plays a key role in answering questions of this type by using the initial data produced by an association.

Action-based XAI (intervention)

Given the situation "In ten minutes, the ambulance service team will deliver COVID-19 patients with chronic diabetes", XAI must interpret the information to assess possible future outcomes. Intervention is independent of the strength of the statistical descriptions that the model learned in the past, but without causal assumptions an XAI cannot produce reasons for potential outcomes from recent evidence alone. For example, given the causal assumption that X = diabetes may cause Z2 = immune system failure because of Z1 = fluctuating blood sugar, which can lead to Y = mortality, interventions for various causal questions about future outcomes are as shown in Table 9.

Table 9.

Causal questions for future outcomes at the intervention level

Causal intervention questions Algebraic functions for the intervention questions
What will happen if the doctor cannot treat diabetes? P(Y_patient = death | do(X = diabetes), Z)
If the doctor can control fluctuating blood sugar, will the COVID-19 patients survive? P(Y_patient = ¬death | do(X = ¬fluctuating blood sugar), Z)
What will happen if the doctor can deal with the immune system failure of COVID-19 patients? P(Y_patient = death | do(X = ¬immune system failure), Z)

Intervention models of what causes an effect imitate human-like understanding. For example, in traditional statistics the structures X → Z → Y, X ← Z ← Y, and X ← Z → Y are Markov equivalent (each implies that X is conditionally independent of Y given Z), but they are semantically dissimilar under a causal interpretation (Barber 2011). Intervention works like an assignment operation which, unlike an equation, is asymmetric, allowing XAI to reason about how an action influences an outcome. It can give the reasons for potential outcomes, helping decision-makers plan and act proactively. For example, immune system failure that causes death does not mean that death causes the immune system to fail (Liu et al. 2020).

In general, intervention employs Randomized Control Trials (RCTs) (Kuang et al. 2020), which hold particular circumstances constant at random (e.g., fix X = pulmonary disease, hypertension, or cerebrovascular illness) and then observe the outcomes. However, it may not be possible to arbitrarily control a high-cost environment in this way (e.g., by forcing patients to suffer from pulmonary disease, hypertension, or cerebrovascular illnesses). XAI with causal inference nevertheless lets decision-makers simulate the future effects of interference based on RCTs. Intervention allows decision-makers to design RCT conditions for different scenarios, producing effective plans for dealing with unknown patterns. This fosters the need for cause-effect determination in our proposed system architecture. Potential implementation technologies include Reinforcement Learning, Causal Bayesian Networks, Markov Decision Processes, and Partially Observable Markov Decision Processes.
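The difference between conditioning on evidence and intervening with do() can be sketched on a toy causal Bayesian network with a confounder; all probabilities and variable names below are invented for illustration, not clinical estimates:

```python
# Toy causal Bayesian network with confounder U (e.g., comorbidity):
#   U -> X (diabetes), U -> Y (death), X -> Y.
# All probabilities are invented for illustration only.
P_U = {1: 0.3, 0: 0.7}                                   # P(U = u)
P_X_given_U = {1: {1: 0.7, 0: 0.3}, 0: {1: 0.2, 0: 0.8}}  # P(X = x | U = u)
P_Y1_given_XU = {(1, 1): 0.5, (1, 0): 0.2, (0, 1): 0.3, (0, 0): 0.05}

def p_y1_observe(x):
    """P(Y=1 | X=x): condition on *seeing* X=x; U shifts with X (confounded)."""
    num = sum(P_U[u] * P_X_given_U[u][x] * P_Y1_given_XU[(x, u)] for u in (0, 1))
    den = sum(P_U[u] * P_X_given_U[u][x] for u in (0, 1))
    return num / den

def p_y1_do(x):
    """P(Y=1 | do(X=x)): truncated factorization; U keeps its prior."""
    return sum(P_U[u] * P_Y1_given_XU[(x, u)] for u in (0, 1))

print(round(p_y1_observe(1), 3), round(p_y1_do(1), 3))  # → 0.38 0.29
```

The two numbers differ because observing X = 1 also changes our belief about the confounder U, whereas do(X = 1) severs the U → X edge, which is what simulating an RCT amounts to.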

Intervention deals with causal questions about future events, but cannot answer retrospective questions about past actions. For example, suppose that doctors are notified that COVID-19 mortality correlates with immune system failure because of high blood sugar, and want to ask, "What would have happened if the doctor had controlled the fluctuation of blood sugar?". Such questions need another level to help decision-makers understand possible worlds.

Contrastive action-based XAI (counterfactuals)

Counterfactuals let XAI simulate alternative consequences that never happened (Pearl 2012). They let decision-makers answer "why" questions and imagine events that might have happened.

For example, consider "The COVID-19 death was caused by the fluctuation of blood sugar that caused the patient's immune system to fail". We do not want to ask about potential outcomes because the event has finished, but want to ask why something different did not happen. Such "why" questions can be written using counterfactual if-clauses, as illustrated in Table 10.

Table 10.

Causal questions at counterfactual level

Counterfactual questions Algebraic functions for the counterfactual questions
What would have happened if the doctor had controlled the COVID-19 patient's blood sugar fluctuations? P(Y_patient = death | ¬Y'_patient = survival, ¬X' = stable blood sugar, Z)
What would have happened if blood sugar had been stable, but the doctor was still unable to prevent immune system failure? P(Y_patient = death | ¬Y'_patient = survival, ¬X'1 = stable blood sugar, ¬X'2 = immune system failure, Z)
What would have happened if blood sugar levels were fluctuating, and the doctor had boosted the immune system? P(Y_patient = death | ¬Y'_patient = survival, ¬X'1 = stable blood sugar, ¬X'2 = improved immune system, Z)

The three why-questions are based on the fact of a COVID-19 death (e.g., Y_patient = death) but utilize counterfactuals to imagine what would have happened if prior actions had been different (e.g., ¬X' and ¬Y'). Counterfactuals let XAI deal with events in different scenarios, unlike intervention, which handles the average likelihood using prior knowledge to estimate future outcomes. Counterfactuals supply alternative knowledge that can help the decision-maker understand a problem from different viewpoints.

These questions cannot be answered only from evidence at the association level. Causal assumptions are also needed, such as cause-and-effect interpretations from the intervention level (e.g., the earlier counterfactuals need Z from the intervention). This shows how a high-cost environment related to life-and-death decisions must employ counterfactuals to understand a high-stakes situation through why-questions. Prosperi et al. (2020) have discussed the practicality of counterfactuals in healthcare, while Huitfeldt et al. (2019) proposed the use of counterfactuals in epidemiology.

Counterfactuals serve the need for knowledge interpretation in our proposed system architecture. Structural Causal Models (SCM) are a new paradigm to encode counterfactuals and produce causal knowledge. Kumor et al. (2020) proposed methods for determining causation based on SCM, while Karimi et al. (2020) and von Kügelgen et al. (2020) described a causal analysis that produces knowledge in uncertain situations. Counterfactuals are a recent introduction to XAI research, and more contributions are needed, especially in the high-stakes decision domain.
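The standard three-step counterfactual procedure over an SCM (abduction of the exogenous noise, action via do(), and prediction) can be sketched with a toy Boolean model; the structural equations below are an invented illustration, not a validated medical model:

```python
from itertools import product

# Toy Boolean SCM (illustrative only):
#   sugar_fluct := u1
#   immune_fail := sugar_fluct or u2
#   death       := immune_fail and u3
def scm(u1, u2, u3, do_sugar=None):
    sugar = u1 if do_sugar is None else do_sugar  # do() overrides the equation
    immune = sugar or u2
    death = immune and u3
    return {"sugar": sugar, "immune": immune, "death": death}

# Step 1 (abduction): keep exogenous settings consistent with the facts.
evidence = {"sugar": 1, "immune": 1, "death": 1}
worlds = [u for u in product((0, 1), repeat=3) if scm(*u) == evidence]

# Steps 2-3 (action + prediction): in each consistent world, force blood
# sugar to be stable and recompute the outcome.
outcomes = {scm(*u, do_sugar=0)["death"] for u in worlds}
print(outcomes)  # → {0, 1}: the counterfactual is only partially determined
```

Here the evidence does not pin down whether the immune system would have failed anyway (the noise term u2 remains ambiguous), so the counterfactual death outcome is a set rather than a single value, which is itself useful knowledge for a decision-maker.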

Discussions and open challenges

Associations, interventions, and counterfactuals depend on each other. For example, XAI can manufacture an answer at the intervention level only if the association level has been generated. In other words, if we cannot observe the evidence in an environment, then it is impossible to discuss or determine the causal effects behind it, regardless of how expert we are. Likewise, the counterfactual level can produce answers only if cause-effect determinations have been developed at the intervention level. This means that if we cannot encode prior knowledge based on cause-and-effect, then we cannot reasonably imagine possible worlds at the counterfactual level.

High-stakes decision-making systems need more research contributions from various fields. We invite decision scientists, social scientists, safety and security scientists, and computer scientists and engineers to help humanity avoid dangerous and hazardous situations. We strongly believe that causal inference will play an important role in future XAI technology for supporting improved high-stakes decision-making.

The research challenges of XAI in high-stakes decision-making systems are concerned with developing and integrating associations, interventions, and counterfactuals, especially in sectors that impact society, such as disease outbreaks and emergency management. In what follows, we consider a few challenging domains that could be used to guide research on XAI-based concepts for high-stakes decision-making. They all relate to human safety and security impacting life and death situations.

  • (A)

    Global food security

    The assessment of food supply chain sustainability plays an essential role in balancing demand and supply. Without a proper understanding of food security, a global food crisis may arise because exponential population growth is out of balance with world food production, made much worse by uncertain climate and economic conditions (Lieber et al. 2020). According to the United Nations, 25,000 people, including over 10,000 children, die from hunger daily (Holmes 2022). However, the dynamic environment causes observational data to be overloaded, insufficient, unstructured, and incomplete, making the management process harder to handle. XAI can help observe critical events, determine the best actions, and produce counterfactual knowledge. XAI for real-time monitoring and environmental tracking can alleviate food insecurity (at the observation level), and help decision-makers take action at the right time and place.

  • (B)

    The aging society

    Older people need intensive care and close monitoring to prevent serious accidents (Abdollahi et al. 2022). Deterioration in physiology over time can lead to problems with balance, causing individuals to suffer fall-related injuries. According to the World Health Organization (World Health Organization 2021), around 684,000 deaths are caused by falls each year, so there is a real need for technologies to track and monitor environmental changes. Currently, there are no intervention systems that can answer "what if" questions to reduce risk and help design effective plans. Intervention using XAI is a challenging problem because accidents follow unknown patterns that depend on personal conditions. Moreover, the XAI learning process lacks sufficient training data. Nevertheless, the design and development of intervention systems based on cause-and-effect offer exciting innovations.

  • (C)

    Emergency management

    Intelligent Transportation Systems (ITS) suffer from data overload that leaves decision-makers unable to understand emergency events (Eskandari Torbaghan et al. 2022), while approximately 1.3 million people die each year from road traffic accidents (World Health Organization 2022). ITS needs a new infrastructure that can observe relevant factors, interpret severe conditions, and analyze situations that might occur. Suitable ITS for complex environments must characterize road infrastructure, vehicle types, user behavior, and usage. They will utilize prior knowledge from experts to interpret and produce emergency information for decision-makers, which suggests the use of an XAI-based ecosystem.

Conclusions

This paper has examined high-stakes events with a low probability of occurring but a high cost. The consequences of such events are catastrophic for safety and security and may even damage societies. High-stakes events need specialized knowledge produced by XAI to help experts make rational and comprehensive decisions.

We have investigated the role of XAI in high-stakes decision systems for first aid and medical emergencies, which are both time-sensitive and knowledge-poor, making it easy to arrive at poor decisions. We have reviewed high-stakes decision-making research based on using an (1) intelligent approach to convert (2) available observations into (3) desirable knowledge. We have highlighted the shift of data science towards causal science, with a corresponding move away from traditional AI towards XAI, in order to benefit a community increasingly encountering high-stakes situations. We proposed that XAI be used to assemble intelligence, observational data, and causal knowledge, thereby enhancing software agents with human-like intelligence. This suggests that future research methodologies and techniques will be based on associations, interventions, and counterfactuals, in alignment with Pearl's hierarchy. Finally, we discussed XAI's potential to introduce human-like intelligence into high-stakes decision systems, leading to impressive future applications.

We hope that this work may help young researchers better understand trends and future directions for XAI in high-stakes decision systems, and help practitioners develop new AI applications to cope with high-stakes events.

Footnotes

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  1. Abdollahi H, Mahoor M, Zandie R, et al. Artificial emotional intelligence in socially assistive robots for older adults: a pilot study. IEEE Trans Affect Comput. 2022 doi: 10.1109/TAFFC.2022.3143803. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Al-Asadi MA, Tasdemir S. Predict the value of football players using FIFA video game data and machine learning techniques. IEEE Access. 2022;10:22631–22645. doi: 10.1109/ACCESS.2022.3154767. [DOI] [Google Scholar]
  3. Albrecht C, Elmegreen B, Gunawan O, et al. Next-generation geospatial-temporal information technologies for disaster management. IBM J Res Dev. 2020;64:1–12. doi: 10.1147/JRD.2020.2970903. [DOI] [Google Scholar]
  4. Alicioglu G, Sun B. A survey of visual analytics for explainable artificial intelligence methods. Comput Gr. 2022;102:502–520. doi: 10.1016/J.CAG.2021.09.002. [DOI] [Google Scholar]
  5. Anbarasan M, Muthu BA, Sivaparthipan CB, et al. Detection of flood disaster system based on IoT, big data and convolutional deep neural network. Comput Commun. 2020;150:150–157. doi: 10.1016/j.comcom.2019.11.022. [DOI] [Google Scholar]
  6. Bai H, Yu H, Yu G, Huang X. A novel emergency situation awareness machine learning approach to assess flood disaster risk based on Chinese Weibo. Neural Comput Appl. 2020 doi: 10.1007/s00521-020-05487-1. [DOI] [Google Scholar]
  7. Barber D. Bayesian reasoning and machine learning. 1. Cambridge: Cambridge University Press; 2011. [Google Scholar]
  8. Barboza F, Kimura H, Altman E. Machine learning models and bankruptcy prediction. Expert Syst Appl. 2017;83:405–417. doi: 10.1016/j.eswa.2017.04.006. [DOI] [Google Scholar]
  9. Barredo Arrieta A, Díaz-Rodríguez N, Del Ser J, et al. Explainable explainable Artificial Intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inform Fusion. 2020;58:82–115. doi: 10.1016/j.inffus.2019.12.012. [DOI] [Google Scholar]
  10. Bernardi S, Gentile U, Nardone R, Marrone S. Advancements in knowledge elicitation for computer-based critical systems. Future Gen Comput Syst. 2020 doi: 10.1016/j.future.2020.03.035. [DOI] [Google Scholar]
  11. Brugger H, Durrer B, Elsensohn F, et al. Resuscitation of avalanche victims: evidence-based guidelines of the international commission for mountain emergency medicine (ICAR MEDCOM). Intended for physicians and other advanced life support personnel. Resuscitation. 2013;84:539–546. doi: 10.1016/j.resuscitation.2012.10.020. [DOI] [PubMed] [Google Scholar]
  12. Bruni ME, Khodaparasti S, Beraldi P. The selective minimum latency problem under travel time variability: an application to post-disaster assessment operations. Omega (United Kingdom) 2020;92:102154. doi: 10.1016/j.omega.2019.102154. [DOI] [Google Scholar]
  13. Chaudhuri N, Bose I. Exploring the role of deep neural networks for post-disaster decision support. Decis Support Syst. 2020;130:113234. doi: 10.1016/j.dss.2019.113234. [DOI] [Google Scholar]
  14. Cheikhrouhou O, Koubaa A, Zarrad A. A cloud based disaster management system. J Sens Actuator Netw. 2020;9:6. doi: 10.3390/jsan9010006. [DOI] [Google Scholar]
  15. Chen N, Liu W, Bai R, Chen A. Application of computational intelligence technologies in emergency management: a literature review. Artif Intell Rev. 2019;52:2131–2168. doi: 10.1007/s10462-017-9589-8. [DOI] [Google Scholar]
  16. Chen J, Li Q, Wang H, Deng M. A machine learning ensemble approach based on random forest and radial basis function neural network for risk evaluation of regional flood disaster: a case study of the Yangtze River Delta, China. Int J Environ Res Public Health. 2020;17:49. doi: 10.3390/ijerph17010049. [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Chikaraishi M, Garg P, Varghese V, et al. On the possibility of short-term traffic prediction during disaster with machine learning approaches: an exploratory analysis. Transp Policy. 2020;98:91–104. doi: 10.1016/j.tranpol.2020.05.023. [DOI] [Google Scholar]
  18. Cho SI, Yoon S, Lee HJ. Impact of comorbidity burden on mortality in patients with COVID-19 using the Korean health insurance database. Sci Rep. 2021 doi: 10.1038/s41598-021-85813-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Correia TP, Corsi AC, Quintanilha JA. Big data for natural disasters in an urban railroad neighborhood: a systematic review. Smart Cities. 2020;3:202–211. doi: 10.3390/smartcities3020012. [DOI] [Google Scholar]
  20. Crooks A, Croitoru A, Stefanidis A, Radzikowski J. #Earthquake: Twitter as a distributed sensor system. Trans GIS. 2013;17:124–147. doi: 10.1111/j.1467-9671.2012.01359.x. [DOI] [Google Scholar]
  21. de Albuquerque JP, Herfort B, Brenning A, Zipf A. A geographic approach for combining social media and authoritative data towards identifying useful information for disaster management. Int J Geogr Inf Sci. 2015;29:667–689. doi: 10.1080/13658816.2014.996567. [DOI] [Google Scholar]
  22. Delir Haghighi P, Burstein F, Zaslavsky A, Arbon P. Development and evaluation of ontology for intelligent decision support in medical emergency management for mass gatherings. Decis Support Syst. 2013;54:1192–1204. doi: 10.1016/j.dss.2012.11.013. [DOI] [Google Scholar]
  23. Devaraj A, Murthy D, Dontula A. Machine-learning methods for identifying social media-based requests for urgent help during hurricanes. Int J Disaster Risk Reduct. 2020;51:101757. doi: 10.1016/j.ijdrr.2020.101757. [DOI] [Google Scholar]
  24. Dolejš M, Purchard J, Javorčák A. Generating a spatial coverage plan for the emergency medical service on a regional scale: empirical versus random forest modelling approach. J Transp Geogr. 2020;89:102889. doi: 10.1016/j.jtrangeo.2020.102889. [DOI] [Google Scholar]
  25. Dornschneider S. High-stakes decision-making within complex social environments: a computational model of belief systems in the Arab Spring. Cogn Sci. 2019 doi: 10.1111/cogs.12762. [DOI] [PubMed] [Google Scholar]
  26. Duan Y, Edwards JS, Dwivedi YK. Artificial intelligence for decision making in the era of big data—evolution, challenges and research agenda. Int J Inf Manag. 2019;48:63–71. doi: 10.1016/j.ijinfomgt.2019.01.021. [DOI] [Google Scholar]
  27. Eligüzel N, Çetinkaya C, Dereli T. Comparison of different machine learning techniques on location extraction by utilizing geo-tagged tweets: a case study. Adv Eng Inform. 2020;46:101151. doi: 10.1016/j.aei.2020.101151. [DOI] [Google Scholar]
  28. Eskandari Torbaghan M, Sasidharan M, Reardon L, Muchanga-Hvelplund LCW. Understanding the potential of emerging digital technologies for improving road safety. Accid Anal Prev. 2022;166:106543. doi: 10.1016/J.AAP.2021.106543. [DOI] [PubMed] [Google Scholar]
  29. Fan C, Wu F, Mostafavi A. A hybrid machine learning pipeline for automated mapping of events and locations from social media in disasters. IEEE Access. 2020;8:10478–10490. doi: 10.1109/ACCESS.2020.2965550. [DOI] [Google Scholar]
  30. Fekete A. Critical infrastructure cascading effects. Disaster resilience assessment for floods affecting city of Cologne and Rhein-Erft-Kreis: J Flood Risk Manag; 2020. [Google Scholar]
  31. Ferner C, Havas C, Birnbacher E, et al. Automated seeded latent dirichlet allocation for social media based event detection and mapping. Inform (Switzerland) 2020;11:376. doi: 10.3390/INFO11080376. [DOI] [Google Scholar]
  32. Formosa N, Quddus M, Ison S, et al. Predicting real-time traffic conflicts using deep learning. Accid Anal Prev. 2020;136:105429. doi: 10.1016/j.aap.2019.105429. [DOI] [PubMed] [Google Scholar]
  33. Frazier TG, Thompson CM, Dezzani RJ, Butsick D. Spatial and temporal quantification of resilience at the community scale. Appl Geogr. 2013;42:95–107. doi: 10.1016/j.apgeog.2013.05.004. [DOI] [Google Scholar]
  34. Frazier TG, Wood EX, Peterson AG. Residual risk in public health and disaster management. Appl Geogr. 2020;125:102365. doi: 10.1016/j.apgeog.2020.102365. [DOI] [Google Scholar]
  35. Ghafarian SH, Yazdi HS. Identifying crisis-related informative tweets using learning on distributions. Inf Process Manage. 2020 doi: 10.1016/j.ipm.2019.102145. [DOI] [Google Scholar]
  36. Ghorbanzadeh O, Blaschke T, Gholamnia K, et al. Evaluation of different machine learning methods and deep-learning convolutional neural networks for landslide detection. Remote Sens. 2019;11:196. doi: 10.3390/rs11020196. [DOI] [Google Scholar]
  37. Glymour C, Zhang K, Spirtes P. Review of causal discovery methods based on graphical models. Front Genet. 2019;10:524. doi: 10.3389/FGENE.2019.00524/BIBTEX. [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Gonzalez L, Montes G, Puig E, et al. Unmanned aerial vehicles (UAVs) and artificial intelligence revolutionizing wildlife monitoring and conservation. Sensors. 2016;16:97. doi: 10.3390/s16010097. [DOI] [PMC free article] [PubMed] [Google Scholar]
  39. Guillemin F, Bombardier C, Dorcas B. Cross-cultural adaptation of health-related quality of life measures: literature review and proposed guidelines. J Clin Epidemiol. 1993;46:1417–1432. doi: 10.1007/s10714-006-0272-7. [DOI] [PubMed] [Google Scholar]
  40. Gunning D, Stefik M, Choi J, et al. XAI-Explainable artificial intelligence. Sci Robot. 2019. doi: 10.1126/scirobotics.aay7120.
  41. Hagras H. Toward human-understandable, explainable AI. Computer. 2018;51:28–36. doi: 10.1109/MC.2018.3620965.
  42. Huang L, Liu G, Chen T, et al. Similarity-based emergency event detection in social media. J Saf Sci Resil. 2020. doi: 10.1016/j.jnlssr.2020.11.003.
  43. Huitfeldt A, Stensrud MJ, Suzuki E. On the collapsibility of measures of effect in the counterfactual causal framework. Emerg Themes Epidemiol. 2019;16:1–5. doi: 10.1186/s12982-018-0083-9.
  44. Islam MR, Ahmed MU, Barua S, Begum S. A systematic review of explainable artificial intelligence in terms of different application domains and tasks. Appl Sci. 2022;12:1353. doi: 10.3390/app12031353.
  45. Jung D, Tuan VT, Tran DQ, et al. Conceptual framework of an intelligent decision support system for smart city disaster management. Appl Sci (Switzerland). 2020. doi: 10.3390/app10020666.
  46. Kahn BE, Baron J. An exploratory study of choice rules favored for high-stakes decisions. J Consum Psychol. 1995;4:305–328. doi: 10.1207/s15327663jcp0404_01.
  47. Kankanamge N, Yigitcanlar T, Goonetilleke A. How engaging are disaster management related social media channels? The case of Australian state emergency organisations. Int J Disaster Risk Reduct. 2020;48:101571. doi: 10.1016/j.ijdrr.2020.101571.
  48. Kaveh A, Javadi SM, Moghanni RM. Emergency management systems after disastrous earthquakes using optimization methods: a comprehensive review. Adv Eng Softw. 2020;149:102885. doi: 10.1016/j.advengsoft.2020.102885.
  49. Kavota JK, Kamdjoug JRK, Wamba SF. Social media and disaster management: case of the North and South Kivu regions in the Democratic Republic of the Congo. Int J Inf Manag. 2020;52:102068. doi: 10.1016/j.ijinfomgt.2020.102068.
  50. Kersten J, Klan F. What happens where during disasters? A workflow for the multifaceted characterization of crisis events based on Twitter data. J Conting Crisis Manag. 2020;28:262–280. doi: 10.1111/1468-5973.12321.
  51. Khandani AE, Kim AJ, Lo AW. Consumer credit-risk models via machine-learning algorithms. J Bank Finance. 2010;34:2767–2787. doi: 10.1016/j.jbankfin.2010.06.001.
  52. Kim J, Hastak M. Social network analysis: characteristics of online social networks after a disaster. Int J Inf Manag. 2018;38:86–96. doi: 10.1016/j.ijinfomgt.2017.08.003.
  53. Koliba CJ, Mills RM, Zia A. Accountability in governance networks: an assessment of public, private, and nonprofit emergency management practices following Hurricane Katrina. Public Adm Rev. 2011;71:210–220. doi: 10.1111/j.1540-6210.2011.02332.x.
  54. Korb KB, Nicholson AE. Bayesian artificial intelligence. 2. Boca Raton: CRC Press; 2010.
  55. Kroll V, Mackenzie AK, Goodge T, et al. Creating a hazard-based training and assessment tool for emergency response drivers. Accid Anal Prev. 2020;144:105607. doi: 10.1016/j.aap.2020.105607.
  56. Kryvasheyeu Y, Chen H, Obradovich N, et al. Rapid assessment of disaster damage using social media activity. Sci Adv. 2016;2:1–12. doi: 10.1126/sciadv.1500779.
  57. Kuang K, Li L, Geng Z, et al. Causal inference. Engineering. 2020;6:253–263. doi: 10.1016/j.eng.2019.08.016.
  58. Kumar A, Singh JP, Dwivedi YK, Rana NP. A deep multi-modal neural network for informative Twitter content classification during emergencies. Ann Oper Res. 2020. doi: 10.1007/s10479-020-03514-x.
  59. Kuo YH, Chan NB, Leung JMY, et al. An integrated approach of machine learning and systems thinking for waiting time prediction in an emergency department. Int J Med Inform. 2020;139:104143. doi: 10.1016/j.ijmedinf.2020.104143.
  60. Latonero M, Shklovski I. Emergency management, Twitter, and social media evangelism. Int J Inf Syst Crisis Response Manag. 2011;3:1–16. doi: 10.4018/jiscrm.2011100101.
  61. Lawless W, Mittu R, Sofge D. Human-machine shared contexts. 1. San Diego: Academic Press; 2020.
  62. Lecue F. On the role of knowledge graphs in explainable AI. Semant Web. 2020;11:41–51. doi: 10.3233/SW-190374.
  63. Lieber M, Chin-Hong P, Kelly K, et al. A systematic review and meta-analysis assessing the impact of droughts, flooding, and climate variability on malnutrition. Glob Public Health. 2020. doi: 10.1080/17441692.2020.1860247.
  64. Liu W, Wang Z, Liu X, et al. A survey of deep neural network architectures and their applications. Neurocomputing. 2016;234:11–26. doi: 10.1016/j.neucom.2016.12.038.
  65. Liu H, Chen S, Liu M, et al. Comorbid chronic diseases are strongly correlated with disease severity among COVID-19 patients: a systematic review and meta-analysis. Aging Dis. 2020;11:668–678. doi: 10.14336/AD.2020.0502.
  66. Löchner M, Fathi R, Schmid D, et al. Case study on privacy-aware social media data processing in disaster management. ISPRS Int J Geo-Inf. 2020;9:709. doi: 10.3390/ijgi9120709.
  67. Loynes C, Ouenniche J, De Smedt J. The detection and location estimation of disasters using Twitter and the identification of non-governmental organisations using crowdsourcing. Ann Oper Res. 2020. doi: 10.1007/s10479-020-03684-8.
  68. Machlev R, Heistrene L, Perl M, et al. Explainable artificial intelligence (XAI) techniques for energy and power systems: review, challenges and opportunities. Energy AI. 2022. doi: 10.1016/j.egyai.2022.100169.
  69. Madichetty S. Identification of medical resource tweets using majority voting-based ensemble during disaster. Soc Netw Anal Min. 2020;10:66. doi: 10.1007/s13278-020-00679-y.
  70. Marcot BG, Penman TD. Advances in Bayesian network modelling: integration of modelling technologies. Environ Model Softw. 2019;111:386–393. doi: 10.1016/j.envsoft.2018.09.016.
  71. Marjanović M, Kovačević M, Bajat B, Voženílek V. Landslide susceptibility assessment using SVM machine learning algorithm. Eng Geol. 2011;123:225–234. doi: 10.1016/j.enggeo.2011.09.006.
  72. Martinez-Hernandez U, Dehghani-Sanij AA. Probabilistic identification of sit-to-stand and stand-to-sit with a wearable sensor. Pattern Recognit Lett. 2019;118:32–41. doi: 10.1016/j.patrec.2018.03.020.
  73. McGuire M, Silvia C. The effect of problem severity, managerial and organizational capacity, and agency structure on intergovernmental collaboration: evidence from local emergency management. Public Adm Rev. 2010;70:279–288. doi: 10.1111/j.1540-6210.2010.02134.x.
  74. Middleton SE, Middleton L, Modafferi S. Real-time crisis mapping of natural disasters using social media. IEEE Intell Syst. 2014;29:9–17. doi: 10.1109/MIS.2013.126.
  75. Miller T. Explanation in artificial intelligence: insights from the social sciences. Artif Intell. 2019;267:1–38. doi: 10.1016/j.artint.2018.07.007.
  76. Mohan P, Mittal H. Review of ICT usage in disaster management. Int J Inf Technol (Singapore). 2020;12:955–962. doi: 10.1007/s41870-020-00468-y.
  77. Mujalli RO, López G, Garach L. Bayes classifiers for imbalanced traffic accidents datasets. Accid Anal Prev. 2016;88:37–51. doi: 10.1016/j.aap.2015.12.003.
  78. Muro-de-la-Herran A, Garcia-Zapirain B, Mendez-Zorrilla A. Gait analysis methods: an overview of wearable and non-wearable systems, highlighting clinical applications. Sensors. 2014;14:3362–3394. doi: 10.3390/s140203362.
  79. Nimmy SF, Hussain OK, Chakrabortty RK, et al. Explainability in supply chain operational risk management: a systematic literature review. Knowl Based Syst. 2022;235:107587. doi: 10.1016/j.knosys.2021.107587.
  80. Oroszi T. A preliminary analysis of high-stakes decision-making for crisis leadership. J Bus Contin Emerg Plan. 2018;11:335–359.
  81. Pearl J. The causal foundations of structural equation modeling. In: Handbook of structural equation modeling. New York: Guilford Press; 2012. pp. 68–91.
  82. Pearl J. The seven tools of causal inference, with reflections on machine learning. Commun ACM. 2019;62:54–60. doi: 10.1145/3241036.
  83. Pearl J, Glymour MM, Jewell NP. Causal inference in statistics: a primer. 1. New York: Wiley; 2016.
  84. Peng Y, Zhang Y, Tang Y, Li S. An incident information management framework based on data integration, data mining, and multi-criteria decision making. Decis Support Syst. 2011;51:316–327. doi: 10.1016/j.dss.2010.11.025.
  85. Planella Conrado S, Neville K, Woodworth S, O’Riordan S. Managing social media uncertainty to support the decision making process during emergencies. J Decis Syst. 2016;25:171–181. doi: 10.1080/12460125.2016.1187396.
  86. Prosperi M, Guo Y, Sperrin M, et al. Causal inference and counterfactual prediction in machine learning for actionable healthcare. Nat Mach Intell. 2020;2:369–375. doi: 10.1038/s42256-020-0197-y.
  87. Ragini JR, Anand PMR, Bhaskar V. Big data analytics for disaster response and recovery through sentiment analysis. Int J Inf Manag. 2018;42:13–24. doi: 10.1016/j.ijinfomgt.2018.05.004.
  88. Randolph J. A guide to writing the dissertation literature review. Pract Assess Res Eval. 2009;14:1–13.
  89. Raza M, Awais M, Ali K, et al. Establishing effective communications in disaster affected areas and artificial intelligence based detection using social media platform. Future Gener Comput Syst. 2020;112:1057–1069. doi: 10.1016/j.future.2020.06.040.
  90. Reddy S, Mun M, Burke J, et al. Using mobile phones to determine transportation modes. ACM Trans Sens Netw. 2010;6:1–27. doi: 10.1145/1689239.1689243.
  91. Rohmer J. Uncertainties in conditional probability tables of discrete Bayesian belief networks: a comprehensive review. Eng Appl Artif Intell. 2020;88:103384. doi: 10.1016/j.engappai.2019.103384.
  92. Sahoh B, Choksuriwong A. Automatic semantic description extraction from social big data for emergency management. J Syst Sci Syst Eng. 2020;29:412–428. doi: 10.1007/s11518-019-5453-5.
  93. Sahoh B, Choksuriwong A. Beyond deep event prediction: deep event understanding based on explainable artificial intelligence. In: Interpretable artificial intelligence: a perspective of granular computing. Berlin: Springer; 2021. pp. 91–117.
  94. Sahoh B, Choksuriwong A. A proof of concept and feasibility analysis of using social sensors in the context of causal machine learning based emergency management. J Ambient Intell Hum Comput. 2021. doi: 10.1007/s12652-021-03317-3.
  95. Schölkopf B, Locatello F, Bauer S, et al. Toward causal representation learning. Proc IEEE. 2021;109:612–634. doi: 10.1109/JPROC.2021.3058954.
  96. Seo T, Bayen AM, Kusakabe T, Asakura Y. Traffic state estimation on highway: a comprehensive survey. Annu Rev Control. 2017;43:128–151. doi: 10.1016/j.arcontrol.2017.03.005.
  97. Seppänen H, Mäkelä J, Luokkala P, Virrantaus K. Developing shared situational awareness for emergency management. Saf Sci. 2013;55:1–9. doi: 10.1016/j.ssci.2012.12.009.
  98. Song X, Zhang H, Akerkar RA, et al. Big data and emergency management: concepts, methodologies, and applications. IEEE Trans Big Data. 2020;14:1–24. doi: 10.1109/tbdata.2020.2972871.
  99. Stylianou K, Dimitriou L, Abdel-Aty M. Big data and road safety: a comprehensive review. Oxford: Elsevier Inc; 2019.
  100. Sun W, Bocchini P, Davison BD. Applications of artificial intelligence for disaster management. Nat Hazards. 2019. doi: 10.1007/s11069-020-04124-3.
  102. Tan L, Hu M, Lin H. Agent-based simulation of building evacuation: combining human behavior with predictable spatial accessibility in a fire emergency. Inf Sci. 2015;295:53–66. doi: 10.1016/j.ins.2014.09.029. [DOI] [Google Scholar]
  103. Tashakkori H, Rajabifard A, Kalantari M. A new 3D indoor/outdoor spatial model for indoor emergency response facilitation. Build Environ. 2015;89:170–182. doi: 10.1016/j.buildenv.2015.02.036. [DOI] [Google Scholar]
  104. Taylor RA, Pare JR, Venkatesh AK, et al. Prediction of in-hospital mortality in emergency department patients with sepsis: a local big data-driven, machine learning approach. Acad Emerg Med. 2016;23:269–278. doi: 10.1111/acem.12876. [DOI] [PMC free article] [PubMed] [Google Scholar]
  105. Tonmoy FN, Hasan S, Tomlinson R. Increasing coastal disaster resilience using smart city frameworks: current state, challenges, and opportunities. Front Water. 2020;2:3. doi: 10.3389/frwa.2020.00003. [DOI] [Google Scholar]
  106. Valenzuela VPB, Esteban M, Takagi H, et al. Disaster awareness in three low risk coastal communities in Puerto Princesa City, Palawan, Philippines. Int J Disaster Risk Reduct. 2020;46:101508. doi: 10.1016/j.ijdrr.2020.101508. [DOI] [Google Scholar]
  107. Wang W, Stewart K. Spatiotemporal and semantic information extraction from web news reports about natural hazards. Comput Environ Urban Syst. 2015;50:30–40. doi: 10.1016/j.compenvurbsys.2014.11.001. [DOI] [Google Scholar]
  108. Wang Z, Lu M, Yuan X, et al. Visual traffic jam analysis based on trajectory data. IEEE Trans Vis Comput Graph. 2013;19:2159–2168. doi: 10.1109/TVCG.2013.228. [DOI] [PubMed] [Google Scholar]
  109. Wang Z, Ye X, Tsou MH. Spatial, temporal, and content analysis of Twitter for wildfire hazards. Nat Hazards. 2016;83:523–540. doi: 10.1007/s11069-016-2329-6. [DOI] [Google Scholar]
  110. Wang D, Wan K, Ma W. Emergency decision-making model of environmental emergencies based on case-based reasoning method. J Environ Manage. 2020;262:110382. doi: 10.1016/j.jenvman.2020.110382. [DOI] [PubMed] [Google Scholar]
  111. Xiao F. Multi-sensor data fusion based on the belief divergence measure of evidences and the belief entropy. Inform Fusion. 2019;46:23–32. doi: 10.1016/j.inffus.2018.04.003. [DOI] [Google Scholar]
  112. Xu Z, Liu Y, Yen N, et al. Crowdsourcing based description of urban emergency events using social media big data. IEEE Trans Cloud Comput. 2016;8:1–11. [Google Scholar]
  113. Yates D, Paquette S. International journal of information management. New York: Elsevier Ltd; 2011. Emergency knowledge management and social media technologies: a case study of the 2010 Haitian earthquake; pp. 6–13. [Google Scholar]
  114. Yin J, Lampert A, Cameron M, et al. Using social media to enhance emergency situation awareness. IEEE Intell Syst. 2012;27:52–59. doi: 10.1109/MIS.2012.6. [DOI] [Google Scholar]
  115. Yu X, Li C, Zhao WX, Chen H. A novel case adaptation method based on differential evolution algorithm for disaster emergency. Appl Soft Comput J. 2020;92:106306. doi: 10.1016/j.asoc.2020.106306. [DOI] [Google Scholar]
  116. Zahra K, Imran M, Ostermann FO. Automatic identification of eyewitness messages on twitter during disasters. Inf Process Manage. 2020;57:102107. doi: 10.1016/j.ipm.2019.102107. [DOI] [Google Scholar]
  117. Zhou Q, Huang W, Zhang Y. Identifying critical success factors in emergency management using a fuzzy DEMATEL method. Saf Sci. 2011;49:243–252. doi: 10.1016/j.ssci.2010.08.005. [DOI] [Google Scholar]
  118. Zhou L, Wu X, Xu Z, Fujita H. Emergency decision making for natural disasters: an overview. Int J Disaster Risk Reduct. 2018;27:567–576. doi: 10.1016/j.ijdrr.2017.09.037. [DOI] [Google Scholar]
  119. Zhou D, Yuan J, Si J. Health issue identification in social media based on multi-task hierarchical neural networks with topic attention. Artif Intell Med. 2021 doi: 10.1016/J.ARTMED.2021.102119. [DOI] [PubMed] [Google Scholar]
  120. Zhou J, Liu L, Wei W, Fan J. Network representation learning: from Preprocessing, feature extraction to node embedding. ACM Comput Surv (CSUR) 2022;55:103. doi: 10.1145/3491206. [DOI] [Google Scholar]
  121. Zhu X, Zhang G, Sun B. A comprehensive literature review of the demand forecasting methods of emergency resources from the perspective of artificial intelligence. Nat Hazards. 2019;97:65–82. doi: 10.1007/s11069-019-03626-z. [DOI] [Google Scholar]
  122. Bareinboim E, Correa JD, Ibeling D, Icard T (2020) On Pearl’s hierarchy and the foundations of causal inference
  123. Bengio Y (2017) The consciousness prior. arXiv
  124. Holmes J (2022) Losing 25,000 to hunger every day | United Nations. https://www.un.org/en/chronicle/article/losing-25000-hunger-every-day. Accessed 4 Mar 2023
  125. Karimi A-H, von Kügelgen J, Schölkopf B, Valera I (2020) Algorithmic recourse under imperfect causal knowledge: a probabilistic approach. arXiv
  126. Kumor D, Cinelli C, Bareinboim E (2020) Efficient identification in linear structural causal models with auxiliary cutsets. PMLR
  127. Patrício C, Neves JC, Teixeira LF (2022) Explainable deep learning methods in medical diagnosis: a survey. 10.48550/arxiv.2205.04766
  128. Pearl J, Mackenzie D (2018) The book of why: the new science of cause and effect. New York: Basic Books
  129. Rudin C (2019) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell 1:206–215. 10.1038/s42256-019-0048-x
  130. Schölkopf B (2019) Causality for machine learning. arXiv
  131. Vilone G, Longo L (2020) Explainable artificial intelligence: a systematic review
  132. von Kügelgen J, Gresele L, Schölkopf B (2020) Simpson’s paradox in COVID-19 case fatality rates: a mediation analysis of age-related causal effects. arXiv
  133. Warren G, Keane MT, Byrne RMJ (2022) Features of explainability: how users understand counterfactual and causal explanations for categorical and continuous features in XAI
  134. Worldometer (2022) Coronavirus death rate (COVID-19). https://www.worldometers.info/coronavirus/. Accessed 20 Jun 2022
  135. Xiao K, Qian Z, Qin B, et al (2022) A survey of data representation for multi-modality event detection and evolution. Appl Sci 12:2204. 10.3390/app12042204
  136. World Health Organization (2021) Falls. https://www.who.int/news-room/fact-sheets/detail/falls. Accessed 20 Jun 2022
  137. World Health Organization (2022) Road traffic injuries. https://www.who.int/news-room/fact-sheets/detail/road-traffic-injuries. Accessed 20 Jun 2022

Articles from Journal of Ambient Intelligence and Humanized Computing are provided here courtesy of Nature Publishing Group