Risk Analysis. 2025 Mar 8;45(8):2232–2242. doi: 10.1111/risa.70009

A framework for evolving assumptions in risk analysis

Kendrick Hardaway, Roger Flage
PMCID: PMC12411124  PMID: 40055989

Abstract

Risk assessment can be used to evaluate the risks of complex systems and emerging technologies, such as the human–climate nexus and automation technologies, and to inform pathways and policies. Due to the interconnected and evolutionary features of such topics, risk analysts must navigate the dynamics of changing assumptions and probabilities in the risk assessment. However, current risk analysis approaches largely neglect explicit consideration of these dynamics, either oversimplifying complex systems or neglecting the likely human response to emerging technologies. In this article, we outline why the evolutionary dynamics of assumptions and probabilities in a risk assessment must receive close attention, and we then provide a possible framework through which to consider these dynamics. Ultimately, we propose a formal approach to conceptualizing and implementing the risk description with respect to feedback loops and complex adaptive systems.

Keywords: assumptions, complex systems, feedback loops, interventions, risk assessment, risk science

1. INTRODUCTION

Risk assessment informs decisions from military and business strategies (Karmperis et al., 2014; Tuncel & Alpan, 2010) to medical treatments (Guzik et al., 2020) and climate‐related policy (Ostwald et al., 2012). Due to the influence risk assessments have on such a wide array of decisions, researchers and practitioners have suggested and iterated possible conceptualizations of risk and risk assessment frameworks to establish a common and consistent approach (Aven, 2010, 2012; Logan et al., 2021). Within the risk assessment process, making the approach clear—including the chosen assumptions and scope—is a necessary step for clarifying the strengths and limitations of a risk assessment. As risk analysts take on increasingly complex topics (e.g., artificial intelligence and climate change), determining acceptable risk over various time horizons and between differing types of risk (e.g., health, financial, safety, and environmental) places more importance on risk measurement and communication.

An assumption is something that is accepted as true without proof (Cambridge, 2024). In a risk analysis context, assumptions can be conceptualized as part of the background knowledge of a risk description (Aven, 2013b). While knowledge can generally be understood as justified beliefs (Aven, 2016), assumptions can be understood as chosen justified beliefs (Flage & Askeland, 2020). This reflects that—although informed by and preferably approximating justified beliefs—assumptions do not necessarily express actual justified beliefs; nevertheless, the risk analysis is carried out as if they were.

Both beliefs and their justification can change, implying that knowledge evolves with time. However, a change in knowledge does not necessarily change the assumptions made in a risk analysis. Still, there may come a point where a revision of the assumptions is deemed necessary. This highlights the importance of adopting a systematic approach to evaluating and revising assumptions—or, said another way, evolving assumptions over time.

Glette‐Iversen et al. (2023) present a new approach for evaluating the need for a new (updated) risk assessment, reflecting that new knowledge, as well as changes in systems, phenomena, or values, could alter the underlying premises of the initial risk assessment. The present work builds on that of Glette‐Iversen et al. (2023), focusing on and formalizing the notion that new knowledge could alter the underlying premises of an initial risk assessment.

In this article, we introduce a framework through which we can explicitly approach the evolution of assumptions over time in a risk assessment. The framework formalizes the relationship between the broader background knowledge and the assumptions of a risk analysis. This provides an explicit notation for evolving assumptions, and we illustrate the impact of applying the framework to a simple risk analysis. Though applied to a simplified risk analysis, the framework suggested in the following sections aims to assist the handling of assumptions in risk analysis of increasingly complex topics such as artificial intelligence and global environmental sustainability.

Complex systems are inherently challenging to analyze due to the presence of feedback loops. Feedback loops are recurring interactions where the output of a process influences the behavior of the process itself (Forrester, 1997). Feedback loops are a primary mechanism through which complexity arises and are, as such, a hallmark of complex systems (Meadows, 2008). Complex systems are characterized by interconnected components, the interactions of which create emergent behavior, understood as behavior that cannot be explained by the individual component behavior alone (Ladyman et al., 2013). Understanding feedback loops is essential for analyzing, predicting, and managing complex systems.

In a risk analysis setting, feedback loops can occur both in the systems and phenomena being studied and in the processes used to address risk. For example, in climate systems, feedback loops such as the melting of polar ice reducing the Earth's albedo can intensify global warming. Similarly, the way risk is assessed and managed can introduce feedback loops, such as regulations mitigating one risk while inadvertently creating others. Understanding feedback loops in both risk generation and risk treatment allows for more adaptive risk management approaches.

Feedback loops are one source of change to risk assessment assumptions, along with interventions and external shocks. External shocks can unexpectedly alter the conditions under which a risk assessment was initially conducted, requiring a reassessment of its assumptions. Similarly, interventions can introduce new dynamics into the system, shifting how risks are generated or managed. These factors illustrate the need for continuous evaluation of assumptions and the implementation of frameworks to adapt to evolving circumstances.

In the remaining sections of the article, we first provide some background on risk analysis of complex systems and risk analysis assumptions (Section 2). We then establish what we mean by the risk concept and elaborate on the role of time with reference to Logan et al. (2021) (Section 3). Next, we suggest ways that we can frame how assumptions evolve over time and use a simple example case to illustrate the impact on a risk assessment (Section 4). Then, we discuss the implications of our proposed framing and future steps for its application (Section 5).

2. BACKGROUND

2.1. Risk analysis of complex systems

Several approaches have been used to try to capture risk in complex, dynamic systems. Recent research in risk science has put forward general principles (Haimes, 2018), new conceptualizations and associated notation (Logan et al., 2021), and specific strategies for dynamic aspects of risk and uncertainty (Glette‐Iversen et al., 2023) in complex systems. Haimes (2018) provides a rigorous synthesis of the literature on complex interconnected and interdependent systems of systems (SoS) and risk science. The author points out that much of risk analysis was developed for single and static systems, which limits the application of risk assessment to increasingly complex topics.

Using three case studies, Haimes (2018) outlines key considerations that must be made in risk analysis of emergent complex SoS. Interactions between systems and sub‐systems are addressed in detail, but a surprising omission is any explicit discussion on feedback loops, though the article discusses interconnection at length. The article's main contribution appears to be updating Kaplan and Garrick's three questions and presenting 10 guiding principles for using risk analysis in complex systems. Though Haimes (2018) provides 10 principles and many key considerations, we do not aim to tackle all of them in the scope of this article. We do, however, highlight two statements from Haimes here:

  1. “Although the time frame is at best assumed implicit in [Kaplan and Garrick's three questions (Kaplan & Garrick, 1981)] when addressing a single system, it must be made explicit in the context of Complex SoS.”

  2. “Uncertainty analysis becomes even more imperative in risk analysis of emergent complex SoS.” […] “Sources of uncertainty and lack of understanding of the complexity associated with one subsystem of SoS would likely result in (i) adherence to unrealistic assumptions…”

Haimes (2018) provides a roadmap for unifying complex systems and risk analysis, but the specific details of assessing risk for complex systems are left for future research.

In Logan et al. (2021), the authors introduced a conceptual contribution addressing Haimes' suggestion to make time explicit in risk assessments. Through several case studies, the authors demonstrate both the benefit of making time explicit and a notation with which to communicate it. We review this article in more detail in Section 3.2. It provides a definitive starting point for us to tackle Haimes' other suggestion regarding uncertainty and assumptions when evaluating risk in complex systems. For us to avoid adhering to unrealistic assumptions, there must be a systematic approach to evaluating and updating assumptions.

2.2. Risk analysis assumptions

Prior work on assumptions and uncertainty in risk assessments provides a starting place for us to evaluate how to avoid adhering to unrealistic assumptions when evaluating risk in complex and dynamic systems. As described in Section 1, knowledge can be conceptualized as justified beliefs (Aven, 2016), and Flage and Askeland (2020) describe how assumptions can be conceptualized as chosen justified beliefs. This implies that assumptions may deviate from what is believed, which is often the case in practice, where an assumption could be made to simplify the analysis.

Relatedly, assumption deviation risk and the uncertainty of assumptions have been well‐studied (Aven, 2013a; Flage, 2019; Khorsandi & Aven, 2017), but the way in which assumptions themselves take on a dynamic or evolving nature has received less attention. This is specifically relevant to Haimes' warning about "adherence to unrealistic assumptions" because adherence implies committing to an unchanging assumption over time. What remains unclear are the ways in which assumptions change over time and how to deal with this in a risk analysis. This is a subtle but important difference: rather than asking how conditions will come to differ from an initial static assumption, how do we make an assumption that is inherently dynamic, and in what ways can we anticipate assumptions being dynamic?

To improve the objectivity of risk assessments, the selection of assumptions must be as objective and rigorous as possible. Simultaneously, for practical purposes, risk analysts must acknowledge complexity in assumption selection without infinitely regressing. Thus far, no formal way of expressing how assumptions could feasibly evolve has been developed, leaving risk analysts to their own devices in determining the parameters for assumption selection. Certainly, expert judgment and subjectivity will still be a vital part of assumption selection, but a formalized way of selecting assumptions can help provide objectivity and help in comparison cases between competing risk assessments. A framework that outlines the ways assumptions can evolve would provide risk analysts with a clear way of both considering how assumptions can evolve and communicating how they determined which ones to include, particularly when analyzing risk in complex systems.

3. CONCEPTUALIZING RISK

3.1. The risk concept

In this section, we outline how we conceptualize risk in the present article, aligning the conceptualization with the Society for Risk Analysis Glossary (Aven et al., 2018). This establishes a clear starting place for us to then evaluate the role of assumptions and their evolution in risk assessment. We apply the conceptualization to a tangible example: the use of a staircase.

The risk definition can be expressed as Risk = (A,C,U). To illustrate, we consider the risk related to the activity of using a staircase. If this activity is carried out, events (A) and consequences (C) could occur as a result of this. U signifies uncertainty about what events will occur and what will be the consequences of those events. In the staircase example, it could represent that, in the present, we do not know if one or more persons will fall while using the staircase, or if they will get hurt when they fall, or how slippery the individual stairs are at any given moment. We do not know how these aspects will change over time. The definition allows us to handle the concept of risk, but it does not—nor is it meant to—act as a measurement of risk.

Instead, the risk description operationalizes the measurement of the risk concept. One of the several alternative risk descriptions in the SRA glossary (Aven et al., 2018) is defined to comprise specified events (A′), specified consequences (C′), a measure of uncertainty (Q), and the knowledge on which the assessment of consequences and uncertainty is based (K). It provides specificity and scope by explicitly clarifying what is being considered in the risk assessment. Thus, the risk description can be expressed as risk description = (A′, C′, Q, K).
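To make the notation concrete before we populate it, the risk description can be mirrored in a small data structure. The following is a minimal sketch in Python; the class and field names are our own illustration, not standardized notation:

```python
from dataclasses import dataclass, field

@dataclass
class RiskDescription:
    """Minimal sketch of the risk description (A', C', Q, K)."""
    specified_events: list[str]          # A': the events considered
    specified_consequences: list[str]    # C': the consequences considered
    uncertainty_measure: str             # Q: how uncertainty is expressed
    background_knowledge: dict = field(default_factory=dict)  # K, incl. assumptions H

staircase = RiskDescription(
    specified_events=["A person falls down the stairs"],
    specified_consequences=["Number of broken arms or legs"],
    uncertainty_measure="probability, from empirical staircase-use and falling data",
    background_knowledge={"H": []},  # explicit assumptions are added in the text below
)
```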

Let us perform a (simple) risk assessment regarding the number of arms and legs broken by people that fall on a staircase. In this case, we can start with a specified event (A′): a person falls down the stairs. Next, we can identify the specified consequence of interest (C′): the number of broken arms or legs. After defining these two features, quantifying the risk assessment requires making certain assumptions (as part of the knowledge, K) and will involve some uncertainty measurement, expressed with Q. Thus, we can write the risk description as follows:

A′: A person falls down the stairs.

C′: Number of broken arms or legs.

Q: We express the uncertainty using probability based on empirical evaluations of staircase use and falling data.

K: The knowledge on which (A′, C′, Q) are based, including assumptions, observational data, surveys, and expert judgment; specifically, a set of explicit assumptions (H).

Perhaps the first assumption to be made (H1) concerns how many people use the stairs. The assumption could be based on the average of observational data collected over a long period of time, on expert judgment, or on a short observation window (such as 1 h) that is then extrapolated. Each approach varies in its strength of knowledge. The number will change from staircase to staircase, but for the purposes of this example, we will assume 200 people‐trips occur on our staircase each day.

Next, we need to determine the frequency of someone falling. To assess this, we may first need to establish what factors could impact a person falling, which could be determined during a stage of research on staircase use. For instance, we may determine that the stairs’ slipperiness is a major factor in the probability of someone falling and breaking an arm or leg. This means our likelihood of falling should be based on various levels of slipperiness. In turn, we need to identify some assumptions regarding how slippery the stairs are and how often they are that slippery.

Although we could extend this back to assumptions regarding weather conditions, floor materials, and so on, we can assume for now that the stairs are always in the same condition, with a BCRA slipperiness rating of excess slipperiness (0.20 < μ < 0.39) (SPA, 2024).

Given the slipperiness, we can assign rates (frequencies) of a person falling and breaking an arm or leg. For excess slipperiness, we assume 38 in 10,000 people will fall and break an arm or leg (Crist, 2017). We can list our explicit assumptions so far:

H1: There are 200 people‐trips on the staircase each day.

H2: The staircase has excess slipperiness.

H3: 38 in 10,000 people fall and break an arm or leg at excess slipperiness levels.

H4: A person takes 10 trips per day.

(Implicit assumptions not explicitly written above—like the presence of an elevator, the length of the staircase, or that slipperiness impacts falling—are also made here. We will discuss implicit assumptions more in Sections 3.2 and 4.)

Based on assumptions H3 and H4, we can expect 38 falls per 100,000 people‐trips. Therefore, with 200 people‐trips per day (H1), the expected number of falls on the staircase resulting in a broken arm or leg is 0.076 per day. For a more rigorous technical analysis, we could introduce uncertainty considerations (expressed with Q) into each assumption to consider several scenarios or ranges of values.
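The arithmetic can be verified in a few lines. This is a minimal sketch under assumptions H1, H3, and H4 only; the constant names are ours:

```python
TRIPS_PER_DAY = 200              # H1: people-trips on the staircase per day
FALLS_PER_PERSON = 38 / 10_000   # H3: 38 in 10,000 people fall and break a limb
TRIPS_PER_PERSON = 10            # H4: trips one person takes per day

# Combining H3 and H4 gives the per-trip rate: 38 falls per 100,000 people-trips.
falls_per_trip = FALLS_PER_PERSON / TRIPS_PER_PERSON

expected_falls_per_day = TRIPS_PER_DAY * falls_per_trip
print(expected_falls_per_day)  # 0.076, matching the figure in the text
```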

This example illustrates the standard (albeit very simplified) process of conducting a risk assessment.

3.2. The role of time

Recently, Logan et al. (2021) demonstrated the necessity of explicitly including time in the risk concept and the risk description, addressing Haimes' first statement from Section 2.1 about making time explicit in risk analysis of complex SoS. First, the risk definition was revised by Logan et al. (2021) to include the time interval [0, τ] over which an activity (α) is observed and the time horizon (η) in which consequences are observed after an event (A). Thus, the revised risk definition can be expressed as Risk = (A, C, U)(ατ, η). Similarly, the authors revised the risk description, which is then expressed as risk description = (A′, C′, Q, K)(ατ, η).

Our staircase example in Section 3.1 implicitly evaluated the activity of staircase use over 1 day. To make time explicit, we could state an activity time interval of [0,24] hours and a consequence observation time horizon of one week (to give time for x‐rays and a formal diagnosis of a broken bone).

Once time is explicit, we can also decide to consider the staircase example over a longer time interval. For instance, let us change the time interval from [0,24] hours to [0,365] days.

Making the time horizon and time intervals explicit, then varying them from an interval of 1 day to 365 days, will affect the probabilities (p, as part of Q) of both the specified events and the specified consequences. The specified events and specified consequences could change with different time parameters as well, but—for simplicity—we limit the staircase example to only one specified event and one specified consequence. However, in what ways exactly do the probabilities captured in Q change? Additionally, how is our knowledge (K) influenced by our choice of time parameters?

At this stage, we can make one of two choices. The first is to decide that the assumptions (and the probabilities predicated on them) are unchanging over time (i.e., 200 people‐trips occur on the staircase every day no matter what). Based on our assumptions, we can expect 38 falls per 100,000 people‐trips, so with unchanging assumptions regarding usage and slipperiness, we could expect 27.74 falls/year. The second choice is to recognize that the assumptions themselves may change over the time interval, which is the subject of Section 4.
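Under the static-assumptions choice, making the activity time interval [0, τ] explicit amounts to one extra parameter. A sketch, reusing the rates above:

```python
def expected_falls(trips_per_day: float, falls_per_trip: float, tau_days: int) -> float:
    """Expected falls over the explicit activity interval [0, tau_days],
    assuming usage and slipperiness (H1, H2) are unchanging over time."""
    return trips_per_day * falls_per_trip * tau_days

print(expected_falls(200, 38 / 100_000, 1))    # 0.076 falls for a 1-day interval
print(expected_falls(200, 38 / 100_000, 365))  # ~27.74 falls for a 1-year interval
```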

3.3. Next step

When we begin to consider time in the risk description, we must acknowledge how certain assumptions and probabilities could be subject to changing over time. In fact, when attempting to mitigate risks, our interventions often directly affect assumptions and probabilities over time. To determine how initial assumptions may change over time, we must determine the variety of ways initial assumptions could be impacted. Once this is completed, establishing a clear and operational way of including the change in assumptions and communicating that in the risk assessment is necessary (Beard, 2004). In the following section, we propose several ways that the initial assumptions and probabilities can change over time, along with a way to formalize the process in a risk assessment.

4. EVOLUTION OF ASSUMPTIONS AND PROPOSED CONCEPT

Initial assumptions are typically made because there is a lack of knowledge. However, we can recognize that our knowledge will change over time, influencing how we understand and act on the given risk. Because assumptions are related to the level of knowledge, we can anticipate how assumptions may change based on how the system in question will change and how the level of knowledge about the system will change over that same time. As we observe the system, we may intervene in the system to mitigate risk. We may notice interactions in the system that were previously ignored or misunderstood. Something might happen to the system that fundamentally alters the relevance of the initial assumptions. In each case, there is an informational feedback loop between learning and acting on that learning.

Informational, or knowledge‐based, feedback loops center on the interaction between the risk analyst or decision‐maker and the system in question. The value of recognizing the informational feedback loop for risk assessments is that it clarifies where a risk analyst expects assumptions to evolve due to new information and where they do not, or at least where they consider the evolution in assumptions. Thus, though an assumption is necessary initially, there is a recognition that the assumption will change and some acknowledgment of how.

We consider three ways that initial assumptions and the probabilities that are based on them can change over time: interventions, external shocks, and feedback loops (Figure 1). Interventions here refer to intentional system changes due to either imagined or observed consequences, whereas external shocks refer to unintentional system changes. Feedback loops represent internal system dynamics or phenomenological features of a system that, depending on their inclusion or not, will affect the way an assumption changes over time. All three examples—in a way—represent a form of informational feedback loops, with the assumptions requiring updating due to new ways of understanding the interactions with the system.

FIGURE 1. The three main ways that assumptions can evolve, as proposed in this article.

These three ways can impact the initial assumptions, and they require assumptions of their own to be added to the risk description. The assumptions related to these three ways can be termed "revised assumptions." They are defined and elaborated with examples in Sections 4.1–4.3.

4.1. Interventions

When dealing with an activity (α) taking place over time and with consequences (C) occurring over time, that time provides humans a chance to intervene or mitigate risk. Interventions (M) involve an intentional change to the system enacted by human agents to mitigate risk. Such a change may affect all parts of the risk description. For example, the intervention could make an initially specified event impossible or make a new event possible. Or the intervention could invalidate an assumption and require a new assumption to be made, thereby also altering any probability that is based on this assumption.

Interventions can be either pre‐defined or novel responses to consequences that occur over the time interval considered. Pre‐defined responses are identified in the risk description, whereas novel responses are identified after the risk description and while observing the activity over time. Currently, the influence of interventions on assumptions specifically and the risk description more broadly remains to be conceptually clarified. We now apply the conceptual risk description with interventions included to the staircase example.

In the staircase example, several possible interventions can serve as demonstrations. First, an intervention could be the regular drying of the stairs to try to reduce the slipperiness coefficient. Second, assuming the staircase is outside, an intervention could be the installation of a cover to keep weather from making the stairs slippery. Third, an intervention could be the installation of an elevator. We can list them like so:

M1: Dry the stairs every 5 days.

M2: Build a cover over the staircase.

M3: Install an elevator nearby.

At the time of establishing the risk description, before the activity takes place, these interventions have not yet taken place and must be seen as assumptions and, as such, part of the background knowledge (K). If one (or all) of these interventions took place, at least one initial assumption would be fundamentally altered. Regularly drying the stairs would affect the assumptions H1 (the staircase daily usage) and H2 (the staircase slipperiness), and it would need to occur indefinitely to do so. Building a cover over the staircase would affect assumption H2 (slipperiness) and would be a one‐time intervention. Installing an elevator near the staircase would affect assumption H1 (usage). Let us apply the regular drying intervention (M1) to our staircase risk assessment and note what has changed (highlighted in italics):

Q: We express the uncertainty using probability based on empirical evaluations of staircase use and falling data with respect to how time parameters and revised assumptions affect probabilities.

K: The knowledge on which (A′, C′, Q) are based, including assumptions, observational data, surveys, and expert judgment; specifically, the explicit assumptions (H):

H1|M1: The initial assumption (H1) was that there are 200 people‐trips on the staircase each day. Given that the intervention M1 is made, usage drops to 175 people‐trips on drying days because the staircase is briefly closed for drying.

H2|M1: The initial assumption (H2) was that the staircase has excess slipperiness. Given that the intervention M1 is made, the staircase has satisfactory friction on the drying day and for the following three days. Since drying occurs every fifth day, the staircase has excess slipperiness on the fourth day after drying (1 out of every 5 days).

The updated risk description above illustrates two features: (1) how an intervention can affect the way an initial assumption changes over time; and (2) how an intervention—or revised assumption—cannot stand alone but instead requires an initial assumption to be relevant. An assumption about regularly drying stairs is not notable in the risk assessment unless there already exists an initial assumption about slipperiness. Furthermore, by explicitly defining the interventions that could alter the initial assumptions or probabilities, we can also identify some tacit initial assumptions that were made. For instance, three tacit initial assumptions were that regular drying was not already taking place, that the staircase was outside, and that no elevator already existed. Thus, defining interventions explicitly helps clarify the initial assumptions existing in and affecting the current risk assessment.

Returning to the staircase example with interventions included, we now have conditional changes to assumptions H1 and H2 based on the example intervention, and we have added a fifth assumption (H5: at satisfactory friction, 5 in 100,000 people‐trips result in a fall and a broken arm or leg). We are uncertain when the intervention will be implemented, so we can consider this uncertainty probabilistically; we elaborate on two forms of implementation in the following paragraphs. For simplicity, let us assume the intervention occurs halfway through the period, even though it may be more realistic that the intervention is conditional on the specified events or specified consequences occurring. If the intervention is implemented halfway into the time interval, then we can calculate the expected number of falls as shown here:

(200 people‐trips/day × 38 falls/100,000 people‐trips × 183 days)
+ (200 people‐trips/day × 5 falls/100,000 people‐trips × 3/5 × 182 days)
+ (175 people‐trips/day × 5 falls/100,000 people‐trips × 1/5 × 182 days)
+ (200 people‐trips/day × 38 falls/100,000 people‐trips × 1/5 × 182 days)
= 18.08 falls/year.

This example demonstrates the effectiveness of the intervention, reducing the expected number of falls per year from 27.74 to 18.08, given that decision‐makers intervened halfway through the year. It demonstrates how slipperiness and usage can change over the course of 1 year given an intervention, which can be an important feature in understanding the risk of the staircase over time. The example also demonstrates how the moment of implementation is a key consideration in how assumptions evolve over time and, eventually, when comparing between interventions. Considering an intervention's uncertainty through probabilities can be better operationalized by classifying interventions into two categories.
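The piecewise calculation can be reproduced directly. In this sketch, the satisfactory-friction rate of 5 falls per 100,000 people-trips reflects the fifth assumption noted above; the variable names are ours:

```python
EXCESS = 38 / 100_000        # falls per people-trip at excess slipperiness (H3)
SATISFACTORY = 5 / 100_000   # falls per people-trip at satisfactory friction (H5)

# First 183 days: no intervention, so H1 and H2 hold unchanged.
before = 200 * EXCESS * 183

# Remaining 182 days, with drying every fifth day (M1):
after = (200 * SATISFACTORY * (3 / 5) * 182    # 3 of 5 days: satisfactory friction (H2|M1)
         + 175 * SATISFACTORY * (1 / 5) * 182  # 1 of 5 days: drying day, reduced usage (H1|M1)
         + 200 * EXCESS * (1 / 5) * 182)       # 1 of 5 days: slipperiness back to excess

print(round(before + after, 2))  # 18.08 falls per year
```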

Regarding the aspect of when an intervention is implemented, we suggest distinguishing between two types of interventions: semi‐dynamic and fully dynamic interventions. Semi‐dynamic interventions represent human responses that occur at predictable thresholds or in response to predictable specified consequences (Figure 2). These are pre‐defined rules: depending on the occurrence of a pre‐specified event or consequence, some pre‐defined intervention is implemented. In our staircase example, deciding to install an elevator once 27 falls have occurred would make the intervention semi‐dynamic, as would deciding to regularly dry the stairs after a certain number of falls had occurred. However, what if the intervention were more vaguely expected? We might dry stairs or install an elevator but without parameters. This could be considered a fully dynamic intervention. Compared with fully dynamic interventions, semi‐dynamic interventions would be simpler to model and have a smaller range of uncertainty, expressed as probabilities.

FIGURE 2. A visual demonstration of a semi‐dynamic intervention and its effect on initial assumptions.

Fully dynamic interventions represent human responses to specified events and assumptions that could feasibly occur, but when or why the intervention would take place remains unclear (Figure 3). We expect these interventions to be made in the future depending on how the activity develops and the associated consequences. Interventions that could occur in response to consequences not yet specified may also fall in this category. For instance, midway through observing our activity, a new specified consequence of the stairs breaking could become significant enough that the previously suggested interventions aimed at slipperiness do not apply as much as those aimed at structural stability. A possible intervention could be defined in the beginning that may be dynamic to undefined consequences, such as the regular maintenance of the stairs. Introducing such an intervention has a greater uncertainty since it is not tied to a specific pre‐defined consequence, though we might expect such an intervention to be employed at some point. Differentiating between semi‐dynamic and fully dynamic interventions is important for how uncertainty is handled for each intervention type (e.g., fully dynamic interventions may have greater probability ranges or alternative probability distributions).

FIGURE 3. A visual demonstration of a fully dynamic intervention and its effect on initial assumptions.
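One way to operationalize a semi-dynamic intervention is as a pre-defined trigger rule inside a simulation. The sketch below fires the drying intervention once cumulative falls cross a threshold; the threshold value and the random seed are our own illustrative choices, not prescribed by the framework:

```python
import random

random.seed(1)
EXCESS, SATISFACTORY = 38 / 100_000, 5 / 100_000  # assumed per-trip fall rates
THRESHOLD = 10  # pre-defined rule: start drying after 10 cumulative falls (illustrative)

falls, intervened = 0, False
for day in range(365):
    if intervened and day % 5 != 4:  # drying keeps friction satisfactory 4 of 5 days
        trips = 175 if day % 5 == 0 else 200  # drying day briefly closes the staircase
        rate = SATISFACTORY
    else:
        trips, rate = 200, EXCESS
    # Each people-trip independently results in a fall with probability `rate`.
    falls += sum(random.random() < rate for _ in range(trips))
    if falls >= THRESHOLD:
        intervened = True  # semi-dynamic: the rule was fixed before the activity began

print(falls)  # total falls in one year under the threshold-triggered intervention
```

A fully dynamic variant would replace the fixed threshold with uncertainty about whether and when the rule fires, widening the associated probability ranges, as discussed above.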

Additionally, interventions can be either one‐time or ongoing. Building a cover and installing an elevator are one‐time interventions, whereas regularly drying stairs is an ongoing intervention. The distinction is important for how interventions interact with feedback loops in the risk assessment. A one‐time intervention has a one‐time impact on an initial assumption unless it affects an existing feedback loop, whereas an ongoing intervention can fluctuate over time, causing the initial assumption to change dynamically. Ongoing interventions not only can impact a feedback loop but can create new ones as well. An intervention may also never take place, due to changing public perception, government policy, or feasibility. In the next subsection, feedback loops and their impacts on initial assumptions are explained, specifically addressing ways that initial assumptions can change over time without human intervention in the system.

4.2. Feedback loops

Feedback loops are cyclical interactions between initial assumptions and any of the following: events, consequences, or other assumptions. Because feedback loops are informational and phenomenological, specified consequences can occur several times in our activity time interval due to separate feedback loops or events. In the linear conception of time shown in Logan et al. (2021), events lead to consequences that lead to new events in a sequential chain. However, feedback loops introduce the possibility that events and consequences are cyclical rather than distinctly sequential. Thus, feedback loops are not subject to time, though some may shift in magnitude over time. This dynamic state makes it difficult to precisely pinpoint "risk"; hence the need to make feedback loops explicit in the risk description.

In the staircase example, a simple illustration of a feedback loop is provided in Figure 4. In the figure, we observe the feedback loop between the usage assumption (H1) and both the specified event of falling and the specified consequence of breaking a limb.

FIGURE 4. An inherent feedback loop in the staircase example. This is a balancing feedback loop.

With this simple feedback loop in mind, we can once again modify the risk assessment to account for how initial assumptions may change over time. To focus on feedback loops, we do not include the interventions introduced earlier. Additionally, we assume that each fall reduces staircase usage by 5%, and each day without a broken limb from a fall would increase usage (up to a maximum of 200 people‐trips each day) by 0.65%. Using L to denote feedback loops, we write the risk description like so:

K: The knowledge on which (A′, C′, Q) are based, using observational data, surveys, and expert judgment. Includes the assumptions (H) used for (A′, C′, Q):

H1|L1: There are 200 people‐trips on the staircase each day. Once the feedback loop is considered, the number changes dynamically over time relative to the number of falls and broken limbs experienced.

Feedback loops require differential equations to model exactly, but the following coarse estimates can demonstrate the point we aim to make here. If the first fall occurs on Day 13, then Day 14 would experience 190 people‐trips (a 5% reduction in usage). Each day afterward without a fall and broken limb would add trips back based on the number of people‐trips occurring. For instance, Day 14's 190 people‐trips performed safely would increase usage up to about 191 people‐trips (0.65% of 190 added to 190), steadily increasing back to 200 people‐trips if no more falls occurred. The time at which the consequence of falling and breaking a limb occurs plays a key role in the final result of a risk assessment that includes feedback loops. If several falls occurred in quick succession, usage could drop as low as 50 people‐trips per day or 0 people‐trips per day. If the latter, usage would not return to 200 people‐trips per day given our assumptions. If the falls were spread out so that usage could always return to 200 people‐trips per day, then we would expect a slightly different result regarding the number of falls over the year. The feedback loop illustrated here is a consequential feedback loop that inherently exists in the system, but we suggest that at least one other style of feedback loop is useful to identify in risk assessments.
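The coarse estimates can also be made concrete with a short simulation of this consequential feedback loop, using the stated 5% reduction per fall and 0.65% daily recovery. The stochastic details (independent per-trip falls, the seed) are our own simplifications:

```python
import random

random.seed(7)
FALL_RATE = 38 / 100_000  # falls per people-trip at excess slipperiness (H3)

usage = 200.0  # H1|L1: starts at 200 people-trips/day, then evolves with the loop
total_falls = 0
for day in range(365):
    falls_today = sum(random.random() < FALL_RATE for _ in range(round(usage)))
    total_falls += falls_today
    if falls_today > 0:
        usage *= 0.95 ** falls_today        # each fall reduces usage by 5%
    else:
        usage = min(200.0, usage * 1.0065)  # a safe day recovers usage by 0.65%

print(total_falls, round(usage))  # falls over the year and end-of-year usage
```

Running this many times would give a distribution over yearly falls, which is one way to express Q once the feedback loop is included.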

We suggest referring to these feedback loops as consequential feedback loops to distinguish them from the overarching informational feedback loop that occurs as the risk analyst learns more about the system over time. A consequential feedback loop involves existing or inherent feedback loops that must be considered, such as the usage and falling relationship in the staircase example.

These feedback loops can be strengthened or weakened with certain measures, but it is difficult to create or destroy them. For example, it would be difficult to destroy the feedback loop between usage and falling without extreme measures, such as somehow not letting anyone find out that another person fell and broke a limb. Even in such a scenario, the feedback loop between usage and falling is neither created nor destroyed; people simply assume the staircase is safer than it really is. These types of feedback loops can directly cause an initial assumption to evolve as time progresses.

The informational feedback loop introduced at the beginning of Section 4 is centered on the interaction between decision‐makers or risk analysts and the system in question. It encompasses all three of the ways assumptions can evolve (interventions, feedback loops, and external shocks). To distinguish this from the consequential feedback loop just described: informational feedback loops focus on learning over time (observing assumptions changing and the system evolving), and this learning then informs how the risk assessment evolves in parallel. In the staircase example, as assumptions about future usage and falling data become realized for the system, those assumptions can be updated. Perhaps after the first 5 years, it becomes clear that usage is more like 250 people‐trips on the staircase each day, or that falling impacts usage differently over time than previously considered. Such informational feedback would alter the risk assessment and the recommendations regarding the staircase. This would become even more pronounced if the system were interacting with decisions informed by the risk assessment, potentially requiring ongoing and dynamic system monitoring. When introducing a knowledge‐based feedback loop to our risk assessment, we expect our assumptions to evolve with updated information, simply by explicitly acknowledging that they will evolve with specific information. Unlike consequential feedback loops, knowledge‐based feedback loops can be "created" or "destroyed." The value for a risk analyst in doing this is that it clarifies where they expect assumptions to evolve due to new information and where they do not, or at least where they consider the evolution.
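A knowledge-based feedback loop can be sketched as a periodic re-estimation of an assumption from observation. Below, the usage assumption is revised by blending the prior value with observed data; the blending rule and the numbers are hypothetical illustrations, not a prescribed method:

```python
def revise_usage_assumption(prior: float, observed: list[float], weight: float = 0.8) -> float:
    """Knowledge-based feedback: update the assumed usage toward what was observed.
    A simple weighted average stands in for, e.g., a Bayesian update or re-elicitation."""
    observed_mean = sum(observed) / len(observed)
    return weight * observed_mean + (1 - weight) * prior

h1 = 200.0                            # initial assumption H1
observed = [251, 248, 255, 249, 252]  # hypothetical observed people-trips per day
h1 = revise_usage_assumption(h1, observed)
print(round(h1, 1))  # 240.8: the assumption evolves toward the learned usage level
```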

Including feedback loops is particularly important when combining models with risk assessments. Feedback loops based on models can be implicitly understood, explicitly identified, or completely unknown. They matter especially for risk across scales: risk that builds gradually and is realized all at once in a regime shift, tipping point, or similar transition. In systems where risk arises from a positive or reinforcing feedback loop, this framework—although applied to a simple example in this manuscript—has more significant implications, because it explicitly identifies which feedback loops were included in the calculation and how they impact the risk measurement.

4.3. External shocks

External shocks are a third way we can categorize the evolution of assumptions. An external shock is something that occurs outside the system boundary in question but fundamentally alters the system. A simple way to imagine this for our staircase example is if an asteroid hits the staircase. In this case, the staircase system has been impacted by a larger, encompassing system (the planetary system).

Such an absolute shift in the system would make the question of falling on a staircase no longer relevant. Our system—the staircase—would no longer exist. When evaluating risk in complex systems, boundaries must be drawn (Haimes, 2018). However, by identifying possible external shocks, we can acknowledge those boundaries and that the system in question (even if made up of many sub‐systems itself) is subject to external impacts. Determining which external shocks to include in the risk analysis must be left to the subjective expertise of the risk analyst. For instance, the demolition of the building in 20 years may be a more likely external shock to the staircase than an asteroid hitting the staircase.

We must acknowledge that the demolition of the building could feasibly be an intervention (see Section 4.1), but this is only the case if the demolition is intentional. For instance, the building—or staircase—burning to the ground would be a version of an unintentional demolition. The intentionality is important because it can influence the uncertainty or strength of knowledge. The burning of the staircase will completely change assumptions regarding falls and accidents, but it would not have appeared in either the interventions or feedback loop categories.

It may not seem obvious why external shocks are a necessary category. Why do we care how our assumptions change if the very thing (the staircase) for which we are quantifying risk no longer exists? External shocks (denoted with X) can become critical with minor changes to our staircase example. For example, if we had two staircases by which we were measuring falls, then the burning of one of them would become quite notable. Or, for another example, the external shock could be more subtle. An external shock might introduce a hidden variable into the system: an earthquake that unhinges the railing without notice or a fire that weakens two or three steps. Both such instances would impact our assumption regarding the number of people who fall.

Let us measure the risk of people falling on the staircase one final time. We introduce the possibility of an external shock: an earthquake. This changes our falling percentage assumption.

X1: The location of the staircase is subject to an earthquake threatening its structural foundation, which could cause a temporarily unsupported railing. An unsupported railing would increase the likelihood of a person falling and injuring a limb.

This external shock will alter our assumptions about falling because it reveals a tacit assumption regarding the railing's role in our assessment of risk. Previously, we did not have an explicit assumption about the railing, but if we introduce an external shock that would make the railing a contributor to risk, then we must identify a new explicit assumption (H6). Nonetheless, uncertainty remains and would have to be accounted for in Q and K. For instance, it is uncertain when such a shock would occur or how long it would impact the system, and we can still apply sensitivity tests to the values we choose.

K: The knowledge on which (A′, C′, Q) are based, using observational data, surveys, and expert judgment. Includes the assumptions (H) used for (A′, C′, Q):

H6|X1: The railing is stable and does not contribute to people falling. Once we consider the external shock, a loose railing will cause 2 in 10,000 more people to fall and break an arm or leg at excess slipperiness levels.

For the calculation, let us say that it takes 10 days for the railing to be fixed. This would result in an expected value of 27.78 falls in 1 year, a little higher than the number of expected falls in Section 3.2.
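The 27.78 figure follows from adding the shock's contribution over the 10 affected days to the baseline from Section 3.2. A minimal check, converting the 2-in-10,000-people increment to a per-trip rate in the same way as with H3 and H4:

```python
baseline = 200 * (38 / 100_000) * 365  # 27.74 falls/year under the initial assumptions
extra_rate = 2 / 100_000               # H6|X1: 2 more falls per 100,000 people-trips
shock = 200 * extra_rate * 10          # 10 days with a loose railing

print(round(baseline + shock, 2))  # 27.78 falls per year
```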

In this staircase example, external shocks may still seem a trivial addition. Yet, in complex systems where surprises occur between and within systems, identifying them can prove useful. For instance, if we were interested in measuring risk for traffic casualties, a spike in international oil or natural gas prices could fundamentally change our assumptions about who is driving and how often. Such an external shock would change our risk measurement. This could be particularly important when comparing the risk between transportation futures.

Finally, we note that external shocks could impact assumptions in a way that increases or decreases risk.

5. DISCUSSION AND CONCLUSION

5.1. Implications

We have shown that there are three ways an initial assumption can evolve, and we have demonstrated how the revised assumptions impact the final risk measurement. We have shown that revised assumptions impact initial assumptions by affecting the probability of A′, the set of A′, the probability of C′, and the range of C′. However, revised assumptions are often tacit assumptions, remaining undocumented, unacknowledged, or both. Making revised assumptions explicit is an important step for clarifying how initial assumptions can evolve over time, for comparing risk mitigation strategies, and for establishing a greater empirical grounding for the risk assessment.

A key feature for differentiating initial and revised assumptions is that revised assumptions require initial assumptions to be meaningful. A revised assumption cannot stand alone. If a revised assumption appears to be an initial assumption, it is likely the case that the initial assumption has simply been made tacitly in the risk assessment. Thus, by approaching risk assessments and assumptions with the proposed framework, overlooked or tacit assumptions may become revealed.

Perhaps the most significant value of this framework is that it more clearly delineates what has been included or excluded in our risk assessment assumptions. It is valuable for comparing risk assessments because it clarifies whether one risk assessment or another included key interventions, feedback loops, or external shocks that ultimately impact the overall risk measurement. If one risk assessment of the staircase included only the installation of an elevator whereas another considered all three interventions, or if one introduced three feedback loops into the assumptions whereas another included only one, the framework makes clear how those differences are considered and included. As risk science is applied to increasingly complex topics such as artificial intelligence and climate change, providing a clear articulation of these assumptions becomes increasingly necessary.

Finally, the approach is valuable for providing more empirical grounding in emerging and complex systems by explicitly identifying the interactions and considerations in the risk assessment. By explicitly tying certain interventions and feedback loops to assumptions, we clarify risk mitigation strategies. Drying stairs impacted our assumption about the slipperiness of the stairs, whereas an elevator could impact our assumption about the usage of the stairs. Through the practice of identifying how an intervention or feedback loop will impact our assumptions, we can gain a firmer understanding of where exactly we are intervening in the system. It can also reveal areas in which we may intervene that had not previously been considered (such as: what is an intervention that impacts our assumption regarding how many trips an individual takes each day?).

5.2. Future steps

The proposed framework has been applied to a simple case study, but its application to other systems—particularly complex systems—is a next step. Furthermore, at the nexus of policy and engineering, there are methods that appear similar to this framework. Comparing the proposed framework directly to other approaches, such as decision making under deep uncertainty (DMDU) or assumption‐based planning frameworks, could be a way to evaluate strengths and weaknesses in the framework's current state.

Additionally, a key aspect that must be reconciled next is the uncertainty associated with revised assumptions. Interventions, feedback loops, and external events will each have various types of uncertainty. A rigorous consideration of the relationship between revised assumptions and uncertainty in the risk assessment would be an insightful next analysis.

This framework introduced three categories of change that assumptions experience, but it has not outlined how to determine which interventions, feedback loops, or external shocks should appear in the risk assessment. Future research could explore whether there are promising strategies to determine this during consultation with experts, simulations, models, and data collection. Strategies that represent emergence—such as serious games or agent‐based simulation—may be useful for such a need.

Beyond these next steps, we can think of several questions that remain: How are assumptions related to one another? Can we identify relationships between interventions and feedback loops? What about relationships between interventions and external shocks? Can we say positive feedback loops are harbingers of interventions? Considering the social amplification of risk framework acknowledges feedback loops, are there connections that can be drawn between this framework and that one?

5.3. Summary

This article proposes a conceptual framework for risk analysts to use when evaluating how assumptions can evolve in risk assessments of systems with varying complexity. Haimes (2018) identified a gap in how risk assessments handle assumptions in complex, dynamic systems, which realistically will involve assumptions that change. From there, we put forward a framework for both methodologically considering how assumptions evolve and incorporating them explicitly in the risk assessment. Using a simplified case study, we illustrated how evolving assumptions impact measured risk. The framework offers three primary benefits for risk analysts: (1) it makes clear what has been included and excluded in the risk assessment; (2) it can highlight gaps in understanding and reveal new places to intervene; and (3) it helps avoid adhering to unrealistic assumptions. This framework can help risk science be more definitive in future research and practice. Having established the framework for a simple system, we plan to next apply it to evaluate risks of emerging technologies and systems with high complexity.

ACKNOWLEDGMENTS

The authors are grateful to a reviewer for many useful comments and suggestions to the original version of the article. We also acknowledge Kylanna Hardaway for her contributions to the figures in the manuscript.

Hardaway, K. , & Flage, R. (2025). A framework for evolving assumptions in risk analysis. Risk Analysis, 45, 2232–2242. 10.1111/risa.70009

DATA AVAILABILITY STATEMENT

The author has provided the required data availability statement unless the article type is exempt and, if applicable, included functional and accurate links to said data therein.

REFERENCES

  1. Aven, T. (2010). On how to define, understand and describe risk. Reliability Engineering & System Safety, 95(6), 623–631. 10.1016/j.ress.2010.01.011
  2. Aven, T. (2012). The risk concept—Historical and recent development trends. Reliability Engineering & System Safety, 99, 33–44. 10.1016/j.ress.2011.11.006
  3. Aven, T. (2013a). Practical implications of the new risk perspectives. Reliability Engineering & System Safety, 115, 136–145. 10.1016/j.ress.2013.02.020
  4. Aven, T. (2013b). Probabilities and background knowledge as a tool to reflect uncertainties in relation to intentional acts. Reliability Engineering & System Safety, 119, 229–234. 10.1016/j.ress.2013.06.044
  5. Aven, T. (2016). Risk assessment and risk management: Review of recent advances on their foundation. European Journal of Operational Research, 253(1), 1–13. 10.1016/j.ejor.2015.12.023
  6. Aven, T., Ben‐Haim, Y., Andersen, H. B., Cox, T., Droguett, E. L., Greenberg, M., Guikema, S., Kroger, W., Renn, O., Thompson, K. M., & Zio, E. (2018). Society for Risk Analysis Glossary. https://www.sra.org/wp‐content/uploads/2020/04/SRA‐Glossary‐FINAL.pdf
  7. Beard, A. N. (2004). Risk assessment assumptions. Civil Engineering and Environmental Systems, 21(1), 19–31. 10.1080/10286600310001605489
  8. Cambridge. (2024). Cambridge dictionary of English language. Cambridge University Press. https://dictionary.cambridge.org/no/ordbok/engelsk/assumption
  9. Crist, C. (2017). Injuries on stairs occur in all age groups and abilities. Reuters. https://www.reuters.com/article/us‐health‐injuries‐stairs‐idUSKBN1CE1Z4/
  10. Flage, R. (2019). Implementing an uncertainty‐based risk conceptualisation in the context of environmental risk assessment, with emphasis on the bias of uncertain assumptions. Civil Engineering and Environmental Systems, 36(2–4), 149–171. 10.1080/10286608.2019.1702029
  11. Flage, R., & Askeland, T. (2020). Assumptions in quantitative risk assessments: When explicit and when tacit? Reliability Engineering & System Safety, 197, 106799. 10.1016/j.ress.2020.106799
  12. Forrester, J. (1997). Industrial dynamics. Journal of the Operational Research Society, 48(10), 1037–1041. 10.1057/palgrave.jors.2600946
  13. Glette‐Iversen, I., Flage, R., & Aven, T. (2023). Extending and improving current frameworks for risk management and decision‐making: A new approach for incorporating dynamic aspects of risk and uncertainty. Safety Science, 168, 106317. 10.1016/j.ssci.2023.106317
  14. Guzik, T. J., Mohiddin, S. A., Dimarco, A., Patel, V., Savvatis, K., Marelli‐Berg, F. M., Madhur, M. S., Tomaszewski, M., Maffia, P., D'Acquisto, F., Nicklin, S. A., Marian, A. J., Nosalski, R., Murray, E. C., Guzik, B., Berry, C., Touyz, R. M., Kreutz, R., Wang, D. W., … McInnes, I. B. (2020). COVID‐19 and the cardiovascular system: Implications for risk assessment, diagnosis, and treatment options. Cardiovascular Research, 116(10), 1666–1687. 10.1093/cvr/cvaa106
  15. Haimes, Y. Y. (2018). Risk modeling of interdependent complex systems of systems: Theory and practice. Risk Analysis, 38(1), 84–98. 10.1111/risa.12804
  16. Kaplan, S., & Garrick, B. J. (1981). On the quantitative definition of risk. Risk Analysis, 1(1), 11–27. 10.1111/j.1539-6924.1981.tb01350.x
  17. Karmperis, A. C., Sotirchos, A., Tatsiopoulos, I., & Aravossis, K. (2014). Risk assessment techniques as decision support tools for military operations. Journal of Computations & Modelling, 4, 67–81.
  18. Khorsandi, J., & Aven, T. (2017). Incorporating assumption deviation risk in quantitative risk assessments: A semi‐quantitative approach. Reliability Engineering & System Safety, 163, 22–32. 10.1016/j.ress.2017.01.018
  19. Ladyman, J., Lambert, J., & Wiesner, K. (2013). What is a complex system? European Journal for Philosophy of Science, 3, 33–67. 10.1007/s13194-012-0056-8
  20. Logan, T. M., Aven, T., Guikema, S., & Flage, R. (2021). The role of time in risk and risk analysis: Implications for resilience, sustainability, and management. Risk Analysis, 41(11), 1959–1970. 10.1111/risa.13733
  21. Meadows, D. (2008). Thinking in systems: A primer (D. Wright, Ed.). Chelsea Green Publishing. ISBN 9781603580557.
  22. Ostwald, J., Bryant, B. P., Ortiz, D. S., Fischbach, J. R., Hoover, M., & Johnson, D. R. (2012). Coastal Louisiana Risk Assessment Model: Technical description and 2012 Coastal Master Plan analysis results. RAND Corporation. https://policycommons.net/artifacts/4830298/coastal‐louisiana‐risk‐assessment‐model/5666952/
  23. SPA, C. G. (2024). Methods for slipperiness—Structures. CIPA GRES. https://www.cipagres.it/en/content/14‐metodi‐per‐la‐scivolosita‐strutture
  24. Tuncel, G., & Alpan, G. (2010). Risk assessment and management for supply chain networks: A case study. Computers in Industry, 61(3), 250–259. 10.1016/j.compind.2009.09.008


