2021 Nov 30;10(3-4):127–137. doi: 10.1007/s41469-021-00107-1

Measuring organizational resilience as a performance outcome

Ekin Ilseven, Phanish Puranam
PMCID: PMC8631567

Abstract

Interest in investigating organizational resilience has surged due to the increasing number of unexpected shocks and disruptions in the global economy. It is more important than ever to have well defined ways of measuring organizational resilience as a precursor to understanding its antecedents. In this article, we discuss the assumptions (regarding choices of counterfactuals and time intervals) needed to operationalize organizational resilience as a performance outcome and identify the minimal set of variables that can be used to estimate the resilience of an organization. We highlight the importance of the choice of time window (rule-based vs. variable) and counterfactuals (absolute vs. relative) to measure resilience.

Keywords: Organizational resilience, Time series analysis


Organizational resilience remains an important topic in both the research and practice of organization design, and it is studied through a diverse set of perspectives and methods (see Table 1). In this note, we propose an approach to measuring organizational resilience that is scalable and generalizable across contexts. Such an approach may prove useful to test the qualitative conclusions drawn from a large variety of inductive studies about resilient organizations that have accumulated in the literature (Van der Vegt et al. 2015).

Table 1.

Empirical works that were published since 2000 in 11 top management journals and operationalized organizational resilience (see Appendix for more information)

Unit of analysis Study Sample/Data Conceptualization Adversity type Measurement Effects
Individual to team Stuart and Moore (2017) NHL data, 2005–2007 Performance outcome Injury events Changes in team performance from injury and rate of recovery Performance drops more and recovers more slowly when an illicit member is injured
Individual to organization Youssef and Luthans (2007)

Study 1: 1032 employees from 135 orgs

Study 2: 232 employees from 32 orgs

Personal trait Continuous—no sudden adversity Block and Kremen’s (1996) Ego-Resiliency Scale Resilience contributes to job satisfaction and work happiness. No effect on org. perf. observed
Individual to organization Peterson et al. (2009) 105 CEOs of tech firms. 46 start-up, 56 established firm Personal trait Continuous—no sudden adversity Psychological Resilience Scale (Wagnild and Young 1993) loading to higher order factor CEO resilience improves firm performance through transformational leadership
Individual to organization Shin et al. (2012) 279 employees of an IT Company in South Korea, 3 weeks before, 5 and 22 months after restructuring Personal Trait Organizational change Block and Kremen’s (1996) Ego-Resiliency Scale Resilience affects commitment to change through positive affect, further contributing to support for change and decreased turnover
Individual to organization Buyl et al. (2019) 92 U.S. commercial banks CEOs, Compustat, 2006–2014 Performance outcome 2008 Financial crisis Extent of drop and time to recovery of YoY quarterly ROA CEO narcissism and bank’s pre-shock riskiness don’t affect drop, but decrease recovery time
Individual to organization Sajko et al. (2021) 269 S&P1500 CEOs, 2003–2011 Performance outcome 2008 Financial crisis Extent of drop and time to recovery of stock prices CEO greed and CSR don’t affect drop, but affect recovery time in expected direction. Several other significant control variables are included
Organization Ortiz-de-Mandojana and Bansal (2016) 195 companies in KLD 400. 1994–2008 Performance outcome Continuous—no sudden adversity Financial stock volatility, stronger sales growth, survival SEPs contribute to lower volatility, higher sales growth, survival, and do not lead to lower short-term performance
Organization DesJardine et al. (2019) 963 U.S. public firms, 2008–2013 Performance outcome 2008 Financial crisis Extent of percentage drop and time to recovery of stock prices (benchmark against Sept 16, 2008) Strategic Social and Environmental Practices (SEP) decrease drop and time to recovery. Tactical SEP affect only recovery
Organization Dai et al. (2017) 724 parent MNEs, 1162 subsidiaries. 1987–2006 Organizational trait Occurrence of war Political Risk Services scores, number of war afflicted countries (subsidiaries) in portfolio, MNE sales Resilience increases exit from country, however slower when MNEs are exposed to war
Community Rao and Greve (2018) 597 municipalities in Norway, 1920–1950 Outcome following disaster Spanish Flu and Spring Frost Extent of drop and time to recovery of cooperative foundings Greater mortality due to Spanish flu decreases cooperative founding relative to spring frost
County Dutta (2017) 58 counties in California, 1990–2010 Outcome following disaster Natural disaster (categorized into novelty and complexity) Number of foundings of human services organizations a year after disaster Greater voluntary association diversity increases founding rates post-disaster

Our literature review revealed 19 empirical works, of which 11 operationalized resilience either as a dependent variable (outcome) or as an independent variable (trait). The table illustrates the diversity in operationalizations and empirical methods

The scope of our analysis is limited by three choices we make. First, our focus is on organizational (rather than individual) resilience, though we take a broad perspective on what an organization is (Puranam 2018). The measure of organizational resilience we propose here can be applied to organizations that are smaller (e.g., divisions, departments and teams) or larger (e.g., alliances, eco-systems, meta-organizations) than a single firm.

Second, our focus is on measuring organizational resilience as an observable outcome, and we are agnostic to the antecedents that produce it. The literature on the mechanisms that underlie organizational resilience is vast, beginning at least with Thompson’s (1967) distinction between buffering vs. adaptation, and recent reviews provide a detailed account of how those seminal ideas have been developed (e.g., Williams et al., 2017; Mithani, 2020). Accordingly, we do not offer any deep elaboration on the antecedents of these outcomes and encourage further research to do so.

Third, we propose to measure resilience in terms of changes to organizational performance after unexpected adversity (Weick and Sutcliffe 2001; Lengnick-Hall and Beck 2009). This means that our approach is ideally suited to measure organizational resilience when time series data on organizational performance are available.

Organizational resilience as a performance outcome

Typically, an organization’s performance after an unexpected adversity (such as the entry of a competitor, Argyres et al. 2015; an unexpected terror attack, Kendra and Wachtendorf 2003; an extreme weather event, Dutta 2017; an epidemic, Rao and Greve 2018; to consider a few) will exhibit a drop in performance at the onset of adversity, as well as a possible recovery. Obviously, the incidence and the magnitude of the adversity experienced by an organization should not be measured by the same performance metric being used to assess its resilience, to avoid circularity.

Intuitively, there are at least four separately measurable components of resilience as an outcome: (a) the magnitude and (b) the rate of the drop in performance, and (c) the magnitude and (d) the rate of the recovery in performance after the adverse event. Several labels have been used to describe these measurable components of performance trajectories, which Ayyub (2014) discusses in detail (see also Tang 2019). For instance, the drop in performance is described as ranging from graceful to brittle as its rate increases. Robustness describes the residual performance, associated with the magnitude of the drop. Lack of full recovery in magnitude indicates scarring. In certain cases, the magnitude of performance recovery in the post-shock period may lead to higher performance than in the pre-shock period, which is labelled antifragility (e.g., Taleb 2012; Kupers and Mullie 2014; Martin 2020).
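
As a concrete sketch, the four components can be read off a discrete performance series once a shock onset is chosen; the data and the onset index below are hypothetical, and degenerate cases (no drop, instant recovery) are ignored for simplicity:

```python
# Sketch: extracting the four intuitive resilience components from a
# discrete performance series. Assumes the shock onset index is known
# and performance is sampled at regular intervals (hypothetical data).

def resilience_components(perf, onset):
    """Return (drop, rate_of_drop, recovery, rate_of_recovery)."""
    pre_level = perf[onset]                 # performance at shock onset
    trough = min(range(onset, len(perf)), key=lambda i: perf[i])
    drop = pre_level - perf[trough]         # (a) magnitude of drop
    rate_drop = drop / (trough - onset)     # (b) rate of drop
    end = len(perf) - 1
    recovery = perf[end] - perf[trough]     # (c) magnitude of recovery
    rate_rec = recovery / (end - trough)    # (d) rate of recovery
    return drop, rate_drop, recovery, rate_rec

# Hypothetical quarterly series: decline over two periods, then recovery
perf = [100, 100, 80, 60, 70, 80, 90, 105]
print(resilience_components(perf, onset=1))  # (40, 20.0, 45, 11.25)
```

Because the final observation (105) exceeds the pre-shock level (100), the recovery magnitude here also captures the antifragile overshoot discussed above.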

While these notions are intuitive, when used in isolation they make implicit assumptions about the window over which observations take place and the counterfactual being used to evaluate performance. To make such assumptions explicit, we build on previous work (such as Tierney and Bruneau 2007; Cimellaro et al. 2016; Zhang et al. 2019; Tang 2019) and define a measure of organizational resilience as the negatively signed cumulative performance difference between a hypothetical performance trajectory π_0 that would have been realized in a world without the shock and the realized performance trajectory π (see the area between the dashed and solid curves in Fig. 1). In formal terms, this corresponds to the integral:

R(α) = -∫_{t_pre}^{t_post} [π_0(t) - π(t; α)] dt,   (1)

where t_pre and t_post correspond to moments in time before and after the shock, π_0 corresponds to the counterfactual performance, π to the actual performance, and α denotes a set of parameters associated with the organizational mechanisms determining resilience. The negative sign ensures that the more the realized performance matches or exceeds the counterfactual, the greater the organizational resilience, and vice versa.
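
A minimal numerical sketch of Eq. (1), assuming evenly spaced observations and simple trapezoidal quadrature; both series are hypothetical:

```python
# Sketch of Eq. (1): resilience R as the negatively signed cumulative gap
# between a counterfactual trajectory pi0 and the realized trajectory pi,
# integrated over (t_pre, t_post) with the trapezoidal rule.

def resilience(pi0, pi, dt=1.0):
    gap = [c - a for c, a in zip(pi0, pi)]      # pi0(t) - pi(t)
    # trapezoidal quadrature of the gap, then flip the sign
    area = sum((gap[i] + gap[i + 1]) / 2 for i in range(len(gap) - 1)) * dt
    return -area

pi0 = [100, 100, 100, 100, 100]   # counterfactual: flat performance
pi  = [100,  80,  60,  80, 100]   # realized: V-shaped shock and recovery
print(resilience(pi0, pi))        # -80.0 (negative => performance lost)
```

A trajectory that matches its counterfactual everywhere yields R = 0, and one that exceeds it yields a positive R, consistent with the sign convention above.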

Fig. 1.


Measurement of the absolute revenue resilience of Martin Marietta Materials Inc., for illustration. The pre-shock period is used for forecasting to construct the counterfactual. The 2008 Financial Crisis impacts the company in Q2 2008. After a 50% drop in its revenues, the company returns to forecasted revenues in Q2 2014, whereas its revenues recover to pre-shock levels in Q3 2013. For companies with high seasonality and growth, forecasting will offer more accurate measures. In the post-shock period, the company improves on its pre-shock trend and experiences larger growth (an anti-fragile outcome)

The four intuitive measures (drop, rate of drop, recovery and rate of recovery) can be shown to be approximations of R and positively correlated with it (please see Appendix). However, there may be negative correlations among them, implying that researchers who use different subsets of these measures may reach opposite conclusions about the same firms. For instance, drop of performance and time to recovery are two variables measured in several works (Table 1). However, time to recovery is a product of a subset of the variables we have derived, given that Time to Rec. = Drop × (1/Rate of Drop + 1/Rate of Rec.). There are two important consequences of using these two variables. First, it omits the possibility that organizations may recover beyond the previous performance, leading to an incomplete description of resilience by omitting the fourth component of the R measure (the level of recovery). Second, when not adjusted for the drop, it becomes a measure of the sum of correlations between the drop of performance and the inverse rate of drop and the inverse rate of recovery; this risks drawing inconsistent conclusions from measurements. We argue that, if all four variables are used together, they constitute a minimal set of variables that can capture an unbiased (even if approximate) measure of the resilience of an organization. Therefore, a key injunction from our analysis is that researchers should ideally aim to use all four measures, or at least make explicit their assumptions about the measures they are not capturing, in order to make the resilience measurement more transparent.
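
The identity Time to Rec. = Drop × (1/Rate of Drop + 1/Rate of Rec.) can be checked with hypothetical numbers (chosen here so the arithmetic is exact):

```python
# Numeric check of the identity Time to Recovery = Drop * (1/rate_of_drop
# + 1/rate_of_recovery), with hypothetical values: a drop of 48 units
# declining at 16 units/quarter and recovering at 8 units/quarter.

drop, rate_drop, rate_rec = 48.0, 16.0, 8.0

time_falling = drop / rate_drop          # quarters spent declining: 3.0
time_rising = drop / rate_rec            # quarters to climb back: 6.0
time_to_recovery = drop * (1 / rate_drop + 1 / rate_rec)

print(time_to_recovery)                  # 9.0 = time_falling + time_rising
assert time_to_recovery == time_falling + time_rising
```

The identity also makes the text's point concrete: time to recovery compresses three of the four components into one number and says nothing about the level ultimately recovered.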

In its general form, this measure directs our attention to two explicit choices, namely, the time interval (tpre, tpost), and the counterfactual trajectory, π0(t). Below we discuss the implications of these choices for observed organizational resilience. We summarize the following discussion in Table 2, highlighting the assumptions, advantages, limitations, and conditions according to which one assumption may be more suitable than the other.

Table 2.

Summary table for our discussion of the underlying assumptions in measuring organizational resilience

Assumption of counterfactual (π0) Counterfactual Assumption Advantages Limitations Preferred over the alternative when…
Absolute resilience Either forecasting or synthetic controls Forecasted/synthetic data is predictive of future performance trajectory Organization-specific measurement

∙ Need to control for dynamic contextual factors to explain the measurement

∙ Requires pre-shock time series

… shocks are cross-sectionally localized (allows construction of synthetic controls)

… shocks are temporally localized (limits shock-specific dynamics)

Relative resilience Population average Population was affected homogenously

∙ Captures population specific contextual factors

∙ Uses existing data

Only comparative to the population … shocks have large non-linear effects on organizational performance (e.g., return of pent-up demand in the post-shock period, as in the airline industry post-COVID19); population dynamics are assumed to capture this effect
Assumption of time window (tpre, tpost) Time window Advantages Limitations Preferred over the alternative when…
Rule-based

Rule based methods

∙ e.g. Shock onset: time step after which there are N consecutive performance declines

∙ e.g. Shock end: time at which 100% recovery to counterfactual

Scalable implementation due to standardization

Rules may omit or misestimate aspects of resilience, e.g. use of 100% recovery…

∙ Omits antifragility/scarring outcome

∙ Misestimates rate of recovery for non-zero trend or seasonal data

… data quality (noise and granularity) is high

… shock effect is identifiable and comparable across units of analysis

Variable time Varied time window (depending on data granularity and availability; e.g. 1/3/5 years; 4/6/8 quarters) going further into past and future, beyond the shock period Captures short-mid-long term resilience outcomes, each potentially highlighting different mechanisms of resilience

∙ Time window is not sharply defined (typical model selection criteria such as AIC etc. are less helpful)

∙ Risk of mixing different shocks (higher compared to rule-based)

… there are theoretical grounds for mechanisms being effective in different time-scales (e.g. diversification in the immediate aftermath, and absorptive capacity in the long-run)

… shock onset and end are difficult to pinpoint

We further offer general guidance on when one assumption may be more valid than the other

Counterfactuals and time windows in measuring organizational resilience

Choice of counterfactual π0(t)

Measuring the absolute resilience of an organization requires determining a counterfactual which describes the performance of the organization as if the shock had not taken place. This approach is, for example, visualized by Ayyub (2014); the labels arise from comparing a realized performance curve with an established counterfactual one. In the organizational context, when exposure to the adverse event cannot be randomized, the determination of absolute resilience relies on forecasting methods. For example, in the context of regional economic organization, Sensier et al. (2016) use macro business cycles to forecast expected performance and determine the drop of performance at each cycle to measure resilience. Figure 1 offers an illustration: Martin Marietta Materials Inc. (MMM) is an S&P500 company in the construction materials industry per GICS categorization, coded as 151020. As the quarterly revenue curve (solid line) shows, during the 2008 Financial Crisis the company suffered a 50% drop in revenues over the course of almost 2 years. To assess absolute resilience, we used the Holt-Winters exponential smoothing method to forecast what would have happened if the shock had not occurred (dashed line beginning in 2008-Q1). In this case, we observe that the actual curve surpasses the forecasted one in 2014-Q1, indicating that MMM experienced an anti-fragile outcome after this shock. However, such forecasting methods rely on time series data up to the moment of the shock and strictly assume that a model derived from past data predicts future outcomes well enough; forecasts over prolonged horizons may not satisfy this assumption.
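
A simplified sketch of the forecasting step: Holt's linear-trend exponential smoothing fit on pre-shock data and extrapolated as the counterfactual. This omits the seasonal term of the full Holt-Winters method, and the smoothing parameters and data are assumed rather than taken from the MMM case:

```python
# Sketch of a forecast-based counterfactual: Holt's linear-trend
# exponential smoothing fit on pre-shock data, then extrapolated over the
# shock period. Smoothing parameters alpha/beta are assumed, not tuned.

def holt_forecast(series, horizon, alpha=0.5, beta=0.5):
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    # extrapolate the final level/trend pair h steps ahead
    return [level + (h + 1) * trend for h in range(horizon)]

# Hypothetical pre-shock revenues with a steady upward trend
pre_shock = [100, 110, 120, 130, 140]
print(holt_forecast(pre_shock, horizon=3))  # [150.0, 160.0, 170.0]
```

For a perfectly linear series the forecast simply continues the trend; with seasonal data such as MMM's quarterly revenues, a seasonal component would be needed, which is why the text uses the fuller Holt-Winters variant.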

An alternative approach measures the relative resilience of an organization, where the counterfactual is the observed performance of a group of organizations that have faced the same adversity. For instance, one can compare the performance of a firm to the industry average, in an industry in which all firms were affected by the adversity at the same point in time. The relative resilience of firms to the COVID19 pandemic, or to the government regulatory restrictions that arose to cope with it, can easily be measured in this way. Measurement of relative resilience is considerably easier, as the counterfactual curve is derived from observable performance. The counterfactual group can also be constructed using synthetic controls (Abadie et al. 2010; Tirunillai and Tellis 2017; Conti and Valentini 2018) ahead of a shock to establish more accurate counterfactuals for the shock period. This measurement assumes that the only heterogeneity in shock response arises from the underlying resilience-related capabilities (the α parameters in Eq. 1), not from differing levels of shocks across organizations. Otherwise, organizations that are hit by smaller shocks may appear to be more resilient, simply because they faced less adversity.
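
A sketch of the relative-resilience computation, with each firm indexed to its own pre-shock level and the peer-group average serving as π_0 in Eq. (1); the firms and figures are hypothetical:

```python
# Sketch of a relative-resilience counterfactual: each firm's performance
# is indexed to its own pre-shock level, and the cross-sectional peer
# average plays the role of pi0 in Eq. (1). All data are hypothetical.

def indexed(series):
    base = series[0]                      # pre-shock observation as base 1.0
    return [v / base for v in series]

def relative_resilience(firm, peers, dt=1.0):
    f = indexed(firm)
    peer_idx = [indexed(p) for p in peers]
    # population counterfactual: mean of the indexed peer trajectories
    pi0 = [sum(vals) / len(vals) for vals in zip(*peer_idx)]
    gap = [c - a for c, a in zip(pi0, f)]
    area = sum((gap[i] + gap[i + 1]) / 2 for i in range(len(gap) - 1)) * dt
    return -area

firm_a = [100, 80, 90, 100]               # shallow drop, full recovery
firm_b = [200, 120, 140, 160]             # deeper drop, partial recovery
peers = [firm_a, firm_b]
print(relative_resilience(firm_a, peers))  # positive: A beats the average
```

Indexing each firm to its own pre-shock level is one simple way to make firms of different sizes comparable; it corresponds to the pre-shock adjustment the text applies to MMM and its peers.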

In Fig. 2a (top), we illustrate the measurement of relative resilience for MMM and its peers, Vulcan Materials Co. (VM) and James Hardie Industries PLC (JHI). A closer look at MMM’s industry shows that it is rather concentrated; the top three companies, each with more than 10% market share, cover almost 80% of the market in the time period of interest. Since the impact of the shock will differ across market shares, we focus on these top three companies to establish the counterfactual through averaging. Adjusting for pre-shock performance (2007-Q4), we see that during the shock period the market leader VM took advantage of its market position and continued to perform better than its peers. However, its relative resilience declines drastically over the longer term. Meanwhile, JHI, MMM’s next closest rival, falls below the population response, only to recover strongly in the post-shock period. Overall, MMM’s relative resilience remains unperturbed. In Fig. 2b, we further illustrate how the choice of peers matters. The resilience of all three companies is considerably underestimated when an industry-wide average is used instead of the peer-based one. To minimize such errors in choosing the appropriate peer group, its bottom-up determination through synthetic control approaches may prove more appropriate. For comparison with the absolute resilience measure, we also illustrate the results following from Eq. (1) for the three companies in Fig. 2a (bottom). We observe that the choice of counterfactual can impact the analysis: MMM turns out to be the most resilient throughout almost the whole observation period. Meanwhile, VM’s early resilience appears under both the relative and the absolute measures.

Fig. 2.

Fig. 2

a (Top) Measurement of relative resilience using Eq. (1). All companies with more than 5% market share are included (the top three companies registered under GICS Industry code 151020 cover more than 75% of the mining and quarrying industry). The third one is dropped, as its downturn (and eventual bankruptcy) during the financial crisis was due to idiosyncratic reasons. Martin Marietta Materials Inc. is resilient during the shock, overall maintaining its performance during and beyond the shock. The market leader Vulcan Materials Co. shows a resilient response in the short term, only to fall behind the peer response in the long run. Finally, James Hardie Industries PLC shows a highly resilient response in the long run, although in the immediate short term it performs worst in the peer group. (Bottom) As a comparison, we show the absolute resilience measure of each company. MMM appears to be the most resilient compared with its own forecasted counterfactual, whereas the rest of its peers continue to underperform. This illustrates how the counterfactual choice can impact the analysis. b Measurement of relative resilience using Eq. (1) with all companies categorized under the same 6-digit GICS code. Resilience is drastically lower for the top companies in the long run, whereas it increases in the short run. More specifically, MMM and James Hardie Industries PLC show an almost non-negative resilience measure. However, MMM remains scarred in the long run, which is not the case in a (top)

Choice of time interval (tpre, tpost)

We see two potential ways of determining the interval of measurement. Many researchers apply a rule-based choice, such as using the time to full recovery to determine the time window. However, its implementation poses several problems that are typical of time series. For example, fast-growing organizations tend to grow fast in the post-shock period as well, which shortens the time to full recovery; here fast growth is a firm-specific parameter that is not part of α. Another issue arises from seasonality, as seasonal fluctuations make it difficult to pinpoint the start and the end of the shock period. Revisiting Fig. 1, we see that MMM’s revenue stream demonstrates a positive trend and strong seasonality, with maxima frequently coinciding with summers and minima with winters. First, the onset and the end of the shock will depend on which quarter is being considered, with Q2s showing quicker recovery than Q4s. Meanwhile, the pre-shock trend and the post-shock trend do not seem to differ significantly; determining the shock duration through the moment of full recovery (without considering the trend) may lead to an overestimation of the recovery duration by 1 year. This has also been noted by Tang (2019). Finally, rules that assume the shock period ends at full recovery directly limit the assessment of the post-shock magnitude of performance recovery, as such gains may be realized long after the recovery period. Using the time to full recovery by definition misses the anti-fragile outcome of MMM observed past 2014. As such, the ease of implementing “time to full recovery” windows comes with crude and potentially misleading approximations to an organization’s resilience.
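
A sketch of a rule-based window of the kind summarized in Table 2: onset is declared after N consecutive declines and the shock ends at 100% recovery of the pre-onset level. The thresholds and data are hypothetical, and the sketch deliberately exhibits the limitation discussed above, ignoring observations after full recovery:

```python
# Sketch of a rule-based time window: shock onset after N consecutive
# performance declines, shock end at the first return to the pre-onset
# level (the "time to full recovery" rule). Thresholds are assumed.

def rule_based_window(perf, n_declines=2):
    onset = None
    for i in range(n_declines, len(perf)):
        if all(perf[j] < perf[j - 1] for j in range(i - n_declines + 1, i + 1)):
            onset = i - n_declines          # last pre-decline observation
            break
    if onset is None:
        return None                          # no shock detected
    for k in range(onset + 1, len(perf)):
        if perf[k] >= perf[onset]:          # 100% recovery rule
            return onset, k
    return onset, None                       # never fully recovered

perf = [100, 102, 95, 88, 90, 96, 103, 108]
print(rule_based_window(perf))  # (1, 6): onset at t=1, recovery at t=6
```

The window ends at t=6 even though performance keeps rising at t=7, so any antifragile overshoot falls outside the measured interval, exactly the omission the text warns about.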

An alternative to this rule-based choice is to consider time windows that stretch into both the past and the future of the shock period. This ensures a better estimation of the magnitude of recovery in the post-shock period. Moreover, one could consider short-term, mid-term, and long-term resilience outcomes, which could offer a more nuanced understanding of an organization’s resilience. In Fig. 2a (top), using Eq. (1), we see that the market leader’s resilient response in the short to mid term did not last. JHI’s post-shock performance, meanwhile, indicates anti-fragility, with strong post-shock revenue growth. On the other hand, the choice of these time windows must take care to avoid including other shocks in the past and the future.

The resilience outcome should be assessed on a single shock basis to avoid construct validity problems. The time window should also be sufficiently localized to capture the resilience response. Nokia’s decline in performance due to competition from Blackberry, Microsoft, and Apple in 2004 lasted a long time (Doz and Wilson 2017). After almost a decade, we observe that Nokia is once again a rising player in the semiconductor and infrastructure markets. Nevertheless, these two performance outcomes are not associated with a response to the same unexpected shock. This tension between the length of the observation window and the need to exclude other adverse events is very similar to that arising in using cumulative abnormal returns on share prices when conducting event studies (e.g., Christie 1983). As with event studies, it may be useful to report resilience measures for different time windows to assess the robustness of conclusions.

While the time window for studying organizational resilience to any particular shock should not accidentally include other shocks, it is, somewhat paradoxically, useful to study multiple shocks in order to estimate the organizational parameters captured by α in Eq. (1) in an unbiased manner, and thereby to say something confidently about the mechanisms and antecedents of an organization’s resilience. This is because an empirical study that considers a single shock is ultimately equivalent to a cross-sectional study: we cannot be sure whether the observed resilience is due to observed organizational features or to unobserved heterogeneity. In contrast, observing resilience to multiple shocks is equivalent to estimating a fixed effect in panel data, which controls for all stable unobserved heterogeneity at the panel level. Table 1 shows that Dutta (2017) and Rao and Greve (2018) are rare instances of works comparing the resilience of organizations across multiple shocks. It is important to note that in such a case the interpretation of an organization’s resilience will differ depending on the nature of the multiple shocks: the literature differentiates between general and specific resilience (Nykvist and Von Heland 2014). When recurring shocks are of the same nature, we measure specific resilience. On the other hand, many scholars currently question whether resilience to the 2008 Financial Crisis is predictive of resilience to the 2020 Coronavirus healthcare crisis; studies spanning such dissimilar shocks may inform us about the general resilience of an organization. Finally, we also highlight that studies identifying the antecedents of organizational resilience (α) through such a longitudinal method must also assume stability in the antecedents over the time frame considered to make consistent inferences.
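
A toy illustration of the panel logic: with a resilience measure per organization-shock pair, averaging within an organization plays the role of a firm fixed effect (organizations, shocks, and R values are all hypothetical):

```python
# Sketch of why multiple shocks help: given a resilience measure R per
# (organization, shock) pair, the within-organization average over shocks
# acts like a firm fixed effect, separating a stable organizational
# component from shock-specific variation. Values are hypothetical.

r = {
    ("OrgA", "2008 crisis"): -30.0, ("OrgA", "COVID-19"): -20.0,
    ("OrgB", "2008 crisis"): -80.0, ("OrgB", "COVID-19"): -60.0,
}

orgs = sorted({org for org, _ in r})
fixed_effect = {
    # two shocks per organization in this toy panel
    org: sum(v for (o, _), v in r.items() if o == org) / 2 for org in orgs
}
print(fixed_effect)  # {'OrgA': -25.0, 'OrgB': -70.0}
```

With a single shock per organization, the fixed effect and the shock-specific deviation would be indistinguishable, which is the cross-sectional limitation the text describes.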

Measurement and research design

So far we have set forth a rather technical discussion of how to measure organizational resilience, summarized in Table 2. In addition, we would like to raise several caveats regarding its application in the context of the larger research design, and address the fundamental building blocks of organizational resilience research, namely the organization, the shock, and the performance. The organization-shock pair constitutes the main unit of analysis, as the performance outcome can only be generated by such a pair. Accordingly, its properties require close scrutiny in the larger research design context.

  1. Consistent unit of analysis The initial response to the 2020 Coronavirus pandemic included some companies, such as General Motors, L’Oreal, and Dyson, taking on crisis (pandemic) specific business activities thanks to their economies of scope. More drastically, some companies engaged in divestitures (e.g., GE divesting GECAS and Dell divesting VMWare) and others filed for bankruptcy (e.g., Hertz). Beyond these observations, Lin et al. (2006) find that 25 out of 80 organizations observed in crisis changed their organizational design. These responses to crises direct our attention to the consistency of the unit of analysis. Researchers need to specify the identity of the organizational entity whose resilience they are interested in; the identity of this entity must be stable, and its performance should remain measurable and consistent before and after the shock.

  2. Appropriate performance metric Markman and Venzin (2014) document that six major banks performed best along seven different performance metrics during the 2008 Financial Crisis. This observation demonstrates that an organization can be assessed along many performance metrics, and the choice will shape the interpretation accordingly. Once again, the research context is the ultimate guide in choosing the performance metric of interest (e.g., see Modica and Reggiani’s (2015) review documenting the various metrics used in regional economics). In the organizational context, we would like to point out that organizations may differ in their purposes, and their performance metric of concern may vary accordingly. Performance feedback and aspiration level theory (Greve 1998) indicate that organizations respond to downturns in some performance metrics more than in others. Considering such context-specific central performance metrics may offer useful leads for choosing the appropriate one.

  3. Shock duration can range from short-term perturbations to long-lasting environmental shifts. We argue that for short-term perturbations, where the organizational environment mostly returns to its original state, forecasting methods facilitating the measurement of absolute resilience are suitable. For long-lasting systemic shifts, measurement of relative resilience may be preferable, as other methods of building counterfactuals become questionable in their long-term accuracy. The nature of the organization-shock pair determines the applicability of these methods. For instance, during the 2008 Financial Crisis, many companies were hit by the lack of financial resources and lower demand due to the recession, whereas banks faced a series of regulation changes corresponding to long-term environmental shifts. Such shock-related factors play an important role in choosing the appropriate method, as illustrated in the last column of Table 2.

We encourage future empirical research to elaborate carefully on these research design elements as well as discuss other properties of these elements that may be impactful in measuring organizational resilience, eventually contributing to and extending the earlier discussions by Carpenter et al. (2001) and Powley et al. (2020).

Conclusion

Several conceptualizations of resilience in terms of performance outcomes following an unexpected adversity are available in the literature (e.g., robustness, anti-fragility, brittleness), but as we have shown, they involve implicit assumptions about time windows and counterfactuals. Counterfactuals can be built through forecasting, leading to absolute measures, or through peer-based estimation, leading to relative measures; time windows can be determined by rules or varied for richer interpretations. These choices have a significant effect on what can be measured and how measurements should be interpreted, and the right choices depend on the research question. A field such as strategic management may value competitive dynamics more and accordingly emphasize relative resilience, whereas organization design may rather be interested in the relationship between certain organizational design choices and absolute resilience. Furthermore, we pointed out three important caveats regarding the match between measurement and research design: the consistency of the unit of analysis, the relevance of the performance metric, and consideration of the shock duration (among other properties) are important research design factors that should at least be discussed transparently. The minimal set of variables derived from Eq. (1) may open up a fruitful research direction aimed at capturing the configurational nature of organizational resilience, more specifically the correlation structure among all four variables of the minimal set. We are optimistic about future theoretical developments in this direction, which may complement research on the mechanisms that produce resilience as an outcome.

Appendix

Deriving the minimal set of variables approximating resilience (R)

For generality, we first denote by $\Delta\pi(t;\alpha) = \pi_0(t) - \pi(t;\alpha)$ the integrand of Eq. (1), i.e., the difference between the counterfactual and the actual performance; this applies to the estimation of both absolute and relative organizational resilience. Next, we separate the integral in Eq. (1) into its components and rewrite it as follows:

$$R_\alpha = -\int_{t_{pre}}^{t_{onset}} \Delta\pi(t;\alpha)\,dt - \int_{t_{onset}}^{t_{min}} \Delta\pi(t;\alpha)\,dt - \int_{t_{min}}^{t_{rec}} \Delta\pi(t;\alpha)\,dt - \int_{t_{rec}}^{t_{afrag}} \Delta\pi(t;\alpha)\,dt - \int_{t_{afrag}}^{t_{post}} \Delta\pi(t;\alpha)\,dt. \tag{A1}$$

The first integral term captures the performance difference between the time when measurement begins and the shock onset $(t_{pre}, t_{onset})$, the second from shock onset until performance reaches its minimum $(t_{onset}, t_{min})$, the third from the minimum until performance fully recovers $(t_{min}, t_{rec})$, the fourth from full recovery to the attainment of a steady antifragile state $(t_{rec}, t_{afrag})$, and the last from equilibration to the end of resilience measurement $(t_{afrag}, t_{post})$. Assuming a steady state before the onset, $\Delta\pi$ is zero up to the shock onset, so the first term vanishes and $\Delta\pi(t_{onset}) = 0$.
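To make the segmented integral in Eq. (A1) concrete, it can be evaluated numerically from a discretely sampled performance series. The sketch below is purely illustrative and not from the original paper: the function name `resilience_integral` and the `breakpoints` tuple are our own notation, and a simple trapezoidal sum stands in for whatever quadrature a researcher prefers.

```python
import numpy as np

def _trapezoid(y, x):
    """Trapezoidal sum of y over x (version-safe stand-in for np.trapezoid)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def resilience_integral(t, pi_actual, pi_counterfactual, breakpoints):
    """Evaluate R_alpha as in Eq. (A1): the negated integral of
    delta_pi = pi_0 - pi, split over the segments defined by `breakpoints`
    = (t_pre, t_onset, t_min, t_rec, t_afrag, t_post)."""
    delta_pi = np.asarray(pi_counterfactual) - np.asarray(pi_actual)
    r_alpha = 0.0
    for a, b in zip(breakpoints[:-1], breakpoints[1:]):
        mask = (t >= a) & (t <= b)
        if mask.sum() >= 2:  # a zero-length segment contributes nothing
            r_alpha -= _trapezoid(delta_pi[mask], t[mask])
    return r_alpha
```

For example, with a counterfactual held flat at the pre-shock level and a triangular performance dip of depth 0.5 between t = 2 and t = 6, the integral evaluates to approximately −1.0, the negated area of the dip.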

To a first-order approximation, we can use the trapezoidal rule (a quadrature method) to estimate the second term:

$$\int_{t_{onset}}^{t_{min}} \Delta\pi(t;\alpha)\,dt = \frac{(t_{min}-t_{onset})\,\Delta\pi(t_{min})}{2} = \frac{\Delta\pi(t_{min})^2}{2\rho_d}, \tag{A2}$$

where $\rho_d$ denotes the average deterioration rate following the shock. The second step follows from the fundamental theorem of calculus:

$$\Delta\pi(t_{min}) = \Delta\pi(t_{onset}) + (t_{min}-t_{onset})\,\frac{\int_{t_{onset}}^{t_{min}} \rho_d(t)\,dt}{t_{min}-t_{onset}} = (t_{min}-t_{onset})\,\rho_d, \tag{A3}$$

where we used $\Delta\pi(t_{onset}) = 0$, and the ratio term corresponds to the average deterioration rate. Given that $\Delta\pi(t_{onset}) = \Delta\pi(t_{rec}) = 0$, we apply the same approximation to the third and fourth terms as well to obtain

$$R_\alpha = -\frac{\Delta\pi(t_{min})^2}{2\rho_d} - \frac{\Delta\pi(t_{min})^2}{2\rho_r} + \frac{\Delta\pi(t_{afrag})^2}{2\rho_r} - \int_{t_{afrag}}^{t_{post}} \Delta\pi(t;\alpha)\,dt, \tag{A4}$$

where $\rho_r$ denotes the average recovery rate; for simplicity, we assume that the average recovery rate after the minimum is reached remains the same until the antifragile state is attained, even after full recovery. The last integral corresponds to measurement far into the future after the shock and should be minimized, if possible, by choosing an appropriate $t_{post}$. In this approximation, the four intuitive measures together approximate the full integral we propose: the rate of performance drop ($\rho_d$), the magnitude of the performance drop ($\Delta\pi(t_{min})$), the rate of performance recovery ($\rho_r$), and the level to which performance recovers ($\Delta\pi(t_{afrag})$).
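As a concrete illustration, the four quantities of the minimal set can be read off a sampled trajectory and combined as in Eq. (A4). This is a sketch under simplifying assumptions (a flat counterfactual, a single trough, and recovery detected as the first return of the performance gap to zero); the name `minimal_set_resilience` is ours, not the paper's.

```python
import numpy as np

def minimal_set_resilience(t, pi_actual, pi_counterfactual, t_onset, tol=1e-9):
    """Approximate R_alpha via the four-variable minimal set of Eq. (A4),
    ignoring the trailing integral beyond t_afrag."""
    dp = np.asarray(pi_counterfactual) - np.asarray(pi_actual)
    i_min = int(np.argmax(dp))                         # deepest point of the drop
    drop = dp[i_min]                                   # Delta_pi(t_min)
    rho_d = drop / (t[i_min] - t_onset)                # average deterioration rate
    i_rec = i_min + int(np.argmax(dp[i_min:] <= tol))  # first full recovery
    rho_r = drop / (t[i_rec] - t[i_min])               # average recovery rate
    dp_afrag = dp[-1]                                  # residual gap at measurement end
    return (-drop ** 2 / (2 * rho_d)
            - drop ** 2 / (2 * rho_r)
            + dp_afrag ** 2 / (2 * rho_r))
```

On the same triangular dip as above (depth 0.5, deterioration and recovery each lasting two time units), the four-variable formula reproduces the exact integral of −1.0, since the trapezoidal rule is exact for piecewise-linear trajectories.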

It is important to note that this linear approximation does not capture descriptions that incorporate non-linearity, such as those offered by Ayyub (2014). For example, the author describes a sudden drop in performance as brittle and a slower drop as graceful. Similarly, he visualizes different recovery trajectories, some accelerating and others decelerating.

However, the trapezoidal rule does offer further insight into the error introduced by the first-order approximation. For example, the absolute value of the error for the integral in Eq. (A2) is bounded by

$$\left|\mathrm{Error}\!\left[\int_{t_{onset}}^{t_{min}} \Delta\pi(t;\alpha)\,dt\right]\right| \le \frac{\Delta\pi(t_{min})^3}{12\rho_d^3} \times \max_{t \in [t_{onset},\, t_{min}]} \left|\Delta\ddot{\pi}(t)\right|, \tag{A5}$$

where $\Delta\ddot{\pi}(t)$ denotes the second derivative of the performance-difference trajectory. Brittle and graceful trajectories differ precisely in this second derivative, so higher-order approximations would be necessary to reduce such errors, which would in turn require a larger set of variables than the four we have proposed.
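The bound can be checked numerically. For a purely quadratic performance gap, the single-interval trapezoidal error attains the bound exactly; the snippet below is a minimal illustration of this fact, not part of the original derivation.

```python
# Trapezoidal error vs. the bound in Eq. (A5) for a quadratic gap
# dp(t) = t^2 on a drop segment [0, 1], where max |dp''| = 2.
a, b = 0.0, 1.0
dp = lambda t: t ** 2
exact = (b ** 3 - a ** 3) / 3.0              # closed-form integral of t^2
trapezoid = (b - a) * (dp(a) + dp(b)) / 2.0  # single-interval trapezoidal rule
error = abs(trapezoid - exact)               # actual quadrature error (= 1/6)
bound = (b - a) ** 3 / 12.0 * 2.0            # (b - a)^3 / 12 * max|dp''| (= 1/6)
```

Because the second derivative of a quadratic is constant, the error equals the bound; for any smoother trajectory the bound holds with slack.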

Details about literature search

We conducted our literature search by searching for the words “resilience”, “resiliency”, or “resilient” on the online platforms of 11 leading management journals: Strategic Management Journal, Academy of Management Journal, Academy of Management Review, Academy of Management Annals, Administrative Science Quarterly, Organization Science, Management Science, Journal of Management, Journal of Management Studies, Organization Studies, and Strategic Organization. We screened only research articles published between 2000 and 2021 in which a search term appears in the title or the abstract. Our sample comprised 44 articles: 20 with a search term in the title and a further 24 with one in the abstract.

Funding

Not applicable.

Availability of data and materials

Not applicable.

Declarations

Competing interests

No competing interests to declare.

Footnotes

1

The condition $\Delta\pi(t_{onset}) = 0$ in Eq. (A2) in the Appendix is not fulfilled for the counterfactual based on the peer average, since $\Delta\pi(t_{onset}) = \bar{\pi}(t_{onset}) - \pi(t_{onset}) \neq 0$, where $\bar{\pi}$ denotes the peer-average performance. We propose removing this initial non-zero difference from the integrand so as to capture only the non-linear response; this is also what we implement in our examples.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Ekin Ilseven, Email: ekin.ilseven@insead.edu.

Phanish Puranam, Email: phanish.puranam@insead.edu.

References

  1. Abadie A, Diamond A, Hainmueller J. Synthetic control methods for comparative case studies: estimating the effect of California’s tobacco control program. J Am Stat Assoc. 2010;105(490):493–505. doi: 10.1198/jasa.2009.ap08746. [DOI] [Google Scholar]
  2. Argyres N, Bigelow L, Nickerson JA. Dominant designs, innovation shocks, and the follower's dilemma. Strateg Manag J. 2015;36(2):216–234. doi: 10.1002/smj.2207. [DOI] [Google Scholar]
  3. Ayyub BM. Systems resilience for multihazard environments: definition, metrics, and valuation for decision making. Risk Anal. 2014;34(2):340–355. doi: 10.1111/risa.12093. [DOI] [PubMed] [Google Scholar]
  4. Block J, Kremen AM. IQ and ego-resiliency: conceptual and empirical connections and separateness. J Personal Soc Psychol. 1996;70(2):349. doi: 10.1037/0022-3514.70.2.349. [DOI] [PubMed] [Google Scholar]
  5. Buyl T, Boone C, Wade JB. CEO narcissism, risk-taking, and resilience: an empirical analysis in US commercial banks. J Manag. 2019;45(4):1372–1400. [Google Scholar]
  6. Carpenter S, Walker B, Anderies JM, Abel N. From metaphor to measurement: resilience of what to what? Ecosystems. 2001;4(8):765–781. doi: 10.1007/s10021-001-0045-9. [DOI] [Google Scholar]
  7. Christie A (1983) On information arrival and hypothesis testing in event studies (Working paper)
  8. Cimellaro GP, Renschler C, Reinhorn AM, Arendt L. PEOPLES: a framework for evaluating resilience. J Struct Eng. 2016;142(10):04016063. doi: 10.1061/(ASCE)ST.1943-541X.0001514. [DOI] [Google Scholar]
  9. Conti R, Valentini G. Super partes? Assessing the effect of judicial independence on entry. Manag Sci. 2018;64(8):3517–3535. doi: 10.1287/mnsc.2017.2794. [DOI] [Google Scholar]
  10. Dai L, Eden L, Beamish PW. Caught in the crossfire: dimensions of vulnerability and foreign multinationals' exit from war-afflicted countries. Strateg Manag J. 2017;38(7):1478–1498. doi: 10.1002/smj.2599. [DOI] [Google Scholar]
  11. DesJardine M, Bansal P, Yang Y. Bouncing back: building resilience through social and environmental practices in the context of the 2008 global financial crisis. J Manag. 2019;45(4):1434–1460. [Google Scholar]
  12. Doz Y, Wilson K. Ringtone: exploring the rise and fall of Nokia in mobile phones. Oxford: Oxford University Press; 2017. [Google Scholar]
  13. Dutta S. Creating in the crucibles of nature’s fury: associational diversity and local social entrepreneurship after natural disasters in California, 1991–2010. Adm Sci Q. 2017;62(3):443–483. doi: 10.1177/0001839216668172. [DOI] [Google Scholar]
  14. Greve HR. Performance, aspirations, and risky organizational change. Admin Sci Q. 1998;43:58–86. doi: 10.2307/2393591. [DOI] [Google Scholar]
  15. Kendra JM, Wachtendorf T. Elements of resilience after the world trade center disaster: reconstituting New York City's Emergency Operations Centre. Disasters. 2003;27(1):37–53. doi: 10.1111/1467-7717.00218. [DOI] [PubMed] [Google Scholar]
  16. Kupers R, Mullie R. Turbulence: a corporate perspective on collaborating for resilience. Amsterdam: Amsterdam University Press; 2014. p. 177. [Google Scholar]
  17. Lengnick-Hall CA, Beck TE. Resilience capacity and strategic agility: prerequisites for thriving in a dynamic environment. San Antonio: UTSA, College of Business; 2009. pp. 39–69. [Google Scholar]
  18. Lin Z, Zhao X, Ismail KM, Carley KM. Organizational design and restructuring in response to crises: Lessons from computational modeling and real-world cases. Organ Sci. 2006;17(5):598–618. doi: 10.1287/orsc.1060.0210. [DOI] [Google Scholar]
  19. Markman GM, Venzin M. Resilience: lessons from banks that have braved the economic crisis—and from those that have not. Int Bus Rev. 2014;23(6):1096–1107. doi: 10.1016/j.ibusrev.2014.06.013. [DOI] [Google Scholar]
  20. Martin RL. When more is not better: overcoming America's obsession with economic efficiency. Boston: Harvard Business Press; 2020. [Google Scholar]
  21. Mithani MA. Adaptation in the face of the new normal. Acad Manag Perspect. 2020;34(4):508–530. doi: 10.5465/amp.2019.0054. [DOI] [Google Scholar]
  22. Modica M, Reggiani A. Spatial economic resilience: overview and perspectives. Netw Spat Econ. 2015;15(2):211–233. doi: 10.1007/s11067-014-9261-7. [DOI] [Google Scholar]
  23. Nykvist B, Von Heland J. Social-ecological memory as a source of general and specified resilience. Ecol Soc. 2014 doi: 10.5751/ES-06167-190247. [DOI] [Google Scholar]
  24. Ortiz-de-Mandojana N, Bansal P. The long-term benefits of organizational resilience through sustainable business practices. Strateg Manag J. 2016;37(8):1615–1631. doi: 10.1002/smj.2410. [DOI] [Google Scholar]
  25. Peterson SJ, Walumbwa FO, Byron K, Myrowitz J. CEO positive psychological traits, transformational leadership, and firm performance in high-technology start-up and established firms. J Manag. 2009;35(2):348–368. [Google Scholar]
  26. Powley EH, Caza BB, Caza A, editors. Research handbook on organizational resilience. Cheltenham: Edward Elgar Publishing; 2020. [Google Scholar]
  27. Puranam P. The microstructure of organizations. Oxford: Oxford University Press; 2018. [Google Scholar]
  28. Rao H, Greve HR. Disasters and community resilience: Spanish flu and the formation of retail cooperatives in Norway. Acad Manag J. 2018;61(1):5–25. doi: 10.5465/amj.2016.0054. [DOI] [Google Scholar]
  29. Sajko M, Boone C, Buyl T. CEO greed, corporate social responsibility, and organizational resilience to systemic shocks. J Manag. 2021;47(4):957–992. [Google Scholar]
  30. Sensier M, Bristow G, Healy A. Measuring regional economic resilience across Europe: operationalizing a complex concept. Spat Econ Anal. 2016;11(2):128–151. doi: 10.1080/17421772.2016.1129435. [DOI] [Google Scholar]
  31. Shin J, Taylor MS, Seo MG. Resources for change: the relationships of organizational inducements and psychological resilience to employees' attitudes and behaviors toward organizational change. Acad Manag J. 2012;55(3):727–748. doi: 10.5465/amj.2010.0325. [DOI] [Google Scholar]
  32. Stuart HC, Moore C. Shady characters: the implications of illicit organizational roles for resilient team performance. Acad Manag J. 2017;60(5):1963–1985. doi: 10.5465/amj.2014.0512. [DOI] [Google Scholar]
  33. Taleb NN. Antifragile: things that gain from disorder. New York: Random House Incorporated; 2012. [Google Scholar]
  34. Tang J (2019) Quantitative assessment of resilience in complex systems (Doctoral dissertation, ETH Zurich)
  35. Thompson JD. Organizations in action. New York: McGraw Hill; 1967. [Google Scholar]
  36. Tierney K, Bruneau M. Conceptualizing and measuring resilience: A key to disaster loss reduction. TR news. 2007;17(250):14–15. [Google Scholar]
  37. Tirunillai S, Tellis GJ. Does offline TV advertising affect online chatter? Quasi-experimental analysis using synthetic control. Mark Sci. 2017;36(6):862–878. doi: 10.1287/mksc.2017.1040. [DOI] [Google Scholar]
  38. Van Der Vegt GS, Essens P, Wahlström M, George G. Managing risk and resilience. Acad Manag J. 2015;58(4):971–980. doi: 10.5465/amj.2015.4004. [DOI] [Google Scholar]
  39. Wagnild GM, Young HM. Development and psychometric evaluation of the Resilience Scale. J Nurs Meas. 1993;1(2):165–178. [PubMed] [Google Scholar]
  40. Weick KE, Sutcliffe KM. Managing the unexpected. San Francisco: Jossey-Bass; 2001. [Google Scholar]
  41. Williams TA, Gruber DA, Sutcliffe KM, Shepherd DA, Zhao EY. Organizational response to adversity: fusing crisis management and resilience research streams. Acad Manag Ann. 2017;11(2):733–769. doi: 10.5465/annals.2015.0134. [DOI] [Google Scholar]
  42. Youssef CM, Luthans F. Positive organizational behavior in the workplace: the impact of hope, optimism, and resilience. J Manag. 2007;33(5):774–800. [Google Scholar]
  43. Zhang L, Zeng G, Li D, Huang HJ, Stanley HE, Havlin S. Scale-free resilience of real traffic jams. Proc Natl Acad Sci USA. 2019;116(18):8673–8678. doi: 10.1073/pnas.1814982116. [DOI] [PMC free article] [PubMed] [Google Scholar]



Articles from Journal of Organization Design are provided here courtesy of Nature Publishing Group
