Journal of Geophysical Research: Atmospheres. 2022 Sep 16;127(18):e2022JD037521. doi: 10.1029/2022JD037521

Which Sudden Stratospheric Warming Events Are Most Predictable?

Dvir Chwat 1, Chaim I Garfinkel 1,, Wen Chen 2,3, Jian Rao 4
PMCID: PMC9540765  PMID: 36248185

Abstract

The predictability of Northern Hemisphere sudden stratospheric warming (SSW) events is considered in 10 subseasonal to seasonal (S2S) forecast models for 16 major SSWs that have occurred since 1998, a larger sample size than has been considered by previous works. The four factors that most succinctly distinguish those SSWs with above average predictability are a preconditioned vortex prior to the SSW, an active Madden‐Julian Oscillation with enhanced convection in the West Pacific, the Quasi‐Biennial Oscillation phase with easterlies in the lower stratosphere, and the vortex morphology (displacements being more predictable). Two of these factors appear not to have been considered in previous works focusing on a large sample of events. Most of these effects are not statistically significant at the 95% level due to the still relatively small sample size, though all would at least marginally exceed a 90% criterion. Combined, however, they account for 40% of the inter‐event spread in SSW predictability, thus indicating that SSWs with favorable precursors are significantly more predictable.

Keywords: sudden warmings, MJO, ENSO, subseasonal predictability

Key Points

  • Predictability of sudden stratospheric warmings (SSWs) is compared among 10 subseasonal to seasonal (S2S) models for 16 events

  • Four factors distinguish SSWs with above average predictability: (a) enhanced West Pacific convection due to the Madden‐Julian Oscillation

  • (b) Quasi‐Biennial Oscillation easterlies near 50 hPa; (c) displacement SSWs; and (d) stratospheric preconditioning before the event

1. Introduction

During a major sudden stratospheric warming (SSW), stratospheric westerly winds in the circumpolar region reverse to easterly winds and temperatures rise over the pole by tens of degrees (Baldwin et al., 2021; Butler et al., 2015; Charlton & Polvani, 2007; Schoeberl, 1978). SSWs are typically followed by anomalous cold air outbreaks in northern Eurasia and enhanced precipitation over Southern Europe and parts of East Asia (Garfinkel et al., 2017; Karpechko et al., 2018; Kolstad et al., 2010; Kretschmer et al., 2018; Lehtonen & Karpechko, 2016; Thompson et al., 2002). As the characteristic time scale of a major SSW and its surface impact extends for several months, accurately predicting SSWs would open a window of opportunity for more reliable probabilistic predictability of surface weather anomalies on subseasonal time scales (Baldwin et al., 2003; Sigmond et al., 2013; Tripathi et al., 2015).

The factors governing the predictability of SSW events are only partially known. Previous work has found that predictability can range from several days to near a month depending on the model used and the specific SSW focused on (Karpechko, 2018; Noguchi et al., 2016; Rao et al., 2019, 2020; Taguchi, 2014; Tripathi et al., 2016). This wide spread may reflect differences in the predictability of different events, in the skill of different forecast systems, and in the method used to quantify successful prediction. For example, raising the model‐lid has been shown to lead to improved predictability of SSWs (Marshall & Scaife, 2010), and the high‐top subseasonal to seasonal (S2S) models examined by Rao et al. (2019), Domeisen et al. (2020), and Rao et al. (2020) typically performed better at capturing SSWs than the low‐top models. Some studies have suggested that split SSWs are more difficult to forecast than displacement SSWs (Domeisen et al., 2020; Taguchi, 2016a, 2018, 2020), though because of the limited sample size the statistical significance of this effect is relatively weak. Accurately capturing the anomalous wave flux in both the troposphere and lower stratosphere that usually precedes SSWs has also been pinpointed as important for SSW predictability (Karpechko et al., 2018; Mukougawa et al., 2005; Taguchi, 2016b, 2018; Tripathi et al., 2016). Relatedly, the deceleration of the westerly winds leading up to SSWs was more predictable in the two models considered by Garfinkel and Schwartz (2017) for SSWs preceded by the phase of the Madden Julian Oscillation (MJO) with enhanced convection in the western Pacific, the phase that has been shown to lead to more SSWs overall (Garfinkel et al., 2012).

Additional factors have been noted to help induce a SSW, though their role for predictability is still not clear. For example, a stronger waveguide for Rossby waves due to a poleward shifted and accelerated vortex accompanied by weaker westerlies in the subtropics leads to a better defined surf‐zone and precedes many SSWs (Baldwin & Holton, 1988; Lawrence & Manney, 2020), though whether such SSWs are more predictable has not been explored in a large sample of SSWs. In addition, the easterly phase of the Quasi‐Biennial Oscillation (QBO) leads to more SSWs, and while case studies have suggested that SSWs during the easterly phase of the QBO may be more predictable (Rao et al., 2019, 2020, 2021), this relationship has not been explored in a large sample of SSWs.

The S2S Prediction project (Vitart et al., 2017) has recently made available a large number of hindcasts and accompanying operational forecasts covering the past few decades. These simulations are all initialized with observed sea surface temperatures and the observed atmospheric state, and as they are used operationally, they can be compared directly to observed variability over the duration of their forecast. Two previous studies have contrasted multi‐model predictability of specific SSWs in the S2S database. Taguchi (2018) considered 4 NH SSWs in the hindcasts of 9 models, while Taguchi (2020) considered 10–11 NH SSWs in the hindcasts of 4 models, and found that the predictability of a SSW varies with event type (vortex split or displacement), the model considered, and the ability to represent the anomalous heat flux. Here we revisit the S2S database and, considering ten models and 16 different major SSWs (a larger sample than any previous study), attempt to answer the following question: what distinguishes SSWs that were well forecast from those that were poorly forecast?

We demonstrate that regardless of the metric used, predictability for SSWs varies from less than five days to almost 20 days depending on the SSW in question. This spread in predictability is associated with a range of factors, including two which appear to have been seldom demonstrated before in such a large sample of events: preconditioning of the vortex and lower stratospheric easterly QBO conditions.

2. Data and Methods

We focus on the 10 modeling centers that have contributed to the S2S Prediction project (Vitart et al., 2017) with output at 10 hPa: the Australian Bureau of Meteorology (BoM), the European Centre for Medium‐Range Weather Forecasts (ECMWF), the China Meteorological Administration (CMA), the United Kingdom Met Office (UKMO), the National Centers for Environmental Prediction (NCEP), the Korea Meteorological Administration (KMA), the Japan Meteorological Agency (JMA), the Institute of Atmospheric Sciences and Climate of the National Research Council of Italy (ISAC‐CNR), Environment and Climate Change Canada (ECCC), and Meteo France (CNRM). Table 1 summarizes the models analyzed in this study. We use the high‐top version of CMA starting in 2004, when its hindcasts first become available, and the low‐top version earlier, when the high‐top version is unavailable. These models differ in the quality of their representation of the stratosphere: the stratosphere is less well resolved in BoM and ISAC‐CNR as compared to the other models (Table 1). Note that we use the high‐top version of ECCC, and download the once‐weekly hindcasts issued both in 2020 and in 2021 to increase the temporal resolution to twice weekly. For the UKMO, we downloaded hindcasts for the operational model in use during 2015 and the winter of 2019/2020, and for the ECMWF, we downloaded data for the model version in use during 2016 and the winter of 2019/2020 (CY41R1/CY41R2 and CY46R1). Real‐time forecasts are used for the three SSWs since 2018.

Table 1.

For the UKMO, We Downloaded Hindcasts for the Operational Model in Use During 2015 and the Winter of 2019/2020, and for the European Centre for Medium‐Range Weather Forecasts, We Downloaded Data for the Model Version in Use During 2016 and the Winter of 2019/2020 (CY41R1/CY41R2 and CY46R1)

S2S model: chosen model version (ensemble members) | Vertical levels | Model top
CMA: BCC‐CPS‐S2Sv1 (4) | 40 | 0.5 hPa
CMA: BCC‐CPS‐S2Sv2 (4) | 56 | 0.1 hPa
NCEP (4) | 64 | 0.02 hPa
ECMWF2016 (11) | 91 | 0.01 hPa
ECMWF2019 (11) | 91 | 0.01 hPa
BoM (33) | 17 | 10 hPa
UKMO2015 (3) | 85 | 85 km
UKMO2019 (7) | 85 | 85 km
KMA (3) | 85 | 85 km
Météo France: CNRM‐CM 6.1 (10) | 91 | 0.01 hPa
CNR‐ISAC (5) | 54 | 6.8 hPa
ECCC: GEPS6 (4) | 45 | 0.1 hPa
JMA: GEPS1701 (5) | 128 | 0.01 hPa

Note. These versions are considered separately. Note that the low top CMA version is used for 1998 through 2003, and the high‐top CMA version since 2004.

We focus on 16 SSWs that have occurred since 1998, ten of which occurred in the period common to all models (1999–2009). These SSWs are listed in Figure 1. For each event, we also consider the El Nino‐Southern Oscillation (ENSO), MJO, and QBO phase immediately before the event. The ENSO state is characterized using the observed Niño3.4 index extracted from monthly mean ERSSTv5 data (Huang et al., 2017) for the calendar month which contains the onset date of the SSW. The MJO state is defined following Wheeler and Hendon (2004), and specifically we compute the average amplitude and phase using the two Real‐time Multivariate MJO Indices from 5 to 15 days before the SSW in order to characterize the MJO state preceding a SSW (motivated by Garfinkel et al., 2012). If the amplitude is below 1.0, then the MJO is considered to be inactive. The QBO state is characterized using the observed zonal mean zonal wind at 50 hPa in monthly mean NCEP CDAS reanalysis data for the calendar month which contains the day of the SSW. The characterization of a SSW as either split or displacement, and also the onset date of the event, follows Table 1 of Cohen and Jones (2011) for earlier events, Tripathi et al. (2016) for the 2013 event, and Rao et al. (2020) and Rao et al. (2021) for the three most recent events. All of these aforementioned factors that may potentially lead to enhanced predictability are poorly correlated with each other for the 16 SSW events considered here (the highest correlation in absolute value is −0.32 between the vortex precursor and the QBO), and hence these factors are all treated as independent external drivers that can lead to vortex predictability. The maximum correlation is even lower if we remove the two most marginal SSW events on 30 December 2001 and 18 January 2003, in which winds barely reversed to easterlies (Figure 1). (The suddenness metric introduced later is well correlated with the QBO; however, it is not well correlated with predictability as defined using the hit rate metric.) Note that the first day of easterly winds can differ among reanalysis products; however, for these events modern reanalyses agree to within 1 day of each other (e.g., Butler et al., 2017).
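
To make the MJO precursor definition concrete, the following is a minimal Python sketch of how the 5–15 day pre‐onset MJO state could be diagnosed from the daily real‐time multivariate (RMM) indices. The DataFrame layout (columns 'RMM1', 'RMM2', and 'phase', indexed by date) and the function name are assumptions for illustration, not the authors' code.

```python
import numpy as np
import pandas as pd

def mjo_precursor(rmm: pd.DataFrame, onset: pd.Timestamp) -> dict:
    """Summarize the MJO state 5-15 days before an SSW onset date.

    Assumes `rmm` is a daily DataFrame indexed by date with columns
    'RMM1', 'RMM2', and 'phase' (as in the BoM real-time index file);
    these column names are an assumption of this sketch.
    """
    win = rmm.loc[onset - pd.Timedelta(days=15): onset - pd.Timedelta(days=5)]
    amp = np.sqrt(win['RMM1'] ** 2 + win['RMM2'] ** 2)        # daily MJO amplitude
    mean_amp = float(amp.mean())
    active = mean_amp >= 1.0                                   # amplitude < 1 -> inactive MJO
    # Most frequent phase in the window; only meaningful when the MJO is active
    dominant_phase = int(win['phase'].mode().iloc[0]) if active else None
    return {'amplitude': mean_amp,
            'phase': dominant_phase,
            'favorable_567': bool(active and dominant_phase in (5, 6, 7))}
```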

Figure 1. Summary of predictability for all 16 sudden stratospheric warmings (SSWs) and 10 models considered in this work. The largest number of days before the SSW in which at least half of the hindcast ensemble members still simulate a SSW (working backwards from the actual SSW date) is indicated. A "0" indicates that this criterion is not met by any initialization prior to the SSW onset date. Institute of Atmospheric Sciences and Climate (ISAC) and National Centers for Environmental Prediction (NCEP) hindcasts are not archived on the subseasonal to seasonal servers for the 2013 event. Note that the low‐top version of China Meteorological Administration (CMA) is used for SSWs in 2003 or earlier, as the hindcasts for the high‐top CMA begin only in 2004. The median predictability excludes Bureau of Meteorology, and also CMA before 2004, as these models are known to suffer from large mean state biases (Lawrence et al., 2022; Schwartz & Garfinkel, 2020; Schwartz et al., 2022). Also indicated are the El Nino‐Southern Oscillation (ENSO), Quasi‐Biennial Oscillation (QBO), and Madden Julian Oscillation (MJO) conditions preceding the SSW, as well as the SSW morphology and peak easterly winds in the 2 weeks after the onset. For ENSO and the QBO, these are color‐coded based on their sign (with neutral in green). For the MJO, Phases 5/6/7 with amplitudes exceeding 1 are colored red. Displacement events are also colored red. Peak easterly winds between 0 and −10 m/s or less than −20 m/s are colored red and blue respectively, with intermediate events in green.

An ensemble member is deemed “successful” if it simulates a SSW within ±3 days of its actual onset date. This definition of a “success” follows Taguchi (2016a), Taguchi (2020), Domeisen et al. (2020), Rao et al. (2019), and Rao et al. (2020). In addition to this hit rate metric, we consider the absolute error, that is, the absolute value of the difference between the ensemble mean predicted zonal wind at 60°N, 10 hPa and the actual zonal wind, averaged within ±1 day of the observed onset date. A “successful” forecast requires the absolute error to be less than 10 m/s, and results are similar if we use a threshold of 5 m/s instead (not shown).
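
As an illustration of the two success metrics, the following is a minimal sketch (not the authors' code) of the ±3 day hit rate and the absolute error computed from a hindcast ensemble of U(60°N, 10 hPa); the array shapes and function names are assumptions.

```python
import numpy as np

def is_hit(u_member: np.ndarray, obs_onset: int, tol: int = 3) -> bool:
    """True if this member reverses U(60N, 10 hPa) to easterlies within
    +/- `tol` days of the observed onset index (the +/-3 day criterion)."""
    window = u_member[max(obs_onset - tol, 0): obs_onset + tol + 1]
    return bool((window < 0).any())

def hit_rate(members: np.ndarray, obs_onset: int, tol: int = 3) -> float:
    """Fraction of ensemble members counted as hits.
    `members` has shape (n_members, n_forecast_days)."""
    return float(np.mean([is_hit(u, obs_onset, tol) for u in members]))

def absolute_error(members: np.ndarray, u_obs: np.ndarray, obs_onset: int) -> float:
    """Absolute difference between the ensemble-mean and observed U(60N, 10 hPa),
    averaged within +/-1 day of the observed onset; a forecast is deemed
    successful here when this error is below 10 m/s."""
    window = slice(obs_onset - 1, obs_onset + 2)
    return float(np.abs(members.mean(axis=0)[window] - u_obs[window]).mean())
```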

3. Results

We begin with a map of the hit‐rate for the SSW that occurred on 22 February 2008 in Figure 2, as this SSW turned out to be the most predictable (as defined by the median hit rate among the models) of the SSWs considered in this study that have not been previously documented, and the only SSW with predictability approaching that of the well‐predicted 1 January 2019 event already documented by Rao et al. (2019). Nearly all models successfully simulated this SSW for initializations up to 10 days before the SSW. For several models, hit rates exceeding 50% are present 15 days before the SSW. All MeteoFrance hindcasts initialized 15 days before the SSW capture it, while half of the MeteoFrance hindcasts initialized 22 days before the SSW still successfully simulate it. The net effect is that the predictability of this event (as for the 1 January 2019 event) for some modeling systems substantially exceeds the deterministic predictability limit of around 10 days for SSWs commonly mentioned in previous work (Domeisen et al., 2020; Taguchi, 2020). In contrast, other SSWs, for example, the 18 January 2003 event, are poorly predicted (Figure S1 in Supporting Information S1).

Figure 2. Initialization dates and ensemble sizes of the hindcasts available from each subseasonal to seasonal model from 22 January to 22 February 2008. The ensemble size is indicated by the number in each grid cell. The color shading in each grid cell denotes the sudden stratospheric warming hit ratio (units: %) of the ensemble members that forecast a reversal of the zonal mean zonal wind at 60°N and 10 hPa from 19 to 25 February 2008 (i.e., a maximum error of ±3 days is allowed). A blank grid denotes that no hindcasts were initialized on the specific day for the corresponding model.

Similar maps of the hit‐rate have been created for all 16 SSWs, and Figure 1 summarizes the predictability of each SSW for each forecast system. Specifically, we list the earliest forecast lead day on which at least 50% of the ensemble members still successfully forecast the SSW. The frequency with which hindcasts are produced and the specific hindcast dates differ among the models, and hence it can be challenging to directly compare forecast skill between models with, say, daily hindcasts and models with hindcasts every 10 days. Nevertheless, there is a general indication that the low‐top models (BoM and ISAC‐CNR) struggle as compared to the high‐top models, in agreement with previous work. Relatedly, the CMA modeling system is more successful at simulating SSWs starting in 2004 than for earlier SSWs, when only the low‐top version of CMA is available. The eight high‐top models differ in their skill for different events, and given the lack of consistent initialization dates, we do not discuss their relative abilities to represent the timing of SSW onset nor grade them. Rather, our goal going forward is to distinguish between SSWs that are relatively more predictable and those that are relatively less predictable, defined using the median predictability among the models. Results are similar if we focus on whether the "success" rate is greater than the "false alarm rate," defined as the number of members that predict an event to occur within versus outside of ±3 days surrounding the actual event (not shown). Results are also similar for the absolute error metric (Section 2) if marginal SSWs are not included, as discussed later.
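
A minimal sketch of how the per‐event predictability in Figure 1 could be computed is given below, assuming the hit rate has already been evaluated for each initialization; the dictionary layout and the excluded‐model labels are placeholders, not the authors' code.

```python
import numpy as np

def predictability_days(hit_rate_by_lead: dict, threshold: float = 0.5) -> int:
    """Largest lead (days before onset) such that, working backwards from the SSW
    date, every available initialization out to that lead still has a hit rate of
    at least `threshold`. Returns 0 if no initialization meets the criterion,
    as in Figure 1. Keys are lead times in days; values are hit rates."""
    best = 0
    for lead in sorted(hit_rate_by_lead):    # shortest leads first
        if hit_rate_by_lead[lead] >= threshold:
            best = lead                       # extend the run of successful initializations
        else:
            break                             # stop at the first failure
    return best

def median_predictability(days_by_model: dict, exclude=('BoM', 'CMA_lowtop')) -> float:
    """Median predictability across forecast systems, excluding models with known
    large mean-state biases (the model labels here are placeholders)."""
    kept = [d for model, d in days_by_model.items() if model not in exclude]
    return float(np.median(kept))
```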

We begin with the three relatively predictable SSWs during the period common to all models (1999–2009): the 22 February 2008, 26 February 1999, and 5 January 2004 events. All three of these events occurred during MJO Phase 6 or 7, and two occurred during strong eQBO conditions and the third during weak eQBO conditions. Both eQBO and MJO Phases 6 and 7 are known to lead to a weaker vortex, and hence are plausibly linked to enhanced SSW predictability. In contrast, two of these three occurred during La Nina and the third during neutral ENSO; La Nina tends to lead to a stronger monthly mean vortex and a reduced probability of easterly winds in the full hindcast ensemble for the models considered by Garfinkel et al. (2019). However, over the period considered in this study the observed relationship between ENSO and SSWs was the opposite, with more SSWs during La Nina (Domeisen et al., 2019), and ENSO may also modulate the impact of the MJO on SSWs (Ma et al., 2020). Favorable precursors, in particular El Nino, MJO Phase 5, and eQBO, were present before the well‐predicted 1 January 2019 event as well (Rao et al., 2019).

These results are generalized to all SSWs in the scatter plots of Figure 3, which compare the median predictability for each event to these long‐duration external forcings. While ENSO is not related to the predictability of SSWs, the QBO and MJO state are (Figures 3a–3c): eQBO and the occurrence of MJO Phase 5, 6, or 7 with amplitude exceeding 1 are associated with more predictable SSWs, though the correlations do not meet the threshold for statistical significance using a two‐tailed Student's t test for 14 degrees of freedom (namely, r(α = 0.05) = 0.5). Even if we were to use a one‐tailed test motivated by previous work that has demonstrated that these modes of variability lead to a weaker vortex, the threshold for significance would be 0.43 and we still could not reject a null hypothesis of no effect; a Monte‐Carlo bootstrapping in which (as an example) the QBO values are randomly re‐assigned to a different SSW 20,000 times indicates an essentially identical threshold (Figure S4 in Supporting Information S1). A Wilcoxon rank sum test of the effect of MJO Phase 5/6/7 on hit rate likewise cannot reject the null hypothesis of no effect at the 95% level. However, both of these correlations would be sufficient to reject the null hypothesis if a confidence level of 90% were used. The importance of the MJO and eQBO is somewhat weaker for the absolute error metric (Figure S3 in Supporting Information S1). While the effect of the MJO on SSW predictability for the metrics used in this study is, at best, marginally significant, other metrics do show a significant impact of the MJO on SSW predictability: Garfinkel and Schwartz (2017) found that models predicted a significantly stronger deceleration of winds for SSWs preceded by MJO Phase 6/7 as opposed to other events, though the deceleration they found does not necessarily lead to a wind reversal as observed.
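
The Monte‐Carlo bootstrapping test could be implemented along the following lines; this is a schematic sketch under the assumption of 20,000 random re‐assignments of the predictor among the 16 events, not the authors' code.

```python
import numpy as np

def bootstrap_corr_threshold(x, y, n_resamples=20_000, alpha=0.05, seed=0):
    """Correlation magnitude needed to reject a null hypothesis of no relationship,
    estimated by randomly re-assigning the predictor values (e.g., the QBO index)
    among the SSW events, as in Figure S4."""
    rng = np.random.default_rng(seed)
    null = np.empty(n_resamples)
    for i in range(n_resamples):
        null[i] = np.corrcoef(rng.permutation(x), y)[0, 1]
    return float(np.quantile(np.abs(null), 1 - alpha))   # two-tailed threshold on |r|

# Example usage (qbo and predictability are length-16 arrays, one value per SSW):
# r = np.corrcoef(qbo, predictability)[0, 1]
# significant = abs(r) > bootstrap_corr_threshold(qbo, predictability)
```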

Figure 3. Scatter plots comparing the median predictability as defined using the hit rate exceeding 50% metric (y‐axis) to each of the following factors (x‐axis): (a) Niño3.4 index [Kelvin]; (b) Quasi‐Biennial Oscillation [m/s]; (c) whether the event was preceded by Madden Julian Oscillation Phase 5, 6, or 7 of amplitude exceeding 1 in the two weeks before the event; (d) precursor pattern in the subpolar upper stratosphere 15–19 days before the sudden stratospheric warming (SSW) using ERA5 reanalysis data [m/s]; (e) U10 hPa, 60N anomalies 5–9 days before the SSW using ERA5 reanalysis data [m/s]; (f) split versus displacement; (g) comparison to absolute error of 10 m/s metric (Figure S2 in Supporting Information S1); (h) a multiple linear regression model using panels b, c, d, and f as given by Equation 1 [days]. Each of the 16 SSWs is indicated with an "x", and the two weakest, most marginal SSWs are indicated in red (18 January 2003 and 30 December 2001). The correlation for each panel is indicated both with and without including these two marginal SSWs. For the 2021 SSW, the median predictability does not include predictions from European Centre for Medium‐Range Weather Forecasts and MeteoFrance, as these modeling centers significantly upgraded their model as compared to the model versions used for all other SSWs.

Thus far we have found some evidence that long‐duration external forcings can contribute to SSW predictability, and now we switch our focus to whether the properties of the SSW itself help distinguish well‐predicted SSWs from poorly predicted SSWs. Specifically, some SSWs are preceded by stratospheric preconditioning while others are not (Lawrence & Manney, 2020). For some, the weakening of the westerlies is gradual, while for others it is more sudden. Finally, some SSWs are splits while others are displacements. Is one type of event harder to predict?

We begin by considering the state of the zonally averaged zonal wind 15–19 days before each of these SSW events using ERA5 reanalysis data in Figure 4, with the SSWs ordered by the median hit rate predictability metric. First, note that the five most predictable SSWs (panels l through p) all occurred during eQBO conditions in the tropical lower stratosphere (though, as shown in Figure 3b, so did several poorly predicted SSWs). Previous work has shown that SSWs are often preceded by a poleward shift of the stratospheric vortex and a weakening of winds in the subtropics, which has the effect of bringing the waveguide for Rossby waves closer to the pole while also strengthening it. We diagnose this effect by computing the difference between zonal wind at 80°N, 5–10 hPa and zonal wind at 40°N, 5–10 hPa (indicated with gray vertical lines in Figure 4). This vortex preconditioning metric is compared to the median predictability metric in Figure 3d, and it is clear that there is a strong connection, one that just misses the 95% threshold for significance for a two‐tailed test (either using a Monte‐Carlo bootstrapping or a Student's t test; Figure S4 in Supporting Information S1). However, this connection would be deemed significant if a one‐tailed test were used, and such a one‐tailed test may be appropriate here due to a priori knowledge that vortex preconditioning should help SSW development. Results are similar for days 10–14, or for the absolute error of 10 m/s metric, though below the threshold for significance.
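
A minimal sketch of the vortex preconditioning diagnostic is given below, assuming the zonal‐mean zonal wind is available as an xarray DataArray with dimensions ('time', 'level', 'latitude'); the dimension names, level ordering, and function name are assumptions, not the authors' code.

```python
import numpy as np
import xarray as xr

def preconditioning_index(u_zm: xr.DataArray, onset) -> float:
    """Vortex preconditioning metric: U(80N) minus U(40N), each averaged over
    5-10 hPa and over days 15-19 before the SSW onset (cf. Figures 3d and 4).

    Assumes `u_zm` is zonal-mean zonal wind [m/s] with dims
    ('time', 'level', 'latitude') and 'level' in hPa increasing downward.
    """
    onset = np.datetime64(onset)
    window = u_zm.sel(time=slice(onset - np.timedelta64(19, 'D'),
                                 onset - np.timedelta64(15, 'D')))
    band = window.sel(level=slice(5, 10)).mean(('time', 'level'))  # 5-10 hPa layer mean
    u80 = band.sel(latitude=80, method='nearest')
    u40 = band.sel(latitude=40, method='nearest')
    return float(u80 - u40)   # positive: poleward-shifted, preconditioned vortex
```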

Figure 4. Latitude‐height cross sections of zonal wind anomalies 15–19 days before the 16 sudden stratospheric warmings in this paper using ERA5 reanalysis data, ordered by their respective median predictability as given by the hit‐rate metric of Figure 1. An "x" marks the location of 10 hPa, 60°N, and vertical gray lines mark the locations used to define stratospheric preconditioning. Black contours denote the 20 m/s and 40 m/s climatological isotachs for the corresponding days.

One might hypothesize that SSWs which develop more slowly (i.e., easterly U10 hPa 60°N anomalies are present well before the onset date) should be easier to simulate, as the initialization already includes a vortex weakening. We examine this hypothesis in Figure 3e, which compares 10 hPa, 60°N zonal wind anomalies 5–9 days before onset using ERA5 reanalysis data (i.e., a "suddenness" metric) to the median predictability. While there is a tendency for SSWs that already feature a weakening of the vortex 5–9 days before onset to have a higher hit rate than SSWs that develop more suddenly, the overall correlation is not significant. Slowly developing easterlies matter more for the absolute error of 10 m/s metric (correlation of −0.36, Figure S3 in Supporting Information S1), which is high enough to confidently reject a null hypothesis of no effect at the 90% level. Note, however, that this metric is well correlated with the QBO across these 16 events (correlation of 0.54, whereby wQBO SSWs tend to be preceded by strong vortex states); hence there is some ambiguity as to whether the QBO or the suddenness metric is more important for SSW predictability when using the absolute error metric.
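
The suddenness metric can be sketched analogously to the preconditioning diagnostic above; the variable and dimension names are again assumptions rather than the authors' code.

```python
import numpy as np
import xarray as xr

def suddenness_index(u60_10hpa_anom: xr.DataArray, onset) -> float:
    """Suddenness metric of Figure 3e: the U(10 hPa, 60N) anomaly averaged over
    days 5-9 before the onset date. More negative values indicate a vortex that
    had already weakened well before the SSW (a less sudden event).
    Assumes a daily time series with a 'time' dimension."""
    onset = np.datetime64(onset)
    window = u60_10hpa_anom.sel(time=slice(onset - np.timedelta64(9, 'D'),
                                           onset - np.timedelta64(5, 'D')))
    return float(window.mean('time'))
```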

Finally, we consider vortex morphology. Figure 3f shows that displacement SSWs are more predictable than split SSWs as measured by the hit rate. If we focus on the absolute error, the correlation increases to 0.46 and the Wilcoxon rank sum test is now at the threshold for significance at the 95% level if a one‐tailed test is used.

The predictability of SSWs as derived from the hit‐rate and absolute error metrics is compared in Figure 3g. The relatively low correlation (0.24) is heavily influenced by two outlier SSWs (marked in red: 30 December 2001 and 18 January 2003), and if these two events are removed the correlation between the metrics rises to 0.62. Both have lower hit‐rates than would be expected given their absolute errors. (Note that results earlier in this paper concerning factors leading to enhanced predictability are robust to removing these two events.) Why might these events have a poor hit‐rate despite a relatively successful absolute error? These two events were the weakest SSWs of the 16 considered, and in order to clarify why this matters, we focus specifically on the 30 December 2001 event, the weaker of the two. Note that this event was poorly predicted using the hit‐rate metric despite many favorable precursors (e.g., a strong MJO Phase 6/7 event) but was the most predictable SSW if the absolute error metric is used (Figure S2 in Supporting Information S1), and indeed was used by Garfinkel and Schwartz (2017) as a case study of how ensemble members which successfully simulate MJO‐related convection tend to better predict the ensuing SSW.

We focus on ECMWF hindcasts of this event in Figure 5, with relatively successful ensemble members in blue and other ensemble members in red. On 26 December 2001, zonal winds at 10 hPa, 60°N in ERA‐I weakened to 2.2 m/s, though only four days later did they actually reverse. Three of the eleven ECMWF initializations from 19 December 2001 simulated a SSW on 26 December 2001 (indicated in blue), and the ensemble mean vortex strength was weaker than observed (Figure 5a). If we focus on the 12 December 2001 initialization, five of the eleven ensemble members simulated a SSW within three days of 26 December 2001 (indicated in blue), and again the ensemble mean vortex strength was more easterly than observed (Figure 5c). Only the 5 December 2001 initialization can be considered an unambiguous forecast bust: most ensemble members struggle to simulate a weakening of the vortex, and only one member simulates a SSW (Figure 5e). While the 19 December 2001 and 12 December 2001 initializations capture the extremely strong pulse of heat flux in the first week (and over‐estimate it for the successful ensemble members initialized on 12 December 2001), the pulse immediately before the SSW is not well represented even in the relatively successful ensemble members, and this late‐developing pulse appears to have been important for the winds reversing on 30 December 2001. The mid‐December pulse of heat flux is under‐estimated by the 5 December 2001 initialization, though the three ensemble members that more realistically simulate a weakening of the vortex also do a better job at capturing this wave flux event. The net effect is that the 30 December 2001 SSW was preceded by a strong and long‐lasting wave pulse which is generally well represented even in initializations 18 days before the event; however, the eventual SSW was comparatively weak and the models struggled to capture its timing due to their failure to capture the late December secondary heat flux pulse. This struggle to capture its timing led to a forecast bust if we adopt the ±3 day criterion used by previous work (Domeisen et al., 2020; Taguchi, 2016a, 2020); however, the absolute error metric reveals this as the most predictable SSW. Such events are the exception, however, and this divergence between the metrics is associated with the relative weakness of the underlying SSW; for non‐marginal SSWs the absolute error and hit‐rate metrics generally agree.

Figure 5. Evolution of the 30 December 2001 sudden stratospheric warming (SSW) in the European Centre for Medium‐Range Weather Forecasts forecast system. ERA‐I reanalysis data are shown in thick black, relatively successful ensemble members in blue (defined as those with a wind reversal on the SSW onset date for the top two rows, and those in which U10 hPa 60°N was below 20 m/s on the SSW onset date for the bottom row), poorer ensemble members in red, and the daily climatology in reanalysis in green. Thick blue and red lines show the ensemble means for the more‐successful and less‐successful ensemble members respectively. (left) zonal mean zonal wind at 10 hPa, 60N; (right) wave‐1 heat flux at 100 hPa from 40 to 80°N. We show the wave‐1 heat flux only as this was a displacement event.

Finally, we consider the question: if all of the favorable precursors are considered together, can they successfully predict which SSW events are more predictable? To answer this question, we form a multiple linear regression model that uses four predictors, namely the QBO, MJO, morphology, and vortex precursor (panels b, c, d, and f of Figure 3), to predict the hit ratio metric. Specifically, we solve for the β coefficients in the equation

\text{hitrate}_{\text{predicted}}(m) = \beta_0 + \beta_{\text{QBO}}\,\text{QBO}(m) + \beta_{\text{MJO}}\,\text{MJO}(m) + \beta_{\text{morphology}}\,\text{morph}(m) + \beta_{\text{vortex}}\,V(m) \qquad (1)

that minimize the residuals of the resulting best fit using ordinary least squares regression across all 16 SSW events, with m ranging from 1 to 16. All four predictors are standardized so that the units of all four β coefficients are identical (days per standard deviation of the predictor).
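
A minimal sketch of the regression in Equation 1, fit by ordinary least squares with standardized predictors, is given below; the function name and input layout are assumptions, not the authors' code.

```python
import numpy as np

def fit_predictability_model(predictability, qbo, mjo, morph, vortex):
    """Fit Equation 1: predictability (days, one value per SSW) as a linear
    function of four standardized precursors. Returns (beta, predicted), where
    beta = [beta_0, beta_QBO, beta_MJO, beta_morphology, beta_vortex]."""
    standardize = lambda a: (np.asarray(a, float) - np.mean(a)) / np.std(a)
    y = np.asarray(predictability, float)
    X = np.column_stack([np.ones(len(y)),              # intercept (beta_0)
                         standardize(qbo),
                         standardize(mjo),
                         standardize(morph),
                         standardize(vortex)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)       # ordinary least squares
    return beta, X @ beta

# The squared correlation between the predicted and actual values gives the
# fraction of inter-event variance explained (about 0.63**2, i.e., roughly 40%).
```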

Figure 3h compares the actual predictability from the models to the predicted hit ratio from the multiple linear regression model. It is clear that the multiple linear regression model can explain much of the inter‐event variability in predictability (correlation of 0.63, significant above the 99% level using either a one‐tailed Student's t test or a bootstrapping test; Figure S4 in Supporting Information S1). The coefficients from the multiple linear regression model indicate the relative importance of each of the four predictors. The most important is the vortex precursor, with a 1.4 day increase in predictability for every 1 standard deviation increase in the vortex precursor metric (i.e., β vortex = 1.4 days per std). The next most important is the MJO, with a 1.1 day increase in predictability for every 1 standard deviation change in the MJO metric (i.e., β MJO = 1.1 days per std). The last two coefficients, β QBO and β morphology, are −0.62 and 0.75, respectively. The relative importance of these predictors differs if one focuses on the absolute error metric (the morphology predictor is now most important, with β morphology = 1.29 days per std), and the correlation of the predicted absolute error with the actual absolute error using these four predictors drops to 0.53. However, replacing the QBO predictor with the suddenness predictor leads to an improved correlation of 0.58 (Figure S3 in Supporting Information S1). Regardless of the metric used to quantify predictability, the net effect is that SSW events preceded by favorable precursors are more predictable.

4. Summary and Discussion

SSWs are associated with a range of surface impacts, and to the extent that SSWs can be predicted on subseasonal timescales, there is hope that the subsequent surface impacts could be predicted at longer leads. Here, we evaluated the predictability of a larger sample of SSWs than has been considered in any previous work, and showed that there is wide diversity in the predictability of different SSW events. Some are predictable 20 or more days in advance in the best performing models, while others can be predicted only about a week in advance even in the best performing models. Previous work using fewer models and fewer cases has focused on the vortex morphology or the existence of tropospheric precursors as important for SSW predictability, with displacement events and events preceded by MJO Phase 6/7 more predictable (Domeisen et al., 2020; Garfinkel & Schwartz, 2017; Rao et al., 2019; Taguchi, 2018). Our results with a relatively larger number of models and SSWs (10 models and 16 SSWs) support this previous work: SSWs preceded by MJO Phase 6/7 and of displacement morphology are indeed more predictable. In addition to these factors, our results also provide evidence for two factors that do not seem to have been noted before. Specifically, SSWs preceded by a tightening of the vortex around the pole and a retraction from the subtropics, and also SSWs during eQBO, are more predictable. There is also an indication that more gradual SSWs are more predictable, especially if an absolute error metric is used. Despite the larger sample assembled here than in previous work, these effects can clearly reject a null hypothesis of no effect only at the 90% confidence level for the metrics considered here. Future work should also consider strong deceleration events (e.g., Wu et al., 2022) that nonetheless do not meet the SSW definition in order to enlarge the sample size.

A commonly used criterion (and the criterion mostly used in this work) for a successful forecast is that the central date of the simulated SSW falls within ±3 days of the actual event. While this criterion is simple to apply and logical for strong SSWs, it may lead to an underestimate of skill for more marginal events. For example, ECMWF initializations 18 days before the 30 December 2001 event performed remarkably well for the first 2 weeks of the forecast (Figure 5) with low absolute errors (Figure S2 in Supporting Information S1), yet the ±3 day hit‐rate criterion judges this forecast to be a failure. This was a marginal SSW event, however, and if this event (and also the 18 January 2003 marginal event) are not considered, the hit rate metric and the absolute error metric agree on the relative predictability of most other events.

An important caveat of our study is that we assume that the currently realized predictability of SSWs in biased forecast systems (Lawrence et al., 2022; Schwartz & Garfinkel, 2020; Schwartz et al., 2022) is a proxy for the "true" potential predictability of these SSW events. While our computation of median predictability only considers high‐top models, which are known to be less biased (Lawrence et al., 2022; Schwartz & Garfinkel, 2020; Schwartz et al., 2022), future work should revisit this question with future generations of high‐top models that have smaller biases. This assumption is particularly suspect for the marginal SSW events as, for example, a relatively small too‐strong vortex bias can lead to the forecast system missing the transition to easterlies (Figure 5). While the vortex is well represented in the ECMWF forecast system (Figure 7 of Schwartz et al. (2022)), biases exist for other systems, and future work should explore the impact of bias correction on the prediction skill of different SSWs.

Regardless of whether we include or exclude marginal SSWs from the sample, we find evidence that there are four distinct factors, of roughly equal importance and not well‐correlated with each other, that help lead to more predictable SSWs: MJO Phase 5/6/7, easterly QBO, displacement morphology, and a strong mid‐ and upper‐stratospheric preconditioning. While individually they are only marginally significant in the still‐small sample available, taken together they are robustly associated with more predictable SSWs, and specifically explain more than 40% of the inter‐event variability in predictability. Future work should consider whether this enhanced predictability can help lead to improved surface forecasts.

Supporting information

Supporting Information S1

Acknowledgments

DC, CIG, and WC are supported by the ISF‐NSFC joint research program (ISF Grant No. 3259/19 and National Natural Science Foundation of China Grant No. 41961144025). CIG and DC were also supported by a European Research Council starting grant under the European Union's Horizon 2020 research and innovation programme (Grant Agreement No. 677756). We thank the two reviewers for their constructive comments on an earlier version of this paper. JR was supported by the National Natural Science Foundation of China (Grant No. 42175069). This work is based on S2S data. S2S is a joint initiative of the World Weather Research Programme (WWRP) and the World Climate Research Programme (WCRP).

Chwat, D. , Garfinkel, C. I. , Chen, W. , & Rao, J. (2022). Which sudden stratospheric warming events are most predictable? Journal of Geophysical Research: Atmospheres, 127, e2022JD037521. 10.1029/2022JD037521

Data Availability Statement

The original S2S database is hosted at ECMWF as an extension of the TIGGE database, and can be downloaded from the ECMWF server (http://apps.ecmwf.int/datasets/data/s2s/levtype=sfc/type=cf/). The QBO data were downloaded from the NCEP website (https://www.cpc.ncep.noaa.gov/data/indices/qbo.u50.index). The real‐time multivariate index of Wheeler and Hendon (2004) was downloaded from the BoM website (http://www.bom.gov.au/climate/mjo/graphics/rmm.74toRealtime.txt).

References

  1. Baldwin, M. P., Ayarzagüena, B., Birner, T., Butchart, N., Butler, A. H., Charlton‐Perez, A. J., et al. (2021). Sudden stratospheric warmings. Reviews of Geophysics, 59(1), e2020RG000708. 10.1029/2020rg000708
  2. Baldwin, M. P., & Holton, J. R. (1988). Climatology of the stratospheric polar vortex and planetary wave breaking. Journal of the Atmospheric Sciences, 45(7), 1123–1142.
  3. Baldwin, M. P., Stephenson, D. B., Thompson, D. W. J., Dunkerton, T. J., Charlton, A. J., & O'Neill, A. (2003). Stratospheric memory and skill of extended‐range weather forecasts. Science, 301(5633), 636–640. 10.1126/science.1087143
  4. Butler, A. H., Seidel, D. J., Hardiman, S. C., Butchart, N., Birner, T., & Match, A. (2015). Defining sudden stratospheric warmings. Bulletin of the American Meteorological Society, 96(11), 1913–1928. 10.1175/bams-d-13-00173.1
  5. Butler, A. H., Sjoberg, J. P., Seidel, D. J., & Rosenlof, K. H. (2017). A sudden stratospheric warming compendium. Earth System Science Data, 9(1), 63–76. 10.5194/essd-9-63-2017
  6. Charlton, A. J., & Polvani, L. M. (2007). A new look at stratospheric sudden warmings. Part I: Climatology and modeling benchmarks. Journal of Climate, 20(3), 449–469. 10.1175/JCLI3996.1
  7. Cohen, J., & Jones, J. (2011). Tropospheric precursors and stratospheric warmings. Journal of Climate, 24, 6562–6572. 10.1175/2011JCLI4160.1
  8. Domeisen, D. I., Butler, A. H., Charlton‐Perez, A. J., Ayarzaguena, B., Baldwin, M. P., Dunn‐Sigouin, E., et al. (2020). The role of the stratosphere in sub‐seasonal to seasonal prediction. Part I: Predictability of the stratosphere. Journal of Geophysical Research: Atmospheres, 125(2), e2019JD030920. 10.1029/2019JD030920
  9. Domeisen, D. I., Garfinkel, C. I., & Butler, A. H. (2019). The teleconnection of El Niño Southern Oscillation to the stratosphere. Reviews of Geophysics, 57(1), 5–47. 10.1029/2018RG000596
  10. Garfinkel, C. I., Feldstein, S. B., Waugh, D. W., Yoo, C., & Lee, S. (2012). Observed connection between stratospheric sudden warmings and the Madden‐Julian Oscillation. Geophysical Research Letters, 39(18), L18807. 10.1029/2012GL053144
  11. Garfinkel, C. I., & Schwartz, C. (2017). MJO‐related tropical convection anomalies lead to more accurate stratospheric vortex variability in subseasonal forecast models. Geophysical Research Letters, 44(19). 10.1002/2017gl074470
  12. Garfinkel, C. I., Schwartz, C., Butler, A. H., Domeisen, D. I., Son, S.‐W., & White, I. P. (2019). Weakening of the teleconnection from El Niño–Southern Oscillation to the Arctic stratosphere over the past few decades: What can be learned from subseasonal forecast models? Journal of Geophysical Research: Atmospheres, 124(14), 7683–7696. 10.1029/2018jd029961
  13. Garfinkel, C. I., Son, S.‐W., Song, K., Aquila, V., & Oman, L. D. (2017). Stratospheric variability contributed to and sustained the recent hiatus in Eurasian winter warming. Geophysical Research Letters, 44(1), 374–382. 10.1002/2016GL072035
  14. Huang, B., Thorne, P. W., Banzon, V. F., Boyer, T., Chepurin, G., Lawrimore, J. H., et al. (2017). Extended reconstructed sea surface temperature, version 5 (ERSSTv5): Upgrades, validations, and intercomparisons. Journal of Climate, 30(20), 8179–8205. 10.1175/jcli-d-16-0836.1
  15. Karpechko, A. Y. (2018). Predictability of sudden stratospheric warmings in the ECMWF extended‐range forecast system. Monthly Weather Review, 146(4), 1063–1075. 10.1175/mwr-d-17-0317.1
  16. Karpechko, A. Y., Charlton‐Perez, A., Balmaseda, M., Tyrrell, N., & Vitart, F. (2018). Predicting sudden stratospheric warming 2018 and its climate impacts with a multimodel ensemble. Geophysical Research Letters, 45(24), 13–538. 10.1029/2018gl081091
  17. Kolstad, E. W., Breiteig, T., & Scaife, A. A. (2010). The association between stratospheric weak polar vortex events and cold air outbreaks in the northern hemisphere. Quarterly Journal of the Royal Meteorological Society, 136(649), 886–893. 10.1002/qj.620
  18. Kretschmer, M., Coumou, D., Agel, L., Barlow, M., Tziperman, E., & Cohen, J. (2018). More‐persistent weak stratospheric polar vortex states linked to cold extremes. Bulletin of the American Meteorological Society, 99(1), 49–60. 10.1175/BAMS-D-16-0259.1
  19. Lawrence, Z. D., Abalos, M., Ayarzagüena, B., Barriopedro, D., Butler, A. H., Calvo, N., et al. (2022). Quantifying stratospheric biases and identifying their potential sources in subseasonal forecast systems. Weather and Climate Dynamics, 1–37. 10.5194/wcd-2022-12
  20. Lawrence, Z. D., & Manney, G. L. (2020). Does the Arctic stratospheric polar vortex exhibit signs of preconditioning prior to sudden stratospheric warmings? Journal of the Atmospheric Sciences, 77(2), 611–632. 10.1175/jas-d-19-0168.1
  21. Lehtonen, I., & Karpechko, A. Y. (2016). Observed and modeled tropospheric cold anomalies associated with sudden stratospheric warmings. Journal of Geophysical Research: Atmospheres, 121(4), 1591–1610. 10.1002/2015JD023860
  22. Ma, J., Chen, W., Nath, D., & Lan, X. (2020). Modulation by ENSO of the relationship between stratospheric sudden warming and the Madden‐Julian Oscillation. Geophysical Research Letters, 47(15), e2020GL088894. 10.1029/2020gl088894
  23. Marshall, A. G., & Scaife, A. A. (2010). Improved predictability of stratospheric sudden warming events in an atmospheric general circulation model with enhanced stratospheric resolution. Journal of Geophysical Research, 115(D16), D16114. 10.1029/2009JD012643
  24. Mukougawa, H., Sakai, H., & Hirooka, T. (2005). High sensitivity to the initial condition for the prediction of stratospheric sudden warming. Geophysical Research Letters, 32(17). 10.1029/2005GL022909
  25. Noguchi, S., Mukougawa, H., Kuroda, Y., Mizuta, R., Yabu, S., & Yoshimura, H. (2016). Predictability of the stratospheric polar vortex breakdown: An ensemble reforecast experiment for the splitting event in January 2009. Journal of Geophysical Research: Atmospheres, 121(7), 3388–3404. 10.1002/2015jd024581
  26. Rao, J., Garfinkel, C. I., Chen, H., & White, I. P. (2019). The 2019 new year stratospheric sudden warming and its real‐time predictions in multiple S2S models. Journal of Geophysical Research: Atmospheres, 124(21), 11155–11174. 10.1029/2019JD030826
  27. Rao, J., Garfinkel, C. I., & White, I. P. (2020). Predicting the downward and surface influence of the February 2018 and January 2019 sudden stratospheric warming events in subseasonal to seasonal (S2S) models. Journal of Geophysical Research: Atmospheres, 125(2), e2019JD031919. 10.1029/2019JD031919
  28. Rao, J., Garfinkel, C. I., Wu, T., Lu, Y., Lu, Q., & Liang, Z. (2021). The January 2021 sudden stratospheric warming and its prediction in subseasonal to seasonal models. Journal of Geophysical Research: Atmospheres, 126(21), e2021JD035057. 10.1029/2021jd035057
  29. Schoeberl, M. R. (1978). Stratospheric warmings: Observations and theory. Reviews of Geophysics, 16(4), 521–538. 10.1029/RG016i004p00521
  30. Schwartz, C., & Garfinkel, C. I. (2020). Troposphere‐stratosphere coupling in subseasonal‐to‐seasonal models and its importance for a realistic extratropical response to the Madden‐Julian Oscillation. Journal of Geophysical Research: Atmospheres, 125(10), e2019JD032043. 10.1029/2019jd032043
  31. Schwartz, C., Garfinkel, C. I., Yadav, P., Chen, W., & Domeisen, D. I. (2022). Stationary waves and upward troposphere‐stratosphere coupling in S2S models. Weather and Climate Dynamics, in press. 10.5194/wcd-2021-58
  32. Sigmond, M., Scinocca, J., Kharin, V., & Shepherd, T. (2013). Enhanced seasonal forecast skill following stratospheric sudden warmings. Nature Geoscience, 6(2), 98–102. 10.1038/ngeo1698
  33. Taguchi, M. (2014). Stratospheric predictability: Basic characteristics in JMA 1‐month hindcast experiments for 1979–2009. Journal of the Atmospheric Sciences, 71(9), 3521–3538. 10.1175/JAS-D-13-0295.1
  34. Taguchi, M. (2016a). Connection of predictability of major stratospheric sudden warmings to polar vortex geometry. Atmospheric Science Letters, 17(1), 33–38. 10.1002/asl.595
  35. Taguchi, M. (2016b). Predictability of major stratospheric sudden warmings: Analysis results from JMA operational 1‐month ensemble predictions from 2001/02 to 2012/13. Journal of the Atmospheric Sciences, 73(2), 789–806. 10.1175/jas-d-15-0201.1
  36. Taguchi, M. (2018). Comparison of subseasonal‐to‐seasonal model forecasts for major stratospheric sudden warmings. Journal of Geophysical Research: Atmospheres, 123(18), 10231–10247. 10.1029/2018JD028755
  37. Taguchi, M. (2020). Verification of subseasonal‐to‐seasonal forecasts for major stratospheric sudden warmings in northern winter from 1998/99 to 2012/13. Advances in Atmospheric Sciences, 37(3), 250–258. 10.1007/s00376-019-9195-6
  38. Thompson, D. W. J., Baldwin, M. P., & Wallace, J. M. (2002). Stratospheric connection to Northern Hemisphere wintertime weather: Implications for prediction. Journal of Climate, 15(12), 1421–1428. 10.1175/1520-0442(2002)015
  39. Tripathi, O. P., Baldwin, M., Charlton‐Perez, A., Charron, M., Cheung, J. C., Eckermann, S. D., et al. (2016). Examining the predictability of the stratospheric sudden warming of January 2013 using multiple NWP systems. Monthly Weather Review, 144(5), 1935–1960. 10.1175/mwr-d-15-0010.1
  40. Tripathi, O. P., Baldwin, M., Charlton‐Perez, A., Charron, M., Eckermann, S. D., Gerber, E., et al. (2015). The predictability of the extra‐tropical stratosphere on monthly timescales and its impact on the skill of tropospheric forecasts. Quarterly Journal of the Royal Meteorological Society, 141(689), 987–1003. 10.1002/qj.2432
  41. Vitart, F., Ardilouze, C., Bonet, A., Brookshaw, A., Chen, M., Codorean, C., et al. (2017). The sub‐seasonal to seasonal prediction (S2S) project database. Bulletin of the American Meteorological Society, 98(1), 163–173. 10.1175/BAMS-D-16-0017.1
  42. Wheeler, M. C., & Hendon, H. H. (2004). An all‐season real‐time multivariate MJO index: Development of an index for monitoring and prediction. Monthly Weather Review, 132(8), 1917–1932.
  43. Wu, R. W.‐Y., Wu, Z., & Domeisen, D. I. V. (2022). Differences in the sub‐seasonal predictability of extreme stratospheric events. Weather and Climate Dynamics, 3(3), 755–776. 10.5194/wcd-3-755-2022


