Soc Sci Med. 2022 Mar 12;301:114907. doi: 10.1016/j.socscimed.2022.114907

Making pandemics big: On the situational performance of Covid-19 mathematical models

Tim Rhodes a,b, Kari Lancaster b
PMCID: PMC8917648  PMID: 35303668

Abstract

In this paper, we trace how mathematical models are made ‘evidence enough’ and ‘useful for policy’. Working with the interview accounts of mathematical modellers and other scientists engaged in the UK Covid-19 response, we focus on two weeks in March 2020 prior to the announcement of an unprecedented national lockdown. A key thread in our analysis is how pandemics are made ‘big’. We follow the work of one particular device, that of modelled ‘doubling-time’. By following how modelled doubling-time entangles in its assemblage of evidence-making, we draw attention to the multiple actors, including those beyond models and metrics, which affect how evidence is performed in relation to the scale of the epidemic and its policy response. We draw attention to: policy; Government scientific advice infrastructure; time; uncertainty; and leaps of faith. The ‘bigness’ of the pandemic, and its evidencing, is situated in social and affective practices, in which uncertainty and dis-ease are inseparable from calculus. This materialises modelling in policy as an ‘uncomfortable science’. We argue that situational fit in-the-moment is at least as important as empirical fit when attending to what models perform in policy.

Keywords: Mathematical models, Covid-19, Evidence-making, Assemblage, Affect, Problematization, Pandemic

1. Introduction

What is a pandemic, if not big?

Performed as a global ‘crisis’ of uncertain yet unprecedented threat (Lakoff, 2017; Anderson, 2021), a pandemic is made big. It is the combination of the unknowability and scale of the threat in pandemics that generates the atmosphere for precautionary action (Samimian-Darash, 2016). In the face of threat, mathematical models can perform a bridge to knowing by generating forecasts as well as ‘worst-case’ scenarios to enable policy decisions (Brooks-Pollock et al., 2021). Models are thus forms of anticipatory governance (Adams et al., 2009). They help navigate uncertainty by affording a sense of future security through calculus (Hacking, 1990; Rhodes et al., 2020).

Models which problematize things as big operate as technologies of governance, mobilising action in the present. We can trace, for instance, how projections of massive growth in the Ebola epidemics of Liberia and Sierra Leone, even though unrealised as anticipated, served to mobilise humanitarian assistance (Meltzer et al., 2014; Dubois and Wake, 2015). Projections over-estimating the scale of epidemics underpinned United Nations declarations of Ebola as a threat to international peace and security, also prompting militarised responses to infection control (Parker et al., 2019). Similarly, models of the H5N1 (avian) influenza pandemic were used by the World Health Organization (WHO) to upscale investment in antiviral pharmaceuticals globally, though projections reportedly massively oversized the pandemic (Leach and Scoones, 2013; Caduff, 2015). In this case, the “costs of failure” were projected as “catastrophic” to maximise the “best possible chance of success” of containment efforts (Ferguson et al., 2005). In response to H1N1 (swine) influenza, models led to the stockpiling of (partially effective) antiviral pharmaceuticals, in a pandemic so oversized that WHO were accused of faking projections to boost industry profits (Abeysinghe, 2014). In HIV too, demonstrating that the scale of the problem was beyond expectation and global in its effects was critical to problematizing local epidemics as in urgent need of response (King, 2004).

In these examples, the performance of emerging epidemics as in need of urgent or dramatic attention is accomplished through ‘scalar narratives’ which project the problem as big and expanding (King, 2004). The pandemic of Covid-19 is one such configuration (Anderson, 2021). Here, as with other pandemics, the ‘plausible uncertainty’ of a threat of ‘disastrous proportions’ is among the elements which make the emerging problem amenable to governance (Williams, 2008; Waller et al., 2016). Mathematical models act as local laboratories of evidence-making to afford epidemics their size and scale, and thus contribute to scalar narratives shaping policy. In the UK Covid-19 pandemic, for instance, the ‘reasonable worst-case’ is deployed in mathematical models to assist Government planning (Bradley and Roussos, 2021). This presents a longer-term scenario rather than short-term prediction to imagine what might be possible, and specifically how bad things could become, rather than forecasting what is probable or might happen. In emergency scenarios, the scaling-up of simulated problem potential often cannot rely on known risks for calculation but works with incalculable probabilities whose consequences might be catastrophic (Lakoff, 2017). Uncertainty, rather than knowable risk, creates the energy for precaution, pre-emption and prudence (Samimian-Darash, 2016; Cooper, 2006; Diprose et al., 2008; Anderson and Adey, 2011). What counts in the making of a pandemic is not so much the precision of projections, but the problematization of the event (Foucault, 2009). Models at once enact pandemics as sources of great dis-ease which they make amenable to crisis management via projection. Because models reside in uncertainty, they are never free of it, but embody it, especially in emergencies (Christley et al., 2013). This means that the configuration of pandemics through models is a fluid intervention that is not only materialised in observable data and empirical measures, but also in uncertain projections and qualifications, as well as in affective relations, in which science, policy and situation entangle.

1.1. Fitting models

Despite acknowledged uncertainty, a primary concern with models in policy is their precision and accuracy. Here, attention focuses on how well forecasts or projections can be said to represent the actualities of their epidemic contexts, including via empirical fit. There is a fixation on the precision of projections, and on whether these, in time, become evidenced as more or less right or wrong. Models informing Covid-19 policy are no exception (Rice et al., 2020; Medley, 2021). For this reason, they have become sites of heated contestation (Rhodes and Lancaster, 2020). A particular concern, especially in public deliberations, is that scenario models are felt to be overly pessimistic and to over-estimate the likely or ‘true’ scale of pandemics (Caduff, 2015). A focus on the ‘worst case’, and working with counterfactuals that assume a ‘do nothing’ intervention scenario or which assume no adaptive behavioural or ‘natural’ response, may ‘oversize’ future epidemics. A policy desire for certainty has also been suggested as a reason why the decision to lockdown in the UK was delayed (Evans, 2021). While fixating on the precision of uncertain projections is not necessarily the best judge of forecasts beyond the near future, especially early in an emergency (Funk et al., 2020), an evidence-based approach holds on to the promise of models becoming progressively attuned via their empirical grounding over time (Glasser et al., 2011; Huppert and Katriel, 2013). This idealised process is, of course, messy in practice, given absented time in emergencies, the difficulty of validating projected epidemic futures which are altered through adaptive responses, and the complexity of unfolding human-viral interactions beyond the reach of calculus (Leach and Scoones, 2013; Rhodes et al., 2020). There is nonetheless a tendency to reproduce ‘sacred’ evidence-based accounts of models as if they are available to empirical validation as well as precise and certain enough, and of policy as consequent on translated evidence (Colebatch, 2009; Stewart and Smith, 2015; Evans, 2021).

Alternatively, we can approach models as performative, as modes of epidemic enactment rather than representation (Myers, 2015; Callon and Muniesa, 2005; Rhodes and Lancaster, 2021). In the process of blending heterogeneous data from disparate sites into singular calculative spaces enabling policy decisions, the enumerations that models make detach from their origins, transform into new entities, and enact new realities as anticipatory potentials (Callon and Muniesa, 2005; Verran, 2015). Furthermore, modelled projections can take flight as evidence in fluid and multiple ways, with potentially dramatic material effects, as they are put-to-use in policy (Rhodes and Lancaster, 2021). Models do not simply represent emergent epidemics ‘out there’, but enact them in the ‘in here’ of their methods, calculations and narratives, which themselves are performed in their implementation events (Law, 2004; Myers, 2015). This means that projections are afforded agency in relation to their situation (Verran, 2015; Savransky and Rosengarten, 2016). In turn, this invites an alternative way to approach how models perform their ‘fit’. Rather than giving primacy to empirical fit, usually measured after-the-event through epidemiological data, such as counts of infections, hospitalisations and deaths, we orientate to situational fit, where the focus becomes how projections come to be made useful in-the-moment as matters of social, policy and political concern. When viewed in their situation, such as when projections are put-to-use in events of policy deliberation, the veracity of the projection presents less as the immediate matter of concern. Indeed, enumerations count for ‘nothing’ without their affects, qualifications and contexts to afford them agency and bring them to life (Callon and Law, 2005; Myers, 2015). The epidemics that models enact, big or small, flat or tall, are situated effects of their implementation events, in which models entangle as one of many actors.

1.2. Lockdown

An unprecedented national lockdown in the UK was announced in response to the Covid-19 epidemic on March 23rd, 2020. This “stay at home” policy followed the closure of social venues, like pubs, cafes and restaurants, announced on March 20th, hardening what had been Government guidance since March 16th. The policy decision to lockdown, as we explore below, is attached to modelled evidence. It has been argued that the UK's failure to implement lockdown policies sooner was partly because of political decisions to ‘follow the science’, implying that a precautionary strategy was delayed given insufficient scientific certainty for Government to act, until the evidence became overwhelming (Evans, 2021: 22). This account presents policy decision-making as following a rational-technical process in relation to a certainty threshold of what constitutes ‘evidence enough’ to act. In the analysis below, we follow how models are made and used as evidence in the weeks before lockdown through the accounts of modellers involved.

2. Introducing doubling-time

A prime function of mathematical models in emergent epidemics is to project infection growth. The basic reproduction number R0 is a metric standard in this regard. R0 is the average number of new infections that a single infected person generates in a fully susceptible population. This number indicates the proportion of transmission that needs to be prevented to bring the reproduction number to 1.0 or below and so ‘flatten the curve’. Estimates of R0 are inferred indirectly through models, and thus vary according to how models are fitted to data and to model assumptions (Royal Society, 2020; Pellis et al., 2021). In the UK Covid-19 pandemic, this calculation has been routinely performed to signal epidemic growth to publics, policy makers and scientists alike. The UK Government SPI-M Committee (Scientific Pandemic Influenza Group on Modelling), an advisory body of mathematical modellers and other scientists, and SAGE Committee (Scientific Advisory Group for Emergencies), the body responsible for translating scientific consensus, including from SPI-M, to Government Cabinet Office, have generated routine calculations of R0 alongside short-term forecasts and longer-term scenarios in response to Government commissions.
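The threshold arithmetic implied here is the standard textbook relation (our gloss, not a formula presented in the accounts we analyse): if a proportion p of transmission is prevented, the effective reproduction number becomes (1 − p)R0, so that epidemic growth is halted once

\[ (1 - p)\,R_0 \le 1 \quad\Longleftrightarrow\quad p \ge 1 - \frac{1}{R_0}. \]

An R0 of 3, for example, implies preventing at least two-thirds of transmission in order to ‘flatten the curve’.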

In the weeks prior to the national lockdown in the UK, a team of modellers triangulated data from multiple sources (confirmed cases, deaths, and hospital and intensive care admissions) in Europe to estimate epidemic doubling-time (Pellis et al., 2021). Doubling-time is the period, usually expressed as an average number of days, in which the size of the epidemic is estimated to double. At this point in the epidemic, doubling-time was a measure of unconstrained potential. Doubling-time is not a novel technique, but enacts evidence differently to R0, because it creates a measure of the speed of growth, useful for optimising the timing of policy responses (Pellis et al., 2021). Because it is a simple calculation, reliant upon inferences drawn directly from empirically observed data, such as incident cases, rather than indirectly derived from models, some argue that this metric is more practically useful than R0 alone for tracing how epidemics unfold in time and for projecting exponential growth (Pellis et al., 2021).
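To illustrate the simplicity the modellers describe, a minimal sketch of such a calculation (ours, using hypothetical numbers rather than the study team's code or data) fits a straight line to log-counts and converts the slope, the exponential growth rate, into a doubling-time:

import numpy as np

def doubling_time(days, counts):
    # Assume exponential growth: counts ≈ C * exp(r * days).
    # A straight-line fit to log-counts recovers the growth rate r.
    r, _ = np.polyfit(days, np.log(counts), 1)
    return np.log(2) / r  # days taken for counts to double

# Hypothetical daily hospital admissions doubling roughly every 3 days.
days = np.arange(10)
admissions = 20 * 2 ** (days / 3.0)
print(round(doubling_time(days, admissions), 1))  # -> 3.0

Fitting the same line to several observed streams (cases, admissions, intensive care occupancy) and comparing the resulting doubling-times is, in essence, the triangulation described below.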

When shorter doubling-times are combined with delays for incident cases to become known, and thus also for interventions to have measurable effect, epidemics become bigger quicker. Epidemics become taller (they have steeper curves). They present as bigger problems in urgent need of action. And this is what happened, in March 2020, when models materialised a shorter doubling-time in the UK Covid-19 epidemic, of 3 days rather than a previously presumed 5–7. In combination with a projected delay of 9 days before non-pharmaceutical interventions (like physical distancing) could be assumed to impact, these small tweaks materialised big effects. Even the doubling of hospital capacity, were it feasible, would not “buy back” enough days of reprieve (Pellis et al., 2021: 6). In fact, far from ‘buying time’, the faster doubling-time metric indicated that “the storm has already arrived” (Pellis et al., 2021: 6).
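The arithmetic behind these big effects can be spelled out (our worked example, using the figures above). With doubling-time T_d, cases multiply by 2^{t/T_d} over t days, so across the 9-day delay before interventions register:

\[ 2^{9/3} = 8\times \quad \text{versus} \quad 2^{9/5} \approx 3.5\times \;\;(\text{or } 2^{9/7} \approx 2.4\times). \]

The same observed epidemic is thus two to three times larger at every decision point than the prior consensus implied, while doubling hospital capacity buys back only a single doubling-time, here three days, since 2^{t/T_d} = 2 gives t = T_d.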

The alteration of doubling-time, a small tweak in a mundane metric, is an evidence-making moment with implications for policy. It presented as one element “in the decision-making process that led to the closure of pubs and restaurants” and as “evidence supporting the first national lockdown coming into force” (Pellis et al., 2021: 8). In what follows below, we trace how altered doubling-time problematizes a ‘bigger’ epidemic. We follow doubling-time as our point of departure to notice how this device entangles with other actors – other data and models, as well as matters beyond calculus – in the evidence-making of an unprecedented national lockdown.

3. Approach

We draw on ideas of ‘evidence-making assemblage’. By assemblage, we refer to the multiple actors in a network which ‘become together’, and ‘intra-act’, in events to bring about affects (Barad, 2007). Assemblages can be treated as “open-ended gatherings”, as “patterns of unintentional coordination”, between human and nonhuman elements (Tsing, 2015: 23). In assemblage thinking, all objects have equal ontological footing and are made up as affective flows. Here then, the assemblage “generates the cause just as it expresses the effect”, as effects are not attributable to singular or particular actors in the network but to their “associations” which are “immanent” to the network (Duff, 2014: 2). This shifts our attention from “presumed objects” to the “relations involved in their becoming” (Bacchi and Goodwin, 2016: 33). Importantly, assemblages are evolving arrangements rather than static structures, they are “tentative, hesitant and unfolding”, and this means they can be treated as matters of ontological movement (Law, 2004: 42). The elements in an assemblage are “not fixed in shape”, and neither do they “belong to a larger pre-given list”, but they “are constructed at least in part because they are entangled together” (Law, 2004: 42).

Our primary focus in this analysis is the constitution of ‘evidence’, and more specifically, how evidence is performed ‘useful for policy’. To do this, we treat evidence as a thing to be followed and as a gathering in the making (Lancaster and Rhodes, 2022); as “in one sense, an object out there and, in another sense, an issue very much in there, at any rate, a gathering” (Latour, 2004: 233). We approach mathematical models as modes of enactment (Myers, 2015; Law, 2004), as ‘evidence-making interventions’ (Rhodes and Lancaster, 2019), wherein numerical projections ‘come to be’ through their eventuation in methods and practices (Mol, 2002; Verran, 2015). By ‘evidence-making intervention’, we deliberately draw contrast with notions of ‘evidence-based’ intervention, so as to follow evidence as a fluid object of its implementation events in policy rather than as taken-for-granted as a stable thing that is ready-made for transfer and presumed to pre-exist its use.

Taken together, models and projections are afforded their power-of-acting, their agency, through their assemblage relations and intra-actions with other actors in the assemblage. They never act alone. With assemblage relations made up of affective flows (Duff, 2014), we envisage models and metrics at once as affected and affecting, thereby drawing attention to evidence and calculus as inseparable from social and affective practices (Callon and Law, 2005; Anderson and Adey, 2011). Pandemics, and the sciences that make them known, embody uncertainty, and this is one important affect in the evidence-making assemblage (Leach et al., 2021). Our analytical concern is less with how projections perform as measures of evidence-based precision or accuracy than with how they perform ontologically, as relational beings (Verran, 2015; Woolgar and Lezaun, 2013). This is important because it is unwise to take for granted that mathematical models do their evidencing as idealised in evidence-based conceptions of science and policy (Christley et al., 2013; Stewart and Smith, 2015). We therefore attend to the stories of emerging epidemic that models can perform, and more specifically, how the telling of stories about models performs science, in relation to a certain set of expectations (Law and Singleton, 2000).

4. Case study methods

This analysis draws on a qualitative case study tracing how the evidence made through mathematical models situates in relation to science, policy and politics. We undertook depth interviews with 29 mathematical modellers and other scientists engaged in the UK Covid-19 response. All interviews were undertaken by TR, remotely via Teams, between May 2021 and December 2021. Interviews generally lasted between 75 and 90 minutes, and adopted a conversational approach to coproduce an account. The study has generated ongoing dialogue through repeat interviews as analyses have iteratively progressed (with 5 follow-up interviews undertaken at the time of writing). Key themes included: experiences working as a scientist in a pandemic; generating evidence in relation to lockdown and infection control policy; communicating modelling evidence; key events in models and modelling; deliberation; uncertainty; and consensus. We sampled a diversity of mathematical modellers and modelling teams within and beyond UK Government expert bodies. Our sample includes mathematical modellers in the UK from over 10 different modelling groups and institutions. Roughly half (15) of those interviewed have participated in SPI-M or as part of modelling groups contributing evidence to SPI-M. We concentrate this analysis on these participants, noting that models beyond the SPI-M actor-network may have focused on different matters of concern, also enacting alternative depictions of epidemic scale. Throughout the epidemic, SPI-M has deliberated upon multiple models to produce published ‘consensus statements’, considered by SAGE, as scientific advice to Government.

4.1. Ethics

The study received ethics approval from the London School of Hygiene and Tropical Medicine Observational Ethics Committee. To protect against deductive disclosure among a group of experts working in an area of national policy priority, we do not provide participant biographical details. In the analysis below, we indicate participant identification codes when working with key extracts from interviews (and where we judge there to be no risk of deductive disclosure), and we signal verbatim speech by the use of double quotation marks.

4.2. Analysis

Our analysis accentuates low inference description (Seale, 1999), but is not oriented to representing the accuracy of ‘truth claims’ or inferring the causality of policy decisions. Instead, we approach accounts as storied performances (Law and Singleton, 2000). This means that we are primarily interested in the objects, materials and ideas that come into being and are mobilised through the eventuation of the narration, rather than accentuating interviews as a device for capturing a past outside reality. Our approach is to ‘work with’ the material coproduced; an acknowledgement from the outset that our analysis is also a performance affected by its assemblage (Law, 2004). We therefore see our engagement as a ‘becoming with’ data, a ‘story making’, in a process of deliberation and dialogue, and ultimately enactment (Law and Singleton, 2000; Rose et al., 2017).

As noted above, we focus attention on a particular model estimating doubling-time (See Pellis et al., 2021). We do this to create an actor to follow into the extending relations of the assemblage (Law, 2004), a point of departure rather than closure, which also helps trace the presence of other actors which entangle with this model in evidence gatherings (Latour, 2004). We might have started elsewhere, and perhaps with data or models of seemingly greater agency or capital (such as Ferguson et al., 2020). Yet this is a decision, informed by our interest in tracing some of the less noticeable actors in the evidence-making of the first national lockdown which may otherwise disappear from view (Star, 1991). The story of doubling-time presented itself across multiple interviews, as well as through a published account of the model (Pellis et al., 2021), and thus became for us a thing to follow (Latour, 2004). Locating doubling-time in relation to its assemblage helps to ‘upscale’ from the work of one actor to the work of others, in their situation (Tsing, 2015).

Our analysis zooms in on the two weeks before the UK's first ever national lockdown was announced on March 23rd, 2020. We unfold our story in three parts. First, we tell a ‘data story’. This traces the model of doubling-time as a calculative response in relation to emergent data. Second, we tell an ‘assemblage story’. This situates doubling-time in relation to other evidence-making actors involved, including beyond calculus. Third, we tell a ‘policy story’. This traces how modelled evidence comes to be made useful in the eventuation of lockdown.

5. Faster doubling-time: a data story

In early March 2020, mathematical modellers in the UK Covid-19 epidemic had been working with a doubling-time of around 5–7 days, derived from variable estimates of epidemic growth in China (Pellis et al., 2021). In interviews, this doubling-time was said to have operated as a tacit assumption without “reference to data”. It was said to feature in the “most cited publications”, was “voiced numerous times” in expert meetings, and became “trusted” as the “official estimate”. It was reproduced in the assumptions of the most prominent pandemic models driving the science at SPI-M at the time. It was “consensus”. The data-based origins of the 5–7 day estimate as modelled in China, though, were opaque, as noted by the team modelling doubling-time in the UK (Pellis et al., 2021). Doubling-time awaited local empirical fit, and the “urgency of the matter” – to make evidence for policy quickly – was said to work against its questioning.

But the problem was that UK modelling teams began to find it difficult to fit their models to this standard. Case data were emerging in Italy and elsewhere in Europe, and also in the UK, suggestive of a faster doubling-time. Media reports were also circulating. For the modelling team tracing these weak signals, coincidence was to play a role. One of their colleagues had been carefully monitoring, and manually collating, daily reports of incident cases in Italy, and became troubled by their speed of growth. They were “not convinced” by the routine televised press briefings in the UK “saying it's doubling every 5–6 days”. According to one of the team, a “key moment” in transforming this otherwise weak signal into evidence was multiple sources of observable data travelling in the same direction. This is an account which enacts relative certitude via an empiricist claim. The absence of doubt, as we see here, links to hospital admissions said to be “less biased” than delayed or haphazard infection reports:

We could see the confirmed cases were growing, were doubling every 3 days, and hospitalisations. Hospital bed occupancy was doubling every 3 days, and ICU beds were doubling every 3 days. At that point, I just went pale, because you can't really fake or distort the hospital and ICU data.[23]

Numbers here, and by extension the projections they enable, are afforded agency as data, enacted as a thing with correspondence to a reality ‘out there’. For one “young player” recently thrown into the world of “high level decision-making [and] fast policy advice”, the big effects potentiated by the small tweak to data-based estimates of doubling-time affected dis-ease: “I just freaked out”. Putting to test this altering reality, via calculus, became paramount:

I dumped everything else I was doing at that point, and I started trying to ensure, well convince myself that it was not real. […] I just focused all of my energies into making sure that it wasn't a fluke of the data.[23]

We see here, an account of scientific discovery evidenced in the ‘real’ of emergent data ‘out there’, the uncertainties of which are managed through a calculative process resembling a mix of deduction and triangulation. To make this discovery work as evidence that can be performed with “confidence”, you “look for reasons you're wrong”. The team retraced the projected growth estimates of early outbreaks in China, and adjusted these retrospectively for population movements out of Wuhan to theoretically account for a slower reported doubling-time in this setting at the time (See Pellis et al., 2021). This retrospective adjustment, which enacted a point of triangulated convergence across datasets, created “confidence” that the numbers could indeed be “doubling so fast”. Still moving hesitantly, given the big effects of projecting faster exponential growth, the team presented their model of three-day doubling-time to SPI-M on March 20th. The model was described by some as a ‘boundary-crossing’ moment, altering the atmosphere, and gathering attention:

I've never been in a meeting like it. [They] presented it, and there was silence. There was silence for a good minute, while the whole room, filled with eminent professors and the like, looked at it, looked at the garishly coloured graph. I would never have chosen that colour scheme in my life, purples and yellows. […] Everybody stopped, and went, ‘Yeah, I can think of nothing in what was just presented that gives rise to any kind of uncertainty about where it is that we are heading with this now.’ That didn't make the decision there, but it made it absolutely clear that we knew exactly what was happening with this epidemic at that point.[14]

The model, and its graph of garish colours (Fig. 1), materialised a ‘big’ problem:

It (doubling-time) made the situation look much worse. […] We realised that we were behind the curve, and that things had developed faster. […] We were either going to go into some kind of lockdown not knowing what that was or whether it would work, or we were going to be very quickly, you know, sort of waist-deep, knee-deep, neck-deep in cases, in hospitalisations, and in deaths.[2]

Fig. 1. Three-Day Doubling-Time, March 2020.

The above figure is a version of that presented, March 16th, at SPI-M, by Lorenzo Pellis and colleagues (Public Health England (2020) Joint Modelling Cell Guide to Current Modelling Assumptions and Potential Mitigation Measures, March 23, 2020).

Faster doubling-time is performed here as the thing which makes the epidemic known as bigger. It presents as an ‘evidence enough’ moment; a “moment where there was no longer any room for any uncertainty in where the trajectory of the epidemic in the UK was going”.[14] We can notice two elements in this performance of the model as ‘evidence enough’. First, an appeal to the ‘real’ is made via proximity to ‘actual data’ said to represent concrete cases and bodies. This affords the model a simplicity and security said to be alluring. It was “something simple [that] showed what was happening so beautifully simply”. Its closeness to “data”, rather than invention through “massively complex” abstraction, made the projections feel “solid” and “blindingly obvious”. This was evidence enough for SPI-M to act:

It was that very simple exponential fitting that kicked SPI-M into saying something really solid. […] There is nothing in this that is disputable in terms of the model, the model is so simple. It is a couple of numbers and an exponential growth. There is nothing left to dispute.[14]

Second, there was corroboration with multiple data which served to navigate the “hesitancy” of emergent estimates into the relative certitude of “scientific consensus”. A second group of modellers were also projecting faster doubling-times. They, too, were said to be hesitant of their (unpublished) estimates. Because they were “different from the consensus, everybody had doubts”. The account of corroboration acts to bring modellers more at ease with their projections. We are told that bringing multiple models together was “extremely powerful”, affording a “new consensus” to emerge that same day (SPI-M-O, 2020a). Genomic data further pointed to a “huge amount of import of infection from Europe”, which “we just had no idea about”. Halving doubling-time enabled models to fit to their altering empirical situation: “We were effectively seeing something that was growing much faster, and as we thought it was all happening within the UK, the only way of making that happen is if you increase the doubling-time”.[2] Multiple data from different sites and sources were thus brought together, into a single space, to converge as a single point, to create a new entity of doubling-time (Callon and Law, 2005).

In this moment, a new (temporary) standard of doubling-time comes into being. The model of doubling-time is performed here as a story of discovery, emergent data and calculus, managed in an evidence-based science approach. Accounts reflect back on the consensus that once was, as “an error in calculation”. It is said that “we should have been better prepared”. The model of faster doubling-time is presented as making “obvious” the urgent need to “switch off, shut down, the country”. It performs, as a matter of fact, the growing concern among scientists that “you can't wait”. By March 23rd, SAGE moved to a doubling-time of 3–4 days (SAGE, 2020a), and Government incorporated 3.3 days as the base for projecting ‘reasonable worst-case scenarios’ (SAGE, 2020b), with models also readjusting to this new metric.
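The conversion at stake in this new standard is simple (our gloss): a doubling-time T_d corresponds to an exponential growth rate r = ln 2 / T_d, so that

\[ T_d = 3.3 \text{ days} \;\Rightarrow\; r = \frac{\ln 2}{3.3} \approx 0.21 \text{ per day}, \qquad 2^{7/3.3} \approx 4.3\times \text{ per week,} \]

against roughly a two- to 2.6-fold weekly multiplication under the previous 5–7 day consensus.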

6. Upsizing the epidemic: an assemblage story

The assemblage relations upsizing the epidemic in these weeks of March extend beyond the model of three-day doubling-time, its data inputs, and the careful handling of these in a narrative of scientific calculation. There are multiple entangling actors in the evidence-making assemblage which affect what becomes gathered together as evidence, and how evidence is constituted useful. Among the circulating actors involved, we select for attention: policy; other data and mathematical models; SPI-M infrastructure; time; uncertainty; emotion; and leaps of faith. This is a story of how multiple actors connect and align to energise what is made as evidence for use in policy. Whereas the model of doubling-time performed as a ‘data story’ emphasises calculation (see above), the ‘assemblage story’ below also draws attention to things beyond calculus.

6.1. The absent-presence of policy

Modellers advising Government describe themselves as providing “independent scientific advice”, which they say is critical to preserving the value of their science to travel as evidence. They also say that “policy people do not wish to discuss policy with scientists who are independent”, that there is a hinterland of expectation performing science and policy as separate and apart. Yet, “you can't make a policy-free or policy-neutral model”. Even the counterfactual of “doing nothing is a decision”. Incorporating policy as an input or actor in the model is said to be critical to making modelled evidence ‘work’, as otherwise, “We don't know what we're trying to model”.[2] There is then, an absent-presence of policy in the model, even in the face of “no steer” or “no clear ask” from Government.[10] These are the uneasy conditions that modellers describe in the weeks prior to lockdown. Modellers position themselves as too distant from policy, as working in a “policy vacuum”, as “just monitoring what was going on”,[2] without invitation to model “any kind of national lockdown”, and with “no indication that was being planned”.[7] Through the first weeks of March, “nobody prepared for a lockdown”, and “we didn't think we needed to do it”.[7] Rather, attempts to forecast “what is going to happen” relied on “the guess that there would be no policy decisions”.[2] The taken-for-granted assumption incorporated into models was that “we are going to try and have the epidemic”, a ‘do nothing’ scenario of unconstrained epidemic potential, rather than enact policies to make the epidemic smaller.

The default of “doing nothing” in policy while making the epidemic big in models enacts a tension, a troubling situation, not easily narrated away by the justification account of independence along the lines of “we advise, they decide”. The epidemic was felt to be downsized in policy circles, belittled and trivialised: “A lot of us were slightly nervous, as they [policy] didn't seem to take it that seriously”.[4] Through early March, we are told there is an intensifying sense of looming failure of a ‘do nothing’ policy and an anticipation that, at some point, “the Government will lose their nerve” and be energised to act. Accounts perform evidence as failing to ‘break-through’, with projected epidemic growth at odds with an absence of precautionary action: “Yes, this virus is the reasonable worst case! Get out your plan, and do it!”.[4] This is an account which performs evidence translation as not working as comfortably as it should, even while reproducing the hinterland of expectation that science works independently of policy, and that policy follows science (Colebatch, 2009; Evans, 2021).

6.2. Gathering evidence

In the absent-presence of policy, modellers make efforts to problematize the epidemic as bigger within the generous constraints of their situation. We see here, a less public account enacted of modellers strategically stepping ‘out of role’, beyond that of tamed independent advice-giving, to ‘speak out’, including “loud and clear”. Here, scientists engage as knowledge users, even activists, and not mere knowledge producers, emboldened by their evidence of ‘catastrophe about to happen’ (Lahsen, 2005). The SPI-M community of modellers is invoked as collectively at odds with the absence of Government precautionary action: “Everyone was screaming for lockdown”.[18] There is also an apparent absence of doubt in the calculus here, which affects an intensity of urgency to translate the worsening situation as forcefully as possible, even if not in “an entirely scientific way”:

We were screaming that the decision-makers and politicians were on a catastrophically bad path. My only intention was to get us locked down. I was not really thinking in an entirely scientific way, because every time I did I came to the same conclusion.[7]

The collectivising story of “screaming for lockdown” invokes passion, not merely calculus, though it is metrics and models that are energising these affects. One actor circulating in the momentum of concern was a simple model, drafted March 10th, and deliberated upon by SPI-M on March 16th (Riley, 2020). Drawing on epidemiological case data, the model illustrated the failure of efforts to ‘flatten the curve’ through mitigation, an approach which had been characterised by Government officials as trying to “reduce the peak, broaden the peak, not suppress it completely”, also enabling “some kind of herd immunity” (Vallance, 2020). In a note submitted to SPI-M, the model simulated a future characterised by an overwhelmed health service, unachievable herd immunity, and more infections and deaths, as well as a longer epidemic, than would be the case via an alternative strategy of fixed-term suppression through social distancing. As one of the modellers described, “the actual curves were unimaginable”, and “just the number of deaths in a short period of time, no Government could allow”.[7] We are told that the model was attention-gathering; at once affecting energy (“there was a huge reaction”) and infused by it (“it catalysed a huge amount of reflection”, incorporating “pressure from everywhere”). The model is afforded agency in its altering assemblage to problematize the epidemic as in urgent need of lockdown. The object of ‘lockdown’, hitherto an uninvited actor in modelled policy proposals, becomes a possibility. A boundary-crossing is opened-up in this account of lockdown emerging in the assemblage, with some scientists and models said to “switch” as it “became obvious we had to consider lockdown”. The SPI-M network aligns in relation to lockdown as an emergent matter of concern: “Everybody then did a version of lockdown with their model”. With a bigger epidemic problem projected, and with lockdown emerging as an anticipated potential, the evidence-making ecology was altering. Models then, gather agency in their affects and not only in their calculations, as elements which come to align with others in the actor-network. Yet, modellers' accounts still emphasise that what they saw as evidence enough for lockdown was failing to break-through in policy: “People just didn't believe it, or didn't accept it”.[7]

6.3. Infrastructure

The sense of momentum, at least among some advising Government, that “we're getting this all wrong”, that “we are much further into this than we think”, and that “we are behind the curve”, is not necessarily useful evidence until made-to-work as such. The SPI-M committee is presented as a critical actor in the assemblage, operating as a mechanism to transform multiple concerns into a ‘consensus’, which operates as a ‘tool for use’ in policy (Lancaster, 2016). A growing concern of looming catastrophe, materialised in projections and affects, has to be “developed into evidence”. Voiced concerns, even if enumerated, do not constitute enough: “You can't just tell me, you have to bring it to SPI-M”. Through deliberation and corroboration across multiple models, SPI-M generates published ‘consensus statements’ as evidence energised to travel, via SAGE to Cabinet Office. Both the epidemiological model discussed above (Riley, 2020), and the model of faster doubling-time (Pellis et al., 2021), alongside other models, are gathered together as ‘the evidence’, coordinated as if singular, complete enough even if uncertain: “We sign off the evidence as a consensus” [emphasis added]. As said of this process:

We all go do our independent modelling, and we all submit them to SPI-M, and after that we have a lengthy discussion. […] And we come to a consensus statement. […] You've folded-in hundreds of scientists in there, great decision, that then goes to SAGE. […] You have a locally relevant scientific consensus that could be wrong, but at least it's a consensus.[18]

SPI-M enacts a coordinating mechanism to create the space to intervene by attaching multiple heterogeneous things to hang together, more-or-less, as wholes (Law, 2004; Mol, 2002). In doing so, evidence is made. The performance of consensus is then, a standardisation for transporting evidence towards “the people who make policy”. As said of three-day doubling-time:

It enabled us to go to SAGE, and to say to SAGE ‘we are further into this than you think’, you know, ‘this is no longer a situation in which we can watch things pan out and make decisions on the hoof’.[2]

Consensus-making also navigates the unease of mobilising uncertain evidence as a basis for a “big leap” in policy decision, like enacting a policy “never done before”. There is a “whole series of evidence leading up to that decision”, which SPI-M as an evidence gathering affords, which is considered critical given “you are about to say on a national and possibly international stage that this is going to go horribly wrong”.[16]

6.4. Time and uncertainty

When science is done in a rush, infrastructures of deliberation and corroboration, like SPI-M, might afford a sense of protection through transforming “sudden and rough” best guesses into shared “robust” evidence, holding out hope for discovery to break-through while “shrinking uncertainty” along the way.[4] Time is not merely an absence in emergencies but has incredible presence. Most obviously, making evidence carefully is felt troubled when rushed: “It's challenging to do these things in a short amount of time, and mistakes are always possible”; [23] “We try to make it as error-proof as possible, but it's difficult”.[28] We see here, an account of compromise in the production of evidence as if definite and correct, an ideal reproduced in the hinterland of expectation of evidence-based calculus, also invoked in the ‘data story’ of faster doubling-time (see above). The uncertainty of projections affected by absented time can be unnerving in the face of emerging epidemics and decisions projected as “big”, “enormous”, “dramatic” and “unprecedented”. It troubles “confidence”, statistically and emotionally speaking. Here is an account of emergency modelling as a science compromised:

I'm exhausted, and I made the model yesterday. I hadn't time to check it. I hadn't time to play around with it … I [did the] code between, I don't know, 11pm and 5am, in order to bring a result to that meeting. […] Because of the level of exhaustion, you try to do the bare minimum, but with time and energy you would probably break the tool and start from scratch or rebuild it to be more robust, and you just don't have the luxury to do that.[23]

Accounts of emergency modelling give expression to the uncertainty generated in the absent-presence of time as a tension between precaution and caution; a liminal state of felt need for dramatic policy action now and hesitancy residing in uncertainty. The data signals in an emerging epidemic are said to be confusing because the “noise is almost as big as your data stream”.[14] Early March was characterised as a period of “24/7” and “frenzied” modelling, felt by some as an “incredible pressure”, “exhausting”, and a “stress”. Emergency modelling is done on a “very, very short timescale from very, very limited data”[18], and “there is always the worry that something is wrong”.[28] This sense of dis-ease is intensified when emergent discoveries making the epidemic bigger are at odds with the tacit assumptions of circulating standards. As said of faster doubling-time: “You are hesitant in presenting it because it's different from what has been presented before, potentially significantly different”.[23] Hesitancy is performed here as contingent not only on weak empirical signals but on the hinterland of investment in taken-for-granted metrics:

You are coming out with a different answer or a different number, and with a model that they can't really scrutinise or anything. […] It's a natural resistance, that if people have done a model yesterday, and code it up potentially wrong, they are not certain. […] People that have not looked at it, or that have gotten previous estimates, possibly with a robust method that they have used and tested for a long time, they are certainly going to question it. […] When you've got other models that are sort of more developed, and have been developed by senior figures in the community for a long time, and they're saying ‘well, maybe, maybe not, there is a lot of uncertainty’, it's hard to bring that forward.[23]

In uncertainty, science and policy might tend towards the familiar, the tried and tested, rather than the speculative. There is security and capital, both comfort and confidence, in the model you know: “In the moment of emergency you don't have time for innovation and deliberation. You go for something that you already know”. The potential for discovery, like faster doubling-time, to ‘break-through’ might be slowed-down, given the boundary-crossings in “taken-for-granted assumptions” required, and how these might be reproduced in pandemic models which locate to a hinterland of investment, including the reputation of the scientists and institutions involved (Law, 2004, 2006). The account of hesitancy linked to making a new consensus in doubling-time also links to the challenges of producing ‘evidence enough’ to displace epistemic power within the field: “We didn't really have a second opinion, there was no second opinion”;[13] “When they [other modellers] say, ‘Oh, we think the doubling-time is every 5–8 days’, you just think, ‘Well, they've probably got it right’”.[6] In this narrative, ‘the model you know’ – described variously here as “big”, “shiny”, “developed”, “well-founded”, “well-funded”, “trusted”, “strong”, “a military operation”, and “simulation ready” given its “long history” – entangles with an emergent model with a less recognisably authoritative hinterland – depicted as the “young player”, a “reasonably minor player in the great saga”. Extraordinary claims require extraordinary evidence:

When you've got other models that are sort of more developed, and have been developed by senior figures in the community for a long time, and they're saying, ‘Well, maybe not, maybe not, there's a lot of uncertainty’, it is hard to bring that forward. […] When their model [of faster doubling-time] was produced, and something was said, like, ‘Well, it [the epidemic] isn't so much of a problem just yet’, then you've got to be pretty sure of what you're saying in order to dispute that. […] Extraordinary claims require extraordinary evidence. If [another] model was the de facto base, then you needed extraordinary evidence to debunk it.[14]

Uncertainty does not only reside in the epistemic power of calculations but in their relations and affects. Scientists embody the affective atmospheres of the assemblage in which their calculations perform (Lahsen, 2005; Myers, 2015; Anderson and Adey, 2011). In the moment of calculating faster doubling-time, models, and modellers, entangle in an atmosphere of dis-ease. As we heard earlier when modelling faster doubling-time: “I just freaked out”; “I went pale”. The ‘bigness’ of epidemic is not just a numbering. Here is one account that locates evidence-making inside an affective atmosphere of pandemic in the weeks before lockdown:

It was always just this constant battle burnout time. We've got results that we needed to churn out to send to Government. […] You feel as if you have the weight of the world on your shoulders. […] I can't say that if I missed that task the whole of the UK response would fall apart, it wouldn't, it's just this isn't a world where if you don't do it, it doesn't matter. […] I was just quite frightened of going into lockdown. […] It was really intense and scary. Obviously we'd see model results before it hit the news, and you just sat there like, ‘This looks terrible, there is no vaccine, there is nothing, we don't see how there is a way out of this, all we know is we're coming into a period of lots of ill people, overwhelmed health service, deaths, isolation’. At that point in time, it was just scary and sad. […] It was very personal.[15]

6.5. Leaps of faith

As we have seen, a prime distinction performed in accounts of uncertain evidence-making is a contrast between idealised ‘normal’ science and science troubled by absented time. This distinction appears rooted in an onto-epistemological imaginary which contrasts ‘data’ as a source of privileged knowing through empirical fit in an observed world, with ‘faith’ in abstractions yet to have been grounded beyond reasonable doubt. The consensus-making of the SPI-M committee helps to bridge what scientists refer to as “the leap of faith” required when acting on uncertain evidence. The ‘leap of faith’ is an axiomatic epistemological requirement in an emerging epidemic, as the emergency has to be actualised prior to its event, rather than wait to be seen:

You somehow act on the faith that what the model is predicting is going to become reality unless you do something, and that is very difficult from a political point of view. It's much easier to wait until the situation is an emergency in order to motivate why you're taking drastic action. […] You would like to use models to act when the situation is not an emergency, but then you need to be confident. […] You have to not only convince yourself but convince the other people around you, that by taking an action that is disproportionate to the occurring situation visible to everybody you are doing the right thing, where the right thing is defined as you avoiding a dramatic problem later.[23]

Acting in an emergency is thus cast as a “judgement call”; a matter of using models as “qualitative interpretation” rather than “precision”. This means that “absolute numbers” can become a “side effect” in the gathering of evidence. Absented time is presented as both explanation for policy delay (a hesitancy rooted in uncertainty which makes it “much easier to wait”) and justification for acting quickly in faith (“By the time you get data which definitely shows you've got a problem, you're too late, by definition”). Modellers are again juggling the unrealisable expectations of evidence-based calculus which assumes there being “enough time to estimate the impact of what you are doing”, with the situated realities that “you have absolutely no idea” whether the action proposed is “going to be enough or not” and that “we are never really able to predict the impact of anything”.

Faster doubling-time performs an exponentially growing problem to energise the policy ‘leaps of faith’ required. The calculation of doubling-time promises a sense of technical control over time and nature; a taming, a comfort, of an uncertain and insecure future ‘out there’ through calculus. The metric of doubling-time feels like “giving yourself margin”, even if it is only “buying back three days”. This is calculus which seeks to locate nature in time and its measurement. Yet in the calculation of three-day doubling-time, science and policy emerge as already out-of-time and too late; a state of being “behind the curve” on a “catastrophically bad path”. Time instead resides in nature, not calculus. Nonetheless, scientists hold on to notions of empirical fit by arguing that it is safer to “hit hard and fast early on” in order to leave “enough margin to correct” by adapting the intensity of interventions according to their observed impacts in time. But this is always a ‘leap of faith’, made visible in emergencies. What counts is not so much knowing precisely as an atmosphere which energises action in the face of not knowing. Faster doubling-time presents as useful evidence because the assumption is that when not knowing or having the time to find out “the safe thing to do would be to be pessimistic”. Problematizing the epidemic bigger is less a concern of empirical fit, at the time or after-the-event, than of situational fit and a matter of concern in-the-now.

7. Evidence-making lockdown: a policy story

We have drawn attention to two main modelling actors in the evidence-making assemblage linked to the emergence of lockdown policy in the UK; an epidemiological model projecting the catastrophic failure of mitigation (Riley, 2020), and a model of faster doubling-time (Pellis et al., 2021). Each circulates in an atmosphere in mid-March said to be characterised as a “massive change in flow”. We are told that “all hell broke loose after the Tuesday”, the day that the first of these models was presented at SPI-M to catalyse a break-through towards a lockdown imaginary. By the Friday, “the Prime Minister was already looking at a whiteboard with what became the new plan”, and on this day three-day doubling-time also became ‘consensus’. Between those days, we are told that “it was carnage” and that “everything changed”.

There is a third model (among others) circulating in this assemblage. This is the now infamous “Report 9” (Ferguson et al., 2020), presented to SPI-M March 16th. Unlike the epidemiological model earlier circulated (Riley, 2020), Report 9 was a set of complex agent-based simulations generated by a model described as “predominant”. Both models enacted policies of doing nothing and of mitigation as potentially catastrophic. The tweaking of parameters doubling the anticipated burden of intensive care in the days before the release of the Report 9 model upsized the scale of the epidemic and was said to have led to a “sudden focusing of minds” (Adam, 2020). In a ‘do nothing’ scenario, a counterfactual which can upsize epidemics by assuming that behavioural alterations do not happen, 80% of the population become infected and 510,000 die. Even under mitigation scenarios, 250,000 deaths are projected. National lockdown is announced. The model incorporates a faster doubling-time, of 3.3 days, considered “prudent for worst case planning purposes” (SPI-M-O, 2020b).
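The headline figures hang together arithmetically (our consistency check, assuming a population of roughly 66–67 million and the infection fatality ratio of about 0.9% published in Report 9):

\[ 0.8 \times 66.5\,\text{million} \approx 53\,\text{million infections}, \qquad \frac{510{,}000}{53\,\text{million}} \approx 0.96\%, \]

so the ‘do nothing’ death toll follows directly from near-universal infection at that fatality ratio.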

The projections of Report 9 take flight. The model becomes “incredibly loud” as a boundary object working across science, policy and publics to perform, more-or-less, a singular evidence space with a primary author (Star, 1991). According to some, the model becomes “pegged as the single event”, as “this figurehead thing”; enacted publicly as the model, the evidence, for national lockdown.

Rather than policy being enacted here as consequential on modelled evidence translated as scientific consensus, we see that evidence comes to be in its policy event. Modellers' accounts draw attention to evidence as a thing that is made in policy, presenting this as a problem of ‘evidence-based’ policy not working as it should. It is suggested that the model is mobilised as useful evidence as a “game changer”, even as the “excuse to do the rational thing”, to make “the situation look much worse”, given the apparent default policy to date of “doing nothing”.[2] The “narrative was made using a model”.[7] The model's power-of-acting resides inside the narrative relations of the event; a policy story that “the epidemiological situation had changed”, that “things were worse than we thought”, and that “we had only just discovered that”.[7] Lockdown is presented as “because of the model”, said to “produce some new evidence”.[18] Here then, is a public performance of consequentialist evidence-based policy; of policy ‘following the science’. Yet, in alternative accounts, “that is not the case, it's not the case”. There is a messier story of entangling actors in the assemblage, some of which we have followed above, in which multiple objects of evidence – from epidemiological case data to projections to screaming scientists – problematized the epidemic as big as well as in urgent need of lockdown. The fluid transformations of evidence in policy make for an uncomfortable science:

We've got this very explicit pinning of modelling work on the decision. It was an absolute conscious decision to pin it on an incredibly complex calculation, which was not the right thing to do. […] We can't let that happen again. We can't pretend that something incredibly complicated is the reason the whole country has to change, when actually, the reasons were completely different and obvious. It's put back quantitative science. It's going to undermine confidence in the very good work that gets done. […] The narrative was made using a model, and in retrospect that was an awful mistake. […] It gives the perception that modellers had prevented some obvious steps being taken, when we were listening to modellers and didn't lock down.[7]

8. Discussion

We have traced how models fit within their situational relations to explore how evidence is made for policy in the Covid-19 pandemic. In this story, “the thing we call evidence” (Lancaster and Rhodes, 2022) does not present as a ready-made object for translation between science and policy, but emerges as an intra-action among actors in assemblage relations. Evidence, like assemblage, is a site of altering attachments, of ontological movement. The evidence that models make, and how models are constituted as useful in policy, is a matter of situational performance, wherein models are brought to life as fluid elements of their implementation events. This has helped us to see how the evidence generated by models ‘comes to be’ in the policy event of an unprecedented national lockdown, and also how evidence and policy emerge as entanglements, inseparable from one another, in an assemblage relationship.

8.1. Situational performance

In our view, models become fit, including for purpose, not only when they are empirically grounded after-the-event as if corresponding to a real ‘out there’, but as they fit inside their social, policy and political situations in the ‘here-and-now’ (Law, 2004). Once we appreciate evidence as relational, rather than as pre-existing or stable and presumed ready to be translated, we recognise all science as emergent, an effect of its unfolding evidence-making assemblage. Emergency situations afford useful lessons because they make visible some of the troubles, the dis-ease, the messy translation, of evidence-based claims in science and policy generally (Law and Singleton, 2005; Greenhalgh and Wieringa, 2011; Leach et al., 2021). Approached as performative, modelling science becomes more comfortable, more at-ease, in its situation. We therefore accentuate situational fit as at least as important as empirical fit when tracing models as achievements. This tells us that there is value to be gained by holding on less to narrative claims of science as if it is performing empirically-grounded and evidence-based policy translation, and by engaging more openly with modelling as a performative science of ontological intervention (Savransky and Rosengarten, 2016; Rhodes and Lancaster, 2021). This draws attention to how models are actually put-to-work, and made useful, in policy events, as in the case of an unprecedented national lockdown.

8.2. Numbers in the atmosphere

Whereas the ‘data story’ of doubling-time we told above gives primacy to emergent data discovery and calculus in a narrative reproducing evidence-based science, our ‘assemblage story’ draws attention to affective flows beyond calculus. Evidence-making here is a matter of ‘qualculation’, not merely calculation, in which multiple things – infections, hospitalisations, deaths, metrics, projections, infrastructures, institutions, scientists, emotions, affects, time, uncertainties – come into play with the potential to align in the evolving actor-network (Callon and Law, 2005). One element in this assemblage account is affective atmosphere (Anderson, 2009). The ‘bigness’ of the pandemic, and the evidencing of the UK epidemic growing bigger and faster, is situated in affects wherein uncertainty and dis-ease are inseparable from, and also extend, calculus. Models and scientists are not detached from, but embody, atmospheres of dis-ease energised by the emergency situation in relation to epistemic uncertainty and anticipated disaster. The making of pandemics big enacts an affective atmosphere of anticipation and indeterminacy – of time that is neither present nor future; an interval of emergency in which potentiated catastrophe is yet to happen; and a narrowing opportunity to act, with more or less certainty (Anderson and Adey, 2011). Affective atmospheres can thus be approached as a “relation of tension” (Anderson, 2009); what we trace in this analysis as an embodiment, through models, of the collective affects of emergent epidemic and dis-ease. What this also tells us is that responses to emerging epidemics, in science as well as in policy, do not arise in calculations alone but in affective practices.

8.3. Uncomfortable science

The thing that is constituted as ‘evidence enough’ and ‘useful’ in policy is a differently situated achievement than in science (Lancaster et al., 2020). We see this, for instance, in how SPI-M performs as a device of corroboration and consensus which works in emergency conditions of absented time but which falls short of scientific expectation. This is obvious – evidence is performed differently in the absence of time, and ‘for’ policy, and expert committees coordinate their evidence-making accordingly. But it doesn't make navigating these different versions of modelling science any the more comfortable. The model of emerging epidemic materialises dis-ease into troubled technological solution, as science seeks to know, and help govern, as best and as fast as it can.

Perhaps modelling in pandemics becomes an uncomfortable science? This is a version of science, affected in pandemic atmosphere, which materialises dis-ease in its doing, especially as threats are projected nearer and larger in the face of unprecedented decisions. The ‘leaps of faith’ required to make and use evidence not only discount scientific method, an epistemological problem, but reproduce uncomfortable affects, an ontological concern. This version of modelling science is tricky and unnerving. As we have seen, scientists hold on, as good scientists do, to the narratives and ideals of evidence-based science – characterised as corroboration, abduction, empirical fit and consensus – but may not feel that their actualised science is ‘evidence enough’ or that this is how science should be done; a liminal space that can be uncomfortable to embody. Similarly, holding on to narratives of consequentialist evidence-based policy – characterised by idealised hope that ‘evidence enough’ can make a difference to policy – is troubled when science for policy seemingly breaks down rather than breaks through. Holding on to science as if evidenced enough and as if consequential knowledge for policy is, of course, itself a performance, and a comfort, which helps navigate liminality without fully ‘letting-go’ (Rhodes and Lancaster, 2021). Uncomfortable science is the embodiment of dis-ease when trying to perform science and make it work in conditions which make visible its evidence-based ideals as troubled or illusory.

8.4. Performing science

And one final word about this analysis. We follow the argument that the “difference between telling stories and acting realities isn't so large”, and that this means “our stories aren't simply innocent descriptions” (Law and Singleton, 2000: 769). The stories that models, that science, that we, perform can “bring aid and comfort to existing performances” or can make “difference” by enacting things otherwise. Stories in science are thus analytical and political, and we make ours here, about mathematical models and the making of pandemics big, not to trace what might be empirically right or wrong, but to see models and evidence as matters of performance that can be made up in multiple ways.

Notes on Figure 1

  1. Day 0 is 30th January; the first date in the plot (day 29) is 28th February.

  2. Monday 23rd March is Day 53 in this model.

  3. The purple crosses are additional data points added after the model was run.

  4. The x/y intersects with the growth lines are: the red dotted line marks an estimated current figure for UK ICU bed capacity (assuming that Covid cases can access 2/3 of the 4,000 beds available in total); the red dash-dotted line is the same but with additional surge capacity providing a total of 7,000 beds (2/3 for Covid); and the blue line marks the total hospital bed allocation available for Covid patient care (20,000 beds). A sketch of this intersect calculation follows these notes.
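
As a reading aid for note 4, the following sketch shows how such capacity-intersect points can be computed under an assumption of exponential growth. The starting occupancy (200 beds) and the 3.3-day doubling-time are hypothetical stand-ins for illustration, not values recovered from the plotted model.

    import math

    # Capacity thresholds as described in note 4 above.
    THRESHOLDS = {
        "ICU, current (2/3 of 4,000 beds)": 4000 * 2 / 3,
        "ICU, with surge (2/3 of 7,000 beds)": 7000 * 2 / 3,
        "Total Covid hospital beds": 20000,
    }

    def days_to_capacity(current: float, capacity: float, doubling_time_days: float) -> float:
        """Days until growth at the given doubling-time crosses capacity:
        solves current * 2 ** (t / T_d) = capacity for t."""
        return doubling_time_days * math.log2(capacity / current)

    # Hypothetical starting point: 200 occupied beds, doubling every 3.3 days.
    for label, capacity in THRESHOLDS.items():
        print(f"{label}: crossed in ~{days_to_capacity(200, capacity, 3.3):.0f} days")
    # With these assumptions, current ICU capacity is crossed in about 12 days,
    # surge ICU capacity in about 15, and the total bed allocation in about 22;
    # this is the narrowing interval in which the figure locates the decision to act.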

Author contribution

TR led the study, analysis, and writing, and undertook the interviewing. KL co-led the study, contributing to analysis and writing.

Declaration of competing interest

The authors have no conflicts of interest to declare.

Acknowledgements

We thank our participants who generously gave their time to work in dialogue with us. This project is part supported by an Australian Research Council Discovery Project grant (DP210101604) and part supported through London School of Hygiene and Tropical Medicine. We are grateful for support from the UNSW SHARP (Professor Tim Rhodes) and Scientia (Associate Professor Kari Lancaster) schemes.

References

  1. Abeysinghe S. An uncertain risk: the World Health Organization's account of H1N1. Sci. Context. 2014;27:511–529. doi: 10.1017/s0269889714000167. [DOI] [PubMed] [Google Scholar]
  2. Adam D. Modelling the pandemic. Nature. 2020;580:316–318. doi: 10.1038/d41586-020-01003-6. [DOI] [PubMed] [Google Scholar]
  3. Adams V., Murphy M., Clarke A.E. Anticipation: technoscience, life, affect, temporality. Subjectivity. 2009;28:246–265. [Google Scholar]
  4. Anderson B. Affective atmospheres. Emot., Space Soc. 2009;2:77–81. [Google Scholar]
  5. Anderson B., Adey P. Affect and security: exercising emergency in UK civil contingencies. Environ. Plann. D. 2011;29:1092–1109. [Google Scholar]
  6. Anderson W. The model crisis, or how to have critical promiscuity in the time of Covid-19. Soc. Stud. Sci. 2021;51:167–188. doi: 10.1177/0306312721996053. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Bacchi C., Goodwin S. Palgrave; New York: 2016. Poststructural Policy Analysis. [Google Scholar]
  8. Barad K. Duke; London: 2007. Meeting the Universe Halfway. [Google Scholar]
  9. Bradley R., Roussos J. Following the science: pandemic policy making and reasonable worst-case scenarios. LSE Public Pol. Rev. 2021;1:6. [Google Scholar]
  10. Brooks-Pollock E., Danon L., Jombart T., Pellis L. Modelling that shaped the early COVID-19 pandemic response in the UK. Philos. Trans. Roy. Soc. B. 2021;376:20210001. doi: 10.1098/rstb.2021.0001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Caduff C. University of California Press; Oakland: 2015. The Pandemic Perhaps. [Google Scholar]
  12. Callon M., Law J. On qualculation, agency and otherness. Environ. Plann. 2005;23:717–723. [Google Scholar]
  13. Callon M., Muniesa F. Economic markets as calculative and calculated collective devices. Organ. Stud. 2005;26:1229–1250. [Google Scholar]
  14. Christley R.M., Mort M., Wynne B., Wastling J.M., Heathwaite A.L., Pickup R., Austin Z., Latham S.M. “Wrong, but useful”: negotiating uncertainty in infectious disease modelling. PLoS One. 2013;8 doi: 10.1371/journal.pone.0076277. [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Colebatch H.K. third ed. Open University Press; Buckingham, UK: 2009. Policy. [Google Scholar]
  16. Cooper M. Pre-empting emergence. Theor. Cult. Soc. 2006;23:113–135. [Google Scholar]
  17. Diprose R., Stephenson N., Mills C., Race K., Hawkins G. Governing the future: the paradigm of prudence in political technologies of risk management. Secur. Dialog. 2008;39:267–288. [Google Scholar]
  18. Dubois M., Wake C. Overseas Development Institute; London: 2015. The Ebola Response in West Africa: Exposing the Politics and Culture of International Aid. Humanitarian Group Working Paper. [Google Scholar]
  19. Duff C. Assemblages of Health. Springer; London: 2014. [Google Scholar]
  20. Evans R. SAGE advice and political decision-making: ‘Following the science’ in times of epistemic uncertainty. Soc. Stud. Sci. 2021;52:53–78. doi: 10.1177/03063127211062586. [DOI] [PubMed] [Google Scholar]
  21. Ferguson N.M., Cummings D.A., Cauchemez S., et al. Strategies for containing an emerging influenza pandemic in Southeast Asia. Nature. 2005;437:209–214. doi: 10.1038/nature04017. [DOI] [PubMed] [Google Scholar]
  22. Ferguson N., et al. Imperial College COVID-19 Response Team; 2020. Impact of Non-pharmaceutical Interventions (NPIs) to Reduce COVID-19 Mortality and Healthcare Demand. Report 9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Foucault M. Palgrave Macmillan; Basingstoke: 2009. Security, Territory, Population (Trans G Burchell) [Google Scholar]
  24. Funk S., Abbott S., Atkins B.D., Baguelin M., Baillie J.K., Birrell P., et al. 2020. Short-term Forecasts to Inform the COVID-19 Response. [DOI] [Google Scholar]
  25. Glasser J.W., Hupert N., McCauley M.M., Hatchett R. Modeling and public health emergency responses: lessons from SARS. Epidemics. 2011;3:32–37. doi: 10.1016/j.epidem.2011.01.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Greenhalgh T., Wieringa S. Is it time to drop the ‘knowledge translation’ metaphor? A critical literature review. J. R. Soc. Med. 2011;104:501–509. doi: 10.1258/jrsm.2011.110285. [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Hacking I. Cambridge University Press; Cambridge: 1990. The Taming of Chance. [Google Scholar]
  28. Huppert A., Katriel G. Mathematical modelling and prediction in infectious disease epidemiology. Clin. Microbiol. Infect. 2013;19:999–1005. doi: 10.1111/1469-0691.12308. [DOI] [PubMed] [Google Scholar]
  29. King N.B. The scale politics of emerging diseases. Osiris. 2004;19:62–76. doi: 10.1086/649394. [DOI] [PubMed] [Google Scholar]
  30. Lahsen M. Seductive simulations? Uncertainty distribution around climate models. Soc. Stud. Sci. 2005;35:895–922. [Google Scholar]
  31. Lakoff A. University of California Press; 2017. Unprepared: Global Health in a Time of Emergency. [Google Scholar]
  32. Lancaster K. Performing the evidence-based drug policy paradigm. Contemporary Drug Problems. 2016;43:142–153. [Google Scholar]
  33. Lancaster K., Rhodes T. The thing we call evidence: towards a situated ontology of evidence in policy. In: Ehlers S., Esselborn S., editors. Evidence in Action between Science and Society: Constructing, Validating, and Contesting Knowledge. Routledge; London: 2022. [Google Scholar]
  34. Lancaster K., Rhodes T., Rosengarten M. Making evidence and policy in public health emergencies: Lessons from COVID-19 for adaptive evidence-making and intervention. Evidence and Policy. 2020;16:477–490. [Google Scholar]
  35. Latour B. Why has critique run out of steam? From matters of fact to matters of concern. Crit. Inq. 2004;30:225–248. [Google Scholar]
  36. Law J. Routledge; London: 2004. After Method. [Google Scholar]
  37. Law J., Singleton V. Object lessons. Organization. 2005;12:331–335. [Google Scholar]
  38. Law J. Disaster in agriculture: or foot and mouth mobilities. Environ. Plann. 2006;38:227–239. [Google Scholar]
  39. Law J., Singleton V. Performing technology's stories. Technol. Cult. 2000;41:765–775. [Google Scholar]
  40. Leach M., Scoones I. The social and political lives of zoonotic disease models. Soc. Sci. Med. 2013;88:10–17. doi: 10.1016/j.socscimed.2013.03.017. [DOI] [PubMed] [Google Scholar]
  41. Leach M., MacGregor H., Ripoll S., Scoones I., Wilkinson A. Rethinking disease preparedness: incertitude and the politics of knowledge. Crit. Publ. Health. 2021. [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Mol A. Duke University Press; Durham: 2002. The Body Multiple: Ontology in Medical Practice. [Google Scholar]
  43. Medley G. In defence of SAGE's models. Spectator. October 21, 2021. [Google Scholar]
  44. Meltzer M.I., Atkins C.Y., Santibanez S., Knust B., Petersen B.W., Ervin D., Nichol S.T., Damon K., Washington M.L. Estimating the future number of cases in the Ebola epidemic: Liberia and Sierra Leone, 2014-2015. MMWR (Morb. Mortal. Wkly. Rep.) 2014;63:1–14. [PubMed] [Google Scholar]
  45. Myers N. Duke University Press; London: 2015. Rendering Life Molecular: Models, Modelers and Excitable Matter. [Google Scholar]
  46. Parker M., Hanson T.M., Vandi A., Babawo L.S., Allen T. Ebola and public authority: saving loved ones in Sierra Leone. Med. Anthropol. 2019;38:440–454. doi: 10.1080/01459740.2019.1609472. [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Pellis L., Scarabel F., Stage H.B., Overton C.E., Chappell L.H.K., Fearon E., et al. Challenges in control of COVID-19: short doubling time and long delay to effect of interventions. Philos. Trans. Roy. Soc. B. 2021;376:20200264. doi: 10.1098/rstb.2020.0264. [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Rhodes T., Lancaster K. Evidence-making interventions in health: A conceptual framing. Social Science and Medicine. 2019;238:112488. doi: 10.1016/j.socscimed.2019.112488. [DOI] [PubMed] [Google Scholar]
  49. Rhodes T., Lancaster K. Mathematical models as public troubles in COVID-19 infection control: Following the numbers. Health Sociology Review. 2020;29:177–194. doi: 10.1080/14461242.2020.1764376. [DOI] [PubMed] [Google Scholar]
  50. Rhodes T., Lancaster K. Excitable models: Projections, targets, and the making of futures without disease. Sociology of Health and Illness. 2021;43:859–880. doi: 10.1111/1467-9566.13263. [DOI] [PMC free article] [PubMed] [Google Scholar]
  51. Rhodes T., Lancaster K., Lees S., Parker M. Modelling the pandemic: Attuning models to their contexts. BMJ Global Health. 2020;5 doi: 10.1136/bmjgh-2020-002914. [DOI] [PMC free article] [PubMed] [Google Scholar]
  52. Rice K., Wynne B., Martin V., Ackland G.J. Effect of school closures on mortality from coronavirus disease 2019: old and new predictions. Br. Med. J. 2020;371:3588. doi: 10.1136/bmj.m3588. [DOI] [PMC free article] [PubMed] [Google Scholar]
  53. Riley S. 2020. Mitigation of COVID-19 Epidemics Will Likely Fail if the Population Reduces Rates of Transmission in Response to the Saturation of Critical Care Facilities. Paper prepared for SPI-M, March 9, 2020. [Google Scholar]
  54. Rose D.B., van Dooren T., Chrulew M. In: Extinction Studies: Stories of Time, Death, and Generations. Rose D.B., van Dooren T., Chrulew M., editors. Columbia University Press; New York: 2017. Telling extinction stories; pp. 1–18. [Google Scholar]
  55. Royal Society. Royal Society; 2020. Reproduction Number and Growth Rate of the COVID-19 Epidemic in the UK. [Google Scholar]
  56. SAGE. 2020. Addendum to Eighteenth SAGE Meeting on Covid-19. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/888787/S0386_Eighteenth_SAGE_meeting_on_Covid-19_.pdf March 23, 2020. [Google Scholar]
  57. SAGE. 2020. Addendum to Nineteenth SAGE Meeting on Covid-19. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/888789/S0387_Nineteenth_SAGE_meeting_on_COVID-19_.pdf March 23, 2020. [Google Scholar]
  58. Saminmian-Daresh L. Practicing uncertainty: scenario-based preparedness exercises in Israel. Cult. Anthropol. 2016;31:359–386. [Google Scholar]
  59. Savransky M., Rosengarten M. What is nature capable of? Evidence, ontology and speculative medical humanities. Med. Humanit. 2016;42:166–172. doi: 10.1136/medhum-2015-010858. [DOI] [PubMed] [Google Scholar]
  60. Seale C. Sage; London: 1999. The Quality of Qualitative Research. [Google Scholar]
  61. SPI-M-O. 2020. Consensus View on Covid-19. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/882721/24-spi-m-o-consensus-view-20032020.pdf March 20, 2020. [Google Scholar]
  62. SPI-M-O. 2020. Consensus View on Scenario Planning. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/887470/26-spi-m-o-working-group-scenario-planning-consensus-view-25032020.pdf March 25, 2020. [Google Scholar]
  63. Star S.L. In: Women, Work and Computerization. Eriksson I., Kitchenham B.A., Tijdens K.G., editors. North Holland; Amsterdam: 1991. Invisible work and the silenced dialogues in representing knowledge; pp. 81–92. [Google Scholar]
  64. Stewart E.A., Smith K.E. ‘Black magic’ and ‘gold dust’: the epistemic and political uses of evidence tools in public health policy making. Evid. Pol. 2015;11:415–437. [Google Scholar]
  65. Tsing A.L. Princeton University Press; Princeton, NJ: 2015. The Mushroom at the End of the World. [Google Scholar]
  66. Vallance P. 2020. BBC News. March 10, 2020. [Google Scholar]
  67. Verran H. In: Mathematics, Substance and Surmise. Davis E., Davis P., editors. Springer; 2015. Enumerated entities in public policy and governance; pp. 365–379. [Google Scholar]
  68. Waller E., Davis M., Stephenson N. Australia's pandemic influenza ‘Protect’ phase: emerging out of the fog of pandemic. Crit. Publ. Health. 2016;26:99–113. [Google Scholar]
  69. Williams S. ‘Plausible uncertainty’: the negotiated indeterminacy of pandemic influenza in the UK. Crit. Publ. Health. 2008;18:77–85. [Google Scholar]
  70. Woolgar S., Lezaun J. The wrong bin bag: a turn to ontology in science and technology studies? Soc. Stud. Sci. 2013;43:321–340. [Google Scholar]
