Abstract
In this paper, we simulate outbreaks of foot-and-mouth disease in the Commonwealth of Pennsylvania, USA – after the introduction of a state-wide movement ban – as they might unfold in the presence of mitigation strategies. We have adapted a model previously used to investigate FMD control policies in the UK to examine the potential for disease spread given an infection seeded in each county in Pennsylvania. The results are highly dependent upon the county of introduction and the spatial scale of transmission. Should the transmission kernel be identical to that for the UK, the epidemic impact is limited to fewer than 20 premises, regardless of the county of introduction. However, for wider kernels where infection can spread further, outbreaks seeded in or near the county with highest density of premises and animals result in large epidemics (>150 premises). Ring culling and vaccination reduce epidemic size, with the optimal radius of the rings being dependent upon the county of introduction. Should the kernel width exceed a given county-dependent threshold, ring culling is unable to control the epidemic. We find that a vaccinate-to-live policy is generally preferred to ring culling (in terms of reducing the overall number of premises culled), indicating that well-targeted control can dramatically reduce the risk of large scale outbreaks of foot-and-mouth disease occurring in Pennsylvania.
Keywords: Foot and mouth disease, Spatial model, USA, Culling, Vaccination
1. Introduction
In the event of an outbreak of foot-and-mouth disease (FMD) in the USA, it is likely that mathematical models will be used to inform policy decisions with respect to control. The use of models in this context has become controversial (Anon., 2001; see Dickey et al., 2008 for a brief review); nevertheless, considerable effort has been expended in both Europe and the USA to formulate useful model frameworks for FMD (reviewed in Dickey et al., 2008 and Ward et al., 2009).
In Europe, spatial, stochastic premises-based models have been developed and used during FMD epidemics in 2001 and 2007 (Keeling et al., 2001; Ferguson et al., 2001a, b) and, retrospectively, to investigate the value of specific culling and vaccination strategies with respect to various definitions of success (Keeling et al., 2003; Ferguson et al., 2003; Tildesley et al., 2006). Many of these European models benefitted from relatively easy access to agricultural records concerning the size and location of farming premises. Census data like these, together with industry-specific registers, facilitated the development of detailed premises-based models of infectious diseases deemed to be a major threat to the livestock industry (e.g. Truscott et al., 2007; Sharkey et al., 2008; Tildesley et al., 2009). However, as Bates et al. (2003a,b,c) point out, comprehensive data sets on herd locations in the USA are not available in the public domain. In consequence, most published studies involving premises-based models of foot and mouth disease in the USA have depended upon synthetic farm maps that generally mimic the overall density of premises in particular regions (e.g. Schoenbaum and Disney, 2003), synthetic maps with the same marginal properties as the county or zip code census data (e.g. Ward et al., 2009), or real maps for specific and relatively small regions in the USA using data mostly collected independently of the census (e.g. Bates et al., 2003a,b,c; Carpenter et al., 2007; Kobayashi et al., 2007). The problem was exacerbated in February 2010, when the United States Department of Agriculture abandoned an ambitious plan to create a National Animal Identification System and announced a new framework for animal disease traceability in the United States (Anon., 2010). This new framework will apply only to animals moved in interstate commerce, which means that there is little likelihood – in the foreseeable future – that spatial, stochastic models of disease transmission between farms in the USA will involve real maps of farm locations except for those generated by individual research groups, and experience shows that such maps rarely include more than a few counties (Bates et al., 2003a,b,c; Carpenter et al., 2007; Kobayashi et al., 2007). Even so, simulation studies have shown that synthetic maps can be of use in addressing questions of optimal disease control strategies in a range of scenarios (Tildesley et al., 2010; Riley, 2010; Rorres et al., 2011a,b).
The second problem facing those who wish to model the control of FMD in the USA is that there is no recent empirical basis for estimating the values of the parameters that govern transmission between premises. Most of the models used to investigate FMD in farmed animals in the USA begin with the assumption that there is direct transmission (animal to animal contact) and indirect transmission (wind borne transmission of virus and transmission via fomites) and that both kinds of transmission are crucially dependent upon the nature of the farming enterprise (species, herd size, management) and, importantly, on animal movement between premises, sale barns and shows. The influence of these factors on the contact rate and on the probability of infection given contact is evaluated on the basis of expert opinion (e.g. Bates et al., 2003a; Ward et al., 2009; Schoenbaum and Disney, 2003). The difficulty with this is that the responses from the experts are highly variable, leading to wide and flat probability distributions (Bates et al., 2003a). How complex a model has to be to be useful is perennially debated. Experience shows that many models that are used to predict spread and control of disease are not necessarily comprehensive and detailed descriptions of the system of interest (Keeling, 2005). However, highly complex models often engender user confidence (Smith, 2011). Dickey et al. (2008) found that simplified models of FMD spread in the USA always generated different results with respect to epidemic duration and relative infection risk for different production sectors than did the “full” model (Bates et al., 2003a) that incorporated premises-specific heterogeneities. They argued that this result established the “need for heterogeneous, operation-specific contact parameters”. However, in practice, models are usually used comparatively, to rank putative control strategies according to some definition of success (e.g. epidemic impact, duration of the epidemic). Dickey et al. (2008) did not show (because this was not considered) that the simplified models ranked different control strategies any differently from the “full” model. Their work serves to emphasize that the purpose for which a model was constructed must be clearly delineated and, provided it serves this function, it can be useful even if it is deficient in other respects (Smith, 2011). Nevertheless, the complex manner in which the existing models of FMD in the USA represent transmission between premises proved useful because it provided a concrete framework for presentation to panels of experts. It was a practicable solution in the absence of an actual outbreak. Furthermore, models that are concerned with predicting the extent of spread in the occult phase of an outbreak (prior to detection) are highly sensitive to small changes in the transmission parameters describing long distance (as opposed to “local”) spread by animal movement (Keeling et al., 2001; Green et al., 2006) and it is clearly reasonable to try to capture these phenomena in the model architecture. However, the situation changes once the infection is detected: bans on animal movement are implemented and transmission is dominated by local spread.
Investigations of the behavior of models that use transmission kernels based on the principle that ‘proximity to the disease is the greatest risk factor for its secondary spread’ (Gibbens and Wilesmith, 2002; Bessell et al., 2008) show that, when local spread predominates, the qualitative ranking of control strategies is not sensitive to the quantitative details of transmission (Keeling et al., 2001).
The functional form of these kinds of transmission kernels is determined empirically by contact tracing (e.g. Haydon et al., 2003; Kao, 2002), or by model fitting (e.g. Diggle, 2006; Rorres et al., 2010, 2011a, b) to epidemic data. Empirically derived kernels like this describe the probability of a susceptible set of premises being infected by an infectious set of premises as a monotonically declining function of the distance between them. Some adjustment is usually needed to take into account the number of animals on the infectious and susceptible premises respectively and for the specific location of the outbreak (Keeling et al., 2001; Bessell et al., 2010). Whilst this methodology assumes nothing about the actual mechanism of transmission, it is easily able to capture, for example, the dramatic reduction in contact distance that is consequent upon animal movement bans (Chis Ster and Ferguson, 2007). Given the admitted difficulties of basing transmission estimates on expert opinion (Bates et al., 2003a,b,c), it is of interest to ask whether the transmission kernels based upon actual outbreaks of FMD in Europe may be useful in providing rules of thumb for the control of FMD in the USA. This is the purpose of our paper.
The paper is set out as follows. First, we describe how we have adapted the FMD model first used by Keeling et al. (2001) for the 2001 FMD outbreak in the UK to the US context. This model has been used to investigate a range of scenarios both for the UK and elsewhere (Tildesley et al., 2006; Tildesley and Keeling, 2008) but in order not to stretch the credulity of the reader we confine our modifications to those that are most appropriate for the dairying regions of the North Eastern USA, and, specifically, Pennsylvania. Previous models of FMD in the USA have focused on the high density livestock industry in Texas or the (mostly) dairying operations of California (Ward et al., 2009; Bates et al., 2003a,b,c; Carpenter et al., 2007, 2011). The authors of those papers assiduously cautioned that their models may not apply elsewhere in the USA (even within the same state) such is the diversity of livestock systems (Bates et al., 2003c) and so we are careful to explain why we believe the model described here may be relevant to the conditions in Pennsylvania. Second, we examine whether or not the transmission parameters estimated for the 2001 UK outbreak (after the ban on animal movement was in place) are sufficient to generate FMD outbreaks in Pennsylvania and, if not, what modifications are necessary for outbreaks to occur. Third, we examine the extent to which it is possible to generate qualitatively consistent rules of thumb for the control of FMD in Pennsylvania. Our thinking is that such rules of thumb could inform policy decisions in the early stages of an outbreak before the full quantitative details of transmission had been established. We conclude with a brief discussion of the constraints our assumptions place upon the kinds of questions that can be addressed using our model.
The policies and standard operating procedures that will govern how a confirmed outbreak of FMD in Pennsylvania will be controlled involve the establishment of a policed quarantine area within which there will be an absolute ban on the movement of susceptible animals and severe restrictions on the movement of non-susceptible animals and agricultural vehicles (Anon., 2000, 2003, 2006). Our model is specifically designed to evaluate the control policies that are put in place after the ban on animal movement. The stated control policies rely upon depopulation (i.e. the culling of all animals on known infected and possible contact farms) as the first resort unless certain specific thresholds are reached (Anon., 2000), in which case vaccination would be considered. However, there is a renewed interest in vaccination for the control of FMD even in countries – like the USA – that have previously depended upon depopulation (Golde et al., 2005; Cox et al., 2005; Parida et al., 2008; Kobayashi et al., 2009; Paton et al., 2009; Spickler and Roth, 2011) and for this reason we considered both depopulation and vaccination.
2. Method
2.1. The host/virus system
We assume that an FMD virus identical to the one that caused the 2001 FMD epidemic in the UK is introduced into Pennsylvania. Cattle were the key host species in the 2001 UK epidemic. In the early phase of this epidemic, around 88% of the infected premises were infected by cattle and only 12% by sheep. At this stage, sheep-to-sheep transmission accounted for only 3.1% of the infected premises. Later on, 91% of the IPs were infected by cattle and 8.9% by sheep, with sheep-to-sheep transmission accounting for only 2.3% of infections (Keeling et al., 2001; Chis Ster and Ferguson, 2007). Pigs, too, were very underrepresented amongst the affected species in the 2001 epidemic (Gibbens et al., 2001). Subsequent investigations showed that, although the virus spread readily within populations of pigs (Orsel et al., 2007; Quan et al., 2009), local spread between pig herds was much more limited because of the resistance of pigs to infection by aerosols (Donaldson et al., 2001; Donaldson and Alexandersen, 2001; Alexandersen and Donaldson, 2002; Spickler and Roth, 2011), the much better biosecurity on pig farms, and legislation unrelated to FMD that regulated the movement of swine from farm to farm (Gibbens et al., 2001). We assume that conditions in Pennsylvania are similar enough to those in the UK (USDA, 1995, 2007, 2009, and see below) to warrant excluding pigs from our model and to consider spread only via cattle and sheep farms. The limitations this assumption placed upon the model are discussed later. We also ignore the possible role of wildlife in the transmission of FMD in Pennsylvania. Feral swine are a relatively recent introduction in the Commonwealth and may even be declining in numbers as the result of a concerted eradication campaign (Anon., 2011). Elk are resistant to infection (Rhyan et al., 2008) and, although white-tailed deer are susceptible to infection and may transmit FMD virus under laboratory conditions (McVicar et al., 1974), there is no serological evidence that cervids have ever been involved in any FMD outbreak in Europe (Mouchantat, 2005). The deer that were culled during the 1924 outbreak in California were identified as infected with FMDV on the basis of proximity to confirmed cases in cattle herds and clinical signs that were similar (but not identical) to the lesions seen in cattle (Mohler, 1926). We have assumed in our model that FMDV does not become established in the wild or domesticated cervid population in Pennsylvania.
2.2. Conditions in Pennsylvania
Animal agriculture in Pennsylvania (farm density, type and size) is broadly similar to that found in England and Wales. For example, although the percentage of land under agricultural use in England and Wales (72%) is different from Pennsylvania (27%), the average density of farms per agricultural acre is the same (0.008 farms per acre) as is, of course, the average farm size (124 acres). In both regions, dairying is a major component of the agricultural economy. In England and Wales, the dairy industry represents about one third of livestock output and provides about half of the animals used in beef production. In Pennsylvania, the dairy industry represents about 48% of livestock output. However, it should be noted that there are more dairy farms (18,284) in England and Wales than in Pennsylvania (8333) and that the average cow herd tends to be larger (99 in England and Wales, 66 in Pennsylvania) (Pennsylvania Agricultural Statistics 2010–2011, USDA National Agricultural Statistics Service; Department for Environment, Food and Rural Affairs, Agricultural and Horticultural Census 2010). We know of no formal study comparing the agricultural infrastructure of the two regions and, specifically, we know very little about animal movement patterns between counties in Pennsylvania. But this has no bearing on the deployment of our model because we deal here only with the post-detection phase of an outbreak when restrictions on animal movement are in place and viral spread is dominated by local processes whose properties, in the first instance, we describe using a standard transmission kernel.
2.3. The map
The Census of Agriculture was used to develop a surrogate farm demography database for cattle and sheep farms in Pennsylvania. Farms were allocated random locations within their respective counties, whilst the known farm size distributions within each county were used to assign a farm size to each farm (see Table 1 for Adams County, Pennsylvania). The greatest concentration of farms is found in Lancaster County and for the purposes of the results in this paper counties are color-coded according to their proximity to Lancaster County. Fig. 1a shows a map of all 67 counties in Pennsylvania, where the color of each county represents its relative proximity (from 1 to 67, where Lancaster County is 1) to Lancaster County.
Table 1.
Census of Agriculture entry for 2002 for cattle farms in Adams County, Pennsylvania. The total numbers of farms and cattle in each farm size category are listed.
| Farm size (number of cattle) | Farms | Cattle |
|---|---|---|
| 1–9 | 141 | 658 |
| 10–19 | 77 | 1048 |
| 20–49 | 111 | 3363 |
| 50–99 | 54 | 3375 |
| 100–199 | 38 | 5093 |
| 200–499 | 18 | 5439 |
| 500+ | 5 | 7799 |
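To illustrate the allocation procedure described above, the short Python sketch below builds a surrogate county from size categories like those in Table 1. The rectangular county outline, the uniform draw of herd sizes within each census class and the 2000-head cap on the open-ended 500+ class are illustrative assumptions, not details of the actual database construction.

```python
import random

# Farm-size categories for cattle farms as in Table 1 (lower bound, upper
# bound, number of farms); the open-ended 500+ class is capped at an
# assumed 2000 head purely for illustration.
SIZE_CATEGORIES = [(1, 9, 141), (10, 19, 77), (20, 49, 111), (50, 99, 54),
                   (100, 199, 38), (200, 499, 18), (500, 2000, 5)]

def build_surrogate_county(bbox, categories, seed=0):
    """Give each farm a random location inside a rectangular stand-in for
    the county and a herd size drawn uniformly from its census category."""
    rng = random.Random(seed)
    xmin, ymin, xmax, ymax = bbox
    farms = []
    for low, high, n_farms in categories:
        for _ in range(n_farms):
            farms.append({
                "x": rng.uniform(xmin, xmax),   # easting, km
                "y": rng.uniform(ymin, ymax),   # northing, km
                "cattle": rng.randint(low, high),
            })
    return farms

# Example: a hypothetical 40 km x 40 km county.
county = build_surrogate_county((0.0, 0.0, 40.0, 40.0), SIZE_CATEGORIES)
print(len(county), "farms;", sum(f["cattle"] for f in county), "cattle")
```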
Fig. 1.
(a) Map of the counties of Pennsylvania. Counties are colored in order of their relative distances from Lancaster County, as indicated by the color bar. Lancaster County is represented by the black dot, Carbon by the red dot, Fayette by the magenta dot, Jefferson by the blue dot and Lebanon by the green dot. (b) Bar graph showing the number of farms in each county of Pennsylvania. Counties are ordered from 1 to 67 according to their proximity to Lancaster County. Each bar is colored according to its distance to Lancaster as in (a).
According to the June 2002 census, there were 3536 farms in Lancaster County with sheep or cattle (leftmost bar in Fig. 1b). Washington County, in the west, had the next highest number, at 1445, with moderate farm densities also found for other western counties (Fig. 1b). A fairly even distribution of farms is found in the remainder of the Commonwealth.
Before proceeding to use our surrogate farm database for Pennsylvania to investigate the risk of spread of disease given an outbreak of FMD in Pennsylvania and the potential impact of intervention strategies for a range of model parameters, we note that the use of a county-level random model ignores within-county spatial heterogeneities and spatial correlations between the locations of farms, which may influence preferred control strategies (e.g. Bessell et al., 2008). Previous work suggests that parameterization of such a model to early outbreak data will subsume the effect of the random location assumption within the model parameters (Tildesley et al., 2010; Riley, 2010). Therefore, in the event of an epidemic in Pennsylvania, case reporting data from the first few days of the outbreak could be used to fine-tune the parameters of a county-level model such as the one presented in this paper.
2.4. The model
The mathematical model used throughout this paper was first used by Keeling et al. (2001) to predict spread and optimal control strategies during the course of the 2001 FMD outbreak in the UK. The model has been adapted since to investigate a range of scenarios both for the UK and elsewhere (Tildesley et al., 2006; Tildesley and Keeling, 2008). The rate at which an infectious farm i infects a susceptible farm j is given by:
rateij = (Σs Ts Ns,i^qs) × (Σs Ss Ns,j^ps) × K(dij)    (1)

where the sums run over the livestock species s (cattle and sheep) present on each farm. Ns,i is the number of livestock of species s recorded as being on farm i, Ss and Ts measure the species-specific susceptibility and transmissibility, dij is the distance between farms i and j, and K(dij) is the transmission kernel, which is estimated from contact tracing and is a function of the distance between farms i and j (Keeling et al., 2001). The parameters ps, pc, qs and qc are power law parameters accounting for a non-linear increase in susceptibility and transmissibility as animal numbers on a farm increase and provide a closer fit to the 2001 data than when these powers are set to unity (Diggle, 2006; Tildesley et al., 2008; Deardon et al., 2009). In the UK version of the model, the parameters are determined for five distinct regions: Cumbria, Devon, the rest of England (excluding Cumbria and Devon), Scotland and Wales. Seven parameters are therefore estimated for each region (Scow, Tcow, Tsheep, ps, pc, qs and qc, with Ssheep = 1). The seven parameters are estimated by fitting the model to the aggregate regional time-series data from the UK 2001 epidemic. The parameters used here take the same values as those estimated for the 2001 epidemic in the county of Cumbria in the UK, the main hotspot of infection, and are given in Table 2.
Table 2.
Parameter values used in the model.
| Parameter | Parameter value |
|---|---|
| Ss | 1 |
| Sc | 5.7 |
| Ts | 8.3 × 10−4 |
| Tc | 8.2 × 10−4 |
| ps | 0.20 |
| pc | 0.41 |
| qs | 0.49 |
| qc | 0.42 |
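To make Eq. (1) and the Table 2 parameters concrete, the sketch below computes the pairwise infection rate for two hypothetical farms. The exponential kernel is a stand-in, since the non-parametric UK kernel is not tabulated in this paper, and the farm sizes in the example are invented for illustration.

```python
import math

# Parameter values from Table 2 (Cumbria fit, UK 2001 epidemic).
PARAMS = {"S_sheep": 1.0, "S_cow": 5.7,
          "T_sheep": 8.3e-4, "T_cow": 8.2e-4,
          "p_sheep": 0.20, "p_cow": 0.41,
          "q_sheep": 0.49, "q_cow": 0.42}

def kernel(d_km):
    """Placeholder for the estimated transmission kernel K(d). The true
    kernel is non-parametric; exponential decay is assumed here purely
    for illustration."""
    return math.exp(-d_km)

def infection_rate(infectious_farm, susceptible_farm, d_km, P=PARAMS):
    """Rate at which infectious farm i infects susceptible farm j, Eq. (1):
    (sum_s T_s N_s,i^q_s) * (sum_s S_s N_s,j^p_s) * K(d_ij)."""
    transmissibility = (P["T_cow"] * infectious_farm["cattle"] ** P["q_cow"]
                        + P["T_sheep"] * infectious_farm["sheep"] ** P["q_sheep"])
    susceptibility = (P["S_cow"] * susceptible_farm["cattle"] ** P["p_cow"]
                      + P["S_sheep"] * susceptible_farm["sheep"] ** P["p_sheep"])
    return transmissibility * susceptibility * kernel(d_km)

# Example: a 100-cow farm infecting a 50-cow, 20-sheep farm 2 km away.
rate = infection_rate({"cattle": 100, "sheep": 0},
                      {"cattle": 50, "sheep": 20}, d_km=2.0)
print(f"daily infection rate: {rate:.3e}")
```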
In addition to routine culling of infected premises (IPs) in the 2001 UK FMD epidemic, all "premises where animals have been in direct contact with infected animals or have, in any way, become exposed to infection" were defined as dangerous contacts (DCs) and were preemptively culled in an effort to control disease. In our model, DCs are determined based upon both prior infection by an IP and future risk of infection, in the same way as in previous work (Tildesley et al., 2006). The probability that farm i is a DC associated with a particular IP j takes the functional form used in that work and is governed by two parameters, f and F.
The accuracy of DC culling, or the ability to detect transmission routes, is defined by the parameter f, whilst F controls the overall DC:IP ratio. In line with previous work for the UK, we fix F = 6, giving a DC:IP ratio of 2:1, which corresponds to the best ratio that was achieved during the 2001 epidemic. We also fix f = 0.85, which corresponds to a relatively high level of successful identification of DCs compared with the UK 2001 epidemic.
In 2001 in the UK there was a target of culling all IPs within 24 h of reporting infection and carrying out associated preemptive culling within 48 h. Whilst this target was not always achieved in practice during 2001, we assume a 24/48 h policy here. Previous work has investigated the effectiveness of this strategy compared with one using the culling delays as carried out during 2001 (Tildesley et al., 2009).
It has been shown through simulation that a policy of IP and DC culling alone would have resulted in a much larger epidemic during 2001 than actually occurred and would have led to the depopulation of more farms (Tildesley et al., 2009), implying that additional culling strategies, including culling of contiguous premises and farms within 3 km of IPs, aided in disease control. In the event of an outbreak of FMD in the USA, culling of IPs and DCs would be carried out automatically (Anon., 2000, 2003, 2006) and other control policies, such as ring culling and vaccination, would be considered as additional intervention strategies. In this work, we investigate the effectiveness of introducing a policy of ring culling or vaccination in addition to IP and DC culling. It is important to note that, initially, we assume resources to carry out control are limited and that a maximum of 100 premises can be ring culled per day and 35,000 animals can be vaccinated per day, in line with previous work and estimates of what is feasible in the UK (Tildesley and Keeling, 2008; Tildesley et al., 2006). The sensitivity of model output to resource limitations is examined later on. When an IP is reported, all premises within a particular radius of that IP will be targeted for culling or vaccination. The radius of the ring is allowed to vary between simulations and we seek the radius which minimizes the "epidemic impact", defined as the total number of premises depopulated of livestock (either as IPs, DCs or ring culled premises). Vaccination is assumed to take place within a ring around each infected premises such that all premises within a given distance of every reported IP will be vaccinated. Premises are vaccinated in the order they are identified and vaccination around each premises is performed from the outside in. In line with previous work, a vaccine efficacy of 90%, a four day delay from vaccination to immunity and cattle-only vaccination are assumed. The effect of these assumptions has been investigated in detail elsewhere (Tildesley et al., 2006).
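The resource-limited ring control described above can be summarized in a short scheduling sketch. The queue discipline (premises around each IP processed from the outside in, unfinished work carried over to the next day) follows the description in the text; the data structures, field names and handling of leftover daily capacity are illustrative assumptions.

```python
from collections import deque
from math import hypot

CULL_CAPACITY = 100       # premises that can be ring culled per day
VACC_CAPACITY = 35_000    # animals that can be vaccinated per day

def distance(a, b):
    return hypot(a["x"] - b["x"], a["y"] - b["y"])

def queue_ring(ip, farms, radius_km, queue):
    """Queue all untreated premises within radius_km of a newly reported IP,
    outermost first (vaccination is performed from the outside in)."""
    ring = [f for f in farms
            if f is not ip and not f["treated"]
            and distance(ip, f) <= radius_km]
    ring.sort(key=lambda f: distance(ip, f), reverse=True)
    queue.extend(ring)

def cull_today(queue):
    """Depopulate up to CULL_CAPACITY queued premises."""
    for _ in range(min(CULL_CAPACITY, len(queue))):
        queue.popleft()["treated"] = True

def vaccinate_today(queue):
    """Spend today's dose budget on queued premises (cattle-only vaccination;
    protection is assumed to follow after a 4-day delay at 90% efficacy)."""
    doses = VACC_CAPACITY
    while queue and queue[0]["cattle"] <= doses:
        farm = queue.popleft()
        farm["treated"] = True
        doses -= farm["cattle"]

# Within the daily loop of a simulation one would call, for each new IP,
# queue_ring(ip, farms, radius_km, work_queue) followed by either
# cull_today(work_queue) or vaccinate_today(work_queue).
work_queue = deque()
```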
2.5. The transmission kernel
In the initial phase of the 2001 outbreak of FMD in the UK, prior to detection of disease, it is believed that transmission was a combination of local spread (direct transmission by aerosol or fomites) and more distant dissemination (the result of animal movement and the spread of disease by personnel and vehicles) (Gibbens et al., 2001). In the event of an outbreak of FMD in the USA, a movement ban would be introduced at the state level initially, with a view to extending this further should state-level control be ineffective. Because we are concerned here with evaluating the control strategies that are implemented after the ban, we use the transmission kernel that was estimated from the 2001 UK FMD infection database after the introduction of a nationwide movement ban on 23rd February 2001. During the 2001 UK epidemic, the transmission kernel was estimated from detailed contact tracing that was performed by the local veterinary authorities. By knowing the likely source of infection for the majority of cases, we were able to estimate the shape and magnitude of the transmission kernel – effectively determining (non-parametrically) the impact of distance on the risk of infection (Keeling et al., 2001). This yielded a transmission kernel that decreased monotonically with distance, which compares well to parametric estimates (based on (D0 + d)^−α for a distance d, where D0 and α are parameters to be determined) performed by other groups (Ferguson et al., 2001a,b; Diggle, 2006; Chis Ster and Ferguson, 2007). As such, this kernel represents what happens when transmission is dominated by local spread (without specifying the precise mechanism by which that local spread occurred).
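For reference, the parametric family mentioned above is easily written down; the values of D0 and α in the sketch below are placeholders rather than the published estimates.

```python
def power_law_kernel(d_km, d0=1.0, alpha=2.5):
    """Parametric transmission kernel of the form (D0 + d)^(-alpha).
    d0 and alpha are illustrative placeholder values, not fitted estimates."""
    return (d0 + d_km) ** (-alpha)

# The kernel declines monotonically with distance, mimicking transmission
# dominated by local spread after a movement ban.
for d in (0.5, 1.0, 5.0, 20.0):
    print(f"K({d:>4} km) = {power_law_kernel(d):.4f}")
```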
3. Simulations
3.1. The effect of culling IPs and DCs alone on the epidemic impact
We began with the transmission kernel that had previously been used to represent the spatial spread of the 2001 FMD epidemic in the UK. We investigated the consequences of beginning the epidemic in each of the 67 Pennsylvania counties in turn. We ran 1000 simulations for each of the counties. Each simulation began by randomly selecting five premises and seeding these with the FMD virus. Simulations from stochastic models like ours typically fail to generate significant epidemics in some of the runs (where we define a "significant" epidemic as one in which more than 10 farms become infected): the fraction of runs that have this outcome is generally dependent upon the initial conditions. In these cases, epidemics would die out without the need for control policies such as vaccination and ring culling to be implemented. In this paper we are interested in epidemic impacts and preferred control strategies given that a significant epidemic occurs, as this has relevance to policy makers. Therefore, in order to estimate the average epidemic impact (here and elsewhere) we discarded results from simulations in which the total number of infected premises was 10 or fewer, unless otherwise stated.
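The averaging rule used here is a conditional mean over "significant" runs. A minimal sketch, assuming the simulator returns one epidemic impact per run, is:

```python
def mean_epidemic_impact(run_impacts, threshold=10):
    """Mean epidemic impact conditional on a 'significant' epidemic,
    i.e. discarding runs with `threshold` or fewer infected premises.
    Also returns the fraction of runs that were significant."""
    significant = [x for x in run_impacts if x > threshold]
    if not significant:
        return 0.0, 0.0
    mean = sum(significant) / len(significant)
    return mean, len(significant) / len(run_impacts)

# Example with hypothetical outputs from a handful of stochastic runs:
impacts = [3, 7, 0, 152, 48, 9, 310, 5, 221, 12]
print(mean_epidemic_impact(impacts))
```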
When using the original UK transmission kernel, and culling IPs and DCs alone, the mean number of premises that were culled during the entire epidemic (the epidemic impact) was small (fewer than 10 premises in total for most counties; Fig. 2a). For this set of parameters alone, we therefore calculate mean epidemic impacts across all simulations (including those with 10 or fewer infected premises). The highest mean epidemic impact (a mean of 17 premises) was found for epidemics seeded in Lancaster County (the county with the highest livestock density), with slightly lower epidemic impacts in those counties bordering Lancaster County. We conclude that, when using the original UK transmission kernel, the epidemic in Pennsylvania does not become widespread and very few secondary farms become infected regardless of where the epidemic began. In such a case, control measures in addition to IP and DC culling would be unnecessary.
Fig. 2.
The predicted epidemic impact (total number of premises depopulated of livestock) of foot-and-mouth disease epidemics seeded in each county of Pennsylvania in turn for (a) a kernel width the same as for the UK (WPA = 1), (b) a kernel width twice that for the UK (WPA = 2) and (c) a kernel width four times that for the UK (WPA = 4). Note that the range of the y-axis increases by one order of magnitude from (a) through (c). The ordering of counties and color scale is the same as in Fig. 1b.
Next, we conducted a sensitivity analysis to investigate how much the epidemic impact changed when we departed from the parameter values that determined the characteristics of the original UK transmission kernel. Specifically, we were interested to see what would happen when we used wider kernels (more extensive local spread). In order to vary kernel width, we created a transmission kernel function (KPA(d)) for Pennsylvania in which the relevant parameters were defined in terms that were relative to the original UK transmission kernel.
KPA(d) = RPA × KUK(d/WPA)    (2)
Here RPA measures the relative strength of transmission in Pennsylvania compared with the UK and WPA measures the relative width of the kernel. The parameter d continues to be defined as the distance between the infected farm and the susceptible farm in each case. KPA(d) represents the distance-dependent kernel function for Pennsylvania and KUK(d/WPA) represents the distance-dependent kernel function for the UK evaluated at the rescaled distance d/WPA. We considered two additional cases. In each case, the relative strength of transmission was the same as in the UK (RPA = 1), but the relative width of the kernel was twice and then four times what it was in the UK (WPA = 2 and WPA = 4 respectively). We note that increasing the relative width (WPA) also increases the total magnitude of transmission from an infectious farm; in fact, given the precise shape of the UK transmission kernel, doubling WPA is qualitatively similar to quadrupling the value of RPA.
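Eq. (2) amounts to a thin wrapper around the UK kernel. The sketch below assumes a callable placeholder for KUK (the power-law stand-in from the earlier sketch) and illustrates why widening the kernel also increases the overall magnitude of transmission at any fixed distance.

```python
def make_pa_kernel(k_uk, r_pa=1.0, w_pa=1.0):
    """Return K_PA(d) = R_PA * K_UK(d / W_PA) as in Eq. (2)."""
    def k_pa(d_km):
        return r_pa * k_uk(d_km / w_pa)
    return k_pa

# Using a placeholder power-law kernel in place of the estimated UK kernel:
k_uk = lambda d: (1.0 + d) ** -2.5
for w in (1, 2, 4):
    k_pa = make_pa_kernel(k_uk, r_pa=1.0, w_pa=w)
    # Doubling W_PA widens the kernel and raises transmission at a given
    # distance, which is one reason the epidemic impact grows non-linearly.
    print(f"W_PA={w}: K_PA(5 km) = {k_pa(5.0):.4f}")
```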
As before, epidemics seeded in Lancaster County resulted in the biggest epidemic impacts, with a mean value of around 950 premises when WPA = 2 and a mean value of around 15,000 premises when WPA = 4 (Fig. 2b and c). Clearly, the relationship between kernel width and epidemic impact was very non-linear. When the kernel width was two times that of the 2001 UK kernel (WPA = 2) and epidemics were seeded in counties bordering Lancaster County (the county with the highest density of livestock), there was a possibility for spread to Lancaster County itself and the mean epidemic impacts for epidemics seeded in these neighboring counties were found to be high. However, even given this wider kernel, epidemics seeded in counties a long distance from Lancaster (particularly in the north and west of the state where farm densities are low) rarely resulted in epidemic impacts greater than 50–60 premises and the epidemic impact declined with the distance between the county of origin and Lancaster County (Fig. 2b). When the kernel width was four times that of the 2001 UK kernel, the risk of spread of disease over long distances was so high that the infection could readily move to areas of high farm density. For that reason, there was generally little difference between the epidemic impact of epidemics seeded in counties in the north, extreme east, or extreme west of Pennsylvania and the epidemic impact of epidemics seeded in the south eastern and south central counties (Fig. 2c). Typically, mean epidemic impacts of 5000–15,000 premises were observed. For some source counties, mean epidemic impacts were significantly lower (2000–3500 premises) – these counties tend to have a low number of premises (fewer than 200; c.f. Fig. 1b) and hence the risk of large scale epidemics occurring is lower than for epidemics seeded elsewhere.
3.2. Pre-emptive ring culling and the epidemic impact
We now extend our analysis to investigate when it is beneficial to add ring culling to the default strategy of culling only the IPs and DCs; each of the scenarios considered assumed that pre-emptive ring culling was deployed in addition to the default strategy of culling IPs and DCs. We were also interested to see how the efficacy of pre-emptive ring culling varied with the width of the transmission kernel. We began by considering epidemics randomly seeded in five premises in each of five counties in turn: the high density county of Lancaster; Lebanon County, which borders Lancaster County; Fayette County in the South West; Jefferson County in the North West; and Carbon County to the North East of Lancaster (see Fig. 1a). For epidemics seeded in each of these counties, we computed the ring cull strategy that minimized the epidemic impact, as WPA was varied. Defining the optimum strategy required increasing the number of runs per scenario from 1000 to 10,000. The results are summarized in Fig. 3a.
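The optimization itself is a brute-force sweep over candidate radii, repeated for each seed county and kernel width. In the sketch below, simulate_epidemic is a placeholder for the full stochastic spatial model, and the use of common random numbers across radii is an implementation choice of ours, not something described in the paper.

```python
import random

def simulate_epidemic(seed_county, ring_radius_km, w_pa, rng):
    """Stand-in for one run of the full stochastic spatial model; it should
    return the epidemic impact (total premises depopulated)."""
    return rng.randint(0, 300)   # placeholder draw, not real model output

def optimal_ring_radius(seed_county, w_pa, radii_km, runs=10_000, threshold=10):
    """Sweep candidate ring cull radii and return the radius minimizing the
    mean epidemic impact over runs with more than `threshold` IPs."""
    best_radius, best_impact = None, float("inf")
    for radius in radii_km:
        rng = random.Random(42)          # common random numbers across radii
        impacts = [simulate_epidemic(seed_county, radius, w_pa, rng)
                   for _ in range(runs)]
        significant = [x for x in impacts if x > threshold]
        if not significant:
            continue
        mean_impact = sum(significant) / len(significant)
        if mean_impact < best_impact:
            best_radius, best_impact = radius, mean_impact
    return best_radius, best_impact

print(optimal_ring_radius("Lancaster", w_pa=2.0,
                          radii_km=[0.0, 0.5, 1.0, 2.0, 3.0, 5.0]))
```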
Fig. 3.
(a) Optimal ring cull radius to minimize the predicted epidemic impact of foot-and-mouth disease in Pennsylvania. Epidemics were seeded in Lancaster, Carbon, Fayette, Jefferson and Lebanon Counties (black, red, magenta, blue and green lines respectively). The x-axis shows the relative width of the dispersal kernel in relation to the UK kernel. (b) Optimal ring cull radius to minimize the epidemic impact across the whole state for outbreaks seeded in each county in turn when WPA = 2. (c) Optimal ring cull radius to minimize the epidemic impact across the whole state for outbreaks seeded in each county in turn when WPA = 4. The ordering of counties and color scale is the same as in Fig. 1b.
For epidemics with narrow kernels (WPA < 1.5), there was no advantage to adding ring culling to the default culling strategy. However, as WPA increased, the addition of ring culling was generally helpful. The optimal strategy (defined in terms of the ring cull radius) was found to be highly dependent upon the kernel width and upon the region of introduction (Fig. 3a). For example, when the epidemic was seeded in Lancaster County and when WPA = 1.8, ring culling to 1.0 km was optimal to minimize the impact of the epidemic. This value increased as WPA increased, such that ring cull radii of between 3.6 km and 5.1 km were optimal when WPA took values of between 2.5 and 4. Further increases in the kernel width caused the optimal ring cull radius to decrease to zero once again (Fig. 3a). Once the kernel width exceeds a given value, ring culling (for practicable ring cull radii) proves inadequate to control the epidemic and merely adds to the epidemic impact. The optimal ring cull radius for epidemics seeded in Lebanon County was very similar to that for epidemics seeded in the adjacent Lancaster County. However, optimal strategies were found to be very different for epidemics seeded in Carbon, Fayette and Jefferson Counties. Cattle densities in Carbon, Fayette and Jefferson Counties are all low and these counties lie some distance from the high density region of Lancaster. In Carbon County, the nearest of the three to Lancaster, it was optimal to deploy no ring culling for kernel widths less than 2.6 times the UK kernel. For higher values of WPA there is a significant risk of spread to Lancaster County and therefore large ring culls of between 4.8 km and 9.1 km are necessary to reduce this risk and hence minimize the mean epidemic impact. Similar behavior is found for epidemics seeded in Jefferson County, with ring culling of between 5.3 km and 9.4 km required to minimize the mean epidemic impact when WPA > 3.2. Fayette County, in the South West, has a low farm density and is a significant distance from high density regions. In this case, ring culling is unnecessary unless WPA > 3.4, when 3.5–5.2 km ring culling is optimal to prevent spread to Lancaster County. We note that, regardless of seeding, should WPA exceed a value of 4.6, the epidemic spreads to the entire state and ring culling of any radius will not be able to control the epidemic (Fig. 3a).
To further understand the effect on pre-emptive culling of varying the place where the epidemic began, we reverted to beginning epidemics in each of the 67 Pennsylvania counties in turn. We considered just 2 kernel widths (WPA = 2 in Fig. 3b, and WPA = 4 in Fig. 3c) and determined the optimum culling radius for epidemics begun in each county in turn. As before, each epidemic simulation was begun by randomly seeding 5 premises in the chosen county.
As we have already seen (Fig. 2b), when WPA = 2 and epidemics were begun in counties in the north and west of the state, the default strategy of culling IPs and DCs was usually sufficient to keep the epidemic impact to below 100 premises. In such cases ring culling increased the epidemic impact. That is why there is no "optimum" ring culling radius depicted in Fig. 3b for the counties furthest from Lancaster County. However, there was a clear benefit (i.e. a reduced epidemic impact) of adding ring culling to the default strategy for those epidemics that were begun either in Lancaster County or in those counties immediately bordering or a short distance from Lancaster (towards the left of the bar chart in Fig. 3b). In these counties, the addition of ring culling of between 0.6 and 2.5 km around all IPs resulted in mean epidemic impacts that were less than those experienced when only the IPs and DCs were culled. The reason for this is that ring culling lowers the risk of spread to Lancaster itself. This highlights an interesting trade-off – a policy of ring culling in these counties often increased the epidemic impact in the source county itself, but this increase was offset by the huge potential state-wide benefit of preventing spread to the high density region of Lancaster County.
When we increased the kernel width (WPA = 4), a very different result was obtained (Fig. 3c). As we saw in Fig. 2c, with kernel widths this wide, the default strategy of culling IPs and DCs is inadequate and most outbreaks are very large. In this case, the addition of ring culling to the default strategy is almost always beneficial, although the optimum ring cull radius is often 6 times larger (or more) than was required when WPA = 2 (compare Fig. 3c with Fig. 3b). The reader will also notice the counter-intuitive result that the optimum ring cull radii for those cases in which the epidemics were begun in counties distant from Lancaster County were generally greater than the ring cull radii for those epidemics begun in Lancaster County itself or the other neighboring counties with high densities of livestock. This was largely a consequence of the resource limitations that we imposed on how many premises could be depopulated in a day and the obvious constraints that this places on how many new culling zones could be established and cleared each day for a given culling radius. At this kernel width, epidemics begun in Lancaster and neighboring counties were generally large and could ‘escape’ control if the culling radius was so large and consumed so many resources that the required number of new culling zones could not be set up in a timely fashion. This meant that moderate culling radii were a more efficient and effective use of limited resources in counties in which larger epidemics were more likely. In counties distant from Lancaster County, the epidemics were generally smaller and so larger culling radii exerted sufficient control to prevent the infection reaching Lancaster and its neighboring counties.
3.3. How optimum ring culling radius varies with the size of the epidemic
It is typical of stochastic, agent-based spatial models like the one used here that not all introductions of virus lead to a major epidemic (or even a minor one). This is one reason why we simulated the same scenarios many times – only some of the simulations led to large scale outbreaks and the fraction that did so varied with the scenario. For this reason, it is of interest to ask whether, for a given scenario, the optimum ring cull radius is the same for all magnitudes of outbreak. Fig. 4 shows that it is not. If the optimum ring cull radius were the same for all magnitudes of outbreak, the minima of the lines representing the mean epidemic impact and its 95% prediction intervals would all occur at the same ring cull radius. That they do not (Fig. 4) means that the choice of control strategy, given an infection in a particular county, will depend upon the early infection profile and the perceived likelihood of a large scale epidemic.
Fig. 4.
The epidemic impact (black line) of foot-and-mouth disease in Pennsylvania and the 95% prediction intervals (dashed lines) plotted against the ring cull radius for (a) Lancaster county when WPA = 4, (b) Lebanon county when WPA = 4, (c) Fayette county when WPA = 4 and (d) Lancaster county when WPA = 2. The black dots show the locations of the minimum values on each line.
There are three kinds of scenario illustrated in Fig. 4. Fig. 4a and b shows what happens in Lancaster County (highest livestock density) and counties adjacent to Lancaster County. In these counties, when the kernel width is large (WPA = 4), resource limitations become influential for the reasons explained in the previous section. For this reason, although the optimal strategy, on average, is to ring cull all premises within 4–5 km of all infected premises for outbreaks seeded in Lancaster and Lebanon Counties, the optimal ring cull radius for especially large outbreaks is smaller than that determined by averaging over all possible outbreaks (see the minima of the upper bounds of the 95% prediction intervals in Fig. 4a and b). By contrast, Fig. 4c shows that in a county (Fayette) that is very distant from Lancaster County, it is actually optimal not to ring cull at all, irrespective of the size of the outbreak. Finally, returning to Lancaster County, Fig. 4d shows what happens when the kernel width is reduced (WPA = 2). In this case (unlike in Fig. 4a), the optimal ring cull radius for especially large outbreaks is larger than that determined by averaging over all possible outbreaks (see the minima of the upper bounds of the 95% prediction intervals in Fig. 4d). Parenthetically, we note here that increasing the ring cull radius beyond the optimum value with regard to the epidemic impact may be of some value with respect to other definitions of success: for example, the likelihood that an introduction will lead to an epidemic (of more than 10 farms). Larger ring cull radii, whilst causing higher epidemic impacts for very large epidemics, also decrease the likelihood of an epidemic occurring (defined as resulting in more than 10 IPs) – an important trade-off.
3.4. Ring vaccination and the epidemic impact
In this section we describe how the optimal radius of a vaccination zone varies with the county of introduction and the width of the transmission kernel. As before, we began epidemics in each of the 67 Pennsylvania counties in turn by randomly seeding 5 premises in the chosen county. We considered just 3 kernel widths (WPA = 1, WPA = 2, and WPA = 4) and simulated each scenario 10,000 times. Ring vaccination was deployed in addition to the default strategy of culling only the IPs and DCs. The results are summarized in Fig. 5. When we used a transmission kernel identical to the one used for the 2001 UK epidemic (WPA = 1), the addition of ring vaccination had no obvious effect on the epidemic impact (not shown). However, when WPA = 2, there were instances when the optimal strategy as judged by the epidemic impact did include vaccination (Fig. 5a). It should be noted that we were assuming a strategy of "vaccinate-to-live" and that the only criterion of success was the epidemic impact (the cost of vaccination was not considered here). Unsurprisingly, when epidemics were seeded in the north and west of Pennsylvania, epidemic impacts were low and adding ring vaccination did nothing to improve control. However, when epidemics were begun in the south eastern part of the state, the addition of ring vaccination was beneficial. Vaccination rings (of 6–12 km) around each farm helped to prevent spread to the high density region of Lancaster County and severely reduced the epidemic impact. For Lancaster itself, and the counties of Lebanon immediately to the north and York immediately to the west, larger vaccination rings of 13–16 km were required – the epidemic spread immediately to the region of high cattle density so larger rings had to be deployed to vaccinate livestock in the affected counties.
Fig. 5.
(a) Optimum vaccination radius to minimize an epidemic of foot-and-mouth disease in Pennsylvania seeded in each county in turn (WPA = 2), (b) optimum vaccination radius to minimize an epidemic seeded in each county in Pennsylvania in turn (WPA = 4), (c) the reduction in epidemic impact (number of premises ‘saved’) if optimum radius vaccination is carried out as opposed to optimum radius ring culling (WPA = 4). The ordering of counties and color scale is the same as in Fig. 1b.
When the kernel width was increased such that WPA = 4 (Fig. 5b), the addition of ring vaccination was beneficial almost irrespective of where the epidemic began (the exception was Philadelphia County, which has only one farm). Furthermore, larger vaccination rings were required than when WPA = 2 – vaccination ring radii of 20–40 km were optimal, depending upon the county (Fig. 5b). Optimal ring sizes are again found to be largest in Lancaster and neighboring counties.
Finally, we set the kernel width such that WPA = 4 and estimated the difference between the epidemic impact that resulted from deploying an optimum ring culling strategy (as defined in Fig. 3) and the epidemic impact that resulted from deploying an optimum vaccination strategy (as defined in Fig. 5b); the difference is shown in Fig. 5c. As before, we assumed a vaccinate-to-live policy and so only the IPs and DCs were culled. Given these assumptions, vaccination at an optimal radius always resulted in a lower epidemic impact than ring culling at an optimum radius (Fig. 5c). The benefits were greatest for epidemics that began in the south and eastern parts of Pennsylvania, with up to 9000 fewer premises being culled across the entire state for epidemics seeded in Lancaster and nearby counties. This benefit was less when epidemics were seeded in the north west of the state, but even here, at least 1500 fewer premises were culled over the entire state. When the kernel width was reduced (WPA = 2) and the comparison was conducted in those counties where previous simulations had shown that the addition of either vaccination or ring culling was preferable to a policy of IP and DC culling alone, we obtained similar results (not shown), but the differences between ring vaccination and ring culling were much less marked.
3.5. Sensitivity analyses
All of the simulations described in preceding sections were begun by randomly seeding 5 premises in the chosen county. This is the same as assuming that 5 premises had become infected prior to the discovery of the infection and prior to the implementation of a statewide ban on the movement of animals. It was important to investigate the effect of this assumption upon the efficacy of ring culling and ring vaccination. In order to pursue this further, we chose Lancaster County (the county with the highest density of livestock) as the site of introduction of the FMD virus, set the kernel width such that WPA = 2, varied the number of premises infected prior to detection of the outbreak between 1 and 100, identified the optimum ring cull or vaccination radius in each case and estimated the epidemic impact given optimum control. The results are shown in Fig. 6. We remark in passing on a difference that we have already reported: the addition of ring vaccination to the default strategy resulted in lower epidemic impacts than the addition of ring culling (compare Fig. 6a with Fig. 6b). Furthermore, this difference was maintained irrespective of the number of premises infected prior to detection. In the case of ring culling (Fig. 6a) we found the optimal ring cull radius decreased as the number of premises infected before detection increased. This was another manifestation of setting resource limits on the number of premises that could be culled in a single day. The scale of the epidemic increased as the number of premises infected prior to detection increased. This is reflected in the increasing epidemic impacts seen in Fig. 6a and b. When more than 20 premises were infected prior to the introduction of a movement ban, it was actually optimal to perform no ring culling at all. Similar, though less dramatic, results were obtained when ring vaccination was added to the default culling strategy. As the number of premises infected initially was increased, the epidemic impact increased and the optimum vaccination radius decreased (for similar reasons) (Fig. 6b).
Fig. 6.
Mean and 95% confidence intervals of epidemic impact (blue lines) of foot-and-mouth disease in Pennsylvania and (a) the optimum ring cull radius (green line), (b) the optimum vaccination radius (green line) as the number of farms infected before the introduction of a movement ban varies. Epidemics were seeded in Lancaster County (WPA = 2).
The issue of limited resources has recurred throughout this paper and we have explained several of the results in terms of these limitations. Thus far, we have assumed a ring culling capacity of 100 premises per day and a vaccination capacity of 35,000 animals per day. These figures were based upon capacity estimates from the Department for Environment, Food and Rural Affairs (DEFRA) in the UK. There is no guarantee that similar capacities would be appropriate for an outbreak of FMD within the state of Pennsylvania and so we examined the sensitivity of our results to the actual values chosen for these limits. As before, we chose Lancaster County (the county with the highest density of livestock) as the site of introduction of the FMD virus; we set the kernel width such that WPA = 4 (to provide a particularly severe test of limited resources), and varied culling capacity between 20 and 600 premises per day and vaccination capacity between 2500 and 75,000 animals per day. In each case we examined the effect of these changes on the efficacy of adding ring culling or ring vaccination to the default culling strategy.
As the ring cull capacity increased from 20 to 600, the epidemic impact decreased, whilst the optimal ring cull radius increased (Fig. 7a). Similar behavior was observed as the number of doses of vaccine which could be administered per day increased (Fig. 7b). However, from Fig. 7b we see that when the number of doses of vaccine per day increased beyond around 25,000, there was no noticeable reduction in epidemic impact, implying that this limit is sufficient to control epidemics seeded in Lancaster when WPA = 4.
Fig. 7.
(a) Mean and 95% confidence intervals of epidemic impact (blue line) of foot-and-mouth disease in Pennsylvania and optimal ring cull radius (green line) as the number of farms ring culled per day varies, (b) mean and 95% confidence intervals of epidemic impact (blue line) and optimal vaccination radius (green line) as the number of animals vaccinated per day varies, (c) the relative impacts of ring culling and vaccination as the number of farms ring culled and the number of animals vaccinated per day are varied. The lines show the points at which ring culling and vaccination result in, on average, the same number of farms lost when WPA = 2 (solid line) and WPA = 4 (dashed line). The dash-dot line indicates the points at which the overall cost of the epidemic (as defined in the main text) is equal for both ring culling and vaccination. Ring culling is preferred above and to the left of these lines, whilst vaccination is preferred below and to the right.
Fig. 7c combines the data from Fig. 7a and b and shows the relative impacts of ring culling and vaccination as the number of premises ring culled and the number of animals vaccinated per day are varied. The lines show the points at which simulations with additional ring culling and simulations with additional ring vaccination predict the same number of premises culled (i.e. the same epidemic impact) when WPA = 2 (solid line) and WPA = 4 (dashed line). Fig. 7c shows that when WPA = 2, ring culling is only preferred when the number of doses of vaccine available per day is very low. When 7000 doses of vaccine can be administered per day, ring culling is only optimal provided at least 550 premises can be culled per day (Fig. 7c, solid line). As the number of doses of vaccine available decreases, ring culling is preferred over vaccination for a much larger range of ring culling capacities. However, even when only 2000 doses are available per day, at least 150 premises must be ring culled per day for ring culling to be preferred. When WPA = 4, ring culling is optimal for a slightly larger range of vaccination capacities, but vaccination remains optimal for all realistic ring culling capacities if more than 9000 doses can be administered per day (Fig. 7c, dashed line).
The reader will remember that, throughout, we have identified the preferred control strategy based only upon the number of premises depopulated of livestock (the epidemic impact). Other definitions of success are possible though, including the economic cost of the outbreak and its control. Whilst a rigorous cost analysis is beyond the scope of this paper, we can propose a very rudimentary epidemic cost index using the total number of cattle and sheep culled and the total cost of vaccination. We assumed that the mean cost of a cow in the USA is ten times the mean cost of a sheep and a hundred times the cost of one dose of vaccination. This allows for an investigation into the relative costs of vaccination and ring culling as capacities vary, where the cost function (expressed in units of the cost of a single vaccine dose) is calculated as:

Cost = 100 × (number of cattle culled) + 10 × (number of sheep culled) + (number of vaccine doses administered)
Using this cost function, we investigated the preferred intervention strategy as ring culling and vaccination capacities were varied. As before, vaccination was preferred to ring culling except when the vaccination capacity was low and the ring cull capacity exceeded 125 premises per day (Fig. 7c, dash-dot line). Given that the results were now sensitive to the total number of doses of vaccine administered, ring culling was preferred for a slightly larger range of vaccination capacities (up to around 10,000 doses per day for sufficiently high ring culling capacities). Of course, should the cost of vaccination be considerably higher than assumed here, ring culling would be preferred for a much larger range of vaccination and culling capacities.
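The cost comparison can be reproduced directly from simulation outputs. The sketch below encodes the assumed price ratios (cow : sheep : vaccine dose = 100 : 10 : 1) in units of one vaccine dose; the outcome numbers in the example are hypothetical.

```python
# Relative costs, expressed in units of the cost of one vaccine dose:
# one cow = 100 doses, one sheep = 10 doses (ratios assumed in the text).
COST_COW, COST_SHEEP, COST_DOSE = 100.0, 10.0, 1.0

def epidemic_cost(cattle_culled, sheep_culled, doses_administered):
    """Rudimentary epidemic cost index used to compare strategies."""
    return (COST_COW * cattle_culled
            + COST_SHEEP * sheep_culled
            + COST_DOSE * doses_administered)

def preferred_strategy(cull_outcome, vacc_outcome):
    """Return the cheaper of the two control strategies for one scenario.
    Each outcome is a (cattle_culled, sheep_culled, doses_administered) tuple."""
    cull_cost = epidemic_cost(*cull_outcome)
    vacc_cost = epidemic_cost(*vacc_outcome)
    return (("ring culling", cull_cost) if cull_cost < vacc_cost
            else ("vaccination", vacc_cost))

# Hypothetical mean outcomes for one (culling capacity, vaccination capacity) pair:
print(preferred_strategy(cull_outcome=(120_000, 8_000, 0),
                         vacc_outcome=(40_000, 3_000, 900_000)))
```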
4. Discussion
Here we have considered epidemics of foot-and-mouth disease in the Commonwealth of Pennsylvania, with different seed locations, different between-farm transmission profiles and different optimized control measures. We used a phenomenological model of transmission (based, initially, upon the transmission kernel and parameters derived from the 2001 UK FMD outbreak), which made no assumptions about the actual mechanisms by which one set of premises became infected by another. Our general goal was to see whether a transmission kernel based upon actual outbreaks of FMD in Europe could be useful in providing rules of thumb for the control of FMD in regions of the USA. The original, empirically derived, transmission kernel was based upon an FMD outbreak in which farm density was relatively high, and which affected mostly cattle and sheep. Wildlife species were not involved in that epidemic, and, apart from the initial stages of the outbreak, pigs played a very minor role in transmission (Gibbens et al., 2001; Chis Ster and Ferguson, 2007). Accordingly, we applied quite severe constraints on how we used this model in the US context – and, especially, upon the kinds of questions we used the model to address. We located the model in Pennsylvania, for reasons explained earlier, and considered, in detail, only that part of the epidemic which followed the detection of the virus and the imposition of a statewide ban on animal movement. We were also obliged to assume that the virus would behave very much as it did in the UK. It was this last constraint, in particular, which limited the kinds of questions we could address. We could not consider species-specific attack rates for example (because we had excluded pigs, a priori) nor could we deal with an outbreak that involved wildlife (because a fixed farm location model like ours has no provision for free roaming hosts). Nevertheless, it was possible to address, for example, how heterogeneities in farm density interact with changes in the site of first introduction of the virus to affect optimization of control strategies; and our subsequent modifications to the transmission kernel provided insights into the importance of kernel width and the spatial scale of transmission.
In general terms, should the quantitative characteristics of post-detection transmission be exactly the same as those observed in the UK, our model suggests that, regardless of the location of the outbreak, FMD in Pennsylvania would be adequately controlled by culling only the IPs and DCs. However, should the contact network between farms be much more widely dispersed, there is the threat of a large-scale epidemic: if the kernel width were four times that observed for the UK, the epidemic impact could exceed 15,000 premises, or around half the farms in Pennsylvania, unless additional control measures such as vaccination or ring culling were used. A far less uniform result is the extent to which the epidemic impact of an introduction of FMD virus depends critically upon where the virus is introduced and upon the proximity of that location to areas in which there are dense aggregations of farms. The importance of Lancaster County was a recurring theme in all our simulations. The density of farms in Lancaster County is three or more times greater than that in the other counties of Pennsylvania, and strategies which failed to prevent the spread of infection to Lancaster County always resulted in higher epidemic impacts.
This paper suggests that there is no single strategy that can be guaranteed always to minimize the epidemic impact of an FMD outbreak in Pennsylvania. Our results were very sensitive to the width of the transmission kernel, the extent and location of infection prior to detection, and the resources available for control. In the event of a real outbreak, data concerning the last two factors will emerge fairly rapidly, but experience shows that it takes time to establish the characteristics of the transmission kernel (in practice, these are updated on a regular basis during the course of the epidemic as more data become available). This, together with the stochastic nature of epidemics, complicates the choice of control strategies – especially in the early stages – and even limits the opportunity to fine-tune during the course of the epidemic. A single example will suffice to illustrate this complexity:
Suppose it were proposed to add pre-emptive ring culling to a default strategy of culling IPs and DCs. Proposals like this need to be addressed in the context of some pre-existing definition of success (Keeling, 2005). There are several plausible definitions of success and some are mutually exclusive: for example, to achieve disease-free status as quickly as possible (i.e. to minimize the duration of the outbreak and allow trade to resume), to confine the outbreak to a defined region, to minimize losses to the domestic animal population, to minimize the financial cost of control, and to minimize “push back” (resistance to implementation expressed by the farming community, and public criticism). In our model we focus on the following definition of success: to minimize the total number of farms with livestock culled (measured as the ‘epidemic impact’).

The model indicates that there would be no reason to add ring culling to the default strategy provided the width of the transmission kernel is small; in fact, doing so would increase the epidemic impact by needlessly culling uninfected farms. Unfortunately, “small” is defined relative to how the farms are spread across the landscape of interest. As we saw, a transmission kernel that was sufficient to cause an extensive outbreak in the UK was not sufficient to cause an extensive outbreak in Pennsylvania. Although the density of premises in the farming areas of Pennsylvania is similar to that in the UK, the percentage of land under agricultural use in Pennsylvania is much smaller, and the aggregated nature of premises in Pennsylvania meant it was much harder for the infection to spread large distances once a ban on animal movement was in place.

As the width of the transmission kernel increases, the choice about whether or not to deploy ring culling depends upon whether or not the outbreak is in, or very near, a region of high livestock density. However, even if the infection has already reached a region of high livestock density, it is by no means a given that ring culling will provide any additional benefit. If the transmission kernel is especially wide, or if the number of farms infected prior to detection is large (and, again, this is a relative term), then the model suggests that ring culling is contra-indicated. Should the decision be taken to add ring culling to the existing strategy, and should prior and current conditions be such that it could in fact provide some benefit if the correct ring cull radius is chosen, there remains the question of how large that radius should be. If resources are limited, the model suggests it is better to use smaller ring cull radii so that new culling zones can be set up rapidly around newly discovered IPs. However, even if limitations on resources are not an issue, the stochastic nature of epidemics makes it difficult to recommend any particular ring cull radius. The vagaries of chance mean that the same initial conditions can give rise to small outbreaks (or none at all) or to very large outbreaks. If the width of the transmission kernel is relatively small and the outbreak is becoming large, then a wider ring cull radius is indicated than if the outbreak remains small (given the same initial conditions). Unfortunately, if the width of the transmission kernel is large enough, the argument is reversed and a smaller-than-anticipated ring cull radius is best if the outbreak is large.
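Purely as an illustration of how these qualitative considerations interact, the rules of thumb above can be caricatured as a simple decision sketch; every threshold, radius, and parameter name below is a hypothetical placeholder rather than a calibrated recommendation:

```python
# Caricature of the qualitative rules of thumb discussed above.
# All thresholds, radii and parameter names are hypothetical placeholders:
# none of these numbers comes from the model, and in a real outbreak they
# would have to be calibrated to the estimated kernel and the farm landscape.

def ring_cull_advice(kernel_width_rel_uk: float,
                     near_high_density_region: bool,
                     farms_infected_at_detection: int,
                     daily_cull_capacity_premises: int) -> str:
    """Return a rule-of-thumb suggestion about pre-emptive ring culling."""
    # Narrow kernel: ring culling only adds needless culls of uninfected farms.
    if kernel_width_rel_uk <= 1.0:
        return "cull IPs and DCs only; do not add ring culling"
    # Very wide kernel, or extensive seeding before detection: ring culling is
    # unlikely to provide additional benefit.
    if kernel_width_rel_uk >= 4.0 or farms_infected_at_detection > 50:
        return "ring culling contra-indicated"
    # Intermediate kernels: ring culling is worth considering mainly in, or
    # very near, regions of high livestock density.
    if not near_high_density_region:
        return "cull IPs and DCs only; monitor spread towards high-density areas"
    # With limited resources, prefer smaller rings so that new culling zones
    # can be set up rapidly around newly discovered IPs.
    radius_km = 1.0 if daily_cull_capacity_premises < 50 else 3.0
    return f"consider ring culling with a radius of about {radius_km} km"

# Example call with hypothetical inputs.
print(ring_cull_advice(kernel_width_rel_uk=2.0,
                       near_high_density_region=True,
                       farms_infected_at_detection=10,
                       daily_cull_capacity_premises=30))
```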
Note that we are not arguing against the use of pre-emptive ring culling; we are merely pointing out how many factors determine the success or failure of the strategy. It is also worth noting that, had we used a different definition of success, it is possible (even likely) that our qualitative arguments about when to add ring culling would be different.
5. Conclusion
Our results highlight the inherent complexity of determining optimal control measures, but also suggest that suitable models may allow us to develop qualitative rules of thumb to aid early policy decisions. In the event of an outbreak, these models and policies would need to be refined to account for emerging observations, but historical parameters (even if derived from a different continent) provide prior information on which to base early predictions.
Acknowledgements
This work was made possible by funding from the Research and Policy for Infectious Disease Dynamics (RAPIDD) Program, Directorate of Science and Technology, U.S. Department of Homeland Security, Chemical/Biological Division, by award number 5U01GM-076426 from the National Institute of General Medical Sciences and by the Foreign Animal Disease Modeling program of the Science and Technology Directorate, Department of Homeland Security (grant ST-108-000017). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute of General Medical Sciences, the National Institutes of Health or the Department of Homeland Security.
References
- Alexandersen S, Donaldson AI. Further studies to quantify the dose of natural aerosols of foot-and-mouth disease virus for pigs. Epidemiol. Infect. 2002;128:313–323. doi: 10.1017/s0950268801006501.
- Anon. Questions and Answers: New Animal Disease Traceability Framework. USDA, APHIS, VS Fact Sheet, February. 2010.
- Anon. Lessons from an epidemic (Opinion). Nature. 2001;411:977. doi: 10.1038/35082704.
- Anon. Foot and Mouth Disease Emergency Guidelines. USDA, APHIS, VS, January. 2000.
- Anon. Standard Operating Procedures for the Pennsylvania Department of Agriculture in the Event of an Outbreak of FMD. Department of Agriculture, Commonwealth of Pennsylvania. 2003.
- Anon. Response Plan for a Highly Contagious Animal Disease. Bureau of Animal Health and Diagnostic Services, Department of Agriculture, Commonwealth of Pennsylvania. 2006.
- Anon. Game Commission Lifts Restriction on Taking Feral Swine in Bedford County. News Release #035-11, March 2011, Pennsylvania Game Commission. 2011.
- Bates TW, Thurmond MC, Carpenter TE. Description of an epidemic simulation model for use in evaluating strategies to control an outbreak of foot-and-mouth disease. Am. J. Vet. Res. 2003a;64:195–204. doi: 10.2460/ajvr.2003.64.195.
- Bates TW, Thurmond MC, Carpenter TE. Results of epidemic simulation modeling to evaluate strategies to control an outbreak of foot-and-mouth disease. Am. J. Vet. Res. 2003b;64:205–210. doi: 10.2460/ajvr.2003.64.205.
- Bates TW, Carpenter TE, Thurmond MC. Benefit-cost analysis of vaccination and preemptive slaughter as a means of eradicating foot-and-mouth disease. Am. J. Vet. Res. 2003c;64:805–812. doi: 10.2460/ajvr.2003.64.805.
- Bessell PR, Shaw DJ, Savill NJ, Woolhouse MEJ. Geographic and topographic determinants of local FMD transmission applied to the 2001 UK FMD epidemic. BMC Vet. Res. 2008;4:40. doi: 10.1186/1746-6148-4-40.
- Bessell PR, Shaw DJ, Savill NJ, Woolhouse MEJ. Statistical modelling of holding level susceptibility to infection during the UK 2001 Foot and Mouth Disease epidemic. Int. J. Infect. Dis. 2010;14:E210–E215. doi: 10.1016/j.ijid.2009.05.003.
- Carpenter TE, Christiansen LE, Dickey BF, Thunes C, Hullinger PJ. Potential impact of an introduction of foot-and-mouth disease into the California State Fair. J. Am. Vet. Med. Assoc. 2007;231:1231–1235. doi: 10.2460/javma.231.8.1231.
- Carpenter TE, O’Brien JM, Hagerman AD, McCarl BA. Epidemic and economic impacts of delayed detection of foot-and-mouth disease: a case study of a simulated outbreak in California. J. Vet. Diagn. Invest. 2011;23:26–33. doi: 10.1177/104063871102300104.
- Chis Ster I, Ferguson NM. Transmission parameters of the 2001 foot and mouth epidemic in Great Britain. PLoS ONE. 2007;2(6):e502. doi: 10.1371/journal.pone.0000502.
- Cox SJ, Voyce C, Parida S, Reid SM, Hamblin PA, Paton DJ, Barnett PV. Protection against direct-contact challenge following emergency FMD vaccination of cattle and the effect on virus excretion from the oropharynx. Vaccine. 2005;23:1106–1113. doi: 10.1016/j.vaccine.2004.08.034.
- Deardon R, Brooks SP, Grenfell BT, Keeling MJ, Tildesley MJ, Savill NJ, Shaw DJ, Woolhouse MEJ. Inference for individual-level models of infectious diseases in large populations. Stat. Sinica. 2009;20:239–261.
- Dickey BF, Carpenter TE, Bartell SM. Use of heterogeneous operation-specific contact parameters changes predictions for foot-and-mouth disease outbreaks in complex simulation models. Prev. Vet. Med. 2008;87:272–287. doi: 10.1016/j.prevetmed.2008.04.006.
- Diggle PJ. Spatio-temporal point processes, partial likelihood, foot and mouth disease. Stat. Methods Med. Res. 2006;25:325–336. doi: 10.1191/0962280206sm454oa.
- Donaldson AI, Alexandersen S. Relative resistance of pigs to infection by natural aerosols of FMD virus. Vet. Rec. 2001;148:600–602. doi: 10.1136/vr.148.19.600.
- Donaldson AI, Alexandersen S, Sorensen JH, Mikkelsen T. Relative risks of the uncontrollable (airborne) spread of FMD by different species. Vet. Rec. 2001;148:602–604. doi: 10.1136/vr.148.19.602.
- Ferguson NM, Donnelly CA, Anderson RM. The foot-and-mouth epidemic in Great Britain: pattern of spread and impact of interventions. Science. 2001a;292(5519):1155–1160. doi: 10.1126/science.1061020.
- Ferguson NM, Donnelly CA, Anderson RM. Transmission intensity and impact of control policies on the foot and mouth epidemic in Great Britain. Nature. 2001b;413:542–548. doi: 10.1038/35097116.
- Ferguson NM, Keeling MJ, Edmunds WJ, Gani R, Grenfell BT, Anderson RM, Leach S. Planning for smallpox outbreaks. Nature. 2003;425:681–685. doi: 10.1038/nature02007.
- Gibbens JC, Wilesmith JW. Temporal and geographical distribution of cases of foot-and-mouth disease during the early weeks of the 2001 epidemic in Great Britain. Vet. Rec. 2002;151(14):407–412. doi: 10.1136/vr.151.14.407.
- Gibbens JC, Sharpe CE, Wilesmith JW, Mansley LM, Michalopoulou E, Ryan JBM, Hudson M. Descriptive epidemiology of the 2001 foot-and-mouth disease epidemic in Great Britain: the first five months. Vet. Rec. 2001;149:729–743.
- Golde WT, Pacheco JM, Duque H, Doel T, Penfold B, Ferman GS, Gregg DR, Rodriguez LL. Vaccination against foot-and-mouth disease virus confers complete clinical protection in 7 days and partial protection in 4 days: use in emergency outbreak response. Vaccine. 2005;23:5775–5782. doi: 10.1016/j.vaccine.2005.07.043.
- Green DM, Kiss IZ, Kao RR. Modelling the initial spread of foot-and-mouth disease through animal movements. Proc. R. Soc. B. 2006;273:2729–2735. doi: 10.1098/rspb.2006.3648.
- Haydon DT, Chase-Topping M, Shaw DJ, Matthews L, Friar JK, Wilesmith J, Woolhouse MEJ. The construction and analysis of epidemic trees with reference to the 2001 UK foot and mouth outbreak. Proc. R. Soc. Lond. B. 2003;270:121–212. doi: 10.1098/rspb.2002.2191.
- Kao R. The role of mathematical modelling in the control of the 2001 FMD epidemic in the UK. Trends Microbiol. 2002;10:279–286. doi: 10.1016/s0966-842x(02)02371-5.
- Keeling MJ. Models of foot and mouth disease. Proc. R. Soc. B. 2005;272:1195–1202. doi: 10.1098/rspb.2004.3046.
- Keeling MJ, Woolhouse MEJ, May RM, Davies G, Grenfell BT. Modelling vaccination strategies against foot-and-mouth disease. Nature. 2003;421:136–142. doi: 10.1038/nature01343.
- Keeling MJ, Woolhouse MEJ, Shaw DJ, Matthews L, Chase-Topping ME, Haydon DT, Cornell SJ, Kappey J, Wilesmith J, Grenfell BT. Dynamics of the 2001 UK foot and mouth epidemic: stochastic dispersal in a heterogeneous landscape. Science. 2001;294:813–817. doi: 10.1126/science.1065973.
- Kobayashi M, Carpenter TE, Dickey BF, Howitt RE. A dynamic, optimal disease control model for foot-and-mouth disease: I. Model description. Prev. Vet. Med. 2007;79:257–273. doi: 10.1016/j.prevetmed.2007.01.002.
- Kobayashi M, Howitt RE, Carpenter TE. Model could aid emergency response planning for foot-and-mouth disease outbreaks. California Agric. 2009;63:137–142.
- McVicar JW, Sutmoller P, Ferris DH, Campbell CH. Foot-and-mouth disease in white-tailed deer: clinical signs and transmission in the laboratory. Proc. USAHA. 1974;78:169–180.
- Mohler JR. Foot and Mouth Disease: with Special Reference to the Outbreaks in California in 1924, and Texas, 1924 and 1925, vol. 400. United States Department of Agriculture, Department Circular. 1926:1–82.
- Mouchantat S. Serological investigations about the exposure to foot-and-mouth disease virus (FMDV) in free-ranging roe deer (Capreolus capreolus) from selected areas of Germany – investigations concerning the FMD outbreak in Europe in 2001. PhD Thesis. Berlin: Freie Universität; 2005. p. 135.
- Orsel K, de Jong MCM, Bouma A, Stegeman JA, Dekker A. Foot and mouth disease virus transmission among vaccinated pigs after exposure to virus shedding pigs. Vaccine. 2007;25:6381–6391. doi: 10.1016/j.vaccine.2007.06.010.
- Parida S, Fleming L, Oh Y, Mahapatra M, Hamblin P, Gloster J, Paton DJ. Emergency vaccination of sheep against foot-and-mouth disease: significance and detection of subsequent sub-clinical infection. Vaccine. 2008;26:3469–3479. doi: 10.1016/j.vaccine.2008.04.026.
- Paton DJ, Sumption KJ, Charleston B. Options for control of foot-and-mouth disease: knowledge, capability and policy. Philos. Trans. R. Soc. Lond. B. 2009;364:2657–2667. doi: 10.1098/rstb.2009.0100.
- Quan M, Murphy CM, Zhang Z, Durand S, Esteves I, Doel C, Alexandersen S. Influence of exposure intensity on the efficiency and speed of transmission of foot-and-mouth disease. J. Comp. Pathol. 2009;140:225–237. doi: 10.1016/j.jcpa.2008.12.002.
- Rhyan J, Deng M, Wang H, Ward G, Gidlewski T, McCollum M, Metwally S, McKenna T, Wainwright S, Ramirez A, Mebus C, Salman M. Foot-and-mouth disease in North American bison (Bison bison) and elk (Cervus elaphus nelsoni): susceptibility, intra- and interspecies transmission, clinical signs, and lesions. J. Wildl. Dis. 2008;44:269–279. doi: 10.7589/0090-3558-44.2.269.
- Riley S. Coping without farm location data during a foot-and-mouth outbreak. PNAS. 2010;107:957–958. doi: 10.1073/pnas.0913286107.
- Rorres C, Pelletier STK, Bruhn MC, Smith G. Ongoing estimation of the epidemic parameters of a stochastic, spatial, discrete-time model for a 1983-84 avian influenza epidemic. Avian Dis. 2011a;55:35–42. doi: 10.1637/9429-061710-Reg.1.
- Rorres C, Pelletier STK, Smith G. Stochastic modeling of animal epidemics using data collected over three different spatial scales. Epidemics. 2011b;3:61–70. doi: 10.1016/j.epidem.2011.02.003.
- Rorres C, Pelletier STK, Keeling MJ, Smith G. Estimating the kernel parameters of premises-based stochastic models of farmed animal infectious disease epidemics using limited, incomplete, or ongoing data. Theor. Popul. Biol. 2010;78:46–53. doi: 10.1016/j.tpb.2010.04.003.
- Schoenbaum MA, Disney WT. Modeling alternative mitigation strategies for a hypothetical outbreak of foot-and-mouth disease in the United States. Prev. Vet. Med. 2003;58:25–52. doi: 10.1016/s0167-5877(03)00004-7.
- Sharkey KJ, Bowers RG, Morgan KL, Robinson SE, Christley RM. Epidemiological consequences of an incursion of highly pathogenic H5N1 avian influenza into the British poultry flock. Proc. R. Soc. B. 2008;275:19–28. doi: 10.1098/rspb.2007.1100.
- Smith G. Models of macroparasitic infections in domestic ruminants: a conceptual review and critique. Rev. Sci. Tech. Off. Int. Epiz. (Models in the Management of Animal Diseases). 2011;30:447–456. doi: 10.20506/rst.30.2.2041.
- Spickler AR, Roth JA. The Foreign Animal Disease Preparedness and Response Plan (FADPReP)/National Animal Health Emergency Management System (NAHEMS). NAHEMS Guidelines: Vaccination for Contagious Diseases, Appendix A: Foot-and-Mouth Disease. USDA, APHIS, VS, April. 2011.
- Tildesley MJ, Bessell PR, Woolhouse MEJ, Keeling MJ. The role of pre-emptive culling in the control of foot-and-mouth disease. Proc. R. Soc. B. 2009;276:3239–3248. doi: 10.1098/rspb.2009.0427.
- Tildesley MJ, Deardon R, Savill NJ, Bessell PR, Brooks SP, Woolhouse MEJ, Grenfell BT, Keeling MJ. Accuracy of models for the 2001 foot-and-mouth epidemic. Proc. R. Soc. B. 2008;275(1641):1459–1468. doi: 10.1098/rspb.2008.0006.
- Tildesley MJ, House TA, Bruhn MC, Curry RJ, O’Neil M, Allpress JL, Smith G, Keeling MJ. Impact of spatial clustering on disease transmission and optimal control. PNAS. 2010;107:1041–1046. doi: 10.1073/pnas.0909047107.
- Tildesley MJ, Keeling MJ. Modelling foot-and-mouth disease: a comparison between the UK and Denmark. Prev. Vet. Med. 2008;85:107–124. doi: 10.1016/j.prevetmed.2008.01.008.
- Tildesley MJ, Savill NJ, Shaw DJ, Deardon R, Brooks SP, Woolhouse MEJ, Grenfell BT, Keeling MJ. Optimal reactive vaccination strategies for a foot-and-mouth outbreak in Great Britain. Nature. 2006;440:83–86. doi: 10.1038/nature04324.
- Truscott J, Garske T, Chis Ster I, Guitian J, Pfeiffer D, Snow L, Wilesmith J, Ferguson NM, Ghani AC. Control of a highly pathogenic H5N1 avian influenza outbreak in the GB poultry flock. Proc. Biol. Sci. 2007;274:2287–2295. doi: 10.1098/rspb.2007.0542.
- USDA. Nursery and Grower/Finisher Management in Swine 2000 and Swine 2006. USDA:APHIS:VS:CEAH, Fort Collins, CO, N497.0109. 2009.
- USDA. Trends in Biosecurity Measures of Pork Producers. NAHMS Fact Sheet. USDA:APHIS:VS:CEAH, Fort Collins, CO, N185.995, October. 1995.
- USDA. Swine 2006, Part I: Reference of Swine Health and Management Practices in the United States, 2006. USDA:APHIS:VS:CEAH, Fort Collins, CO, N475.1007. 2007.
- Ward MP, Highfield LH, Vongseng P, Garner G. Simulation of foot-and-mouth disease spread within an integrated livestock system in Texas, USA. Prev. Vet. Med. 2009;88:286–297. doi: 10.1016/j.prevetmed.2008.12.006.