Abstract
The time available to implement successful control measures against epidemics was estimated. Critical response time (CRT), defined as the time interval within which the number of epidemic cases remains stationary (so that interventions implemented within CRT may be the most effective or least costly), was assessed during the early epidemic phase, when the number of cases grows linearly over time. The CRT was calculated from data of the 2001 foot-and-mouth disease (FMD) epidemic that occurred in Uruguay. Significant regional CRT differences (ranging from 1.4 to 2.7 days) were observed.
The CRT may facilitate selection of control measures. For instance, a CRT of 3 days would support measures, such as stamping-out, that can be implemented within 3 days, but rule out measures, such as post-outbreak vaccination, for which implementation plus the development of immunity requires more than 3 days. Its use in rapidly disseminating diseases, such as FMD, may result in regionalized decision-making.
Selection of measures for control of diseases of rapid dissemination is a major problem affecting decision-making in epidemiology. The actual efficacy of a control campaign depends not only on the intrinsic efficacy of control instruments, but also on the time required for their implementation (1). Regional (geographical) differences may also influence the selection process of a control measure (2).
The first parameter to be determined in an epidemic is the rate of epidemic growth (the number of newly infected cases per unit of time). At the beginning of an outbreak of a rapidly disseminating disease, the cumulative number of cases usually follows an exponential growth pattern with parameter β (Appendix), where β represents the number of new infections per unit of time per primary case. Therefore, in the early epidemic stage, the log number of cumulative cases typically follows a linear relationship with time, as shown by the 1967–1968 and the 2001 British foot-and-mouth disease (FMD) outbreaks during their 1st mo (1,2,3). Consequently, β can be estimated as the slope of a linear regression of the log cumulative number of cases on time.
The expected number of new infections generated by each primary case in a small time interval (Δt) can be estimated as βΔt. The inverse, 1/β, defines a critical time for responding to the epidemic, because if source cases, or their susceptible contacts, are removed within a time 1/β, the epidemic cannot spread. For this reason, the critical response time (CRT) is defined as 1/β. When an intervention is completed within a period equal to or less than the CRT, the number of secondary cases produced per source case will be at most 1 and the epidemic will usually die out.
These concepts can also be described by the basic reproduction number (R0). The R0 is defined as the number of secondary cases per primary case when the infectious agent is introduced into a population of susceptible individuals (4,5,6). Therefore, R0 = βt, where t is the mean infectious period. An epidemic can occur when R0 > 1; accordingly, the goal of a control policy is to implement an intervention that leads to an effective reproduction number (Reff) of less than 1. If each infectious case is removed within an average time T ≤ CRT, then the maximum Reff (Reff = βT) does not exceed 1 and the epidemic is expected to die out.
The association between CRT and early epidemic data is demonstrated here using a simple differential equation model of an epidemic outbreak (4,5,6). It is shown that, when there is a linear relationship between the log number of cumulative cases and time, the CRT is equal to 1/β (Appendix). The most stringent epidemic scenario is one in which the number of cases grows exponentially over time; this makes the CRT a conservative measure, implying that a policy completed within 1/β units of time is expected to be effective.
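These relations, summarized in the Appendix, can be restated compactly as follows. In this sketch, C(t) denotes the cumulative number of cases at time t, and the mean infectious period is written as t̄ to distinguish it from the time variable:

```latex
% Early-phase growth and the critical response time (CRT).
% C(t): cumulative cases at time t; \beta: growth rate per primary case;
% \bar{t}: mean infectious period; T: mean time within which each source case is removed.
\[ C(t) \approx C(0)\,e^{\beta t}
   \;\Longrightarrow\;
   \ln C(t) = \ln C(0) + \beta t
   \quad (\text{regression slope} = \beta) \]
\[ R_0 = \beta\,\bar{t}, \qquad
   R_{\mathrm{eff}} = \beta\,T, \qquad
   \mathrm{CRT} = \frac{1}{\beta} \]
\[ T \le \mathrm{CRT} \;\Longrightarrow\; R_{\mathrm{eff}} = \beta T \le 1
   \quad (\text{the epidemic is expected to die out}) \]
```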
An evaluation of CRT may be facilitated by retrospective analysis of actual epidemic data. Such an evaluation may give insight into: a) the time available to implement control measures, and b) whether regional epidemic differences (associated with different CRTs) warrant region-specific control measures.
Foot-and-mouth disease is a rapidly disseminating disease with major economic costs (1,2,3,7). While several reports have analyzed the 2001 British epidemic data (1,2,7,8), the 2001 Uruguayan epidemic has yet to be explored. Uruguay had remained free of FMD (without vaccination) until April 23, 2001, when the first case of an epidemic affecting primarily bovines was reported. The goals of this study were to determine: a) the CRT(s) of the 2001 Uruguayan FMD epidemic, and b) whether regional CRT differences were observed in that epidemic.
This study estimated CRT values by analyzing data on cases (infected farms) observed in the first 60 d of the Uruguayan FMD outbreak that began April 23, 2001 (9,10). Variables included the daily number of new cases and the fraction of daily case increase. Log-transformed cumulative cases were regressed on time (the first 7 d from the first infected-herd report in a given geo-epidemic region). Parameters of interest were the slope of the regression (β) and the CRT (1/β). Epidemiologic differences were assessed by comparing the proportion of cases among geographic regions (χ2 test). Significance was set at P ≤ 0.05. Tests were conducted using a statistical package (Minitab, version 12.2; State College, Pennsylvania, USA).
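As an illustration of this estimation procedure, β and the CRT can be obtained by ordinary least-squares regression of the log cumulative case count on day number. The sketch below uses scipy and hypothetical daily counts (not the Uruguayan data):

```python
# Sketch of the CRT estimation described above: regress ln(cumulative cases)
# on day number and take CRT = 1/slope. Counts are hypothetical placeholders.
import numpy as np
from scipy import stats

days = np.arange(1, 8)                                   # first 7 d of the regional epidemic
cumulative_cases = np.array([1, 2, 4, 9, 17, 35, 68])    # hypothetical cumulative counts

log_cases = np.log(cumulative_cases)
fit = stats.linregress(days, log_cases)                  # slope = beta, with its standard error

beta_hat = fit.slope
crt = 1.0 / beta_hat
print(f"beta = {beta_hat:.3f} per day, CRT = {crt:.2f} d (R^2 = {fit.rvalue**2:.3f})")
```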
Sixty days into the epidemic, 1736 infected farms had been reported: 60.1% of the total (1044/1736) were in region I, 30.0% (520/1736) in region II, and 9.9% (172/1736) in region III (Figure 1A). These proportions differed significantly across regions: a chi-square test of the null hypothesis that each case was equally likely to occur in any of the 3 regions (H0: p1 = p2 = p3 = 1/3) showed a highly significant departure (χ2 = 665.94, 2 degrees of freedom, P < 0.0001). Epidemic data indicated linear relationships between the log cumulative number of cases and time for the national (aggregated) data and, at least, for regions I and II (Figure 1B). Therefore, the requirement for estimation of CRT by regression analysis was met.
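This regional comparison can be reproduced with a one-way chi-square goodness-of-fit test against equal expected proportions. A minimal sketch (the counts are the regional totals reported above):

```python
# Chi-square test of H0: a case is equally likely to fall in each of the 3 regions.
# Observed counts are the regional totals for the first 60 d of the epidemic.
from scipy.stats import chisquare

observed = [1044, 520, 172]                  # regions I, II, III
result = chisquare(observed)                 # expected counts default to equal thirds
print(f"chi2 = {result.statistic:.2f}, df = {len(observed) - 1}, P = {result.pvalue:.2e}")
# chi2 is approximately 665.9 with 2 df, P << 0.0001
```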
Figure 1. Regional distribution of the 2001 Uruguayan foot-and-mouth disease (FMD) epizootic outbreak. A: 3 regions (I, II, and III) are indicated according to percentage of all cases reported within the first 60 d of the outbreak (1736 cases), of which 60.1% were observed in region I, 30.0% in region II, and 9.9% in region III (P < 0.0001, χ2 test) (9,10). Star indicates the site of the first reported case. B: Log-transformed regional daily cases in the first 30 d of the epidemic in regions I, II, and III.
Four separate regression analyses were performed: all regions (national data), region I, region II, and region III. The daily fraction of case increase (the slope β of the regression) and the CRT (CRT = 1/β) were estimated from these data, revealing significant regional differences. The preliminary aggregated (national) β̂ was 0.678, with regional β̂ values of 0.692, 0.371, and 0.367 in regions I, II, and III, respectively. Hence, the estimated CRT for conducting an intervention leading to Reff ≤ 1 was 1.475 d at the national (aggregated) level, with estimated regional CRT values of 1.444, 2.696, and 2.723 d in regions I, II, and III, respectively (Table I and Figure 2).
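Because CRT = 1/β and the reciprocal is a monotone (decreasing) transformation, a confidence interval for β from the regression maps directly onto one for the CRT. A minimal sketch using the reported point estimates of β̂; the confidence limits shown are hypothetical placeholders, not the published values:

```python
# Convert regional growth-rate estimates (beta_hat) into CRT = 1/beta_hat.
# The reciprocal reverses interval endpoints: CRT CI = (1/beta_upper, 1/beta_lower).
betas = {"national": 0.678, "region I": 0.692, "region II": 0.371, "region III": 0.367}

for name, b in betas.items():
    print(f"{name}: beta = {b:.3f}/d, CRT = {1.0 / b:.3f} d")

beta_lo, beta_hi = 0.55, 0.83            # hypothetical 95% CI for one regional beta
crt_lo, crt_hi = 1.0 / beta_hi, 1.0 / beta_lo
print(f"95% CI for CRT: {crt_lo:.2f} to {crt_hi:.2f} d")
```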
Table I.
Figure 2. Relationships between early cases and time. The log of the cumulative number of cases was regressed on time (the first 7 d from the 1st reported case). Plots show the observations (dots), the regression line (solid line), and the 95% confidence interval (broken lines). A-D: national, region I, region II, and region III, respectively.
Further analysis of the data indicated a high standardized residual (−2.11) for the first observation (day 1) in the national (aggregated) data (Table I), which suggested a possible outlier. After removal of that observation, no further outliers were detected, and the estimate of the national CRT increased to 1.805 d. Observation of an outlier at the very beginning of an epidemic may result from at least 2 factors: delayed reporting of the first case(s) and the exaggerated influence of errors when the number of cases is extremely low. Delayed reporting of the first case(s) is likely to occur because of human behavior (particularly at the very beginning of an epidemic caused by an exotic agent). Later, when there is public knowledge of the epidemic and the alert level increases, delayed reporting is likely to diminish. In addition, when the number of cases is low (in this epidemic, only 1 case was reported on the 1st d), the effect of any error is greatest.
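The outlier screening described above can be reproduced with standard regression diagnostics. A minimal sketch with statsmodels (again using hypothetical counts): observations whose standardized residual exceeds 2 in absolute value are flagged and the regression is refit without them.

```python
# Screen for outlying days via standardized (internally studentized) residuals,
# then re-estimate beta and CRT with the flagged day(s) removed.
# Counts are hypothetical placeholders, not the national 2001 series.
import numpy as np
import statsmodels.api as sm

days = np.arange(1, 8)
cumulative_cases = np.array([1, 4, 8, 15, 31, 60, 120])    # hypothetical

X = sm.add_constant(days)
fit = sm.OLS(np.log(cumulative_cases), X).fit()
std_resid = fit.get_influence().resid_studentized_internal

keep = np.abs(std_resid) <= 2                              # drop |residual| > 2
refit = sm.OLS(np.log(cumulative_cases[keep]), sm.add_constant(days[keep])).fit()
print(f"CRT (all days) = {1.0 / fit.params[1]:.2f} d; "
      f"CRT (outliers removed) = {1.0 / refit.params[1]:.2f} d")
```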
In contrast, analysis of the region III data did not reveal an obvious linear relationship. Thus, the validity of 2.723 d, as the estimated CRT for region III, is uncertain. However, because CRT is, by definition, a very conservative estimate, the true CRT for region III was very likely to be at least 2.723 d.
Estimation of the CRT can suggest intervention- and region-specific control measures (as opposed to national campaigns in which relationships between specific interventions and regional conditions are not considered). In the scenario analyzed here, almost 2-fold regional CRT differences were observed (between 1.4 d in region I and 2.7 d in regions II and III), as well as non-overlapping 95% confidence intervals for the CRT (between regions I and II), which could support different control measures (regionalization).
This study was conducted to evaluate the CRT construct in the context of an actual epidemic. The quality of the data set was checked against 2 sources, which provided identical information. Common limitations of epidemic data (which may hamper data quality) include delayed reporting and the difference between the time an individual (animal or farm) becomes infective (capable of transmitting the disease to others) and the time it becomes symptomatic (showing symptoms and, therefore, becoming observable and reportable). While delayed reporting is unlikely to be a significant source of error once there is public knowledge of the epidemic (after its first case is reported), the gap between becoming infectious and becoming symptomatic is likely to be a systematic error. Although the pre-symptomatic period for herds (the time between becoming infected and showing signs) may be up to 2 d (9), this interval may be ignored because, in a large population, it is expected to follow an approximately normal distribution with very low variance (on average, herds will have very similar pre-symptomatic periods). Consequently, the lag between becoming infected and becoming symptomatic is expected to be nearly constant, so that the curves of infected and of symptomatic cases are nearly parallel (except on the 1st d). If the data gathered on the 1st d of the epidemic are deleted, the symptomatic (observable) cases therefore provide an estimate of CRT very similar to the one that would be obtained from infected, but not yet symptomatic, cases (which cannot be observed).
The CRT is assumed to be associated with the most effective or least costly control measures because it is the time interval within which the number of secondary cases does not exceed the number of primary cases, the condition under which cessation of epidemic growth is expected. However, complete implementation of control measures within the CRT is not always followed by a cessation of new cases. An exponentially rapid decay of new cases is expected when Reff < 1; however, when individuals can move between compartments, such as regions (an event likely to depend on the contact structure: road networks, animal trade, human movement patterns), new cases may occur even with Reff < 1, as suggested by the 2001 British epidemic (2,8). Consequently, estimation of the CRT is more likely to be effective if conducted together with an assessment of temporal-spatial (local or regional) contact rates. This could be facilitated by complementary technologies, such as geographical information systems (1,8).
The CRT facilitates the comparison of different measures under conservative assumptions: because only early epidemic data are considered, the true CRT is likely to be larger than the estimate, so great confidence can be placed in measures supported by the estimated CRT. For instance, consider a hypothetical situation in which the number of cases is growing linearly or exponentially over time, the epidemic is spreading at a 10 km radius per day, and a 100% effective vaccine can be administered to all herds within 3 d. In that situation, a vaccination policy would require 3 to 4 additional days (a total of 7 d from the time the decision is made) to induce protective immunity (12). To achieve results, the area to be vaccinated should therefore have at least a 70-km radius (7 d × 10 km/d), comprising 15 394 square kilometers. This can be compared with a stamping-out policy (assumed to be 100% effective and implemented in 2 d), which would involve a 20-km radius area (2 d × 10 km/d) comprising 1257 square kilometers. In this situation, if the CRT were estimated at 3.0 d, the 2nd measure could be adopted with a high degree of confidence in its success. Assuming a perfect log-linear relationship (cumulative cases of 1, 2, 4, 8, 16, 32, and 64 on successive days of a week), stamping-out would achieve Reff ≤ 1 while covering less than one-twelfth as much territory (1257 square km/15 394 square km) and sacrificing only one-sixteenth as many herds (4 herds expected to be infected at day 3 compared with 64 herds at day 7).
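The arithmetic behind this comparison is simple; a minimal sketch of the calculation, using the hypothetical 10 km/d spread rate, intervention times, and doubling case counts stated above:

```python
# Compare the areas covered by the 2 hypothetical interventions and the
# cumulative cases expected at their respective completion times (day 3 vs day 7).
import math

spread_km_per_day = 10
radius_vaccination = spread_km_per_day * 7        # 7 d until herd immunity -> 70 km
radius_stamping_out = spread_km_per_day * 2       # 2 d to implement -> 20 km

area_vaccination = math.pi * radius_vaccination ** 2     # ~15 394 km^2
area_stamping_out = math.pi * radius_stamping_out ** 2   # ~1 257 km^2

cumulative_cases = [1, 2, 4, 8, 16, 32, 64]       # cases doubling daily over 1 wk
print(f"area ratio = {area_stamping_out / area_vaccination:.4f}  (< 1/12)")
print(f"herds sacrificed: {cumulative_cases[2]} (day 3) vs {cumulative_cases[6]} (day 7), ratio 1/16")
```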
The CRT may be most useful if applied in the early epidemic phase. However, this implies a balance between data quality (which favors the longest possible observation interval) and intervention efficacy (which favors the shortest possible interval). Such a balance may be achieved: a) after one incubation period of the infective agent, and b) as soon as a linear or exponential growth phase is documented. The same data allow identification of regional epidemiological differences (based on statistical analysis), so this simple model also facilitates region-specific epidemiological decision-making.
Appendix 1.
Footnotes
Address all correspondence and reprint requests to Dr. Ariel L. Rivas; telephone: (607) 255-8103; fax: (607) 255-4698; e-mail: alr4@cornell.edu
Received December 20, 2003. Accepted April 21, 2003.
References
- 1. Ferguson NM, Donnelly CA, Anderson RM. Transmission intensity and impact of control policies on the foot and mouth epidemic in Great Britain. Nature 2001;413:542-548.
- 2. Keeling MJ, Woolhouse MEJ, Shaw DJ, et al. Dynamics of the 2001 UK foot and mouth epidemic: stochastic dispersal in a heterogeneous landscape. Science 2001;294:813-817.
- 3. Howard SC, Donnelly CA. The importance of immediate destruction in epidemics of foot and mouth disease. Res Vet Sci 2000;69:189-196.
- 4. Kermack WO, McKendrick AG. Contributions to the mathematical theory of epidemics. Part I (reprinted from Proc R Soc Lond A 1927;115:700-721). Bull Math Biol 1991;53:33-55.
- 5. Macdonald G. The analysis of equilibrium in malaria. Trop Dis Bull 1952;49:813-829.
- 6. Anderson RM, May RM. Infectious Diseases of Humans: Dynamics and Control. Oxford: Oxford University Press, 1991:13-23.
- 7. Woolhouse M, Donaldson A. Managing foot-and-mouth: the science of controlling disease outbreaks. Nature 2001;410:515-516.
- 8. Ferguson NM, Donnelly CA, Anderson RM. The foot-and-mouth epidemic in Great Britain: pattern of spread and impact of interventions. Science 2001;292:1155-1160.
- 9. Office International des Epizooties (OIE). Web site: http://www.oie.int
- 10. Uruguayan Ministry of Livestock, Agriculture, and Fisheries (MGAP). Web site: http://www.mgap.gub.uy
- 11. Burrows R. Excretion of foot-and-mouth disease virus prior to the development of lesions. Vet Rec 1968;82:387-388.
- 12. Woolhouse MEJ, Haydon DT, Pearson A, Kitching RP. Failure of vaccination to prevent outbreaks of foot-and-mouth disease. Epidemiol Infect 1996;116:363-371.




