Abstract
The biosurveillance capabilities needed to rapidly detect and characterize emerging biological threats are an essential part of the Global Health Security Agenda (GHSA). Analyses of the global public health system's functioning during the 2009 H1N1 pandemic suggest that while capacities such as those identified in the GHSA are essential building blocks, the global biosurveillance system must possess 3 critical capabilities: (1) the ability to detect outbreaks and determine whether they are of significant global concern, (2) the ability to describe the epidemiologic characteristics of the pathogen responsible, and (3) the ability to track the pathogen's spread through national populations and around the world and to measure the impact of control strategies. The GHSA capacities—laboratory and diagnostic capacity, reporting networks, and so on—were essential in 2009 and surely will be in future events. But the 2009 H1N1 experience reminds us that not just detection but also epidemiologic characterization is necessary. Similarly, real-time biosurveillance systems are important, but as the 2009 H1N1 experience shows, they may contain inaccurate information about epidemiologic risks. Rather, it was the ability of scientists in Mexico, the United States, and other countries to make sense of the emerging laboratory and epidemiologic information, an example of global social capital, that enabled an effective global response. Thus, to ensure that it is meeting its goals, the GHSA must track capabilities as well as capacities.
The biosurveillance capabilities needed to rapidly detect and characterize emerging biological threats are a central—and indeed essential—part of the Global Health Security Agenda (GHSA) that was announced in February 2014.1 As noted by Inglesby and Fischer, timely public health surveillance requires that clinicians and health officials recognize cases of emerging diseases and report events in time to alert their own governments, which, in turn, must inform the World Health Organization (WHO) and the global community.2 To this end, under the heading of “Detect Threats Early,” the GHSA objectives include the development and strengthening of diagnostic and laboratory systems as well as global networks for sharing biosurveillance information and training and deploying a workforce to ensure the effective functioning of these systems.2 The specific action steps describing initiatives to build the capacity needed to accomplish this include:
• Launching, strengthening and linking global networks for real-time biosurveillance: Promote the establishment of monitoring systems that can predict and identify infectious disease threats; interoperable, networked information-sharing platforms and bioinformatic systems; and networks that link to regional disease detection hubs.
• Strengthening the global norm of rapid, transparent reporting and sample sharing in the event of health emergencies of international concern: Strengthen capabilities for accurate and transparent reporting to the WHO, OIE [World Organisation for Animal Health], and the FAO [Food and Agriculture Organization of the United Nations] during emergencies, with rapid sample and reagent sharing between countries and international organizations.
• Developing and deploying novel diagnostics and strengthening laboratory systems: Strengthen country and regional capacity at the point-of-care and point-of-need to enable accurate, timely collection and analysis of information, and laboratory systems capable of safely and accurately detecting all major dangerous pathogens with minimal biorisk.
• Training and deploying an effective biosurveillance workforce: Build capacity for a trained and functioning biosurveillance workforce, with trained disease detectives and laboratory scientists.1(p2)
Clearly, these capacities are necessary elements of an effective global biosurveillance system. But are they sufficient? Precisely what must this system be capable of achieving? How well do current systems meet the GHSA's objectives? How will we know whether we have made progress and are prepared for the next pandemic?
To address such questions, this commentary reviews a series of analyses of the global public health system's functioning during the 2009 H1N1 pandemic. These analyses suggest that while capacities such as those identified in the GHSA are essential building blocks, the global biosurveillance system must possess 3 critical capabilities: (1) the ability to detect outbreaks and determine whether they are of significant global concern, (2) the ability to describe the epidemiologic characteristics of the pathogen responsible, and (3) the ability to track the pathogen's spread through national populations and around the world and to measure the impact of control strategies.
Outbreak Detection
Detecting an outbreak as quickly as possible enables an earlier and more effective public health response. Zhang and colleagues' analysis of efforts in Mexico and the United States to detect the 2009 H1N1 outbreak suggests that investments in surveillance systems made a major difference, enabling a quicker response than would have been possible a decade earlier.3 These investments include laboratory capacity enhancements in the United States, Mexico, and Canada as well as arrangements that enabled collaboration among the 3 countries. Perhaps more important were developments in global notification systems—also known as event-based surveillance—such as the Global Public Health Intelligence Network (GPHIN), ProMED Mail, and HealthMap, which enabled Mexican officials to “connect the dots” to realize that outbreaks that they were aware of throughout the country were all manifestations of the pandemic virus (pH1N1) that had just been isolated in 2 California children. The expectations set up by the 2005 International Health Regulations4—that countries would report a potential “public health emergency of international concern” (PHEIC)—were also important.3
This same analysis showed that syndromic surveillance played an important role in detecting the pH1N1 outbreak, but a different one than is commonly used to justify these systems. Syndromic surveillance systems collect and analyze statistical data on health trends—such as symptoms reported by people seeking care in emergency departments or other healthcare settings, or even sales of prescription or over-the-counter flu medicines or web searches—and are typically used to detect outbreaks before conventional surveillance systems do, enabling a rapid public health response.5 Because pH1N1 emerged in the winter, there were too few cases to be detected against the background of the normal flu season. However, once Mexican public health authorities became aware of severe respiratory illness cases, syndromic surveillance systems provided positive confirmation that the virus had spread widely throughout Mexico.3
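As a concrete sketch of how such systems flag unusual activity, the example below implements a simple baseline-comparison rule of the kind used in syndromic surveillance (similar in spirit to CDC's EARS C-series algorithms, though the specific thresholds and the daily counts here are invented for illustration, not taken from any operational system).

```python
# Simple syndromic aberration-detection sketch: flag any day whose count exceeds
# the recent baseline mean by more than 3 standard deviations.
# The daily influenza-like-illness (ILI) visit counts below are invented.
from statistics import mean, stdev

daily_ili_visits = [42, 38, 45, 40, 39, 44, 41, 43, 40, 46, 44, 90]  # last value is a spike

BASELINE_DAYS = 7  # compare each day with the preceding week

for day in range(BASELINE_DAYS, len(daily_ili_visits)):
    baseline = daily_ili_visits[day - BASELINE_DAYS:day]
    mu, sigma = mean(baseline), stdev(baseline)
    threshold = mu + 3 * sigma
    if daily_ili_visits[day] > threshold:
        print(f"Day {day}: {daily_ili_visits[day]} visits exceeds threshold "
              f"{threshold:.1f} (baseline mean {mu:.1f})")
```

During a regular influenza season the baseline itself is elevated and noisy, which is why a modest number of pH1N1 cases could not be detected this way, consistent with the Mexican experience described above.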
Despite this generally good performance, there was a period of 1 to 2 weeks in April 2009 when Mexican authorities were aware of an unusual pattern of disease outbreaks in different parts of the country (events that were reported globally through GPHIN, ProMED Mail, and HealthMap) but did not understand the full implications of the evidence. In particular, the point at which it becomes clear that something is a public health emergency of international concern is often not very distinct. The IHRs require countries to report events that may constitute a public health emergency of international concern “within 24 hours of assessment of the public health information by the national authority.”4 Judging that an event must be reported requires a number of complicated assessments, including: (1) Is the pathogen a new subtype of human influenza or another pathogen listed in IHR Annex 2? (2) Is the public health impact serious? (3) Is the event unusual or unexpected? and (4) Is there a significant risk of international spread? In 2009, the Centers for Disease Control and Prevention's (CDC) report of a new influenza strain in humans in the United States helped Mexico answer the first question in the affirmative, rendering the other assessments moot.
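A simplified sketch of this decision logic is shown below. It follows the structure of the IHR (2005) Annex 2 decision instrument, in which certain pathogens are always notifiable and other events are notified when at least 2 of 4 criteria are met; the fourth criterion (risk of international travel or trade restrictions) comes from the decision instrument itself rather than the list above, and the function names and structure are illustrative only.

```python
# Simplified, illustrative sketch of the IHR (2005) Annex 2 decision instrument.
# Consult the decision instrument itself for the authoritative wording.

ALWAYS_NOTIFIABLE = {
    "smallpox",
    "wild-type poliovirus",
    "human influenza caused by a new subtype",
    "SARS",
}

def must_notify_who(pathogen: str,
                    serious_public_health_impact: bool,
                    unusual_or_unexpected: bool,
                    risk_of_international_spread: bool,
                    risk_of_travel_or_trade_restrictions: bool) -> bool:
    """Return True if the event should be notified to WHO within 24 hours of assessment."""
    if pathogen.lower() in {p.lower() for p in ALWAYS_NOTIFIABLE}:
        return True
    criteria = [
        serious_public_health_impact,
        unusual_or_unexpected,
        risk_of_international_spread,
        risk_of_travel_or_trade_restrictions,
    ]
    return sum(criteria) >= 2  # any 2 of the 4 criteria trigger notification

# In April 2009, identification of a new influenza subtype in humans meant the first
# branch applied, so the remaining assessments were moot.
print(must_notify_who("human influenza caused by a new subtype", True, True, True, True))
```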
Going forward, it is important to recognize that even in the best of circumstances, some period of uncertainty of this sort is to be expected and planned for. A more nuanced process that recognizes the inherent uncertainty is necessary. For instance, while Annex 2 governs when countries must report potential public health emergencies of international concern, the WHO Director General (with appropriate scientific advice) makes the final determination about whether an event is a public health emergency of international concern. In addition to obtaining and analyzing virus samples, this will generally require efforts to obtain more information about the pathogen's epidemiologic characteristics, as addressed in the following section.
Epidemic Characterization
Once a new pathogen is identified, it must be characterized in order to develop testing kits and surveillance procedures, to create and manufacture a vaccine and set policies for its use, and to guide interventions such as infection control policies, social distancing, and quarantine. In 2009, the enhanced laboratory capacity just discussed led to the rapid characterization of the pH1N1 virus itself, the development of a vaccine and PCR testing kits, and so on.
On the other hand, pH1N1's epidemiologic characteristics were harder to identify. These include disease incidence and its rate of change, the severity of infection, and the risks to specific population groups. Estimates of these quantities inform decisions about control measures as well as resource procurement and allocation. They also affect public perceptions of illness severity and risk, which influence the willingness of people to comply with control measures.6
As is often the case, under-ascertainment of individuals with less severe infections led to an initial overestimate of the case fatality rate and a mischaracterization of the virus's “severity.”6-8 Early evidence from Mexico, based on an observational study of hospitalized patients, suggested that 6.5% were critically ill and 41% of these died.9,10 And although epidemiologists understand this phenomenon, policymakers and the public understandably find such figures—especially “41% of these died,” taken out of context to make the point—quite alarming. Confusion about the case fatality rate was compounded by ambiguity about whether “severity” referred to virulence or to the ability to spread globally, which was the basis for the WHO's pandemic phase classification in force at that time. This led to public confusion about exactly what the WHO meant by a pandemic and complicated decisions about response logistics that depend on both spread and severity.10
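To illustrate the arithmetic behind this bias, the hypothetical sketch below (with invented numbers, not 2009 estimates) shows how a case fatality rate computed only from ascertained, mostly severe cases can greatly exceed the fatality risk among all infections.

```python
# Hypothetical illustration of ascertainment bias in the case fatality rate (CFR).
# All numbers are invented for the example; they are not 2009 H1N1 estimates.

total_infections = 100_000        # true (unobserved) number of infections
true_deaths = 50                  # deaths among all infections
severe_fraction = 0.02            # share of infections severe enough to seek care
mild_ascertainment = 0.01         # share of mild infections that are ever reported

severe_cases = total_infections * severe_fraction
mild_cases_reported = total_infections * (1 - severe_fraction) * mild_ascertainment
reported_cases = severe_cases + mild_cases_reported

# Deaths are almost all detected because they occur among severe, hospitalized cases.
reported_deaths = true_deaths

crude_cfr = reported_deaths / reported_cases          # what surveillance data show
infection_fatality = true_deaths / total_infections   # what decision makers need

print(f"Reported cases: {reported_cases:,.0f}")
print(f"Crude CFR from surveillance: {crude_cfr:.2%}")
print(f"True infection fatality ratio: {infection_fatality:.3%}")
```

In this invented example the surveillance-based estimate is roughly 30 times the true infection fatality ratio, simply because mild infections go uncounted.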
One of the most commonly held perceptions about pH1N1 is that children and young adults were at especially “high risk.” While children were more likely to become infected with pH1N1 than with seasonal influenza, the case fatality and hospitalization rates among those infected were lower than in the elderly. An alternative explanation is that early concerns about children being “at risk” led to surveillance biases that inflated the reported numbers of cases and deaths in children. In addition, evidence of preexisting immunity in older people led to surveillance biases that deflated the reported numbers of cases and deaths in the elderly. While there is no “gold standard” evidence about the actual numbers of cases and deaths, the evidence suggests that these biases are due in part to surveillance systems that depend on patients' decisions to seek care and providers' actions to report the cases they see. This problem is complicated by a failure to understand the distinction between the risk of infection and the risk of a severe case requiring hospitalization or leading to death. While epidemiologists understood this and were aware of the age biases in the surveillance data, these distinctions may have been lost on policymakers and the public. Mischaracterization of who was “at risk” could have led to vaccine priorities focused on children rather than the elderly (who might have benefited more) and to school closure policies that were less than optimal.11
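A second hypothetical sketch, again with invented numbers rather than 2009 data, shows how differential ascertainment by age group can make children appear to carry most of the burden even when the per-infection risk of death is much higher in the elderly.

```python
# Hypothetical illustration of age-biased ascertainment. Invented numbers only;
# they are not 2009 H1N1 estimates.

groups = {
    # group: (true infections, true deaths, case ascertainment, death ascertainment)
    "children": (40_000, 30, 0.60, 0.90),   # heightened concern -> more testing and reporting
    "elderly":  (10_000, 100, 0.15, 0.30),  # assumed protected -> deaths often unrecognized
}

for name, (infections, deaths, case_asc, death_asc) in groups.items():
    reported_cases = infections * case_asc
    reported_deaths = deaths * death_asc
    true_risk = deaths / infections
    print(f"{name}: reported cases {reported_cases:,.0f} (true {infections:,}), "
          f"reported deaths {reported_deaths:,.0f} (true {deaths}), "
          f"true per-infection death risk {true_risk:.2%}")
```

In this example surveillance shows children accounting for the great majority of reported cases and roughly half of reported deaths, even though the true per-infection death risk is more than 10 times higher in the elderly; this is precisely the confusion between the risk of infection and the risk of a severe outcome described above.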
The 2009 H1N1 experience reminds us that uncertainty is inherent in infectious disease outbreaks, especially those involving emerging pathogens, so it should be expected and planned for.6 Once the virus was identified, it was months before its epidemiologic characteristics were understood. For instance, the case-fatality rate is a key measure of the severity of a pandemic, but getting a handle on it requires precise estimates of how many people have been infected. It was not until about September 2009—5 months into the pandemic—that epidemiologists began to get such data.12 Many emergency preparedness professionals, however, still think in terms of single cases triggering a response in hours or at most days, and this thinking is reflected in key public health preparedness documents. While this might be appropriate for smallpox or inhalational anthrax, it is not appropriate for pandemic influenza and many other pathogens.
More broadly, the uncertainty that characterized the emergence of the pandemic H1N1 virus took weeks to months to resolve, so it is important to expect and plan for uncertainty when preparing for the emergence of a new pathogen. In particular, because some future public health emergencies may be more like 2009 H1N1 than the acute events on which many planning assumptions are based, plans should be developed for situations that emerge over extended periods of time and are characterized by uncertainty. Rather than outbreak detection per se, the challenge is to determine whether the outbreak is a public health emergency of international concern and to determine its epidemiologic characteristics. In this context, the first evidence of an outbreak should initiate efforts to learn more about the pathogen's characteristics rather than trigger disproportionate control measures based on worst-case scenarios.13 The risk management approach in WHO's new pandemic influenza guidance is one example.14 Previously, pandemic influenza viruses were assumed to be highly virulent, and pandemic stages were defined in terms of the spread of the virus in the population and between countries and regions. The new approach includes only 4 phases, defined in terms of both global spread and a risk assessment based on virologic, epidemiologic, and clinical data.
Situational Awareness
Once an outbreak has been identified and the pathogen characterized, surveillance systems are needed to track its spread through the population, including geographically. This is important “situational awareness” information needed to monitor the effectiveness of disease control policies and interventions and enable planning for health services, among other things.
Despite the many surveillance systems that had been set up in recent years, more needed to be developed during the pandemic to get additional data on priority populations such as children (eg, hospitalizations, school absenteeism surveillance) and to inform local decision making. Many of the new systems that were developed before and during the pandemic, including Google Flu Trends, social media, and other “big data” approaches, focus on prediagnostic data. Indeed, the 2009 H1N1 experience provides a test of the hypothesis that these systems might be better at providing situational awareness than at outbreak detection.15
As with the case fatality rate and risk group characterization, however, the evidence suggests that all of these systems are dependent on patients' and providers' decisions. Outpatient, hospital-based, and emergency department surveillance systems, for instance, all rely on individuals presenting themselves to healthcare facilities, and these decisions are based in part on their interpretations of their symptoms. Similarly, virologic surveillance and systems based on laboratory confirmations depend on physicians deciding to send specimens for testing. Even the number of Google searches and self-reports of influenzalike illness can be influenced by individuals' interpretation of the seriousness of their symptoms.
Every element of this decision-making process is potentially influenced by the informational and policy environment (eg, media coverage, current case definitions and practice recommendations, implementation of active surveillance); by how individuals process and react to that information (eg, a healthcare seeker's self-assessment of risk, incentives for seeking medical attention and self-isolation, a healthcare provider's decision to order laboratory tests); and by technical barriers (eg, communication infrastructure for data exchange, laboratory capacity). And all of these decisions are potentially influenced by what these people know and think, both of which change during the course of an outbreak. Recognizing the possibility of these biases but not knowing the full extent of their impact adds to the uncertainty that characterizes public health emergencies.11
The 2009 H1N1 experience shows how global biosurveillance systems are overly dependent on case-based surveillance methods that are subject to information environment–related reporting biases as well as artifactual differences due to changes in surveillance and other policies. Population-based surveillance methods are needed to address this deficiency. Lipsitch and colleagues, for instance, have suggested identifying well-defined population cohorts at high risk for pH1N1 infection and ensuring that everyone in that group is tested to avoid biases due to physician decisions about who should be tested.16 Ultimately, population-based seroprevalence surveys are crucial to making informed policy decisions. Seroprevalence surveys like those deployed in the United Kingdom17 and Hong Kong18 would provide the least biased data on who is at risk for infection as well as temporal and geographic patterns.
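As a sketch of why serologic data are so valuable, the example below applies the standard Rogan-Gladen correction to estimate cumulative incidence from a seroprevalence survey while accounting for imperfect assay sensitivity and specificity. The survey counts and test characteristics are assumed for illustration; they are not the UK or Hong Kong results.

```python
# Hypothetical seroprevalence calculation. Survey numbers and test characteristics
# are invented for illustration; they are not the UK or Hong Kong estimates.
import math

positives, sample_size = 180, 1_500      # assumed survey results
sensitivity, specificity = 0.90, 0.98    # assumed assay characteristics

apparent_prevalence = positives / sample_size

# Rogan-Gladen estimator: correct the apparent prevalence for test error.
true_prevalence = (apparent_prevalence + specificity - 1) / (sensitivity + specificity - 1)
true_prevalence = min(max(true_prevalence, 0.0), 1.0)

# Approximate 95% confidence interval on the apparent prevalence (normal approximation),
# carried through the same correction.
se = math.sqrt(apparent_prevalence * (1 - apparent_prevalence) / sample_size)
lo = (apparent_prevalence - 1.96 * se + specificity - 1) / (sensitivity + specificity - 1)
hi = (apparent_prevalence + 1.96 * se + specificity - 1) / (sensitivity + specificity - 1)

print(f"Apparent seroprevalence: {apparent_prevalence:.1%}")
print(f"Corrected cumulative incidence: {true_prevalence:.1%} "
      f"(approx. 95% CI {max(lo, 0):.1%}-{min(hi, 1):.1%})")
```

In practice, baseline cross-reactive antibody (especially in older adults) means that pre-pandemic or paired sera are needed as a comparison, which is one reason the UK and Hong Kong studies were so informative.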
Implications for the Future
To apply the lessons of the 2009 H1N1 pandemic, it helps to return to the notion of capacities and capabilities and to emphasize the distinction between the 2 in this particular context. Capacities represent the resources—infrastructure, policies and procedures, response mechanisms, knowledgeable and trained personnel—that a public health system has to draw upon. These include legal, economic, and operational dimensions19 as well as “social capital,” the intangible partnerships and informal relationships between individuals and organizations that research shows are critical to effective emergency operations and community resilience.20 Capacities are necessary but not sufficient to ensure a system's effective functioning.
Capabilities, on the other hand, describe the actions a public health system is capable of taking to effectively identify, characterize, and respond to emergencies: surveillance, epidemiologic investigations, disease prevention and mitigation, surge capacity for healthcare services, risk communication to the public, and coordination of system responses through an effective incident management system. This analysis of the global public health system's response to the 2009 H1N1 pandemic suggests that the global biosurveillance system must possess 2 critical capabilities: (1) the ability not only to detect outbreaks but also to describe the epidemiologic characteristics of the pathogen responsible; and (2) the ability to provide accurate situational awareness about the pathogen's spread through national populations and around the world and to measure the impact of control strategies. Both must operate despite considerable scientific uncertainty.
The GHSA “Detect Threats Early” objective is stated in general terms as a capability for “detecting, characterizing, and transparently reporting emerging biological threats early through real-time biosurveillance,”1(p2) but the specific action steps describe initiatives to build the capacity needed to accomplish this: launching, strengthening, and linking global networks for real-time biosurveillance; strengthening accurate and transparent reporting; developing and deploying novel diagnostics and strengthening laboratory systems; and training and deploying an effective biosurveillance workforce.1
As capacities, these objectives address most of the issues raised by experience with 2009 H1N1 and other public health emergencies. One issue that is not addressed, however, is the ability of real-time biosurveillance systems to provide accurate estimates of disease spread and severity. Consideration should be given to developing the capacity for seroprevalence surveys as discussed above.
It is hard to deny the relevance of laboratory and diagnostic capacity to detecting emerging disease outbreaks. Similarly, reporting networks (another capacity) were essential in 2009 and surely will be in future events. But the 2009 H1N1 experience reminds us that not just detection but also epidemiologic characterization is necessary. Real-time biosurveillance systems are important, but as the 2009 H1N1 experience shows, they may contain inaccurate information about epidemiologic risks. Rather, it was the ability of scientists in Mexico, the United States, and other countries to make sense of the emerging laboratory and epidemiologic information, an example of global social capital, that enabled an effective global response. Thus, to ensure that it is meeting its goals, the Global Health Security Agenda must track not only capacities (eg, laboratories, reporting networks) but also capabilities such as the ability to consolidate and make sense of rapidly emerging information.
Capacities are generally easier to measure; one can count the number of countries that have laboratories that meet international standards, the number of epidemiologists, and so on. Indeed, this is the approach taken by the United States' progress measures.1 However, there is often little credible evidence that having these capacities, individually or in combination, ensures the desired outcome.21 As a result, capacity measures do not adequately represent how well a complex public health system will perform during an actual emergency.22
Capabilities, on the other hand, are latent characteristics of the public health emergency preparedness system that are best measured and assessed in realistic exercises and when the public health system responds to an emergency, as it did in 2009.22,23 To enable organizational learning, capabilities must be defined at a high enough level that lessons learned in one instance can be transferred to similar situations and to future emergencies. To systematically learn from actual incidents, Piltch-Loeb and colleagues have proposed a public health emergency preparedness critical incident registry that fosters in-depth analyses of individual incidents, provides incentives to share results with others working in similar contexts, and supports cross-incident analysis.24 For comparative purposes, registry reports would address specific public health emergency preparedness capabilities and could serve as a platform for a structured set of performance measures. When the focus is on quality improvement and on complex public health emergency preparedness systems rather than on their components or individual personnel, qualitative assessment of system capabilities can be more useful than quantitative metrics.24 Ensuring that such assessments are rigorous can be challenging, but a well-established body of social science methods provides a useful approach.25
Acknowledgments
This manuscript was developed with funding support awarded to the Harvard School of Public Health under a cooperative agreement with the US Centers for Disease Control and Prevention, grant 5P01TP000307-01 (Preparedness and Emergency Response Research Center).
References
1. US Department of Health and Human Services. Global Health Security Agenda: Toward a World Safe & Secure from Infectious Disease Threats. http://www.globalhealth.gov/global-health-topics/global-health-security/GHS%20Agenda.pdf. Accessed May 5, 2014.
2. Inglesby T, Fischer JE. Moving ahead on the global health security agenda. Biosecur Bioterror 2014;12(2):63-65.
3. Zhang Y, Lopez-Gatell H, Alpuche-Aranda C, Stoto MA. Did advances in global surveillance and notification systems make a difference in the 2009 H1N1 pandemic? A retrospective analysis. PLoS One 2013;8(4):1.
4. World Health Organization. International Health Regulations (2005). 2nd ed. Geneva: World Health Organization; 2008.
5. Stoto MA. Syndromic surveillance in public health practice. In: Institute of Medicine. Infectious Disease Surveillance and Detection (Workshop Report). Washington, DC: National Academies Press; 2007:63-72.
6. Lipsitch M, Riley S, Cauchemez S, Ghani AC, Ferguson NM. Managing and reducing uncertainty in an emerging influenza pandemic. N Engl J Med 2009;361(2):112-115.
7. Garske T, Legrand J, Donnelly CA, et al. Assessing the severity of the novel influenza A/H1N1 pandemic. Br Med J 2009;339:b2840.
8. Wong JY, Kelly H, Ip DK, Wu JT, Leung GM, Cowling BJ. Case fatality risk of influenza A (H1N1pdm09): a systematic review. Epidemiology 2013;24(6):830.
9. Domínguez-Cherit G, Lapinsky SE, Macias AE, et al. Critically ill patients with 2009 influenza A(H1N1) in Mexico. JAMA 2009;302(17):1880.
10. Fineberg HV. Pandemic preparedness and response—lessons from the H1N1 influenza of 2009. N Engl J Med 2014;370(14):1335-1342.
11. Stoto MA. The effectiveness of U.S. public health surveillance systems for situational awareness during the 2009 H1N1 pandemic: a retrospective analysis. PLoS One 2012;7(8):e40984.
12. Butler D. Portrait of a year-old pandemic. Nature 2010;464:1112-1113.
13. Fineberg HV, Wilson ME. Epidemic science in real time. Science 2009;324(5930):987.
14. World Health Organization. Pandemic Influenza Risk Management: WHO Interim Guidance. Geneva: World Health Organization; 2013. http://www.who.int/influenza/preparedness/pandemic/influenza_risk_management/en/. Accessed May 5, 2014.
15. Lipsitch M, Finelli L, Heffernan RT, Leung GM, Redd SC. Improving the evidence base for decision making during a pandemic: the example of 2009 influenza A/H1N1. Biosecur Bioterror 2011;9(2):89-114.
16. Lipsitch M, Hayden FG, Cowling BJ, Leung GM. How to maintain surveillance for novel influenza A H1N1 when there are too many cases to count. Lancet 2009;374(9696):1209-1211.
17. Miller E, Hoschler K, Hardelid P, Stanford E, Andrews N, Zambon M. Incidence of 2009 pandemic influenza A H1N1 infection in England: a cross-sectional serological study. Lancet 2010;375(9720):1100-1108.
18. Cowling BJ, Chan KH, Fang VJ, et al. Comparative epidemiology of pandemic and seasonal influenza A in households. N Engl J Med 2010;362(23):2175-2184.
19. Potter MA, Houck OC, Miner K, Shoaf K. Data for preparedness metrics: legal, economic, and operational. J Public Health Manag Pract 2013;19(Suppl 2):S22.
20. Lochner K, Kawachi I, Kennedy BP. Social capital: a guide to its measurement. Health and Place 1999;5(4):259-270.
21. Nelson C, Chan E, Chandra A, et al. Developing national standards for public health emergency preparedness with a limited evidence base. Disaster Med Public Health Prep 2010;4:285-290.
22. Stoto MA. Measuring and assessing public health emergency preparedness. J Public Health Manag Pract 2013;19(Suppl 2):S16-S21.
23. Stoto MA, Nelson CD; LAMPS investigators. Measuring and Assessing Public Health Emergency Preparedness: A Methodological Primer. September 2012. http://lamps.sph.harvard.edu/images/stories/MeasurementWhitePaper.pdf. Accessed May 5, 2014.
24. Piltch-Loeb R, Kraemer J, Stoto MA. A public health emergency preparedness critical incident registry. Biosecur Bioterror 2014;12(3):132-143.
25. Stoto MA, Nelson CD, Klaiman T. Getting from what to why: using qualitative methods in public health systems research. AcademyHealth Issue Brief. November 2013. http://www.academyhealth.org/files/publications/QMforPH.pdf. Accessed June 15, 2014.