Public health institutions at local, regional, and national levels face evolving challenges with limited resources. Multiple forms of data are increasingly available, ranging from streaming statistical data to episodic reports of confirmed disease incidence. While technological tools for collecting and using these data proliferate, economic pressures often preclude a concomitant growth in staff with the required expertise. The intent here is to provide perspective on the evolution of public health surveillance since the late 1990s, to suggest how methodological approaches can be improved, and to recommend areas of growth given mandates for evidence-based policy and practice.
Remarks in this article stem from my last 18 years’ work on surveillance system development at the Johns Hopkins University Applied Physics Laboratory, as consultant to the US Centers for Disease Control and Prevention, and as board member and research committee chair of the International Society for Disease Surveillance. These efforts have been enriched by collaborations with US and international public health partners, both civilian and military.
HISTORICAL DETERMINANTS
Scattered publications on public health surveillance appeared in the mid-20th century. Large programs arose in the late 1990s in response to bioterrorism concerns. Shortly after 2000, efforts focused mainly on early detection, with scant attention to the protocols and resources of the agencies monitoring population health. This focus fueled controversy over whether automated systems could detect outbreaks before astute clinicians, a controversy that delayed useful system development. Nonetheless, to many, the proper motivation for automated surveillance is extending the clinician’s reach and providing situational awareness based on information outside the immediate clinical setting.
In the past 10 years, emphasis has shifted away from early detection. Surveillance system proponents have cited routine situational awareness benefits,1 including tracking disease spread, all-hazard monitoring, rumor control, and clinical decision support.
Progress in health surveillance has often depended on transient funding, hindering continuity and strategic consistency. White House directive HSPD-10 explicitly targeted bioterrorism, and the subsequent HSPD-21 noted that pillars of that directive “are applicable to a broad array of natural and manmade . . . challenges” beyond terrorism.2 However, translation of high-level directives and guidelines into local investigation and response capacity is often no one’s responsibility. Recent funding from the Centers for Disease Control and Prevention has improved local health department capacity for timely data collection from health care providers. Nevertheless, classical issues of control and information access between national and local agencies complicate cooperation on design and implementation tasks, which are often left to underfunded health departments.
IMPROVING SURVEILLANCE METHODOLOGY
Achieving in-depth collaboration among domain experts and technology developers is a difficult challenge. Indeed, in my work with epidemiologists, clinicians, and engineers, barriers of professional culture have proved more problematic than barriers of spoken language. Efficient, useful electronic surveillance requires three categories of expertise: domain-specific (medical, epidemiological, environmental: the drivers of the problems to be solved), technological (e.g., database, network, programming, visualization), and analytical (e.g., mathematics, statistics, machine learning). Individuals with up-to-date expertise in all three categories are rare. Thus, in a technology-driven environment, repurposing information from electronic health records and other sources for surveillance demands collaboration among staff with disparate backgrounds. A common collaboration challenge is the specification of detection performance metrics, mandated in widely referenced publications on automated surveillance.3
In prospective health monitoring, a typical problem is to classify a day or other interval according to whether a current incidence measure merits investigation for the onset of a significant public health threat. Even labeling classifications as true- or false-positives, or true- or false-negatives, may be problematic. Moreover, practical use of these measures requires acceptability criteria, such as a minimum sensitivity or a maximum false-positive rate. However, subject matter experts are often reluctant to specify such criteria. These decisions require operational knowledge as well as trust in, and transferability of, published research. Too often, they have defaulted to implementers lacking the requisite domain knowledge.
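To make these metrics concrete, the minimal sketch below (in Python) pairs a simple moving-baseline alerting rule, similar in spirit to the widely used EARS C2 method, with a day-level evaluation that yields sensitivity and false-positive rate. All parameter values, function names, and labels here are illustrative assumptions, not recommendations from the surveillance literature.

```python
import numpy as np

def c2_style_alerts(counts, baseline=7, guard=2, threshold=3.0):
    """Flag intervals whose count exceeds a moving-baseline mean by
    `threshold` standard deviations (in the spirit of the EARS C2 method;
    the window lengths and threshold are illustrative assumptions)."""
    counts = np.asarray(counts, dtype=float)
    alerts = np.zeros(len(counts), dtype=bool)
    for t in range(baseline + guard, len(counts)):
        # Recent baseline window, excluding a short guard band before day t.
        window = counts[t - guard - baseline:t - guard]
        mu, sigma = window.mean(), max(window.std(ddof=1), 1.0)
        alerts[t] = (counts[t] - mu) / sigma > threshold
    return alerts

def day_level_metrics(alerts, outbreak_days):
    """Sensitivity and false-positive rate from analyst-labeled outbreak days."""
    alerts = np.asarray(alerts, dtype=bool)
    labels = np.asarray(outbreak_days, dtype=bool)
    tp = np.sum(alerts & labels)
    fp = np.sum(alerts & ~labels)
    fn = np.sum(~alerts & labels)
    tn = np.sum(~alerts & ~labels)
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    false_positive_rate = fp / (fp + tn) if (fp + tn) else float("nan")
    return sensitivity, false_positive_rate
```

With such an evaluation in hand, an acceptability criterion such as a minimum sensitivity of 0.8 with no more than one false alarm per month can be checked directly; the point of the sketch is that those thresholds should be set by domain and operational experts rather than defaulting to implementers.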
A general guideline for improving surveillance methodology is to promote system designs that consider all component activities, including data collection, filtering, analysis, visualization, and response.4 Designs and contributing research should account for data quality and reliability. More broadly, before deciding on the number of monitored outcomes, the frequency of analysis, or the spatial resolution of results, designers should account for the human and technology resources available to a health department and for its chief public health concerns and requirements.5 Surveillance system design may thus be viewed as an optimization problem, suggesting the enlistment of operations research analysts and sampling statisticians.
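To illustrate the resource-constrained framing, the brief sketch below checks whether a candidate design's expected alert-investigation workload fits a health department's available staff time. Every figure in it (number of monitored streams, per-analysis false-alarm probability, investigation hours, staff hours) is a hypothetical assumption, not data from any real department.

```python
# Hypothetical capacity check for a candidate surveillance design.
n_streams = 12               # monitored syndrome/region combinations (assumed)
analyses_per_week = 7        # daily analysis of each stream (assumed)
false_alarm_prob = 0.01      # per stream per analysis at the chosen threshold (assumed)
hours_per_followup = 2.0     # analyst time to investigate one alert (assumed)
staff_hours_per_week = 10.0  # time available for alert response (assumed)

expected_alerts = n_streams * analyses_per_week * false_alarm_prob
expected_hours = expected_alerts * hours_per_followup

print(f"Expected false alarms per week: {expected_alerts:.2f}")
print(f"Expected investigation hours per week: {expected_hours:.2f}")
if expected_hours > staff_hours_per_week:
    print("Workload exceeds capacity: reduce streams, analysis frequency, or alert rate.")
else:
    print("Workload fits capacity at this threshold.")
```

Turning such a check into a formal optimization, for example maximizing detection sensitivity subject to a workload constraint, is exactly the kind of problem operations research analysts and sampling statisticians are trained to solve.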
RECOMMENDED DIRECTIONS
Resources should be focused on general public health surveillance to develop systems, protocols, and relationships that enhance situational awareness under normal circumstances and thereby build the acceptance and trust essential in urgent outbreak situations, whether naturally occurring or deliberately caused. The way to achieve progress and support is through local, impactful efforts directed at use cases of widespread concern, such as the opioid epidemic. Too many resources have been squandered on all-encompassing framework building. Lessons learned from focused pilot efforts can lead to the goals envisioned in high-level directives and guidelines.
In project, conference, and advisory trips related to public health systems on five continents, including the Triple-S initiative for European surveillance coordination,6 I have seen substantial ingenuity applied to perceived health threats across varying infrastructure and cultural settings. In keeping with the theme of this journal issue, I recommend providing staff responsible for everyday health monitoring in these settings not just guidelines but also concrete tactics and modular resources for sustainable data acquisition, processing, analysis, and communication of evidence and derived findings.
An important corollary to considering monitored populations’ needs and constraints is careful investment in the requirements of localities and nations that lack the infrastructure, basic necessities such as clean water, and trained staff available in advantaged settings. Such areas are subject to infectious diseases no longer endemic elsewhere, include large populations with limited access to care, and may harbor new pandemic threats. Support for technologies and systems that have achieved successes in these areas should be increased.1
Such use case–based programs can be strengthened by encouraging and enabling surveillance staff to work with other divisions specializing in injury, mental health, chronic diseases, or environmental hazards. Success also requires increased and sustained staffing and training expenditures comparable to technology expenditures.
I also recommend initiatives to encourage academic and industry research relevant to public health practice. A key step is improving the availability of sufficient data to enable research while preserving patient privacy. Also essential is cooperation with academia on standardized evaluation methods to elevate the science of population health surveillance to the status of patient-based clinical research. Too many faculty leaders and postdoctoral researchers have abandoned disease surveillance for application domains without these complications.
Another recommendation is to ascertain the practical niche of novel data sources that have inspired dissertations and journal articles but produced negligible benefit to real-life surveillance. Many have published proofs-of-concept of methods exploiting the wealth of information in social media data, but conversations with practitioners have indicated a persistent gap between concept and routine implementation.
Lastly, recent advances in biotechnology fields such as genomic surveillance and clinical laboratory science demonstrate the need to fuse existing and emerging surveillance methodologies,7 but only where fusion yields sustainable, cost-effective health-monitoring benefits, not merely because the technologies exist. In conclusion, I agree with Gardy et al. that it is “time to shake up public health surveillance,”7 but only as informed by substantive, cross-disciplinary collaboration and judicious adaptation of emerging technologies, guided by the goals and constraints of public health infrastructures.
ACKNOWLEDGMENTS
The author acknowledges funding support from the Johns Hopkins Applied Physics Laboratory Stuart S. Janney Publication Program.
Program Area Manager Sheri Lewis supplied helpful advice in the writing of this article.
REFERENCES
1. Blazes DL, Happel-Lewis S, editors. Disease Surveillance: Technological Contributions to Global Health Security. Boca Raton, FL: CRC Press; 2016.
2. The White House. HSPD-21: Public Health and Medical Preparedness. 2007. Available at: https://fas.org/irp/offdocs/nspd/hspd-21.htm. Accessed February 20, 2017.
3. Buehler JW, Hopkins RS, Overhage JM, Sosin DM, Tong V. Framework for evaluating public health surveillance systems for early detection of outbreaks: recommendations from the CDC Working Group. MMWR Recomm Rep. 2004;53(RR-5):1–11.
4. Burkom HS, Loschen WA, Mnatsakanyan ZR, Lombardo JS. Tradeoffs driving policy and research decisions in biosurveillance. Johns Hopkins APL Tech Dig. 2008;27(4):299–312.
5. Lescano AG, Larasati RP, Sedyaningsih ER, et al. Statistical analyses in disease surveillance systems. BMC Proc. 2008;2(suppl 3):S7. doi:10.1186/1753-6561-2-s3-s7.
6. Triple S Project. Assessment of syndromic surveillance in Europe. Lancet. 2011;378(9806):1833–1834. doi:10.1016/S0140-6736(11)60834-9.
7. Gardy J, Loman NJ, Rambaut A. Real-time digital pathogen surveillance—the time is now. Genome Biol. 2015;16(1):155. doi:10.1186/s13059-015-0726-x.
