Abstract
Zika virus provides an example of a disease for which public health surveillance is based primarily on health care providers’ notifications of potential cases to health departments. Such case-based surveillance is commonly used to understand the spread of disease in a population.
However, case-based surveillance is often biased: whether testing is done, which tests are used, and the accuracy of the results depend on a variety of factors, including test availability, patient demand, perceptions of where transmission is occurring, and patient and provider awareness. The resulting surveillance artifacts can provide misleading information on the spread of a disease in a population, with significant implications for public health practice.
To better understand this challenge, we first summarize the process that health departments use to generate surveillance reports and then describe the factors that influence testing and reporting patterns at the patient, provider, and contextual levels. We then describe public health activities, including active surveillance, that influence patient and provider behavior as well as surveillance reports, and we conclude with a discussion of how surveillance data should be interpreted and of approaches that could improve the validity of surveillance reports.
As with most infectious diseases, public health surveillance for Zika virus is based primarily on health care provider notifications to health departments of potential cases of individuals with Zika virus disease (ZVD). Health departments use these case reports to track the ebb and flow of infections in state and local areas, evaluate the effectiveness of prevention and control programs, and guide public health actions. In particular, surveillance reports inform decisions about whether travel to an area should be discouraged, with attendant costs to the tourism industry, or allowed, running the risk not only of more infections but also of spread to the areas to which travelers return.
Although presented as simple counts of individuals with the condition, Zika surveillance data are the result of a complex process in which contextual, organizational, and human factors influence whether a case is identified. For example, clinicians rely on public health agencies for guidance on how and when to conduct screening and diagnostic testing. Whether testing is done, which tests are used, and the accuracy of the results depend on test availability, patient demand, where transmission is thought to be occurring, and patient and provider awareness of these factors. As we demonstrate in this analysis, all of these factors have changed over the course of the Zika outbreak in the United States. They also vary from place to place and among population groups.
As a consequence, trends and differentials in surveillance data reflect not only actual disease patterns but also “surveillance artifacts” generated by testing and reporting behavior. Furthermore, because the case definition depends in part on whether an individual was likely exposed to the virus by means of residence, travel, or contact with an infected individual, there is an inherent circularity in the number of cases reported. These factors influence many case-based surveillance systems, but the combination may be especially consequential for ZVD.
The goal of this article is to use the ongoing US Zika virus outbreak to illustrate how case-based surveillance data can provide incomplete information on the spread of a disease in a population and to highlight the challenges that public health practitioners and policymakers face in drawing conclusions from these data. Rather than summarize the purposes of surveillance generally, we use a case-based approach to examine the practical challenges that exist. We first summarize the process that health departments use to generate surveillance reports, then describe the patient, provider, and contextual factors that influence testing and reporting patterns. We then describe public health activities, including active surveillance, that influence patient and provider behavior as well as surveillance reports. We conclude with a discussion about the interpretation of surveillance data and approaches that could improve the validity of surveillance reports.
CASE-BASED SURVEILLANCE
Surveillance for Zika is based primarily on health care provider reports of cases of individuals with ZVD using a “notifiable disease” system. Providers and laboratories report potential cases to their state or local health departments, which classify them as probable or confirmed cases of ZVD, Zika virus infection, or congenital infection depending on laboratory test results, clinical criteria, and other factors. Typically, the provider passes samples to the laboratory, and the laboratory is expected to inform the state health department. When laboratories are private, reporting is not always complete or timely, and even when laboratories are public, these submissions often lack critical contextual information about the patient. Health departments tabulate the number of cases by date, location, and patient characteristics and publish summary reports.1,2
This system, the standard approach to public health surveillance for centuries, seems simple. But surveillance reports are the result of a complex process involving the following:
1. Public health guidance, which is based on knowledge of how Zika virus is transmitted and where local transmission may be taking place;
2. Testing capacity in both public health and private laboratories;
3. Patients seeking medical attention or testing;
4. Providers’ recommendations to their patients regarding testing and reporting cases to public health authorities;
5. The interpretation of test results and classification of cases; and
6. Public health surveillance activities including active surveillance, mosquito testing, and case classification and tabulation procedures.
Tabulations based on such passive surveillance systems are known to undercount the actual number of cases because some individuals are never diagnosed and others are not reported, and this “iceberg” phenomenon appears to hold for ZVD. Chevalier et al., for instance, used data collected from screening of blood donations to estimate that there were more than 400 000 Zika infections in Puerto Rico between April and August 2016, compared with 10 000 reported cases in the same period.3 Just as a constant fraction of an iceberg lies below the water line, epidemiologists sometimes assume that a constant fraction of cases is reported; consequently, the resulting tabulations are often regarded as accurate reflections of trends and differences among population groups.4–6 When it comes to ZVD, however, the validity of that assumption is questionable.
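The scale of this undercounting can be illustrated with simple arithmetic. The sketch below uses the approximate Puerto Rico figures cited above; the later monthly count and the constant-reporting-fraction assumption are purely illustrative, not estimates from any surveillance system.

```python
# Illustrative sketch: implied reporting fraction and what it would mean for
# interpreting reported case counts. Figures are the approximate Puerto Rico
# estimates cited above; the constant-reporting-fraction assumption is the
# "iceberg" assumption discussed in the text, not an established fact.
estimated_infections = 400_000   # blood-donor-based estimate, Apr-Aug 2016
reported_cases = 10_000          # cases reported in the same period

reporting_fraction = reported_cases / estimated_infections
print(f"Implied reporting fraction: {reporting_fraction:.1%}")  # ~2.5%

# If (and only if) that fraction were constant, a later reported count could
# be scaled up to a rough estimate of true infections.
later_reported = 1_200           # hypothetical monthly report
print(f"Rough implied infections: {later_reported / reporting_fraction:,.0f}")
```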
PUBLIC HEALTH GUIDANCE
In the United States, states and territories individually determine whether notification of health departments of Zika virus infection is required, and the Council of State and Territorial Epidemiologists maintains case definitions to ensure consistency.7 As summarized in the Appendix (available as a supplement to the online version of this article at http://www.ajph.org), the Council’s definition classifies cases as probable or confirmed cases of ZVD, Zika virus infection, or congenital infection depending on laboratory test results, clinical criteria, and “epidemiological linkage.”
Epidemiological linkage involves a combination of risk factors such as recent travel to areas with ongoing transmission; having sexual contact with partners who recently traveled to such an area; being a recipient of blood, blood products, or an organ transplant from a person with known infection; or clinical suspicion of mosquito-borne transmission. Whether a case is regarded as epidemiologically linked, therefore, depends on knowledge of where transmission is taking place, which in turn depends—in a circular fashion—on accurate case finding, testing, and surveillance reports. The providers who report a case usually collect the patient information needed to classify a case, and if testing is to be done at public health laboratories, they submit the information at the time they request the test. Positive test results from commercial laboratories are included in public health surveillance systems, although the timeliness and completeness of associated patient information—and the ability to correctly classify the case—varies.
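To make this circularity concrete, the following schematic sketch shows, in simplified form, how laboratory evidence, clinical criteria, and epidemiological linkage might combine in case classification. It is a hypothetical illustration of the structure of such definitions, not the Council’s actual case definition (see the Appendix).

```python
# Schematic (hypothetical) case-classification logic illustrating how
# laboratory results, clinical criteria, and epidemiological linkage combine.
# This is NOT the CSTE case definition; it only shows the structure that
# creates the circularity discussed in the text: "epi_linked" itself depends
# on where transmission is already believed to be occurring.
def classify(confirmatory_lab: bool, presumptive_lab: bool,
             clinically_compatible: bool, epi_linked: bool) -> str:
    if confirmatory_lab:
        return "confirmed"
    if presumptive_lab and clinically_compatible and epi_linked:
        return "probable"
    return "not a case"

# A traveler returning from an area with known transmission is epi-linked;
# an identical presentation without that link may never be counted.
print(classify(False, True, True, epi_linked=True))   # probable
print(classify(False, True, True, epi_linked=False))  # not a case
```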
TESTING CAPACITY
Early in the US Zika outbreak, the only laboratories capable of testing samples were state and local health departments. And because testing capacity was limited, state and local health departments (working in collaboration with the Centers for Disease Control and Prevention [CDC]) determined whether a sample was eligible to be tested by using algorithms that involve clinical criteria, especially symptoms such as fever, rash, arthralgia, and conjunctivitis; associated conditions such as Guillain-Barré syndrome; high-risk situations such as pregnancy; discovery of prenatal or neonatal outcomes such as microcephaly or other specified conditions; and epidemiological linkage. These guidelines are adapted by state and local jurisdictions on the basis of knowledge of local transmission and test capacity.
Subsequently, private laboratories began to offer Zika virus testing independently to providers and patients. Some offer both reverse-transcriptase polymerase chain reaction and enzyme-linked immunosorbent assay Zika tests, but confirmatory tests are not always available. We have found that the cost of a Zika test ranges from $200 to $3000 when paid entirely out of pocket, and testing may not be covered by insurance, creating a potential surveillance bias toward those with the ability to pay.8 Some states do not allow private Zika testing, and CDC discourages it.9
Early in the outbreak, only those with known risks were eligible to be tested, so the prior (pretest) probability of infection was likely to be relatively high, and so was the positive predictive value.10 As 2016 and 2017 progressed, however, testing facilities became more available and constraints were relaxed; consequently, the prior probability of infection among those tested probably dropped. The logical consequence was that even with good test sensitivity and specificity, an increasing proportion of individuals who tested positive did not actually have Zika virus infection.8,10
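The underlying logic is Bayes’ theorem: as the pretest probability falls, so does the positive predictive value, even for an accurate test. The sketch below illustrates the effect with hypothetical sensitivity, specificity, and pretest probability values; these are not the performance characteristics of any particular Zika assay.

```python
# Illustrative sketch of how positive predictive value (PPV) falls as the
# prior (pretest) probability of infection falls, even with a good test.
# All values below are hypothetical.
def positive_predictive_value(prior: float, sensitivity: float, specificity: float) -> float:
    true_pos = sensitivity * prior
    false_pos = (1 - specificity) * (1 - prior)
    return true_pos / (true_pos + false_pos)

sens, spec = 0.95, 0.95  # hypothetical test characteristics
for prior in (0.30, 0.10, 0.01):  # early restricted testing -> broader testing
    ppv = positive_predictive_value(prior, sens, spec)
    print(f"prior={prior:.2f}  PPV={ppv:.2f}")
# prior=0.30 -> PPV ~0.89; prior=0.01 -> PPV ~0.16: most positives are false.
```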
PATIENTS’ TEST-SEEKING BEHAVIOR
Whether a case is reported depends, in the first instance, on individuals’ decisions to seek medical assessment. These decisions are influenced by public health and other guidance; individual-level characteristics such as access to health care and personal health beliefs; the advice individuals receive from their physicians and other health care providers; what the media says about these matters; and their access to different media sources.11
According to surveys conducted over the course of 2016, there was substantial variation in public awareness and knowledge, both over time and among different population groups. National random-digit-dial telephone surveys show that awareness changed in a short period of time. In 1 study, the proportion of the US public aware of Zika rose fairly steadily from 74% in March to nearly 95% in August.12,13 There are no comprehensive data on test requests, but 1 private laboratory reported that the demand for testing doubled between July and August of 2016.9
A national survey in March 2016 showed that 75% of individuals were aware that Zika virus causes birth defects and 60% were aware that it could be transmitted sexually.14 In a survey conducted in April and May of 2016, investigators at New York University found that 38% of the population knew all key scientific elements of Zika: that it can be sexually transmitted, cause birth defects, and be an asymptomatic infection. When the New York University study was repeated in July and August 2016, knowledge of these particular characteristics of Zika was unchanged even though national awareness of the disease had increased.13
Although awareness does not predict health action, theory suggests, and research on previous emerging disease threats has found, that awareness is a precursor to health behavior change.15–18 Many of the factors that influence whether testing is done also vary across locations and among population groups. For example, awareness of the Zika virus varied across demographic groups, with women, older adults, non-Hispanic White adults, those with higher incomes, and those with higher education more likely to be aware.13 Berenson et al. examined knowledge among pregnant women in Zika-prone areas, in particular low-income women attending prenatal clinics in southern Texas in summer 2016, and found that 60% were not aware that Zika can be sexually transmitted.19
The New York City Department of Health and Mental Hygiene (DOHMH) found that Zika testing was relatively less common in census tracts with high densities of individuals from countries with active Zika transmission. On the assumptions that getting a Zika test is a rough proxy for Zika awareness and that individuals born in affected regions are more likely to travel there, this finding suggests that awareness was lower in the population most at risk. The analysis prompted a shift in DOHMH’s intervention strategies, initiating visits to public hospitals, local providers, nail salons, beauty salons, and clinics frequented by women in target ethnic neighborhoods. By September 2016, there had been an increase in the number of tests ordered in these target census tracts.20
PROVIDERS’ BEHAVIOR
Because they are based on notifications to health departments, case counts depend on providers’ decisions to advise patients to be tested, to send samples for testing, and to fill out the correct paperwork (in cases in which the samples have to be sent to the health department or a central laboratory), as well as on correct specimen collection and testing by phlebotomists and laboratory personnel. These actions depend in turn on providers’ understanding of their patients’ risk factors and of their own reporting responsibilities, the availability and cost of testing facilities, public health and other guidance, and what the media says about these matters.21–23
In general, provider reporting is very incomplete. For example, reviews of physician practices for ordering tests for sexually transmitted infections have found multiple barriers to testing, including time, consent processes, burdensome reporting procedures, lack of patient acceptance, competing priorities, and systems issues such as reimbursement. Depending on the disease, providers generally report fewer than 30% of cases, often weeks after laboratory results are available. Indeed, awareness of notification requirements varies by disease but is generally low.22,24
Furthermore, these factors can vary over time. Studies of passive surveillance during the 2009 H1N1 outbreak, for instance, suggested that the proportion of cases reported declined over time, perhaps because of “surveillance fatigue,” which may occur as patients or providers learn that most cases do not require medical attention or clinical testing.19,20 During the Ebola outbreak in West Africa, people in some communities stopped cooperating with health workers out of fear. As a result, surveillance data suggested a drop in new cases until so many people had become sick in a community that it was no longer possible to conceal them.25 During the Zika outbreak, differences between Puerto Rico and CDC in surveillance systems set up to identify infants and fetuses with Zika-related birth defects have led some to suggest that Puerto Rico is downplaying the extent of its Zika problem.26
PUBLIC HEALTH SURVEILLANCE ACTIVITIES
Public health surveillance systems also include cases that are identified through “active surveillance,” in which health departments proactively work to identify cases. The 2016 South Florida outbreak (the primary US example of active surveillance for Zika) illustrates a number of ways in which cases were identified. For instance, after health officials established that Zika virus was being transmitted in the Wynwood neighborhood of Miami, Florida, in July, efforts were undertaken to identify and submit samples for testing of (1) close household and workplace contacts of the known cases of individuals who had no history of travel to affected areas and (2) employees at workplaces and customers of businesses in areas of known or suspected transmission, especially outdoor workplaces with standing water nearby. Community surveys were also undertaken among individuals who lived in areas of known or suspected local transmission, who lived in areas bordering on these zones, and who attended health clinics that served the affected areas. In these community surveys, all or a representative sample of individuals were tested regardless of whether they exhibited symptoms or there was direct evidence of epidemiological linkage other than living in an affected area.27 Although this might be good public health insofar as it helps identify those at greatest risk of infection, it produces a nonrepresentative sample for surveillance case reports.
By September 1, a total of 29 individuals with laboratory evidence of recent Zika virus infection and likely exposure in the Wynwood area had been identified. Even with these extensive efforts, however, the case counts were not complete, because most infections are asymptomatic and those individuals do not seek medical care. Furthermore, some might have been infected earlier and no longer had Zika virus RNA detectable in their urine.27 Indeed, these were not the earliest cases; Grubaugh et al. used genomic epidemiology methods to estimate that there were at least 4, and possibly as many as 40, introductions of Zika virus that led to local transmission in Florida, beginning several months before its initial detection in July 2016.28
Similarly, between June and October 2016, the New York City DOHMH conducted several enhanced surveillance efforts for locally acquired mosquito-borne Zika virus infections. These included (1) sentinel surveillance in neighborhoods chosen on the basis of high counts of travel-associated Zika infections or having an environmental habitat conducive to mosquito breeding; (2) enhanced passive surveillance, in particular notifying providers to be alert; (3) prioritizing Zika-associated routine case reports and laboratory testing; and (4) adding a Zika-like illness category to an existing emergency department syndromic surveillance system. Although no evidence of local transmission was detected, these enhanced surveillance efforts identified 15 suspected cases, 308 emergency department visits, and 17 spatiotemporal clusters of emergency department visits for fever that would otherwise not have appeared in surveillance data.29
Testing for Zika virus in mosquito pools is another important public health surveillance activity, because mosquito surveillance may trigger active surveillance as well as the classification of suspected cases. The discovery of mosquitoes infected with the virus in Miami Beach in September 2016 led to active surveillance that identified additional human cases.30,31 The yield of mosquito testing, however, depends on when and where testing is done, the type of specimens tested (e.g., adult mosquitoes vs larvae), the motivation for testing (active vs passive surveillance), and the placement of insect traps (e.g., in areas with the greatest likelihood of infection). Mosquito testing is also far from universal. In a study of 381 local vector-control departments and districts in the 10 states in the southern United States most likely to be affected by Zika virus, only 67% conducted routine surveillance for mosquitoes through standardized trapping and species identification.32 This is probably an overestimate if the 46% of departments that did not respond to the survey were less likely to conduct surveillance. Furthermore, local areas in other states are less likely to conduct mosquito surveillance or even to have a vector-control department.
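The caveat about nonresponse can be quantified with a simple weighted average. In the sketch below, the response rate and the rate among responders come from the survey figures cited above, whereas the surveillance rate assumed for nonresponders is purely hypothetical.

```python
# Illustrative sketch of nonresponse bias in the vector-control survey cited
# above. The 67% figure applies only to responding departments; if
# nonresponders conduct surveillance less often, the overall rate is lower.
response_rate = 0.54                     # 46% of departments did not respond
surveillance_if_responded = 0.67         # reported among responders
surveillance_if_not_responded = 0.30     # hypothetical assumption for illustration

overall_rate = (response_rate * surveillance_if_responded
                + (1 - response_rate) * surveillance_if_not_responded)
print(f"Overall surveillance rate under this assumption: {overall_rate:.0%}")  # ~50%
```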
Public health surveillance data are reported on the basis of place of residence rather than where the exposure might have taken place. For example, a man who lives in New York City but visited South Florida during a time of local transmission would have his case counted in New York City. This is done partly for pragmatic reasons, such as to avoid double counting. It also obviates the need to make a judgment about where the transmission might have taken place. However, there is a degree of circularity in the practice, as New York City officials might not count his case unless they were aware that transmission was taking place in Florida. For instance, there seem to have been at least 8 individuals from other states or countries who were infected in Florida in 2016 beyond the 56 included in the state counts.30
The timing of case reports also adds a degree of uncertainty to the interpretation of surveillance data. The time of infection is usually not known unless it was connected to a known exposure during a limited trip to an affected area, or, sometimes, sexual contact with an exposed individual. Rather, surveillance reports are typically based on diagnoses made following the onset of symptoms, which, for a condition with mild symptoms such as ZVD, can be difficult to know with precision. Furthermore, delays in reporting to the health department and test completion—especially when testing facilities are limited—can mean that trends in newly confirmed cases seriously lag behind the time of infection.
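The consequence of reporting delays for apparent trends can be illustrated by convolving a weekly infection curve with a delay distribution. Both series in the sketch below are hypothetical and serve only to show how the reported curve trails the true one.

```python
# Illustrative sketch: weekly infections vs weekly confirmed reports when
# each case is reported with a delay. The infection curve and the delay
# distribution are both hypothetical.
infections = [5, 20, 60, 100, 80, 40, 15, 5]   # new infections per week
delay_pmf = [0.0, 0.2, 0.5, 0.3]               # P(report k weeks after infection)

weeks = len(infections) + len(delay_pmf) - 1
reports = [0.0] * weeks
for week, n in enumerate(infections):
    for k, p in enumerate(delay_pmf):
        reports[week + k] += n * p

print("infections:", infections)
print("reports:   ", [round(r) for r in reports])
# The reported curve peaks roughly two weeks after the true peak, so a
# decline in infections is not visible in surveillance data until later.
```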
COMMENTARY
The purpose of surveillance, both passive and active, is to detect and measure disease; how decision-makers and the public interpret the system’s outputs is a separate concern. The purpose of this review is to identify the challenges of making clinical and public health decisions on the basis of those outputs. Taking into account the factors that influence both testing and reporting, it is reasonable to assume that Zika surveillance reports, like those of most case-based surveillance systems, substantially undercount the number of true infections. Moreover, who is screened, why, and where they live or have traveled all vary over time and among population groups. Which tests are done, and when they are done relative to the time of exposure, also vary.
All of these factors may depend on testing capacity, public health guidance, and where local transmission is taking place, as well as on public and provider awareness and knowledge. Awareness and knowledge, in turn, depend on what the media says about these matters and on individuals’ access to different information sources, personal beliefs, and health services. As a consequence, trends in case counts as well as geographical and other differentials may reflect surveillance “artifacts” as much as real patterns. Furthermore, because the case definition depends in part on epidemiological linkage, and because active surveillance may be triggered by suspicion of local transmission, there is an inherent circularity in the number of cases reported. Differing criteria for epidemiological linkage across jurisdictions make differences and changes in the data harder to interpret as real differences in incidence and prevalence.
Epidemiologists recognize these potential biases and typically present their surveillance reports with appropriate caveats. However, when a disease is emerging and the need for information is urgent, methodological points that seem subtle to policymakers and the public can easily become lost.33 Many of these biases can be summarized as “the harder one looks, the more cases will be found.” Ambiguity about the actual epidemiological patterns, therefore, can make it easier for state and local officials concerned with economic implications to downplay reported cases and fail to initiate active surveillance and mosquito testing, as some have suggested.30
Beyond awareness of potential biases, one solution is to develop population-based surveillance systems that are less dependent on individuals’ and their physicians’ decisions and, thus, less sensitive to the circular effect described in this article. Such systems could also improve the interpretation of existing data. One possibility would be to use samples drawn for other purposes, such as blood donation, as was done in Puerto Rico.3 Although these are not representative populations, statistical modeling based on data about who is likely to donate blood can be used to create population-based estimates.34 Blinded surveillance could also be conducted on a representative sample of women in prenatal care or giving birth, which would provide unbiased estimates for a population of interest. Blinded testing of nationally representative samples can also be used to estimate the prevalence of Zika virus infection, as was done in England and Hong Kong during the 2009 H1N1 influenza pandemic.34,35 None of these solutions would be a panacea; imperfect sensitivity and specificity would still cause estimates of incidence and prevalence to be somewhat inaccurate. However, trends over time and comparisons across locations would be more likely to be accurate.
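One simple form of such statistical adjustment is poststratification: positivity rates observed within strata of a convenience sample (e.g., blood donors) are reweighted to the known composition of the population. The sketch below is a minimal illustration with hypothetical numbers, not the method used in the Puerto Rico analysis.

```python
# Minimal poststratification sketch: reweight stratum-specific positivity
# observed in a nonrepresentative sample (e.g., blood donors) to the known
# population composition. All numbers are hypothetical.
sample = {            # stratum: (n tested, n positive) among, e.g., donors
    "18-29": (2_000, 30),
    "30-49": (5_000, 50),
    "50+":   (3_000, 15),
}
population_share = {  # known population composition (e.g., from the census)
    "18-29": 0.35,
    "30-49": 0.40,
    "50+":   0.25,
}

crude = sum(pos for _, pos in sample.values()) / sum(n for n, _ in sample.values())
adjusted = sum(population_share[s] * (pos / n) for s, (n, pos) in sample.items())

print(f"Crude positivity in sample:     {crude:.2%}")     # ~0.95%
print(f"Population-weighted positivity: {adjusted:.2%}")  # ~1.05%
```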
HUMAN PARTICIPANT PROTECTION
Institutional review was not needed because this work did not involve human participants.
REFERENCES
1. Roush S, Birkhead G, Koo D, Cobb A, Fleming D. Mandatory reporting of diseases and conditions by health care professionals and laboratories. JAMA. 1999;282(2):164–170.
2. Chorba TL, Berkelman RL, Safford SK, Gibbs NP, Hull HF. Mandatory reporting of infectious diseases by clinicians. MMWR Recomm Rep. 1990;39(RR-9):1–17.
3. Chevalier MS, Biggerstaff BJ, Basavaraju SV, et al. Use of blood donor screening data to estimate Zika virus incidence, Puerto Rico, April–August 2016. Emerg Infect Dis. 2017;23(5):790–795.
4. Last JM. The iceberg: “completing the clinical picture” in general practice. Int J Epidemiol. 2013;42(6):1608–1613.
5. Last JM. Commentary: the iceberg revisited. Int J Epidemiol. 2013;42(6):1613–1615.
6. Armstrong D. Commentary: the discovery of hidden morbidity. Int J Epidemiol. 2013;42(6):1617–1619.
7. Centers for Disease Control and Prevention. Zika virus disease and Zika virus infection 2016 case definition, approved June 2016. 16-ID-01. Available at: https://wwwn.cdc.gov/nndss/conditions/zika/case-definition/2016/06. Accessed June 19, 2018.
8. Goldfarb IT, Jaffe E, Lyerly AD. Responsible care in the face of shifting recommendations and imperfect diagnostics for Zika virus. JAMA. 2017;318(21):2075–2076.
9. Rabin RC. Want a Zika test? It’s not easy. New York Times. September 19, 2016. Available at: https://www.nytimes.com/2016/09/20/well/live/want-a-zika-test-its-not-easy.html. Accessed June 19, 2018.
10. Lin K, Kraemer J, Piltch-Loeb R, Stoto M. Zika virus testing and screening: clinical interpretation of test results under epidemiologic uncertainty. J Am Board Fam Med. In press.
11. DiMatteo MR, Sherbourne CD, Hays RD, et al. Physicians’ characteristics influence patients’ adherence to medical treatment: results from the Medical Outcomes Study. Health Psychol. 1993;12(2):93–102.
12. The Zika virus: gaps in Americans’ knowledge and support for government action. Chicago, IL: March of Dimes/NORC at the University of Chicago; 2016.
13. Abramson D, Piltch-Loeb R. US public’s perception of Zika risk: awareness, knowledge, and receptivity to public health interventions. New York, NY: New York University College of Global Public Health; 2016.
14. Harvard Opinion Research Program. Many US families considering pregnancy don’t know Zika facts. Harvard Chan School of Public Health. 2016. Available at: https://www.hsph.harvard.edu/news/press-releases/zika-virus-awareness-pregnant-women. Accessed June 19, 2018.
15. Cheng C, Ng A-K. Psychosocial factors predicting SARS-preventive behaviors in four major SARS-affected regions. J Appl Soc Psychol. 2006;36(1):222–247.
16. Jiang X, Elam G, Yuen C, et al. The perceived threat of SARS and its impact on precautionary actions and adverse consequences: a qualitative study among Chinese communities in the United Kingdom and the Netherlands. Int J Behav Med. 2009;16(1):58–67.
17. Rudisill C. How do we handle new health risks? Risk perception, optimism, and behaviors regarding the H1N1 virus. J Risk Res. 2013;16(8):959–980.
18. Gidado S, Oladimeji AM, Roberts AA, et al. Public knowledge, perception and source of information on Ebola virus disease–Lagos, Nigeria; September, 2014. PLoS Curr. 2015;7.
19. Berenson AB, Trinh HN, Hirth JM, Guo F, Fuchs EL, Weaver SC. Knowledge and prevention practices among US pregnant immigrants from Zika virus outbreak areas. Am J Trop Med Hyg. 2017;97(1):155–162.
20. Raphael M. Zika virus—preparing for 2017: CDC, state and local response. Oral presentation at: National Association of County and City Health Officials Public Health Preparedness Summit; April 25, 2017; Atlanta, GA.
21. James L, Roberts R, Jones RC, et al. Emergency care physicians’ knowledge, attitudes, and practices related to surveillance for foodborne disease in the United States. Clin Infect Dis. 2008;46(8):1264–1270.
22. Fill MA, Murphree R, Pettit AC. Health care provider knowledge and attitudes regarding reporting diseases and events to public health authorities in Tennessee. J Public Health Manag Pract. 2017;23(6):581–588.
23. Voss S. How much do doctors know about the notification of infectious diseases? BMJ. 1992;304(6829):755.
24. Burke RC, Sepkowitz KA, Bernstein KT, et al. Why don’t physicians test for HIV? A review of the US literature. AIDS. 2007;21(12):1617–1624.
25. Stern J. Hell in the hot zone. Vanity Fair. October 2014. Available at: https://www.vanityfair.com/news/2014/10/ebola-virus-epidemic-containment. Accessed June 19, 2018.
26. Branswell H. Feud erupted between CDC, Puerto Rico over reporting of Zika cases, document shows. Stat News. May 1, 2017. Available at: https://www.statnews.com/2017/05/01/zika-virus-puerto-rico-cdc. Accessed June 19, 2018.
27. Likos A. Local mosquito-borne transmission of Zika virus—Miami-Dade and Broward Counties, Florida, June–August 2016. MMWR Morb Mortal Wkly Rep. 2016;65(38):1032–1038.
28. Grubaugh ND, Ladner JT, Kraemer MUG, et al. Genomic epidemiology reveals multiple introductions of Zika virus into the United States. Nature. 2017;546(7658):401–405.
29. Wahnich A, Clark S, Bloch D, et al. Surveillance for mosquitoborne transmission of Zika virus, New York City, NY, USA, 2016. Emerg Infect Dis. 2018;24(5):827–834.
30. Chang D. Florida’s Zika undercount hides extent of virus’ spread, experts say. Miami Herald. September 10, 2016. Available at: http://www.miamiherald.com/news/health-care/article100939277.html. Accessed June 19, 2018.
31. Florida Department of Agriculture and Consumer Services. Miami-Dade mosquitoes test positive for Zika. Florida Trend. October 18, 2016. Available at: http://www.floridatrend.com/article/20863/miami-dade-mosquitoes-test-positive-for-zika. Accessed June 19, 2018.
32. Mosquito surveillance and control assessment in Zika virus priority jurisdictions. Washington, DC: National Association of County and City Health Officials; 2016.
33. Stoto MA. The effectiveness of US public health surveillance systems for situational awareness during the 2009 H1N1 pandemic: a retrospective analysis. PLoS One. 2012;7(8):e40984.
34. Miller E, Hoschler K, Hardelid P, Stanford E, Andrews N, Zambon M. Incidence of 2009 pandemic influenza A H1N1 infection in England: a cross-sectional serological study. Lancet. 2010;375(9720):1100–1108.
35. Cowling BJ, Chan KH, Fang VJ, et al. Comparative epidemiology of pandemic and seasonal influenza A in households. N Engl J Med. 2010;362(23):2175–2184.