American Journal of Public Health
Editorial. 2021 Dec;111(12):2085. doi: 10.2105/AJPH.2021.306553

Surveillance, Surveys, and COVID-19

Denys T Lau, Paulina Sosa, Nabarun Dasgupta, Hua He
PMCID: PMC8667822  PMID: 34878882

Surveillance and survey data are critical for informing effective and timely public health actions, particularly during pandemics such as COVID-19 and other public health emergencies. Surveillance and survey programs track major life events such as births and deaths, disease distribution and wellness progression, and health care utilization across populations, geographies, and time. As our understanding of COVID-19 evolves, our surveillance and survey approaches must quickly adapt to meet the growing data needs of public health officials, researchers, and the public. These reformed programs will form the bedrock for a new generation of health informatics.

Because of COVID-19 safety concerns and varying stay-at-home orders imposed across the country, data collection and processing were disrupted, especially for programs that relied on person-to-person interactions or onsite manual reviews. As protocols and content were modified, surveillance and survey programs needed to address key dimensions of data quality: (1) accuracy and consistency, (2) timeliness, (3) efficiency and burden, and (4) relevance. These issues are raised in this special edition on COVID-19’s impact on public health surveillance and survey programs in the United States.

First, programs needed to ensure data accuracy and consistency. For example, detailed death certification guidance and automated and manual coding instructions for cause of death had to be rapidly developed to help certifiers accurately record COVID-19 deaths. Furthermore, standardizing COVID-19 case definitions on death records is needed to yield more accurate and consistent comparisons across jurisdictions and over time.

Second, programs needed to ensure timely data dissemination. For example, to provide timelier data about the impact of COVID-19 on care providers, preliminary estimates from the National Health Care Surveys will be published via a data dashboard ahead of the release of final official data files. Federal health surveys may have an even more critical role in informing the public, as state-level pandemic dashboards are being decommissioned (https://n.pr/3hEH9cy). Within a media-rich public environment, timely data are now a public expectation, from small companies making occupational health decisions to large health care organizations predicting caseloads. Many data systems could benefit from clearer descriptions of how the data arose and how they should be analyzed.

Third, programs needed to ensure efficiency, striking a balance between minimizing the burden on reluctant survey respondents and maximizing safety while collecting critical pandemic-related data without sacrificing data quality. For example, like many other health surveys, the Medical Expenditure Panel Survey had to suspend almost all in-person field activities and pivoted to conducting most interviews by telephone, a less expensive option. Furthermore, multiple federal agencies collaborated quickly to launch two online data collection platforms to efficiently collect COVID-19–related information: the National Center for Health Statistics’ Research and Development Survey and the Census Bureau’s Household Pulse Survey.

Finally, programs needed to ensure data relevance by replacing lower-priority content with new COVID-19–related items. Although changes to major surveys traditionally have been phased in slowly to ensure data continuity, more dynamic surveys are required to monitor different aspects of emerging public health crises. For example, the National Health and Nutrition Examination Survey will include antibody testing to provide data on undiagnosed COVID-19 infections. The California Health Interview Survey integrated new COVID-19 items on anti-Asian rhetoric and hate incidents targeting Asian, Native Hawaiian, and Pacific Islander communities in California.

The essays in this special edition address what new research opportunities may be gained from collecting new COVID-19 information, how data quality and continuity may change through program design modifications, and what lessons are gained from this process that may inform future data strategies for other public health challenges. As more data become available, we can examine the fuller impact of COVID-19 on our data systems and the health of the nation.

3 Years Ago

Public Health Surveillance for Zika Virus

Taking into account factors that influence both testing and reporting, it is reasonable to assume that Zika surveillance reports, like most case-based surveillance systems, substantially undercount the number of true infections. Moreover, who is screened, why, and where they live or have traveled all vary over time and among population groups. What tests are done, and when they are done relative to the time of exposure, also vary. All of these factors may depend on testing capacity, public health guidance, . . . as well as public and provider awareness and knowledge. Awareness and knowledge, in turn, depend on what the media says about these matters and individuals’ access to different information sources, personal beliefs, and health services. As a consequence, case count trends as well as geographical and other differentials may reflect surveillance “artifacts” as much as real trends. . . . Differing criteria in epidemiological linkages in different jurisdictions make differences and changes in the data harder to interpret as real differences in incidence and prevalence.

From AJPH, October 2018, p. 1361

9 Years Ago

Self-Reported Influenza-Like Illness During the 2009 Pandemic

Standard surveillance for influenza in the United States involves health care providers describing patient visits for ILI [influenza-like illness] and submitting respiratory specimens for influenza diagnostic testing. The results from such health care–based surveillance conducted during the pH1N1 pandemic . . . indicate that activity peaked in late October 2009. . . . However, a majority of adults and almost half of children with ILI [in the CDC’s Behavioral Risk Factor Surveillance System community survey conducted from September 2009 to March 2010] reported that they did not visit a health care provider for their illness and would not have been captured by health care-based influenza surveillance. Additionally, children, women, the oldest adult respondents, and adults in the Northeast and children in the South census regions were more likely to seek health care, suggesting that the epidemiology of ILI ascertained through routine influenza surveillance systems may differ substantially from that of cases identified using community surveillance.

From AJPH, October 2012, p. e24


