PLOS One. 2023 Feb 3;18(2):e0279956. doi: 10.1371/journal.pone.0279956

Real-world performance of SARS-CoV-2 serology tests in the United States, 2020

Carla V Rodriguez-Watson 1,*,#, Anthony M Louder 2,#, Carly Kabelac 2,#, Christopher M Frederick 3,#, Natalie E Sheils 4,#, Elizabeth H Eldridge 5,#, Nancy D Lin 5,#, Benjamin D Pollock 6,#, Jennifer L Gatz 3,#, Shaun J Grannis 3,7,#, Rohit Vashisht 8,#, Kanwal Ghauri 1,#, Camille Knepper 6,#, Sandy Leonard 9,#, Peter J Embi 10,#, Garrett Jenkinson 6,#, Reyna Klesh 9,#, Omai B Garner 11,#, Ayan Patel 12,#, Lisa Dahm 12,#, Aiden Barin 12,#, Dan M Cooper 12,13,#, Tom Andriola 12,14,#, Carrie L Byington 12,#, Bridgit O Crews 15,#, Atul J Butte 8,12,#, Jeff Allen 16,#
Editor: Padmapriya P Banada
PMCID: PMC9897562  PMID: 36735683

Abstract

Background

Real-world performance of COVID-19 diagnostic tests under Emergency Use Authorization (EUA) must be assessed. We describe overall trends in the performance of serology tests in the context of real-world implementation.

Methods

Six health systems estimated the odds of seropositivity and the positive percent agreement (PPA) of serology tests among people with SARS-CoV-2 infection confirmed by molecular test. For each dataset, we present odds ratios and PPA, overall and by key clinical, demographic, and practice parameters.

Results

A total of 15,615 people had at least one serology test 14–90 days after a positive molecular test for SARS-CoV-2. We observed higher PPA in Hispanic (PPA range: 79–96%) compared to non-Hispanic (60–89%) patients; in those presenting with at least one COVID-19 related symptom (69–93%) compared to those with no such symptoms (63–91%); and in inpatient (70–97%) and emergency department (93–99%) compared to outpatient (63–92%) settings across datasets. PPA was highest in those with diabetes (75–94%) and kidney disease (83–95%), and lowest in those with autoimmune conditions or who were immunocompromised (56–93%). The odds ratios (OR) for seropositivity were higher in Hispanic compared to non-Hispanic patients (OR range: 2.59–3.86) and in patients with diabetes (1.49–1.56) or obesity (1.63–2.23), and lower in those who were immunocompromised or had autoimmune conditions (0.25–0.70), compared to those without these comorbidities. In a subset of three datasets with robust information on serology test name, seven tests were used, two of which were used in multiple settings and met the EUA requirement of PPA ≥87%. Tests performed similarly across datasets.

Conclusion

Although the EUA requirement was not consistently met, more investigation is needed to understand how serology and molecular tests are used, including indication and protocol fidelity. Improved interoperability of test and clinical/demographic data is needed to enable rapid assessment of the real-world performance of in vitro diagnostic tests.

Introduction

Despite the availability of highly effective COVID-19 vaccines to prevent hospitalization and reduce mortality [1, 2], variants continue to fuel the surge of COVID-19 across the U.S. [3, 4]. High-quality diagnostic and serology tests are essential tools to better understand the epidemiology of COVID-19 and immunity after infection [5, 6]. Viruses and antibodies are primarily detectable within certain temporal windows [7–9]. However, many individuals infected with SARS-CoV-2 are asymptomatic or may not seek medical care because of mild symptoms [10]. In contrast to molecular diagnostic tests, serologic tests are informative even once the SARS-CoV-2 infection is no longer present [11, 12].

Currently, 90 SARS-CoV-2 serology/antibody tests have been authorized under Emergency Use Authorization (EUA) [13]. However, they have not undergone the same evidentiary review standards required for Food and Drug Administration (FDA) clearance because of the COVID-19 national emergency [14, 15]. There is a need to assess the real-world performance of these tests. Further, while large studies have shown that more than 91% of people with active SARS-CoV-2 infection seroconvert [16, 17], the factors associated with seroconversion (e.g., pre-existing conditions, the severity of COVID-19 presentation) remain elusive.

From a public health perspective, confidence in the ability of serological tests to identify those with recent infections is critical for effective pandemic planning. Estimates of disease prevalence directly inform dynamic population estimates of susceptible, infected, and recovered, which are needed to understand the infectiousness of SARS-CoV-2 [18]. From a clinical perspective, an accurate understanding of SARS-CoV-2 exposure is necessary to understand disease presentation and a clinical course of action, especially when patients do not present with symptoms or present late in their disease course (e.g., post-acute sequelae of SARS-CoV-2). Additionally, identifying factors associated with seropositivity may elucidate potential mechanisms of action that may be foundational in the development of therapy and treatment plans.

To address these gaps, we characterize the performance of serology tests by estimating the positive percent agreement (PPA) of serological samples obtained from people known to be positive for SARS-CoV-2 infection by molecular assay (e.g., PCR). We also sought to identify factors associated with seropositivity. Findings from this study may facilitate understanding of the real-world performance of serology tests, many of which were issued under EUA, and may help inform our understanding of the immune response to SARS-CoV-2.

Materials and methods

Study population and setting

Six health systems (i.e., datasets) collaborated on the Diagnostics Evidence Accelerator (EA): Health Catalyst, Mayo Clinic, Optum Labs, Regenstrief Institute, the University of California Health System, and Aetion with HealthVerity. The EA is a consortium of leading experts in health systems research, regulatory science, data science, and epidemiology, assembled specifically to analyze health system data to address key questions related to COVID-19. The EA provides a platform for rapid learning and research using a common analytic plan. Health Catalyst, Mayo Clinic, and the University of California Health System all utilized electronic health record (EHR) data from their respective healthcare delivery systems. The Regenstrief Institute accessed EHR and public health data from the Indiana Health Information Exchange [19, 20], while Aetion sourced healthcare data from the HealthVerity Marketplace encompassing medical claims, pharmacy claims, hospital chargemaster data, and data collected directly from laboratories. Optum Labs data included de-identified medical and pharmacy claims from a single, large U.S. insurer as well as laboratory results obtained directly from laboratories. We refer to these health systems as datasets A–F for the purposes of anonymity. Data sources included in the analysis are generally categorized as either payer (claims) or healthcare delivery systems. As illustrated in Fig 1, data were drawn from across the U.S., with heavy representation in California, Illinois, Ohio, and Michigan. Characteristics of participating data sources and representative populations are described in the S1 Table.

Fig 1. Geographic coverage of datasets.


Reprinted from brightcarbon.com under a CC BY license, with permission from Bright Carbon, original copyright (2021). Each color represents the number of data partners with a presence in each state but does not necessarily correspond to the number of people. The darkest color represents those where all six partners had a presence.

Study design

In this retrospective cohort study, we identified patients across different settings (e.g., inpatient, outpatient, emergency department (ED), or long-term care facility) who tested positive for SARS-CoV-2 ribonucleic acid (RNA) by molecular test between March and September 2020 and who received at least one subsequent serological test for SARS-CoV-2 immunoglobulin G (IgG) or total antibody (Ab) from 14–90 days after the positive RNA test (Fig 2). We analyzed the first serology test in the 14–90-day follow-up period, which ended on December 31, 2020. The date of the positive RNA test served as the index (cohort entry) date and was defined hierarchically as the date of 1) sample collection; 2) accession; or 3) result. Because the optimal time to observe a positive serology is at least two weeks after the index date, we only included patients who had at least one serology test 14–90 days after the index date [13, 7–9].
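To make this design logic concrete, the following is a minimal R sketch (not the sites' actual code) of the hierarchical index-date definition and the 14–90-day serology window. The data frames and column names (rna_tests, serology_tests, collection_date, and so on) are hypothetical stand-ins for each site's local structures.

library(dplyr)

# Hypothetical inputs:
#   rna_tests:      patient_id, collection_date, accession_date, result_date, rna_result
#   serology_tests: patient_id, serology_date, antibody_result
# Index date = first positive RNA test, dated hierarchically as
# collection date, then accession date, then result date.
index_dates <- rna_tests %>%
  filter(rna_result == "positive") %>%
  mutate(index_date = coalesce(collection_date, accession_date, result_date)) %>%
  group_by(patient_id) %>%
  summarise(index_date = min(index_date), .groups = "drop")

# Keep the first serology test drawn 14-90 days after the index date.
analytic_serology <- serology_tests %>%
  inner_join(index_dates, by = "patient_id") %>%
  mutate(days_from_index = as.numeric(serology_date - index_date)) %>%
  filter(days_from_index >= 14, days_from_index <= 90) %>%
  group_by(patient_id) %>%
  slice_min(serology_date, n = 1, with_ties = FALSE) %>%
  ungroup()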

Fig 2. Study design diagram.


To minimize the effect of differential missingness between datasets, we applied the following rules: 1) included all persons with an office or telephone visit in the 14 days before or after the index date to enable as complete an assessment of presenting symptoms as possible; 2) in claims-based datasets, included only persons with at least six months of enrollment in the year before the index date; 3) estimated the proportion of patients at each site who had zero encounters in the prior year to contextualize our capture of pre-existing conditions; and 4) excluded variables from analysis if ≥30% of values were missing (a minimal sketch of this last rule follows).
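A short R sketch of rule 4, assuming an analysis-ready data frame named analytic (a hypothetical name): variables with at least 30% missing or unknown values are flagged and dropped before modeling.

# Share of missing/unknown values per variable; names are illustrative only.
missing_share <- vapply(
  analytic,
  function(x) mean(is.na(x) | x %in% c("Unknown", "Missing")),
  numeric(1)
)
vars_to_drop <- names(missing_share)[missing_share >= 0.30]
analytic     <- analytic[, setdiff(names(analytic), vars_to_drop), drop = FALSE]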

The Western—Copernicus Group (WCG) Institutional Review Board (IRB), the IRB of record for the Reagan-Udall Foundation for the FDA, reviewed the study and determined it to be non-human subjects research. Additionally, all legal and ethical approvals for use of the data included in this study were submitted, reviewed, and/or obtained locally at each contributing dataset by an IRB and/or governing board.

Measures

Outcomes

The primary outcome of interest for the validation analysis was the PPA of positive antibody (IgG or total) results from serology tests with positive RNA results from molecular tests (e.g., PCR), which served as the reference standard. Serology tests reported in this analysis included: Abbott Architect IgG [21], Euroimmun IgG [22], Diazyme DZ-Lite SARS-CoV-2 IgG CLIA kit [23], Beckman SARS-CoV-2 IgG [24], Ortho Vitros IgG [25], Diasorin Liaison SARS-CoV-2 S1/S2 IgG [26], and Roche Elecsys Total Ab [27]. The Ortho Vitros was the only test used across multiple (3) datasets. We refer to these manufacturer serological tests as Δ, Θ, Π, Λ, Ξ, Γ, and Ψ for anonymity. The molecular tests most often reported in this analysis included: Hologic Panther Fusion [28], Hologic Aptima [29], Roche Cobas [30], Quest rRT-PCR [31], and Thermo Fisher Scientific Combo Kit [32]. We refer to these manufacturer molecular tests as Σ, Φ, Ω, X, Y, and j for anonymity.

Covariates

We collected demographic, behavioral, and environmental characteristics, baseline clinical presentation, key comorbidities, and test characteristics, including manufacturer, according to a diagram illustrating potential factors associated with serology testing (Fig 3). We identified comorbidities and clinical presentation using phenotypes defined by International Classification of Diseases, 10th Revision (ICD-10) codes and/or National Drug Codes. We identified comorbidities (pre-existing conditions) from 365 days through 15 days before the index date (illustrated in the sketch below). We provided coding algorithms for groups to use, although some groups used existing algorithms generated at their site. The ICD-10 codes used to identify comorbidities are listed in the S2 Table. We also stratified analyses by whether the RNA test was conducted before June 15, 2020, which marked the beginning of the summer wave of infections in the first year of the pandemic, or on or after that date.
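To make the lookback concrete, here is a hedged R sketch of flagging one comorbidity (diabetes) from ICD-10-coded diagnoses recorded 365 to 15 days before the index date. The diagnoses table, column names, and the truncated code list are hypothetical stand-ins for the full phenotypes in the S2 Table.

library(dplyr)

# Hypothetical inputs:
#   diagnoses:   patient_id, dx_date, icd10_code
#   index_dates: patient_id, index_date  (from the study-design step above)
diabetes_codes <- c("E10", "E11")  # illustrative ICD-10 prefixes only

diabetes_flags <- diagnoses %>%
  inner_join(index_dates, by = "patient_id") %>%
  mutate(days_before_index = as.numeric(index_date - dx_date)) %>%
  filter(days_before_index >= 15, days_before_index <= 365) %>%
  filter(substr(icd10_code, 1, 3) %in% diabetes_codes) %>%
  distinct(patient_id) %>%
  mutate(diabetes = 1L)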

Fig 3. Factors potentially associated with serological testing.


Pepe, 2001 Sep;2(3):249–60.

Statistical analysis

Each contributing dataset ran its analysis according to a common protocol. Results were reviewed as a group to ensure alignment with the protocol and to review any protocol deviations. We calculated PPA as: (number of positive antibody results ÷ number of positive RNA results) x 100. We calculated PPA based on the first eligible serology test in the follow-up period, overall and by age, sex, race, ethnicity, U.S. region, pregnancy status, smoking status, and pre-existing conditions, including but not limited to cardiovascular disease, obesity, hypertension, kidney disease, asthma, dementia, and chronic liver disease. We also report PPA by presenting symptoms and by serology test at the time of the first serology test. We examined variations in PPA by serology test and time, and by serology test and symptom presentation. We also examined variations in PPA by geography and care setting over time. We calculated exact (Clopper-Pearson) 95% confidence intervals (CI). We report significant differences where the 95% CIs have complete separation, although we did not conduct formal statistical comparisons of PPA between groups.
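As a minimal sketch of this calculation (not the sites' SAS or Aetion code), the PPA and its exact Clopper-Pearson interval can be obtained in R from counts of positive antibody results among RNA-positive patients; the counts below are purely illustrative.

# Illustrative counts only: positive antibody results among RNA-positive patients.
n_ab_positive  <- 910
n_rna_positive <- 1000

ppa <- 100 * n_ab_positive / n_rna_positive      # PPA as a percentage

# Exact (Clopper-Pearson) 95% confidence interval for the underlying proportion.
ci <- binom.test(n_ab_positive, n_rna_positive)$conf.int
round(c(PPA = ppa, lower = 100 * ci[1], upper = 100 * ci[2]), 1)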

To study the odds of seropositivity, we estimated a multivariable model to identify independent factors associated with seropositivity, assuming a binomial distribution for seropositivity status. Results are presented as odds ratios (OR) with 95% CIs calculated using score or exact methods [33]. All variables were treated as categorical. Symptoms were reported as a binary variable: "1" if any of the following symptoms were present: fever >100.4°F, abnormal chest imaging finding, high respiratory rate, low blood pressure, diarrhea, hypoglycemia, chest pain, delirium/confusion, headache, sore throat, cough, shortness of breath, pneumonia, acute respiratory infection, acute respiratory distress, cardiovascular presentation, or renal presentation; and "0" otherwise. For datasets covering more than one geographic catchment area, geography was included as one of the four U.S. Census regions or nine U.S. Census divisions based on patient home zip code. Variables with >30% missing/unknown values were excluded from models (except for pregnancy, pre-existing conditions, and presenting symptoms, all of which were included). Each dataset used automated backward selection to remove non-significant pre-existing conditions while forcing all other covariates into the model. All analyses were performed using SAS software, version 9.2 or higher (SAS Institute, North Carolina, U.S.), or the Aetion Evidence Platform v4.13 (including R v3.4.2), which includes audit trails of all transformations of raw data and a quality check of the data ingestion process.
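A hedged R sketch of the modeling step described above: logistic regression for seropositivity with automated backward selection over pre-existing conditions while the remaining covariates are forced to stay in the model. Variable names are hypothetical, step() selects on AIC rather than the significance-based elimination available in SAS, and the intervals shown are profile-likelihood rather than the score or exact intervals reported in the paper.

# Hypothetical analysis data frame "analytic" with a binary seropositive outcome
# and categorical covariates; all names here are illustrative only.
full_model <- glm(
  seropositive ~ age_group + sex + ethnicity + region + care_setting +
    calendar_period + any_symptom + diabetes + obesity + cardiovascular +
    immunocompromised + kidney_disease,
  data = analytic, family = binomial()
)

# Backward selection over pre-existing conditions only; the "lower" scope
# forces demographics, setting, calendar time, and symptoms to remain.
reduced_model <- step(
  full_model,
  direction = "backward",
  scope = list(lower = ~ age_group + sex + ethnicity + region + care_setting +
                 calendar_period + any_symptom),
  trace = FALSE
)

# Odds ratios with 95% confidence intervals (profile-likelihood based here).
exp(cbind(OR = coef(reduced_model), confint(reduced_model)))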

Results

Sample sizes across datasets ranged from 660 to 7,115; a total of 15,615 people with at least one serology test 14–90 days after the index date were included in the analyses. Between 35% and 65% of patients identified from healthcare delivery systems had no documented encounter in the system between 365 and 15 days before the index date. In contrast, only 11% of patients from national insurers had zero claims in the baseline period. As shown in Table 1, the serotested population was primarily 45–64 years of age (>40%), with a history of cardiovascular disease, including hypertension (8–70%). Race and ethnicity data were robust (<30% missing) in four datasets. The serotested population in those datasets was primarily White (>53%) and non-Hispanic (>65%). In datasets with national representation, persons from the Northeast (New England and Mid-Atlantic) were most represented in the serotested population. In datasets representing regionally based healthcare delivery systems, the population reflected their locations: Pacific and Midwest. Information on manufacturer test names was provided in four datasets. Generally, 2–3 primary tests were utilized in each dataset; 4 of 7 tests reported were used in >1 dataset. We did not observe any difference by age or sex between those for whom the test name was known versus unknown. In a single dataset with <30% missing data on race/ethnicity, we observed over-representation of White and Hispanic people among those for whom the test name was known.

Table 1. Clinical and demographic characteristics of patients with positive RNA and who underwent serological tests.

Datasets A B C D E F
N = 2,938 (%) N = 7,115 (%) N = 660 (%) N = 1,687 (%) N = 977 (%) N = 2,238 (%)
Age (years) <20 118 (3.96) 209 (2.94) 16 (2.42) 34 (2.02) 20 (2.05) 70 (3.13)
20–44 997 (33.42) 2,381 (33.46) 269 (40.76) 496 (29.40) 337 (34.49) 682 (30.47)
45–54 698 (23.40) 1,541 (21.66) 105 (15.91) 301 (17.84) 171 (17.50) 455 (20.33)
55–64 755 (25.31) 1,560 (21.93) 124 (18.79) 365 (21.64) 219 (22.42) 484 (21.63)
65–74 288 (9.65) 977 (13.73) 102 (15.45) 295 (17.49) 142 (14.53) 348 (15.55)
75–84 63 (2.11) 359 (5.05) 30 (4.55) 152 (9.01) 67 (6.86) 152 (6.79)
≥85 19 (0.64) 88 (1.24) 14 (2.12) 44 (2.61) 21 (2.15) 47 (2.10)
Sex Female 1,773 (59.44) 3,890 (54.67) 374 (56.67) 949 (56.25) 556 (56.91) 1,350 (60.32)
Male 1,165 (39.05) 3,225 (45.33) 286 (43.33) 738 (43.75) 421 (43.09) 888 (39.68)
Race Black NA5 211 (2.97) 19 (2.88) 118 (6.99) 66 (6.76) 273 (12.20)
White NA 1,268 (17.82) 355 (53.79) 1,321 (78.30) 760 (77.79) 1,784 (79.71)
Asian NA 35 (0.49) 67 (10.15) 29 (1.72) 44 (4.50) 32 (1.43)
Pacific islander/ native Hawaiian NA NA NA 11 (0.65) 2 (0.20) NA
American Indian or Alaska native NA 1 (0.01) NA 41 (2.43) 12 (1.23) 11 (0.49)
Other NA 866 (12.17) 84 (12.73) NA NA 6 (0.27)
Unknown/ missing 2,938 (100) 4,734 (66.54) 135 (20.45) 167 (9.90) 93 (9.52) 132 (5.90)
Hispanic ethnicity Yes NA 866 (12.17) 178 (26.97) 444 (26.32) 124 (12.69) 245 (10.95)
No NA 1,515 (21.29) 432 (65.45) 1,212 (71.84) 830 (84.95) 1,867 (83.42)
Unknown/ missing 2,938 (100) 4,734 (66.54) 50 (7.58) 31 (1.84) 23 (2.35) 126 (5.63)
Pre-existing conditions1,2 Diabetes 634 (21.25) 1,215 (17.08) 102 (15.45) 307 (18.20) NA 213 (9.52)
Cardiovascular disease 1,332 (44.65) 2,974 (41.80) 307 (46.52) 639 (37.88) NA 116 (5.18)
Hypertension 1,096 (36.74) 2,494 (35.05) 192 (29.09) 532 (31.54) NA 63 (2.82)
Immunocompromised (e.g., HIV, cancer) or auto-immune disorder 349 (11.70)7 708 (9.05) 121 (18.33) 110 (6.52) NA 114 (5.09)
Asthma 334 (11.20) 575 (8.08) 45 (6.82) 131 (7.77) NA 133 (5.94)
Kidney disease 141 (4.73) 317 (4.46) 118 (17.88) 195 (11.56) NA 90 (4.02)
Chronic lung conditions 443 (14.85) 878 (12.34) NA 208 (12.33) NA 59 (2.64)
Any liver disease 227 (7.61) 391 (5.50) 60 (9.09) 81 (4.80) NA 15 (0.67)
Obesity 829 (27.79) 655 (9.21) 83 (12.58) 250 (14.82) NA 169 (7.55)
Dementia 23 (0.77) NA 8 (1.21) 13 (0.77) NA 15 (0.67)
None of above comorbidities 1,033 (34.63) 3,230 (45.40) NA 870 (51.57) NA 1,718 (76.76)
Pregnancy status1,3 Yes 82 (2.75) NA 40 (6.06) NA NA NA
No 1,688 (56.59) NA 334 (50.61) NA NA NA
Unknown/ missing NA 7,115 (100) NA NA NA NA
Geographic divisions and regions4 New England 43 (1.44) 2,669 (37.51) NA NA 0 (0) NA
Mid-Atlantic 1,724 (57.79) NA NA NA 2 (0.20) NA
South Atlantic 333 (11.16) 2,952 (41.49) NA NA 79 (8.09) NA
East south central 16 (0.54) NA NA NA 61 (6.24) NA
West south central 215 (7.21) NA NA NA 1 (0.10) NA
East north central 154 (5.16) 293 (4.12) NA NA 448 (45.85) 2,238 (100)
West north central 22 (0.74) NA NA NA 3 (0.31) NA
Mountain 23 (0.77) 1,201 (16.88) NA NA 374 (38.28) NA
Pacific 210 (7.04) NA 660 (100) NA 8 (0.82) NA
Unknown/ missing 198 (6.64) NA NA 1,687(100) 1 (0.10) NA
Presenting symptoms1 No presenting symptoms identified 1,874 (62.82) 4,193 (58.93) 404 (61.21) 917 (54.36) NA NA
Fever >100.4 80 (2.68) 265 (3.72) NA NA NA NA
Low blood pressure 10 (0.34) NA NA NA NA NA
Diarrhea 32 (1.07) 79 (1.11) 31 (4.70) 47 (2.79) NA NA
Hypoglycemic 7 (0.23) NA NA NA NA NA
Chest pain 120 (4.02) 298 (4.19) 43 (6.52) 68 (4.03) NA NA
Delirium/confusion 67 (2.25) 24 (0.34) NA 126 (7.47) NA NA
Headache 69 (2.31) 146 (2.05) 20 (3.03) 23 (1.36) NA NA
Sore throat 38 (1.27) 95 (1.34) NA 17 (1.01) NA NA
Cough 266 (8.92) 810 (11.38) 100 (15.15) 68 (4.03) NA NA
Shortness of breath 255 (8.55) 538 (7.56) 78 (11.82) 166 (9.84) NA NA
Pneumonia 165 (5.53) 450 (6.32) 78 (11.82) 337 (19.98) NA NA
Acute respiratory infection 62 (2.08) 22 (0.31) 22 (3.33) 298 (17.66) NA NA
Acute respiratory distress, arrest, or failure 53 (1.78) 292 (4.10) 43 (6.52) 10 (0.59) NA NA
Cardiovascular condition 609 (20.42) 1,719 (24.16) 131 (19.58) 598 (34.45) NA NA
Renal condition 61 (2.04) 214 (3.01) 57 (8.64) NA NA NA
≥ 1 symptom above 1,064 (35.67) 2,922 (41.07) 256 (38.79) 770 (45.64) NA NA
Serological test type IgG 2,769 (92.83) 6,397 (89.91) 660 (100) 1,617 (95.85) 593 (60.70) 1,911 (85.39)
Total antibody 169 (5.67) 718 (10.09) NA 42 (2.49) 384 (39.30) 327 (14.61)
Unknown/ missing NA NA NA 28 (1.66) NA NA
Manufacturer—serological test name Δ 1,604 (53.77) 983 (13.82) NA NA NA NA
Θ NA NA 43 (6.52) NA NA NA
Π 2 (0.07) NA NA NA 314 (32.14) NA
Λ 4 (0.13) NA 290 (43.94) NA NA NA
Ξ NA NA 60 (9.09) NA NA NA
Γ 637 (21.35) 513 (7.21) NA NA 279 (28.56) NA
Ψ NA NA NA NA 384 (39.90) NA
Unknown/ missing 691 (23.16) 5,619 (78.97) 267 (40.45) 1,687 (100) NA 2,238 (100)
Manufacturer—molecular test name Y NA 44 (0.62) NA NA NA NA
X 267 (8.95) 403 (5.66) NA NA NA NA
Σ 272 (9.12) 367 (5.16) NA NA NA NA
Φ NA 85 (1.19) NA NA NA NA
Ω 6 71 (2.38) NA NA NA NA NA
j 18 (0.60) NA NA NA NA NA
Unknown/missing 2,310 (77.44) 6,216 (87.36) NA NA NA NA
Care setting (where RNA test occurred) Inpatient 86 (2.88) 151 (2.12) 97 (14.70) NA 53 (5.42) NA
Outpatient 1,407 (47.17) 6,685 (93.96) 563 (85.30) NA 777 (79.53) NA
ED 143 (4.79) 279 (3.92) NA NA 147 (15.05) NA
Unknown/ missing 1,302 (43.65) NA NA NA NA NA
Calendar time (based on RNA test) Before June 15, 2020 2,149 (72.04) 3,761 (52.86) 275 (41.67) 476 (28.22) 472 (48.31) 664 (29.67)
On or after June 15, 2020 789 (26.45) 3,354 (47.14) 385 (58.33) 1,211 (71.78) 505 (51.69) 1,574 (70.33)
Smoking status Has a history of smoking NA NA NA 256 (15.17) NA NA
No history NA NA NA 1,431 (84.83) NA NA

1. Phenotypes (code-sets) of ICD-10, medication, and LOINC are provided in the S2 Table. Conditions may be identified using ICD-10, medication, or both.

2. Pre-existing conditions were assessed 365 days before the index date and were not mutually exclusive.

3. Pregnancy Status was assessed up to 40 weeks before the index date (among women only).

4. Geographic regions were based on patient home zip code and defined by the U.S. Census Bureau (https://www2.census.gov/geo/pdfs/maps-data/maps/reference/us_regdiv.pdf) and mapped by census tract zip code. States included in each region are as follows: New England: Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, Vermont; Mid Atlantic: New Jersey, New York, Pennsylvania; East North Central: Indiana, Illinois, Michigan, Ohio, Wisconsin; West North Central: Iowa, Nebraska, Kansas, North Dakota, Minnesota, South Dakota, Missouri; South Atlantic: Delaware, District of Columbia, Florida, Georgia, Maryland, North Carolina, South Carolina, Virginia, West Virginia; East South Central: Alabama, Kentucky, Mississippi, Tennessee; West South Central: Arkansas, Louisiana, Oklahoma, Texas; Mountain: Arizona, Colorado, Idaho, New Mexico, Montana, Utah, Nevada, Wyoming; Pacific: Alaska, California, Hawaii, Oregon, Washington.

5. Data were not available.

6. Ω was not specified and may not have received an EUA.

7. Dataset A includes only autoimmune diseases in this category.

Positive percent agreement (PPA) of serology among molecularly confirmed SARS-CoV-2

The overall PPA ranged from 65–90% across analytic datasets (Table 2). The real-world PPA met the EUA requirement of ≥87% in three datasets (A, B, D) [34]. Two of these datasets represented national administrative claims and associated results with the date the sample was collected or received by the laboratory; the third represented data from EHRs and associated results with the date the test was conducted, which lags further from the clinical interaction than the former. Overall PPA was likely influenced by the mix of serology tests represented in each dataset. Seven serological tests were reported in this analysis, of which two (Δ and Γ) met the EUA PPA requirement. Two tests were used across multiple datasets and performed similarly, above the EUA requirement. PPA by serology test type varied across datasets, with three of five reporting significantly lower PPA for total antibody (PPA range: 69–90%) than for IgG (PPA range: 87–92%) and two showing no difference. We observed no difference in PPA between antibody tests that target spike and those that target nucleocapsid proteins.

Table 2. Positive percent agreement (PPA) 14–90 days after positive RNA test.

Datasets A B C D E F
N = 2,938 N = 7,115 N = 660 N = 1,687 N = 977 N = 2,238
PPA (95% confidence interval)
Overall 0.91 (0.90, 0.92) 0.90 (0.89, 0.90) 0.65 (0.62, 0.69) 0.88 (0.86, 0.89) 0.84 (0.82, 0.86) 0.79 (0.78, 0.81)
Age (Years) <20 0.92 (0.85, 0.96) 0.85 (0.79, 0.89) 0.81 (0.54, 0.96) 0.91 (0.82, 1.00) 0.8 (0.58, 0.92) 0.77 (0.66, 0.86)
20–44 0.90 (0.88, 0.92) 0.87 (0.86, 0.88) 0.62 (0.56, 0.68) 0.85 (0.82, 0.88) 0.87 (0.83, 0.90) 0.75 (0.72, 0.78)
45–54 0.93 (0.91, 0.95) 0.9 (0.89, 0.92) 0.67 (0.57, 0.76) 0.88 (0.85, 0.92) 0.89 (0.84, 0.93) 0.81 (0.77, 0.84)
55–64 0.91 (0.89, 0.93) 0.91 (0.90, 0.92) 0.69 (0.60, 0.77) 0.88 (0.84, 0.91) 0.82 (0.77, 0.87) 0.79 (0.75, 0.82)
65–74 0.93 (0.89, 0.95) 0.92 (0.90, 0.94) 0.65 (0.55, 0.74) 0.92 (0.88, 0.95) 0.79 (0.71, 0.85) 0.83 (0.79, 0.87)
75–84 0.97 (0.89, 1.00) 0.92 (0.89, 0.95) 0.67 (0.47, 0.83) 0.86 (0.81, 0.92) 0.82 (0.71, 0.89) 0.86 (0.79, 0.91)
≥85 0.89 (0.67, 0.99) 0.91 (0.83, 0.96) 0.71 (0.42, 0.92) 0.86 (0.76, 0.97) 0.71 (0.50, 0.86) 0.87 (0.74, 0.95)
Sex Female 0.91 (0.90, 0.93) 0.89 (0.88, 0.90) 0.61 (0.56, 0.66) 0.87 (0.84, 0.89) 0.85 (0.82, 0.88) 0.77 (0.75, 0.80)
Male 0.91 (0.90, 0.93) 0.90 (0.89, 0.91) 0.71 (0.66, 0.76) 0.89 (0.87, 0.91) 0.83 (0.79, 0.86) 0.82 (0.80, 0.85)
Race Black NA5 0.95 (0.91, 0.98) 0.68 (0.43, 0.87) 0.92 (0.88, 0.97) 0.86 (0.76, 0.93) 0.86 (0.81, 0.90)
White NA 0.88 (0.86, 0.90) 0.60 (0.55, 0.65) 0.86 (0.85, 0.88) 0.83 (0.80, 0.85) 0.78 (0.76, 0.80)
Asian NA 0.94 (0.81, 0.99) 0.67 (0.55, 0.78) 0.90 (0.79, 1.00) 0.84 (0.71, 0.92) 0.75 (0.57, 0.89)
Pacific Islander/ Native Hawaiian NA NA NA 1.00 (1.00, 1.00) 1.00 (0.34, 1.00) NA
American Indian or Alaska Native NA 1.00 (0.05, 1.00) NA 1.00 (1.00, 1.00) 0.75 (0.47, 0.91) 1.00 (0.72, 1.00)
Other NA 0.89 (0.88, 0.90) 0.69 (0.58, 0.79) NA NA 1.00 (0.54, 1.00)
Unknown/ Missing NA 0.89 (0.88, 0.90) 0.76 (0.68, 0.83) 0.9 (0.86, 0.95) 0.96 (0.89, 0.99) 0.87 (0.80, 0.92)
Hispanic Ethnicity Yes NA 0.94 (0.93, 0.96) 0.79 (0.73, 0.85) 0.94 (0.91, 0.96) 0.96 (0.91, 0.98) 0.92 (0.88, 0.95)
No NA 0.89 (0.87, 0.91) 0.60 (0.55, 0.65) 0.86 (0.84, 0.88) 0.82 (0.80, 0.85) 0.78 (0.76, 0.79)
Unknown/ Missing NA 0.89 (0.88, 0.90) 0.64 (0.49, 0.77) 0.81 (0.67, 0.95) 0.87 (0.68, 0.95) 0.8 (0.72, 0.87)
Pre-existing Conditions1,2 Diabetes 0.94 (0.92, 0.96) 0.94 (0.92, 0.95) 0.75 (0.66, 0.83) 0.91 (0.88, 0.94) NA 0.85 (0.79, 0.89)
Cardiovascular disease 0.92 (0.90, 0.93) 0.92 (0.91, 0.93) 0.67 (0.62, 0.72) 0.87 (0.84, 0.89) NA 0.83 (0.75, 0.89)
Hypertension 0.93 (0.91, 0.94) 0.92 (0.91, 0.93) 0.71 (0.64, 0.77) 0.89 (0.86, 0.91) NA 0.87 (0.77, 0.94)
Immunocompromised (e.g., HIV, Cancer,) or Auto-immune disorders 0.93 (0.97, 1.00)6 0.88 (0.85, 0.90) 0.56 (0.47, 0.65) 0.70 (0.61, 0.79) NA 0.73 (0.64, 0.81)
Asthma 0.89 (0.85, 0.92) 0.9 (0.87, 0.92) 0.62 (0.47, 0.76) 0.82 (0.76, 0.89) NA 0.74 (0.65, 0.81)
Kidney Disease 0.92 (0.86, 0.96) 0.95 (0.92, 0.97) 0.75 (0.67, 0.83) 0.9 (0.86, 0.94) NA 0.83 (0.74, 0.90)
Chronic Lung conditions 0.90 (0.86, 0.92) 0.90 (0.88, 0.92) NA 0.86 (0.81, 0.90) NA 0.75 (0.62, 0.85)
Any liver disease 0.93 (0.89, 0.96) 0.90 (0.87, 0.93) 0.65 (0.52, 0.77) 0.88 (0.80, 0.95) NA 0.80 (0.52, 0.96)
Obesity 0.93 (0.92, 0.95) 0.92 (0.89, 0.94) 0.81 (0.71, 0.89) 0.88 (0.84, 0.92) NA 0.83 (0.76, 0.88)
Dementia 1.00 (0.85, 1.00) NA 0.75 (0.35, 0.97) 1.00 (1.00, 1.00) NA 0.80 (0.77, 0.81)
None of above comorbidities 0.91 (0.89, 0.93) 0.88 (0.87, 0.89) NA 0.89 (0.87, 0.91) NA 0.79 (0.52, 0.96)
Pregnancy Status1,3 Yes 0.89 (0.80, 0.95) NA 0.58 (0.41, 0.73) 0.87 (0.79, 0.96) NA NA
No 0.92 (0.90, 0.93) NA 0.61 (0.56, 0.67) 0.86 (0.84, 0.89) NA NA
Unknown/ Missing NA NA NA NA NA NA
Geographic Divisions and Regions 4 New England 0.77 (0.61, 0.88) 0.9 (0.89, 0.91) NA NA NA NA
Mid-Atlantic 0.92 (0.91, 0.94) NA NA NA 1.00 (0.94, 1.00) NA
South Atlantic 0.90 (0.86, 0.93) 0.89 (0.88, 0.90) NA NA 0.84 (0.74, 0.90) NA
East South Central 0.81 (0.54, 0.96) NA NA NA 0.87 (0.76, 0.93) NA
West South Central 0.93 (0.89, 0.96) NA NA NA 1.00 (0.21, 1.00) NA
East North Central 0.86 (0.80, 0.91) 0.83 (0.79, 0.87) NA NA 0.79 (0.75, 0.82) 0.79 (0.78, 0.81)
West North Central 0.82 (0.60, 0.95) NA NA NA 1.00 (0.44, 1.00) NA
Mountain 0.96 (0.78, 1.00) 0.91 (0.89, 0.92) NA NA 0.91 (0.88, 0.94) NA
Pacific 0.90 (0.85, 0.94) NA 0.65 (0.62, 0.69) NA 0.50 (0.22, 0.78) NA
Unknown/ Missing 0.94 (0.90, 0.97) NA NA 0.88 (0.86, 0.89) 1.00 (0.21, 1.00) NA
Presenting Symptoms1 No presenting symptoms identified 0.91 (0.89, 0.92) 0.88 (0.87, 0.89) 0.63 (0.58, 0.68) 0.86 (0.84, 0.88) NA NA
≥ 1 symptom below 0.93 (0.91, 0.94) 0.92 (0.91, 0.93) 0.69 (0.63, 0.75) 0.89 (0.87, 0.92) NA NA
Fever >100.4 0.95 (0.88, 0.99) 0.91 (0.86, 0.94) NA NA NA NA
Low blood pressure 0.90 (0.55, 1.00) NA NA NA NA NA
Diarrhea 0.88 (0.71, 0.96) 0.96 (0.89, 0.99) 0.61 (0.42, 0.78) 0.85 (0.75, 0.95) NA NA
Hypoglycemic 0.71 (0.29, 0.96) NA NA NA NA NA
Chest pain 0.93 (0.86, 0.97) 0.90 (0.86, 0.93) 0.74 (0.59, 0.86) 0.84 (0.75, 0.93) NA NA
Delirium/Confusion 0.93 (0.83, 0.98) 0.92 (0.73, 0.99) NA 0.96 (0.93, 0.99) NA NA
Headache 0.88 (0.78, 0.95) 0.89 (0.83, 0.94) 0.65 (0.41, 0.85) 0.83 (0.67, 0.98) NA NA
Sore throat 0.92 (0.79, 0.98) 0.85 (0.77, 0.92) NA 0.82 (0.64, 1.00) NA NA
Cough 0.93 (0.89, 0.96) 0.92 (0.89, 0.93) 0.74 (0.64, 0.82) 0.78 (0.68, 0.88) NA NA
Shortness of breath 0.94 (0.90, 0.96) 0.94 (0.91, 0.96) 0.73 (0.62, 0.82) 0.86 (0.80, 0.91) NA NA
Pneumonia 0.96 (0.91, 0.98) 0.97 (0.95, 0.98) 0.82 (0.72, 0.90) 0.93 (0.91, 0.96) NA NA
Acute respiratory infection 0.92 (0.82, 0.97) 0.86 (0.65, 0.97) 0.68 (0.45, 0.86) 0.95 (0.93, 0.98) NA NA
Acute respiratory distress, arrest, or failure 0.96 (0.87, 1.00) 0.91 (0.88, 0.94) 0.84 (0.69, 0.93) 0.8 (0.55, 1.00) NA NA
Cardiovascular condition 0.94 (0.92, 0.96) 0.92 (0.91, 0.94) 0.73 (0.64, 0.80) 0.89 (0.87, 0.92) NA NA
Renal Condition 0.93 (0.84, 0.98) 0.93 (0.89, 0.96) 0.79 (0.66, 0.89) NA NA NA
Serological Test Type IgG 0.92 (0.91, 0.93) 0.9 (0.89, 0.90) 0.65 (0.62, 0.69) 0.88 (0.86, 0.90) 0.87 (0.84, 0.90) 0.79 (0.77, 0.80)
Total Antibody 0.87 (0.81, 0.92) 0.9 (0.87, 0.92) NA 0.69 (0.55, 0.83) 0.8 (0.75, 0.83) 0.83 (0.79, 0.87)
Unknown/ Missing NA NA NA 0.96 (0.90, 1.00) NA NA
Manufacturer—serological test name7 Δ 0.91 (0.89, 0.92) 0.89 (0.87, 0.91) NA NA NA NA
Θ NA NA 0.81 (0.67, 0.92) NA NA NA
Π 0.50 (0.01, 0.99) NA NA NA 0.82 (0.78, 0.86) NA
Λ 1 (0.40, 1.00) NA 0.70 (0.65, 0.76) NA NA NA
Ξ NA NA 0.72 (0.59, 0.83) NA NA NA
Γ 0.92 (0.90, 0.94) 0.91 (0.88, 0.93) NA NA 0.92 (0.89, 0.95) NA
Ψ NA NA NA NA 0.8 (0.75, 0.83) NA
Unknown/ Missing 0.93 (0.91, 0.95) 0.9 (0.89, 0.90) 0.56 (0.50, 0.62) 0.88 (0.86, 0.89) NA 0.79 (0.78, 0.81)
Manufacturer—molecular test name Y NA 0.91 (0.78, 0.97) NA NA NA NA
X 0.90 (0.85, 0.93) 0.84 (0.80, 0.87) NA NA NA NA
Σ 0.94 (0.90, 0.96) 0.92 (0.89, 0.95) NA NA NA NA
Φ NA 0.91 (0.82, 0.96) NA NA NA NA
Ω 0.94 (0.86, 0.98) NA NA NA NA NA
j 0.83 (0.59, 0.96) NA NA NA NA NA
Unknown/Missing 0.91 (0.90, 0.93) 0.90 (0.89, 0.91) NA NA NA NA
Care Setting (where RNA test occurred) Inpatient 0.97 (0.90, 0.99) 0.97 (0.93, 0.99) 0.77 (0.68, 0.85) NA 0.7 (0.56, 0.80) NA
Outpatient 0.92 (0.91, 0.93) 0.89 (0.88, 0.90) 0.63 (0.59, 0.67) NA 0.84 (0.81, 0.86) NA
ED 0.99 (0.95, 1.00) 0.96 (0.93, 0.98) NA NA 0.93 (0.88, 0.96) NA
Unknown/ Missing 0.88 (0.88, 0.91) NA NA NA NA NA
Calendar Time (based on RNA test) Before June 15, 2020 0.92 (0.91, 0.93) 0.92 (0.91, 0.93) 0.61 (0.55, 0.67) 0.92 (0.90, 0.95) 0.84 (0.80, 0.87) 0.80 (0.77, 0.83)
On or after June 15, 2020 0.90 (0.88, 0.92) 0.87 (0.86, 0.98) 0.68 (0.63, 0.73) 0.86 (0.84, 0.88) 0.85 (0.81, 0.88) 0.79 (0.77, 0.81)
Smoking Status Has History of Smoking NA NA NA 0.86 (0.82, 0.91) NA NA
No History NA NA NA 0.88 (0.86, 0.90) NA NA

1. Phenotypes (code-sets) of ICD-10, medication, and LOINC are provided in the S2 Table. Conditions may be identified using ICD-10, medication, or both.

2. Pre-existing conditions were assessed 365 days before the index date and were not mutually exclusive.

3. Pregnancy Status was assessed up to 40 weeks before the index date.

4. Geographic regions were based on patient home zip code and defined by the U.S. Census Bureau (https://www2.census.gov/geo/pdfs/maps-data/maps/reference/us_regdiv.pdf) and mapped by census tract zip code. States included in each region are as follows: New England: Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, Vermont; Mid Atlantic: New Jersey, New York, Pennsylvania; East North Central: Indiana, Illinois, Michigan, Ohio, Wisconsin; West North Central: Iowa, Nebraska, Kansas, North Dakota, Minnesota, South Dakota, Missouri; South Atlantic: Delaware, District of Columbia, Florida, Georgia, Maryland, North Carolina, South Carolina, Virginia, West Virginia; East South Central: Alabama, Kentucky, Mississippi, Tennessee; West South Central: Arkansas, Louisiana, Oklahoma, Texas; Mountain: Arizona, Colorado, Idaho, New Mexico, Montana, Utah, Nevada, Wyoming; Pacific: Alaska, California, Hawaii, Oregon, Washington.

5. Data were not available.

6. Dataset A includes only autoimmune diseases in this category.

7. Shaded cells represent a small sample size (n<40) or non-robust data capture (>30% missing).

PPA was significantly higher in Black (PPA range: 86–92%) than in White (PPA range: 78–86%) persons in at least two of the four datasets reporting robust race/ethnicity data. PPA was significantly higher in Hispanic (PPA range: 79–96%) than in non-Hispanic (PPA range: 60–86%) patients. PPA appeared highest in those with diabetes (PPA range: 75–94%) and kidney disease (PPA range: 75–95%), and lowest in those with conditions that leave them immunocompromised (PPA range: 56–93%). We observed higher PPA in the inpatient (PPA range: 70–97%) and ED (PPA range: 93–99%) settings compared to the outpatient setting (PPA range: 63–92%). There was some evidence of higher PPA among patients with at least one COVID-19 related symptom compared to those with none (PPA range: 63–91%) in two datasets (B and D); PPA was particularly high for select conditions such as pneumonia (PPA range: 82–97%).

However, differences in the PPA by the presence of symptoms do not appear to be explained by the test. A stratified analysis by test comparing those with and without symptoms (Table 3) showed no significant difference in PPA. PPA trends by calendar time were not consistent across datasets.

Table 3. Positive percent agreement (PPA) by serology tests and presence of symptoms.

Manufacturer—serological test name Datasets A B C D E F
N = 2,938 N = 7,115 N = 660 N = 1,687 N = 977 N = 2,238
PPA (95% confidence interval)
Δ No symptoms identified 0.90 (0.88, 0.92) 0.89 (0.86, 0.91) NA NA NA NA
≥ 1 symptom identified 0.92 (0.89, 0.94) 0.89 (0.85, 0.92) NA NA NA NA
Θ No symptoms identified NA1 NA 0.69 (0.39, 0.91) NA NA NA
≥ 1 symptom identified NA NA 0.87 (0.69, 0.96) NA NA NA
Π No symptoms identified 0.00 (0.00, 0.98) NA NA NA NA NA
≥ 1 symptom identified 1.00 (0.03, 1.00) NA NA NA NA NA
Λ No symptoms identified 1.00 (0.4, 1.00) NA 0.69 (0.61, 0.75) NA NA NA
≥ 1 symptom identified 0.00 (0.00, 1.00) NA 0.73 (0.64, 0.81) NA NA NA
Ξ No symptoms identified NA NA 0.69 (0.49, 0.85) NA NA NA
≥ 1 symptom identified NA NA 0.74 (0.55, 0.88) NA NA NA
Γ No symptoms identified 0.92 (0.89, 0.94) 0.88 (0.84, 0.92) NA NA NA NA
≥ 1 symptom identified 0.93 (0.89, 0.96) 0.95 (0.91, 0.98) NA NA NA NA
Unknown/Missing No symptoms identified 0.92 (0.88, 0.94) 0.88 (0.87, 0.89) 0.56 (0.49, 0.64) 0.86 (0.84, 0.88) NA NA
≥ 1 symptom identified 0.95 (0.91, 0.97) 0.92 (0.91, 0.93) 0.56 (0.45, 0.67) 0.89 (0.87, 0.92) NA NA

1. Data were not available.

Factors associated with seropositivity

In adjusted models (Figs 4–9), the OR for seropositivity was significantly elevated in those of Hispanic compared to non-Hispanic ethnicity (OR range: 2.59–3.86); among those with pre-existing diabetes (OR range: 1.49–1.56) or obesity (1.63–2.23) compared to those without pre-existing conditions; and among those observed in the ED compared to the outpatient setting (OR range: 2.49–10.97). The OR for seropositivity was significantly lower in those with pre-existing immunocompromised or autoimmune conditions compared to those without such conditions (OR range: 0.25–0.70). In two of the three datasets that included pre-existing cardiovascular disease in the OR model, the OR for seropositivity was significantly lower in persons with, compared to those without, such conditions (OR range: 0.49–0.57). The OR for seropositivity tended to be lower for index dates on or after June 15, 2020 than before that date in half the datasets; differences were not significant in the other half.

Fig 4. Odds of seropositivity dataset A.

Fig 5. Odds of seropositivity dataset B.

Fig 6. Odds of seropositivity dataset C.

Fig 7. Odds of seropositivity dataset D.

Fig 8. Odds of seropositivity dataset E.

Fig 9. Odds of seropositivity dataset F.

Discussion

Serology tests are an important instrument in the toolkit to understand the epidemiology of COVID-19 because of their ability to identify persons with prior infection who, because of mild symptoms or no symptoms at all, may present too late in the infectious period. Serology results may inform diagnoses of post-acute sequelae of SARS-CoV-2 (PASC) and the appropriate treatment course, which may depend on whether patients are at increased risk for severe illness due to an insufficient antibody response [35]. The reported sensitivities of the serology tests included in this analysis that were submitted for EUA approval were all >95% [36]. Our analysis of multiple large datasets of patients with confirmed SARS-CoV-2 infection suggests that serology tests performed lower than expected, with PPA (a measure analogous to sensitivity) ranging from 65–90%. Our results align with results from smaller, detailed laboratory evaluations suggesting that a lack of harmonization, including optimization of cut-off values, may contribute to decreased overall performance. Additionally, our results align with studies that include more representative samples of persons with mild or asymptomatic infection [37–39]. Two of seven tests reported across datasets achieved the EUA requirement of PPA ≥87%. As we did not have data on specific serology-molecular pairs or meta-information on the tests (including fidelity to protocols for serology and molecular test analysis), these results reflect more on the real-world implementation of the tests than on the true quality of the tests. Specifically, where the same test was used across multiple datasets, it performed similarly. For example, the serology test Γ performed similarly high (PPA >90%) across three datasets. However, the overall PPA for tests performed in datasets A and B was higher than in dataset E. A major factor that may have contributed to this difference is that the other serological tests reported to datasets A and B performed above the EUA requirement, whereas the other tests reported in dataset E performed below it. Additionally, datasets A and B leveraged administrative claims data and associated RNA and serology results with the sample collection or receipt date, while dataset E associated results with the date the test was run.

Dataset E also represents a healthcare delivery system where serology tests were initially used only for symptomatic patients with at least 12 days of symptoms. This practice shifted after approximately two months (June 1, 2020) to a protocol that required both molecular and serological testing for SARS-CoV-2 as part of pre-procedure screening. This protocol was in effect for another three months (through August 31, 2020), after which the healthcare system shifted to unrestricted testing for both molecular and serology tests and saw a substantial drop in the use of serological testing. We expected that such procedural "lags" to serotesting, combined with additional lags from associating results with a date downstream of the clinical interaction, may have further extended the time between infection/symptom onset and the actual time of serology sampling. The impact of this misclassification may be most important for serology samples at the upper bound of 90 days, where samples were likely >90 days from the point of infection and humoral antibodies were more likely to have declined. Despite changes in the protocol over time, we observed no overall or test-specific difference in PPA before versus on or after June 15, 2020 in dataset E. Nevertheless, administrative protocols create lags in serotesting that challenge our assumption that the observed molecular "test date" is a good proxy for symptom onset. Absent knowledge of such policies, it is difficult to make broad assumptions regarding patterns in molecular or serology testing unless established clinical protocols are known.

We observed higher odds of seropositivity, and similarly higher PPA, among patients of Hispanic ethnicity compared to non-Hispanic patients, among those with pre-existing obesity, and among those who presented in the ED. These results further support others' observations that persons with unmanaged diabetes, who are disproportionately people of color, are vulnerable to hyper-inflammation related to COVID-19 [40]. Furthermore, hyper-inflammation, including pro-inflammatory cytokine storm, has been associated with severe disease, reduced viral clearance [41], and sustained antibody production [42]. A recent small study showed that while a low viral load is associated with a lower antibody response, clinical illness does not guarantee seroconversion [43]. Other studies have demonstrated that people with cancer have a lower probability of mounting an immune response to vaccination, as demonstrated by seroconversion, viral neutralization, and T-cell response [44, 45]. Our results demonstrating lower odds of seropositivity among those with cancer and other immunodeficiencies suggest that the same may be true of their antibody response to infection.

Strengths

Our study has many strengths. This was a large assessment of serotesting across the U.S. in diverse datasets leveraging either EHR or claims data. We developed a protocol that incorporated the unique characteristics of each data source and provided a forum to transparently communicate and collaborate on study design and interpretation. We also established a platform to rapidly collect and analyze data from various systems, which may be used to evaluate process improvement, make comparisons within data systems, and identify important trends over time. We extensively characterized missing data to guide model development and aid interpretation. Additionally, this study was conducted before public availability of COVID-19 vaccines across the U.S., which minimizes the potential for confounding related to vaccine-induced antibodies.

Limitations

A major limitation of this real-world analysis is the large amount of missing test-name information and relevant meta-data, including quality control measures adopted, for both molecular and serological tests. As such, we were unable to account for molecular-serology pairs when assessing PPA, or for the fidelity with which these tests were performed. The missing test-name information also limited our ability to describe trends by manufacturer, although a thorough examination of missing data did not suggest differential missingness by age or sex. Importantly, the intent of this analysis was not to evaluate individual tests, but to evaluate the performance of serology in the context of real-world implementation of test protocols and varying reference standards. As discussed in our prior manuscript, the sample in this study comprised those who were more likely to be serotested for SARS-CoV-2: White, 45–64 years of age, with a prior history of cardiovascular disease. Nevertheless, there was still a sufficiently large number of people to assess PPA trends among younger ages and in those with and without other pre-existing conditions. Finally, this study was conducted before the surge of the Omicron variant, which has been shown to have a number of mutations in the N-gene and S-gene that reduce the sensitivity of some diagnostic tests [46]. As such, our inference is limited to the SARS-CoV-2 variants prior to Omicron, primarily Alpha.

Conclusion

Across large samples of patients with molecularly confirmed SARS-CoV-2, serology tests did not consistently meet the EUA requirement of PPA ≥ 87% in the post-market setting. However, given the limited availability of test names, this analysis serves as a signal that further investigation into how serology and molecular tests are used, including protocol fidelity, is needed to understand ways to improve the real-world performance of serology tests.

Despite differences in testing protocols and data availability, the similarity in performance of serology tests across datasets suggests that serology tests were robust to differences in care settings. However, the real-world PPA for several serology tests did not meet EUA requirements, and the exclusive representation and low use of such tests in certain datasets appear to have impacted the overall performance of serology tests in those datasets. Where data were sufficiently robust, we observed that people of Hispanic ethnicity had higher odds of seropositivity than non-Hispanic people. Higher odds of seropositivity in those with pre-existing diabetes or obesity further support the hypothesis that these conditions are associated with more severe disease, reduced viral clearance, and the sustained presence of antibodies. Conversely, lower odds of seropositivity among those with cancer and other immunodeficiencies suggest that the impaired immune response observed in these groups after vaccination may extend to infection.

Interpreting results from real-world data collected from clinical and administrative databases is challenging. A clear understanding of testing protocols at the point of care is needed to validate assumptions regarding proxy variables and to interpret results. Incomplete information on race/ethnicity and test name limited our ability to address racial disparities in testing and the real-world performance of serological tests. Nevertheless, implementing best practices for analyzing and reporting results from observational data across multiple datasets yields confidence in trends that are repeated; where results diverged, we were able to explore how differences in data sources might explain findings and to target areas for future investigation. Improved data interoperability to link test names and clinical/demographic data is critical to enable rapid assessment of the real-world performance of in vitro diagnostic tests, particularly in the face of fast-mutating pathogens.

Supporting information

S1 Fig. Study design diagram dataset A.

(TIF)

S2 Fig. Study design diagram dataset B.

(TIF)

S3 Fig. Study design diagram dataset C.

(TIF)

S4 Fig. Study design diagram dataset D.

(TIF)

S5 Fig. Study design diagram dataset E.

(TIF)

S6 Fig. Study design diagram dataset F.

(TIF)

S1 Table. Characteristics of participating data sources and representative populations.

(DOCX)

S2 Table. Phenotype (code-lists) for specified presenting symptoms & pre-existing conditions.

(DOCX)

Acknowledgments

Special thanks to our advisors on this project from the U.S. Food and Drug Administration: Aloka Chakravarty, Tamar Lasky, Gina Valo, Mary Jung, Stephen Lovell, Jacqueline M Major, Daniel Caños, Sara Brenner, and Wendy Rubinstein; and Duke-Margolis: Christina Silcox. We thank all members of the Evidence Accelerator Workgroup for their support and feedback: Roland Romero, James Okusa, Elijah Mari Quinicot, Amar Bhat, Susan Winckler, Alecia Clary, Sadiqa Mahmood, Philip Ballentine, Perry L. Mar, Cynthia Lim Louis, Connor McAndrews, Elitza S. Theel, Cora Han, Pagan Morris, and Charles Wilson. A special thanks and recognition for the contributions and sacrifice of Dr. Michael Waters, our dear colleague, and friend who will be forever in our thoughts. We thank Amir Alishahi Tabriz MD, PhD for his assistance with manuscript preparation.

Data Availability

All relevant data are contained within the paper and its Supporting information files. Person-level data are unavailable.

Funding Statement

Financial support for this work was provided in part by a grant from The Rockefeller Foundation (HTH 030 GA-S). BDP, CK, GJ used funding provided by Yale University-Mayo Clinic Center of Excellence in Regulatory Science and Innovation (CERSI), a joint effort between Yale University, Mayo Clinic, and the U.S. Food and Drug Administration (FDA) (3U01FD005938) (https://www.fda.gov/). AJB was funded by award number A128219 and Grant Number U01FD005978 from the FDA, which supports the UCSF-Stanford Center of Excellence in Regulatory Sciences and Innovation (CERSI). Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the HHS or FDA. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.Moline HL, Whitaker M, Deng L, Rhodes JC, Milucky J, Pham H, et al. Effectiveness of COVID-19 Vaccines in Preventing Hospitalization Among Adults Aged≥ 65 Years—COVID-NET, 13 States, February–April 2021. Morbidity and Mortality Weekly Report. 2021;70: 1088. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Lopez Bernal J, Andrews N, Gower C, Gallagher E, Simmons R, Thelwall S, et al. Effectiveness of Covid-19 vaccines against the B. 1.617. 2 (Delta) variant. New England Journal of Medicine. 2021. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.England PH. SARS-CoV-2 variants of concern and variants under investigation in England. Public Health England. 2021;11. [Google Scholar]
  • 4.Tao K, Tzou PL, Nouhin J, Gupta RK, de Oliveira T, Kosakovsky Pond SL, et al. The biological and clinical significance of emerging SARS-CoV-2 variants. Nature Reviews Genetics. 2021;22: 757–773. doi: 10.1038/s41576-021-00408-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Hanson KE, Caliendo AM, Arias CA, Englund JA, Lee MJ, Loeb M, et al. Infectious Diseases Society of America guidelines on the diagnosis of COVID-19. Clinical infectious diseases. 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Cheng MP, Yansouni CP, Basta NE, Desjardins M, Kanjilal S, Paquette K, et al. Serodiagnostics for Severe Acute Respiratory Syndrome–Related Coronavirus 2: A Narrative Review. Annals of internal medicine. 2020;173: 450–460. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Long Q-X, Liu B-Z, Deng H-J, Wu G-C, Deng K, Chen Y-K, et al. Antibody responses to SARS-CoV-2 in patients with COVID-19. Nature medicine. 2020;26: 845–848. doi: 10.1038/s41591-020-0897-1 [DOI] [PubMed] [Google Scholar]
  • 8.Long Q-X, Tang X-J, Shi Q-L, Li Q, Deng H-J, Yuan J, et al. Clinical and immunological assessment of asymptomatic SARS-CoV-2 infections. Nature medicine. 2020;26: 1200–1204. doi: 10.1038/s41591-020-0965-6 [DOI] [PubMed] [Google Scholar]
  • 9.Sethuraman N, Jeremiah SS, Ryo A. Interpreting diagnostic tests for SARS-CoV-2. Jama. 2020;323: 2249–2251. doi: 10.1001/jama.2020.8259 [DOI] [PubMed] [Google Scholar]
  • 10.Gao Z, Xu Y, Sun C, Wang X, Guo Y, Qiu S, et al. A systematic review of asymptomatic infections with COVID-19. Journal of Microbiology, Immunology and Infection. 2021;54: 12–16. doi: 10.1016/j.jmii.2020.05.001 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Caini S, Bellerba F, Corso F, Díaz-Basabe A, Natoli G, Paget J, et al. Meta-analysis of diagnostic performance of serological tests for SARS-CoV-2 antibodies up to 25 April 2020 and public health implications. Eurosurveillance. 2020;25: 2000980. doi: 10.2807/1560-7917.ES.2020.25.23.2000980 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Ainsworth M, Andersson M, Auckland K, Baillie JK, Barnes E, Beer S, et al. Performance characteristics of five immunoassays for SARS-CoV-2: a head-to-head benchmark comparison. The Lancet Infectious Diseases. 2020;20: 1390–1400. doi: 10.1016/S1473-3099(20)30634-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.FDA U. In vitro diagnostics EUAs-serology and other adaptive immune response tests for SARS-CoV-2. 2021.
  • 14.Lassaunière R, Frische A, Harboe ZB, Nielsen AC, Fomsgaard A, Krogfelt KA, et al. Evaluation of nine commercial SARS-CoV-2 immunoassays. MedRxiv. 2020. [Google Scholar]
  • 15.Whitman JD, Hiatt J, Mowery CT, Shy BR, Yu R, Yamamoto TN, et al. Test performance evaluation of SARS-CoV-2 serological assays. MedRxiv. 2020. doi: 10.1101/2020.04.25.20074856 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Wajnberg A, Amanat F, Firpo A, Altman DR, Bailey MJ, Mansour M, et al. Robust neutralizing antibodies to SARS-CoV-2 infection persist for months. Science. 2020;370: 1227–1230. doi: 10.1126/science.abd7728 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Gudbjartsson DF, Norddahl GL, Melsted P, Gunnarsdottir K, Holm H, Eythorsson E, et al. Humoral immune response to SARS-CoV-2 in Iceland. New England Journal of Medicine. 2020;383: 1724–1734. doi: 10.1056/NEJMoa2026116 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Overton CE, Stage HB, Ahmad S, Curran-Sebastian J, Dark P, Das R, et al. Using statistics and mathematical modelling to understand infectious disease outbreaks: COVID-19 as an example. Infectious Disease Modelling. 2020;5: 409–441. doi: 10.1016/j.idm.2020.06.008 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.McDonald CJ, Overhage JM, Barnes M, Schadow G, Blevins L, Dexter PR, et al. The Indiana network for patient care: a working local health information infrastructure. Health affairs. 2005;24: 1214–1220. [DOI] [PubMed] [Google Scholar]
  • 20.Dixon BE, Whipple EC, Lajiness JM, Murray MD. Utilizing an integrated infrastructure for outcomes research: a systematic review. Health Information & Libraries Journal. 2016;33: 7–32. doi: 10.1111/hir.12127 [DOI] [PubMed] [Google Scholar]
  • 21.Abbott® SARS-CoV-2 S1/S2 IgG (REF 6R86-20). 2021. Oct. [Internet]. https://www.fda.gov/media/137383/download.
  • 22.Euroimmun® Anti-SARS-CoV-2 ELISA (IgG) (REF EI 2606–9601 G). 2021. Oct. [Internet]. https://www.fda.gov/media/137609/download.
  • 23.Diazyme Laboratories, Inc. DIAZYME DZ-LITE SARS-CoV-2 IgGCLIA KIT (REF 60900 Rev C). 2021. Oct. [Internet]. https://www.fda.gov/media/139865/download.
  • 24.Beckman Coulter® SARS-CoV-2 S1/S2 IgG (REF C58961). 2021. Oct. [Internet]. https://www.fda.gov/media/139627/download.
  • 25.VITROS Immunodiagnostic Products Anti-SARS-CoV-2 IgG Reagent Pack (REF 619 9919). 2021. Oct. [Internet]. https://www.fda.gov/media/137363/download.
  • 26.DiaSorin Inc, LIAISON® SARS-CoV-2 S1/S2 IgG (REF 311460). 2021. Oct. [Internet]. https://www.fda.gov/media/137359/download.
  • 27.Cobas Elecsys Anti-SARS-CoV-2 (REF 09203095190). 2021. Oct. [Internet]. https://www.fda.gov/media/137605/download.
  • 28.SARS-CoV-2 Assay (Panther Fusion® System). 2021. Oct. [Internet]. https://www.fda.gov/media/136156/download.
  • 29.Aptima® SARS-CoV-2 Assay (Panther® System). 2021. Oct. [Internet]. https://www.fda.gov/media/138096/download.
  • 30.cobas® SARS-CoV-2. Qualitative assay for use on the cobas® 6800/8800 Systems. 2021. Oct. [Internet]. https://www.fda.gov/media/136049/download.
  • 31.Quest Diagnostics. SARS-CoV-2 RNA, Qualitative Real-Time RT-PCR (Test Code 39433). 2021. Oct. [Internet]. https://www.fda.gov/media/136231/download.
  • 32.TaqPath COVID-19 Combo Kit and SARS-CoV-2 RNA. Multiplex real-time RT-PCR test intended for the qualitative detection of nucleic acid from SARS‑CoV‑2. 2021. Oct. [Internet]. https://www.fda.gov/media/13612/download.
  • 33.Administration UF and D. Statistical guidance on reporting results from studies evaluating diagnostic tests. Rockville, MD: US FDA. 2007. [Google Scholar]
  • 34.Administration UF and D. Policy for coronavirus disease-2019 tests during the public health emergency (revised): immediately in effect guidance for clinical laboratories, commercial manufacturers, and Food and Drug Administration staff. United States Food and Drug Administration. United States. Food and Drug Administration; 2020.
  • 35.Fact Sheet For Health Care Providers Emergency Use Authorization (Eua) Of Bamlanivimab And Etesevimab 12222021.: 45.
  • 36.US Food and Drug Administration. EUA authorized serology test performance. 2020.
  • 37.Escribano P, Álvarez-Uría A, Alonso R, Catalán P, Alcalá L, Muñoz P, et al. Detection of SARS-CoV-2 antibodies is insufficient for the diagnosis of active or cured COVID-19. Scientific reports. 2020;10: 1–7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Harritshøj LH, Gybel-Brask M, Afzal S, Kamstrup PR, Jørgensen CS, Thomsen MK, et al. Comparison of 16 serological SARS-CoV-2 immunoassays in 16 clinical laboratories. Journal of Clinical Microbiology. 2021;59: e02596–20. doi: 10.1128/JCM.02596-20 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Ast V, Costina V, Eichner R, Bode A, Aida S, Gerhards C, et al. Assessing the quality of serological testing in the COVID-19 pandemic: results of a European external quality assessment (EQA) scheme for anti-SARS-CoV-2 antibody detection. Journal of clinical microbiology. 2021;59: e00559–21. doi: 10.1128/JCM.00559-21 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Landstra CP, De Koning EJ. COVID-19 and diabetes: understanding the interrelationship and risks for a severe course. Frontiers in Endocrinology. 2021;12: 599. doi: 10.3389/fendo.2021.649525 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Tay MZ, Poh CM, Rénia L, MacAry PA, Ng LF. The trinity of COVID-19: immunity, inflammation and intervention. Nature Reviews Immunology. 2020;20: 363–374. doi: 10.1038/s41577-020-0311-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Fajnzylber J, Regan J, Coxen K, Corry H, Wong C, Rosenthal A, et al. SARS-CoV-2 viral load is associated with increased disease severity and mortality. Nature communications. 2020;11: 1–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Liu W, Russell RM, Bibollet-Ruche F, Skelly AN, Sherrill-Mix S, Freeman DA, et al. Predictors of Nonseroconversion after SARS-CoV-2 Infection. Emerging Infectious Diseases. 2021;27: 2454. doi: 10.3201/eid2709.211042 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44.Yazaki S, Yoshida T, Kojima Y, Yagishita S, Nakahama H, Okinaka K, et al. Difference in SARS-CoV-2 Antibody Status Between Patients With Cancer and Health Care Workers During the COVID-19 Pandemic in Japan. JAMA oncology. 2021. doi: 10.1001/jamaoncol.2021.2159 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Massarweh A, Eliakim-Raz N, Stemmer A, Levy-Barda A, Yust-Katz S, Zer A, et al. Evaluation of Seropositivity Following BNT162b2 Messenger RNA Vaccination for SARS-CoV-2 in Patients Undergoing Treatment for Cancer. JAMA oncology. 2021. doi: 10.1001/jamaoncol.2021.2155 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.US Food and Drug Administration. SARS-CoV-2 viral mutations: impact on COVID-19 tests. 2021.

Decision Letter 0

Padmapriya P Banada

15 Jul 2022

PONE-D-22-11773 Real-World Performance of SARS-Cov-2 Serology Tests in The United States, 2020. PLOS ONE

Dear Dr. Rodriguez-Watson,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

The study offers important information on the reliability of serodiagnosis. Please see the comments from the reviewers; I hope you will find them helpful for improving the overall quality of the manuscript. It is indeed difficult to comprehend some of the images in the PDF version of the article. Please submit figures as recommended by PLOS ONE (author instructions), and PLOS ONE can generate a PDF including the figures, which are generally high quality. Based on the reviewers' comments, I am recommending your article for major revision and look forward to the revised manuscript.

Please submit your revised manuscript by Aug 29 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Padmapriya P Banada, PhD

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well.

3. We note that the grant information you provided in the ‘Funding Information’ and ‘Financial Disclosure’ sections do not match.

When you resubmit, please ensure that you provide the correct grant numbers for the awards you received for your study in the ‘Funding Information’ section.

4. Thank you for stating the following in the Competing Interests section:

“AJB is a co-founder and consultant to Personalis and NuMedii; consultant to Samsung, Mango Tree Corporation, and in the recent past, 10x Genomics, Helix, Pathway Genomics, and Verinata (Illumina); has served on paid advisory panels or boards for Geisinger Health, Regenstrief Institute, Gerson Lehman Group, AlphaSights, Covance, Novartis, Genentech, Merck, and Roche; is a shareholder in Personalis and NuMedii; is a minor shareholder in Apple, Facebook, Alphabet (Google), Microsoft, Amazon, Snap, Snowflake, 10x Genomics, Illumina, Nuna Health, Assay Depot (Scientist.com), Vet24seven, Regeneron, Sanofi, Royalty Pharma, Pfizer, BioNTech, AstraZeneca, Moderna, Biogen, Twist Bioscience, Pacific Biosciences, Editas Medicine, Invitae, Doximity, and Sutro, and several other non-health related companies and mutual funds; and has received honoraria and travel reimbursement for invited talks from Johnson and Johnson, Roche, Genentech, Pfizer, Merck, Lilly, Takeda, Varian, Mars, Siemens, Optum, Abbott, Celgene, AstraZeneca, AbbVie, Westat, several investment and venture capital firms, and many academic institutions, medical or disease specific foundations and associations, and health systems. AJB receives royalty payments through Stanford University, for several patents and other disclosures licensed to NuMedii and Personalis. AJB’s research has been funded by NIH, Northrup Grumman (as the prime on an NIH contract), Genentech, Johnson and Johnson, FDA, Robert Wood Johnson Foundation, Leon Lowenstein Foundation, Intervalien Foundation, Priscilla Chan and Mark Zuckerberg, the Barbara and Gerson Bakar Foundation, and in the recent past, the March of Dimes, Juvenile Diabetes Research Foundation, California Governor’s Office of Planning and Research, California Institute for Regenerative Medicine, L’Oreal, and Progenity.       

CLB has intellectual property in and receives royalties from BioFire, Inc. She serves as a scientific advisor to IDbyDNA (San Francisco, CA and Salt Lake City, UT); and is on the Board of the Commonwealth Fund.    

CK is a paid employee of Aetion and holds Aetion stock options.

NES is an employee of Optum Labs and owns stock in the parent company UnitedHealth group.

NDL was an employee of Health Catalyst at the time the work was performed.

JLG is a full-time employee of Regenstrief Institute, which provides independent research services to entities including those within the pharmaceutical and medical device industries.

SJG serves as Chief Medical Information Officer for the Indiana Health Information Exchange, and is a founding partner of Uppstroms, LLC.”

Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, by including the following statement: "This does not alter our adherence to  PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests).  If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.

Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf.

5. Thank you for stating the following in the Acknowledgments Section of your manuscript:

“Financial support for this work was provided in part by a grant from The Rockefeller Foundation.”

We note that you have provided additional information within the Acknowledgements Section that is not currently declared in your Funding Statement. Please note that funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form.

Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows:

“CVRW was funded by a grant from the Rockefeller Foundation.  

BDP, CK, GJ used funding provided by Yale University-Mayo Clinic Center of Excellence in Regulatory Science and Innovation (CERSI), a joint effort between Yale University, Mayo Clinic, and the U.S. Food and Drug Administration (FDA) (3U01FD005938).           

CK, CMF, SJG, PJE, EHE, NDL, and JLG work was funded by a designated sub-grant from the FDA Foundation.

AJB funded by award number A128219 and Grant Number U01FD005978 from the FDA, which supports the UCSF-Stanford Center of Excellence in Regulatory Sciences and Innovation. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the HHS or FDA.

The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.”

Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

6. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.

We will update your Data Availability statement to reflect the information you provide in your cover letter.

7. One of the noted authors is a group or consortium “Evidence Accelerator Workgroup”. In addition to naming the author group, please list the individual authors and affiliations within this group in the acknowledgments section of your manuscript. Please also indicate clearly a lead author for this group along with a contact email address.

8. We note that Figure 1 in your submission contains map images which may be copyrighted. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For these reasons, we cannot publish previously copyrighted maps or satellite images created using proprietary data, such as Google software (Google Maps, Street View, and Earth). For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright.

We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission:

 a. You may seek permission from the original copyright holder of Figure 1 to publish the content specifically under the CC BY 4.0 license. 

We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text:

“I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form.”

Please upload the completed Content Permission Form or other proof of granted permissions as an "Other" file with your submission.

In the figure caption of the copyrighted figure, please include the following text: “Reprinted from [ref] under a CC BY license, with permission from [name of publisher], original copyright [original copyright year].”

 b. If you are unable to obtain permission from the original copyright holder to publish these figures under the CC BY 4.0 license or if the copyright holder’s requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information. If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only.

The following resources for replacing copyrighted map figures may be helpful:

USGS National Map Viewer (public domain): http://viewer.nationalmap.gov/viewer/

The Gateway to Astronaut Photography of Earth (public domain): http://eol.jsc.nasa.gov/sseop/clickmap/

Maps at the CIA (public domain): https://www.cia.gov/library/publications/the-world-factbook/index.html and https://www.cia.gov/library/publications/cia-maps-publications/index.html

NASA Earth Observatory (public domain): http://earthobservatory.nasa.gov/

Landsat: http://landsat.visibleearth.nasa.gov/

USGS EROS (Earth Resources Observatory and Science (EROS) Center) (public domain): http://eros.usgs.gov/#

Natural Earth (public domain): http://www.naturalearthdata.com/


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The Rodriguez-Watson et al. manuscript describes a synthesis of real-world serology testing for SARS-CoV-2. It analyses agreement between SARS-CoV-2 PCR testing and antibody detection assays in 6 US health systems across different settings (inpatient, outpatient, ED, long-term care). The aims of the manuscript were to address gaps in understanding exposure to SARS-CoV-2 and to identify factors associated with seroconversion.

The manuscript is well written and understandable even though it presents a large amount of data.

Although the odds of seropositivity according to demographics give a good understanding of factors that might affect seroconversion, the authors failed to address the gaps in understanding exposure.

Comments:

1- The manuscript describes agreement (PPA) between PCR testing and serology results at 14-90 days post PCR. The approach to analyzing agreement is not complete with no mention of kappa (Cohen, McNemar test, etc.).

2- Is there a loss of agreement depending on day serology was done?

3- When comparing PPA observed according to ethnicity or other factors, are the differences significant?

4- Line 284, the authors mention test quality. Many studies looking at the quality of several serology tests used in this manuscript have been published; can the authors discuss further and refer to these manuscripts? Does the lower agreement observed in this study match the published data?

4- In the results section, no odds ratio is given, only 95% CI. Please add.

5- Factors associated with seropositivity, line 257-266. Are the differences described significant?

6- Line 135 change e.g. by i.e. or remove altogether

7-Line 135, remove comma after IgG, “IgG, [21],”

8- Table 1, “Na” in lowercase; in all other tables “NA” is capitalised. Please homogenise throughout.

9- Line 313, should read sustained antibody production.

10- Line 359, should read the sustained presence of antibodies.

11- Line 371, should be “in vitro”.

Reviewer #2: Summary:

In this important study, Dr. Rodriguez-Watson and colleagues studied 6 large-scale datasets to understand the 2020 performance of real-world use of EUA-approved SARS-CoV-2 serology testing after a positive molecular test. The group demonstrates substantial real-world variance in the PPA of these tests across health contexts.

Major comments

1. What was done statistically regarding individuals with a 2nd serology test? Line 161 implies only the first test was used. What was the concordance/timing between 1st and 2nd tests? Did 2nd tests, where done, have a different PPA?

2. Dataset C seems to have a broadly lower PPA vs the other datasets, has the smallest N, and is relatively geographically restricted. This dataset does not appear to have any manufacturer molecular test names available, but there is no PPA reported in the Unknown/missing category in table 2 for that variable. Is this dataset usable? It would seem that as both the serological and molecular test characteristics would contribute to the PPA, not knowing the molecular test name at all makes using this dataset problematic.

3. What is known about the contribution of “other” molecular tests to this dataset, such as the adoption of “rapid” PCR testing and “in-house” testing that some institutions produced during this time period? Is it possible to address those tests where both the serological and molecular tests are known? As above, the confounding factor of molecular test characteristics could influence the PPA of the serological tests in question.

4. Time between the molecular and serological test seems like a key point as well. Do you have that data? Does time between tests affect the results?

Minor comments

1. The figures are not showing well (scattered pixels) in my Adobe Acrobat Pro DC view of the PDF. Please ensure that high quality images are used during publication. I can access the tif files, which look right.

2. I do not understand why figures 2-6 list the study period as starting in 2019. Is this a typo or is this correct? The methods list 2020 as start date, which would make sense given the dates of the pandemic.

3. Please check line 380, there may be a comma instead of a period.

4. Perhaps figures 2-7 should be combined into a single summary figure (with total N for the entire study) and the individual flowsheets by cohort might be moved to supplemental. The reader might better grasp the overall study with a simpler summary figure.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2023 Feb 3;18(2):e0279956. doi: 10.1371/journal.pone.0279956.r002

Author response to Decision Letter 0


3 Nov 2022

Dear Dr. Padmapriya Banada:

We thank you and the reviewers for your thoughtful comments on our manuscript. We appreciate the opportunity to respond and believe the revisions have improved the manuscript. Below, please find a table with the summary of reviewer comments and our responses. Please be advised that all line numbers referenced in the responses below correspond to line numbers in the tracked changes version of the manuscript. Please reach out with any additional questions, or if more clarification is required. We look forward to hearing from you.

Regards,

Carla Rodriguez-Watson, PhD, MPH

Director of Research

Reagan-Udall Foundation for the FDA

Comments/Questions Response

Journal Requirements

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming.

Please see the new version with the correct requirements. Additionally, we have listed manuscript requirements below with the acknowledgment that they are complete.

Journal Requirement Acknowledgement of Completion

Article Title YES

Author Byline YES

Affiliations YES

Corresponding Authorship YES

Contributorship YES

Consortia or other Group Authors YES

Level 1 Heading YES

Figure Citations YES

Figure Captions YES

File Naming for Figures YES

Display/Numbered Equation N/A

Inline Equation N/A

Level 2 Heading YES

Level 3 heading YES

Please submit your manuscript in double-space paragraph format. YES

Tables and Table Citations YES

Reference Citations YES

Supporting Information Citations YES

Acknowledgments- No funding or competing interest information YES

References YES

Supporting Information Captions YES

File Naming for Supporting Information YES

2. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well.

We have listed our full ethics statement in lines 132-136 of the methods section: “The Western - Copernicus Group (WCG) Institutional Review Board (IRB), the IRB of record for the Reagan-Udall Foundation for the FDA, reviewed the study and determined it to be non-human subjects research. Additionally, all legal and ethical approvals for use of the data included in this study were submitted, reviewed, and/or obtained locally at each contributing dataset by an IRB and/or governing board.”

3. We note that the grant information you provided in the ‘Funding Information’ and ‘Financial Disclosure’ sections do not match. When you resubmit, please ensure that you provide the correct grant numbers for the awards you received for your study in the ‘Funding Information’ section.

Thank you for pointing this out. We have updated the sections with the correct grant numbers in the online submission. We have also included the correct grant numbers below.

Funding Information Section:

Financial support for this work was provided in part by a grant from The Rockefeller Foundation (HTH 030 GA-S).

BDP, CK, GJ used funding provided by Yale University-Mayo Clinic Center of Excellence in Regulatory Science and Innovation (CERSI), a joint effort between Yale University, Mayo Clinic, and the U.S. Food and Drug Administration (FDA) (3U01FD005938) (https://www.fda.gov/).

CK, CMF, SJG, PJE, EHE, NDL, and JLG work was funded by a designated sub-grant from the FDA Foundation.

AJB funded by award number A128219 and Grant Number U01FD005978 from the FDA, which supports the UCSF-Stanford Center of Excellence in Regulatory Sciences and Innovation. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the HHS or FDA.

The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.”

4. Thank you for stating the following in the Competing Interests section:

“CRW receives research funding from Novartis, Merck and AbbVie; and holds minor stock in Gilead. AJB is a co-founder and consultant to Personalis and NuMedii; consultant to Samsung, Mango Tree Corporation, and in the recent past, 10x Genomics, Helix, Pathway Genomics, and Verinata (Illumina); has served on paid advisory panels or boards for Geisinger Health, Regenstrief Institute, Gerson Lehman Group, AlphaSights, Covance, Novartis, Genentech, Merck, and Roche; is a shareholder in Personalis and NuMedii; is a minor shareholder in Apple, Facebook, Alphabet (Google), Microsoft, Amazon, Snap, Snowflake, 10x Genomics, Illumina, Nuna Health, Assay Depot (Scientist.com), Vet24seven, Regeneron, Sanofi, Royalty Pharma, Pfizer, BioNTech, AstraZeneca, Moderna, Biogen, Twist Bioscience, Pacific Biosciences, Editas Medicine, Invitae, Doximity, and Sutro, and several other non-health related companies and mutual funds; and has received honoraria and travel reimbursement for invited talks from Johnson and Johnson, Roche, Genentech, Pfizer, Merck, Lilly, Takeda, Varian, Mars, Siemens, Optum, Abbott, Celgene, AstraZeneca, AbbVie, Westat, several investment and venture capital firms, and many academic institutions, medical or disease specific foundations and associations, and health systems. AJB receives royalty payments through Stanford University, for several patents and other disclosures licensed to NuMedii and Personalis. AJB’s research has been funded by NIH, Northrup Grumman (as the prime on an NIH contract), Genentech, Johnson and Johnson, FDA, Robert Wood Johnson Foundation, Leon Lowenstein Foundation, Intervalien Foundation, Priscilla Chan and Mark Zuckerberg, the Barbara and Gerson Bakar Foundation, and in the recent past, the March of Dimes, Juvenile Diabetes Research Foundation, California Governor’s Office of Planning and Research, California Institute for Regenerative Medicine, L’Oreal, and Progenity.

CLB has intellectual property in and receives royalties from BioFire, Inc. She serves as a scientific advisor to IDbyDNA (San Francisco, CA and Salt Lake City, UT); and is on the Board of the Commonwealth Fund.

CK is a paid employee of Aetion and holds Aetion stock options.

NES is an employee of Optum Labs and owns stock in the parent company UnitedHealth group.

NDL was an employee of Health Catalyst at the time the work was performed.

JLG is a full-time employee of Regenstrief Institute, which provides independent research services to entities including those within the pharmaceutical and medical device industries.

SJG serves as Chief Medical Information Officer for the Indiana Health Information Exchange, and is a founding partner of Uppstroms, LLC.”

Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests). If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.

Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf.

Thank you for pointing this out. There are no conflicts between competing interests and PLOS One policy on data sharing. We have revised our competing interest in the cover letter to include the requested statement below:

This does not alter our adherence to PLOS ONE policies on sharing data and materials.

5. Thank you for stating the following in the Acknowledgments Section of your manuscript:

“Financial support for this work was provided in part by a grant from The Rockefeller Foundation.”

We note that you have provided additional information within the Acknowledgements Section that is not currently declared in your Funding Statement. Please note that funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form. Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement.

Currently, your Funding Statement reads as follows:

“CVRW was funded by a grant from the Rockefeller Foundation.

BDP, CK, GJ used funding provided by Yale University-Mayo Clinic Center of Excellence in Regulatory Science and Innovation (CERSI), a joint effort between Yale University, Mayo Clinic, and the U.S. Food and Drug Administration (FDA) (3U01FD005938).

CK, CMF, SJG, PJE, EHE, NDL, and JLG work was funded by a designated sub-grant from the FDA Foundation.

AJB funded by award number A128219 and Grant Number U01FD005978 from the FDA, which supports the UCSF-Stanford Center of Excellence in Regulatory Sciences and Innovation. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the HHS or FDA.

The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.”

Thank you for bringing this to our attention. We have removed the funding information from the Acknowledgement section and include all funding information as updated above under “Funding Information.”

6. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.

We will update your Data Availability statement to reflect the information you provide in your cover letter.

Thank you for bringing this to our attention. The data that we used for our findings are not bound by legal or ethical restrictions. All relevant data are contained within the manuscript or its supporting documents. Person-level data are unavailable. Qualified researchers interested in accessing deidentified person-level data should contact the corresponding author for more information.

7. One of the noted authors is a group or consortium “Evidence Accelerator Workgroup”. In addition to naming the author group, please list the individual authors and affiliations within this group in the acknowledgments section of your manuscript. Please also indicate clearly a lead author for this group along with a contact email address.

Thank you for bringing this to our attention. The Evidence Accelerator Workgroup refers to our consortium. We had only included those who met the ICMJE authorship criteria as co-authors, but wanted to acknowledge all those who worked behind the scenes to support the work. Given this advice, we will include their names in the acknowledgements. We have added the following phrase to lines 395-399: “We thank all members of the Evidence Accelerator Workgroup for their support and feedback: [list names]”.

8. We note that Figure 1 in your submission contains map images which may be copyrighted. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For these reasons, we cannot publish previously copyrighted maps or satellite images created using proprietary data, such as Google software (Google Maps, Street View, and Earth). For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright.

Thank you for bringing this to our attention. We had previously attached the Content Permission Form as part of the submission package. We have attached the Content Permission Form again as part of this submission and added the reprint information to Figure 1’s caption.

Reviewer 1 comments:

1. The manuscript describes agreement (PPA) between PCR testing and serology results at 14-90 days post PCR. The approach to analyzing agreement is not complete with no mention of kappa (Cohen, McNemar test, etc.).

Thank you for the comment. This analysis focused on serology agreement with positive RNA tests. As such, we did not collect any RNA negative results, which are required to assess kappa.

2. Is there a loss of agreement depending on day serology was done?

Yes, there was a loss of agreement depending on the day the serology test was done; this is one of the results we intend to present in a subsequent publication. The current manuscript is intended to discuss the overall agreement between serology and PCR during the window in which agreement is most expected (days 14-90 after PCR) and by key demographic and clinical factors. A subsequent publication will examine how that agreement changes over different time periods.

3. When comparing PPA observed according to ethnicity or other factors, are the differences significant?

We report differences in PPA by race and ethnicity only where there are sufficient data (i.e. missing <30% and sample size ≥ n=40). We consistently observed higher PPA in Hispanic ethnicity compared to non-Hispanic ethnicity, as demonstrated by complete separation of confidence intervals (Clopper-Pearson). We did not conduct a direct comparison of PPA across groups. We clarified the statistical analysis section (lines 171-173) to describe the meaning of ‘significant differences’ outside of direct comparisons: “We calculated exact (Clopper-Pearson) 95% confidence intervals (CI). We report significant differences where 95% CIs have complete separation, although we did not conduct formal statistical comparisons of PPA between groups.”
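For readers who want to reproduce this kind of interval, the sketch below is a minimal Python illustration of an exact (Clopper-Pearson) confidence interval for a PPA, assuming hypothetical counts (k seropositive results among n molecular-positive patients); it is not the study's analysis code.

```python
from scipy.stats import beta

def clopper_pearson_ci(k: int, n: int, alpha: float = 0.05) -> tuple:
    """Exact (Clopper-Pearson) confidence interval (default 95%) for a binomial
    proportion such as PPA, where k = serology-positive results among
    n molecular-positive patients."""
    lower = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    upper = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lower, upper

# Hypothetical example: 85 seropositive results among 100 confirmed infections.
print(clopper_pearson_ci(85, 100))  # approximately (0.76, 0.91)
```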

4. Line 284, the authors mention test quality. Many studies looking at the quality of several serology tests used in this manuscript have been published; can the authors discuss further and refer to these manuscripts? Does the lower agreement observed in this study match the published data?

Thank you for the great suggestion. We have now included in the Discussion a comparison of our PPA results to those from sensitivities reported in original EUA submissions; as well as in context of other external evaluations (see lines 290-298):

“The reported sensitivities of the serology tests included in this analysis that were submitted for EUA approval were all >95% [36]. Our analysis of multiple large datasets of patients with confirmed SARS-CoV-2 infection suggests that serology tests performed lower than expected, with PPA ranges (a measure analogous to sensitivity) from 65-90%. Our results align with results from smaller, detailed laboratory evaluations suggesting that a lack of harmonization, including optimization of cut-off values, may contribute to decreased overall performance. Additionally, our results align with studies that include more representative samples of milder or asymptomatic persons [37–39].” Although many of these evaluations are limited by much smaller sample sizes than we report (though they are more detailed laboratory studies) and appear limited in the replicability of the same assay result across different laboratories, they do include more diverse populations and note lower performance compared to initial certification-related evaluations.

5. In the results section, no odds ratio is given, only 95% CI. Please add.

Apologies for the confusion; the results given in parentheses in this section are actually ranges of ORs, not 95% CIs. We have clarified this when reporting results in this version.

6. Factors associated with seropositivity, line 257-266. Are the differences described significant?

Thank you for the comment. The differences described in the text were significant. Significant differences can be observed in the figures as those whose 95% CI does not cross 1 on the x-axis. Across the board, we found ages 20-44 years to have lower odds of seropositivity than ages 45-54 years; Hispanic ethnicity to have higher odds than non-Hispanic; and immunocompromised patients to have lower odds of seropositivity than those with no pre-existing conditions. ORs for obesity and presenting with >1 COVID symptom were also significantly elevated in >1 data source. We have clarified this in the text by adding the word “significantly” to lines 269 and 273.
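The reading of significance used here (an odds ratio whose 95% CI excludes 1) can be illustrated with a small logistic-regression sketch. The example below uses synthetic data and illustrative variable names (hispanic, diabetes, immunocompromised, seropositive); it is not the study's model or code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic illustration only: fake data whose variable names mirror the paper's categories.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "hispanic": rng.integers(0, 2, n),
    "diabetes": rng.integers(0, 2, n),
    "immunocompromised": rng.integers(0, 2, n),
})
logit_p = -0.5 + 1.0 * df["hispanic"] + 0.4 * df["diabetes"] - 0.9 * df["immunocompromised"]
df["seropositive"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Fit a logistic model and exponentiate coefficients and CI limits to get ORs with 95% CIs.
model = smf.logit("seropositive ~ hispanic + diabetes + immunocompromised", data=df).fit(disp=0)
or_table = np.exp(model.conf_int().rename(columns={0: "2.5%", 1: "97.5%"}).assign(OR=model.params))
print(or_table)  # an OR is read as significant when its 95% CI excludes 1
```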

7. Line 135 change e.g. by i.e. or remove altogether

Thank you for calling this out. To clarify, PCR was not the only molecular test conducted; the list included NAAT and RT-PCR, so we respectfully leave it as “e.g.”

8. Line 135, remove comma after IgG, “IgG, [21],”

Thank you for bringing this to our attention. We have removed the comma after IgG. Please see line 141.

9. Table 1, “Na” in lowercase; in all other tables “NA” is capitalised. Please homogenise throughout.

Thank you for bringing this to our attention. We have updated Table 1 to read NA instead of Na to match the other tables.

10. Line 313, should read sustained antibody production.

Thank you for bringing this to our attention. We have updated the language in line 331.

11. Line 359, should read the sustained presence of antibodies.

Thank you for bringing this to our attention. We have updated the language in lines 377-378.

12. Line 371, should be “in vitro”.

Thank you for bringing this to our attention. We have updated the language in line 389.

Reviewer #2 comments:

1. What was done statistically regarding individuals with a 2nd serology test? Line 161 implies only the first test was used. What was the concordance/timing between 1st and 2nd tests? Did 2nd tests, where done, have a different PPA?

The majority of persons in our cohort had just one serology test done. In order to compare consistently, we picked the first test done on an individual occurring 14 or more days after their positive molecular test. This choice has the added benefit of avoiding the bias that would occur if we counted the same individual more than once, knowing that individuals are more likely to retest if they get an unexpected result.
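The selection rule described here (one record per person: the earliest serology test drawn in the 14-90 day window after the first positive molecular test) can be sketched as follows. This is a minimal pandas illustration with hypothetical column names (person_id, test_date, result), not the study's actual schema or code.

```python
import pandas as pd

def first_serology_in_window(molecular: pd.DataFrame, serology: pd.DataFrame,
                             min_days: int = 14, max_days: int = 90) -> pd.DataFrame:
    """Return one row per person: the earliest serology test drawn 14-90 days after
    that person's first positive molecular test. Date columns must be datetime64."""
    first_pos = (molecular[molecular["result"] == "positive"]
                 .sort_values("test_date")
                 .groupby("person_id", as_index=False)
                 .first()[["person_id", "test_date"]]
                 .rename(columns={"test_date": "molecular_date"}))
    merged = serology.merge(first_pos, on="person_id", how="inner")
    days = (merged["test_date"] - merged["molecular_date"]).dt.days
    in_window = merged[(days >= min_days) & (days <= max_days)]
    return (in_window.sort_values("test_date")
            .groupby("person_id", as_index=False)
            .first())
```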

2. Dataset C seems to have a broadly lower PPA vs the other datasets, has the smallest N, and is relatively geographically restricted. This dataset does not appear to have any manufacturer molecular test names available, but there is no PPA reported in the Unknown/missing category in table 2 for that variable. Is this dataset usable? It would seem that as both the serological and molecular test characteristics would contribute to the PPA, not knowing the molecular test name at all makes using this dataset problematic.

Keen observations! Not all partners reported the name of the molecular test and thus did not estimate PPA by molecular test (the reference test). All partners did estimate PPA by serology test. We acknowledge in the limitations that we did not analyze molecular-serology pairs. As you note, characteristics of each test may influence PPA results, though we account for many other factors that may affect results. Because of this limitation, we chose not to report results by specific test name, as doing so may suggest a deficiency that we could not accurately explain.

3. What is known about the contribution of “other” molecular tests to this dataset, such as the adoption of “rapid” PCR testing and “in-house” testing that some institutions produced during this time period? Is it possible to address those tests where both the serological and molecular tests are known? As above, the confounding factor of molecular test characteristics could influence the PPA of the serological tests in question.

Rapid tests were not included in this analysis. One site included an ‘in-house’ test that was not FDA approved or for which an EUA was not issued; the majority used only FDA-approved or EUA tests. As such, we did not conduct the suggested analysis.

4. Time between the molecular and serological test seems like a key point as well. Do you have that data? Does time between tests affect the results?

Yes, we agree with your comment. Analysis of PPA by time since the molecular test is the focus of a subsequent manuscript. In the current analysis, we focus on tests 14-90 days from the positive molecular test to maximize the sensitivity of the test.

5. The figures are not showing well (scattered pixels) in my Adobe Acrobat Pro DC view of the PDF. Please ensure that high quality images are used during publication. I can access the tif files, which look right.

Thank you for letting us know. We will resubmit the figures as .tiff files.

6. I do not understand why figures 2-6 list the study period as starting in 2019. Is this a typo or is this correct? The methods list 2020 as start date, which would make sense given the dates of the pandemic.

Thank you for the question. This is correct. March 2019 marks the start of collection of baseline data such as comorbidities and socioeconomic data. The study follow-up period begins on March 1, 2020 and continues until September 30, 2020.

7. Please check line 380, there may be a comma instead of a period.

Thank you for bringing this to our attention. We have removed the comma and changed it to a period. Please see line 399.

8. Perhaps figures 2-7 should be combined into a single summary figure (with total N for the entire study) and the individual flowsheets by cohort might be moved to supplemental. The reader might better grasp the overall study with a simpler summary figure.

Thank you for the suggestion. We created a summary figure (Fig 2) that depicts the general study design and the sample size of each partner’s study cohort; we have therefore renumbered the remaining figures and moved the individual study diagrams to the supplemental figures. Each of the final cohorts (A-F) includes patients who have both a molecular test and a follow-up serology test, as indicated in the methods. The parallel analysis approach entails that each cohort was analyzed separately according to a common protocol. Since this was not an aggregated analysis, we did not aggregate the numbers across partners.

Attachment

Submitted filename: Response to Reviewers_20221011 CLEAN.docx

Decision Letter 1

Padmapriya P Banada

19 Dec 2022

Real-world performance of SARS-Cov-2 serology tests in the United States, 2020.

PONE-D-22-11773R1

Dear Dr. Rodriguez-Watson,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Padmapriya P Banada, PhD

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Thank you for resubmitting your article addressing the comments raised by the reviewers. Thank you for considering the comments constructive. The manuscript is greatly improved and is clear.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you for addressing all the comments.

Two very minor typos if this can be changed before publication

line 111 : Figs 2 - remove s

line 161 add space between CI and reference [33]

Reviewer #2: Thank you for addressing my comments and questions. I have no additional concerns or questions at this time. I recommend acceptance and publication.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

Acceptance letter

Padmapriya P Banada

24 Jan 2023

PONE-D-22-11773R1

Real-world performance of SARS-Cov-2 serology tests in the United States, 2020.

Dear Dr. Rodriguez-Watson:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Padmapriya P Banada

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Fig. Study design diagram dataset A.

    (TIF)

    S2 Fig. Study design diagram dataset B.

    (TIF)

    S3 Fig. Study design diagram dataset C.

    (TIF)

    S4 Fig. Study design diagram dataset D.

    (TIF)

    S5 Fig. Study design diagram dataset E.

    (TIF)

    S6 Fig. Study design diagram dataset F.

    (TIF)

    S1 Table. Characteristics of participating data sources and representative populations.

    (DOCX)

    S2 Table. Phenotype (code-lists) for specified presenting symptoms & pre-existing conditions.

    (DOCX)

    Attachment

    Submitted filename: Response to Reviewers_20221011 CLEAN.docx

    Data Availability Statement

    All relevant data are contained within the paper and its Supporting information files. Person-level data are unavailable.

