J Clin Virol. 2021 Jul 16;142:104914. doi: 10.1016/j.jcv.2021.104914

SARS-CoV-2 serology: Validation of high-throughput chemiluminescent immunoassay (CLIA) platforms and a field study in British Columbia

Inna Sekirov a,b, Vilte E Barakauskas b,c, Janet Simons c,d, Darrel Cook a, Brandon Bates b,c, Laura Burns b,c, Shazia Masud b,e, Marthe Charles b,f, Meghan McLennan g, Annie Mak a, Navdeep Chahil a, Rohit Vijh h, Althea Hayden h, David Goldfarb b,c, Paul N Levett a,b, Mel Krajden a,b, Muhammad Morshed a,b
PMCID: PMC8282439  PMID: 34304088

Abstract

Background

SARS-CoV-2 antibody testing is required for estimating population seroprevalence and vaccine response studies. It may also increase case identification when used as an adjunct to routine molecular testing. We performed a validation study and evaluated the use of automated high-throughput assays in a field study of COVID-19-affected care facilities.

Methods

Six automated assays were assessed: 1) DiaSorin LIAISON™ SARS-CoV-2 S1/S2 IgG; 2) Abbott ARCHITECT™ SARS-CoV-2 IgG; 3) Ortho VITROS™ Anti-SARS-CoV-2 Total; 4) VITROS™ Anti-SARS-CoV-2 IgG; 5) Siemens SARS-CoV-2 Total Assay; and 6) Roche Elecsys™ Anti-SARS-CoV-2. The validation study included 107 samples (42 known positive; 65 presumed negative). The field study included 296 samples (92 PCR positive; 204 PCR negative or not PCR tested). All samples were tested by the six assays.

Results

All assays had sensitivities >90% in the field study; in the validation study, 5/6 assays were >90% sensitive while the DiaSorin assay was 79% sensitive. Specificities and negative predictive values were >95% for all assays. Positive predictive values estimated from the field study at 1–10% disease prevalence were 100% for the Siemens, Abbott and Roche assays, whereas the DiaSorin and Ortho assays had lower PPVs at 1% prevalence that increased at 5–10% prevalence. In the field study, the addition of serology increased case diagnoses by 16% compared with PCR testing alone.

Conclusions

All assays evaluated in this study demonstrated high sensitivity and specificity for samples collected at least 14 days post-symptom onset, while sensitivity was variable at 0–14 days after infection. The addition of serology to the outbreak investigations increased case detection by 16%.

Keywords: SARS-CoV-2, COVID-19, Serologic testing, Outbreak investigation

Abbreviations: SARS-CoV-2, severe acute respiratory syndrome coronavirus-2; COVID-19, coronavirus disease 2019; CLIA, chemiluminescent immunoassay; PCR, polymerase chain reaction; CI, confidence interval; PPV, positive predictive value; NPV, negative predictive value; RPR, rapid plasma reagin

1. Introduction

Severe Acute Respiratory Syndrome Coronavirus-2 (SARS-CoV-2), a member of the Betacoronavirus genus of the Coronaviridae family, and the causative agent of COVID-19 disease, has dominated international attention since its discovery in December 2019 and subsequent rapid spread. The virus exhibits varying degrees of similarity in structural and functional proteins with other Betacoronaviruses [1].

Multiple molecular methods to detect viral nucleic acid were rapidly developed, but the wide spectrum of disease presentation [2], the potential for false negative molecular test results [3], and global shortages of molecular diagnostic reagents made it clear that other testing modalities, such as serology, are necessary to help estimate the true spread of this virus through populations. Furthermore, with multiple COVID-19 vaccines currently being deployed worldwide, and additional vaccine candidates in various stages of clinical trials, tests that accurately determine vaccine-induced seroconversion and differentiate between natural and vaccine-induced immunity will be necessary. The number of available SARS-CoV-2 serological tests has grown rapidly, but their performance varies [4]. To be useful, these tests need to demonstrate high performance characteristics not only in validation studies, but also in clinical and epidemiological settings.

We undertook a multi-site laboratory validation study of six high throughput SARS-CoV-2 chemiluminescent immunoassays (CLIA) and subsequently evaluated the same assays in a serosurvey at two healthcare facilities affected by COVID-19 outbreaks.

2. Methods

Six high throughput chemiluminescent immunoassay (CLIA) platforms were evaluated: 1) LIAISON™ SARS-CoV-2 S1/S2 IgG (DiaSorin IgG; DiaSorin, Italy); 2) ARCHITECT™ SARS-CoV-2 IgG (Abbott IgG; Abbott, USA); 3) VITROS™ Anti-SARS-CoV-2 Total (Ortho T) and 4) VITROS™ Anti-SARS-CoV-2 IgG (Ortho IgG; Ortho Clinical Diagnostics, USA); 5) SARS-CoV-2 Total Assay (Siemens T; Siemens, USA); and 6) Elecsys™ Anti-SARS-CoV-2 (Roche T; Roche, USA). Table 1 lists the SARS-CoV-2 viral antigen targets and performance characteristics claimed by each manufacturer. All of the serology platforms provide a semi-quantitative signal intensity, which is translated to a categorical reactive or non-reactive result. Signal to cut-off ratios for categorical interpretation are platform-specific (Supplementary Table 2). Reactive test results were examined to determine if there was any association of semi-quantitative test signals with age and sex. Where signals fell beyond the dynamic range of the platform (i.e., “above maximum” or “below minimum” signal), the maximum or minimum of the dynamic range was used as a proxy. Consensus reactive and consensus non-reactive samples, i.e., those reactive or non-reactive, respectively, on at least 5/6 platforms, were considered true positive and true negative serologic results (a schematic sketch of this interpretation logic is shown after Table 1).

Table 1.

Manufacturers' specifications for SARS-CoV-2 CLIA serology platforms at the time of the validation study.

Assay | DiaSorin IgG | Ortho IgG | Ortho T | Siemens T | Abbott IgG | Roche T
Analyzer used | LIAISON™ XL | VITROS™ XT 7600 | VITROS™ XT 7600 | ADVIA Centaur® XP | ARCHITECT™ | cobas™ e601
Target epitope | S1 and S2 | S | S1 | S1 receptor binding domain | N | N
Target immunoglobulin | IgG | IgG | Total Ab | IgM and IgG | IgG | Total Ab
Sensitivity at >14 days post-onset (n) | 97.6% (41) | 89% (65) | 100% (49) | 100% (47) | 100% (88) | 100% (29)
Specificity (n) | 98.5% (1,090) | 100% (407) | 100% (400) | 99.8% (1,589) | 99.6% (1,070) | 99.8% (5,272)

Abbreviations: S=spike protein; N=nucleocapsid protein; Ab=antibody

Testing was performed in accordance with manufacturers’ recommendations. A combination of In Vitro Diagnostic and Research Use Only test kits was used.
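To make the interpretation rules concrete, the following is a minimal sketch of how categorical calls and the five-of-six consensus definition described above could be applied to signal-to-cutoff (S/CO) ratios. It is not the authors' code: the 1.0 reactivity threshold and the example values are illustrative assumptions, and real cutoff logic is platform-specific.

```python
# Illustrative sketch only: apply a reactive/non-reactive call to hypothetical
# signal-to-cutoff (S/CO) ratios and derive the 5-of-6 consensus interpretation.

PLATFORMS = ["DiaSorin IgG", "Abbott IgG", "Ortho T", "Ortho IgG", "Siemens T", "Roche T"]

def call_reactive(s_co, threshold=1.0, dynamic_min=None, dynamic_max=None):
    """Categorical call from an S/CO ratio; values beyond the dynamic range are
    clamped to the range limits, mirroring the proxy approach in the Methods."""
    if dynamic_min is not None:
        s_co = max(s_co, dynamic_min)
    if dynamic_max is not None:
        s_co = min(s_co, dynamic_max)
    return s_co >= threshold

def consensus_call(calls):
    """'reactive'/'non-reactive' when at least 5 of 6 platforms agree, else 'discordant'."""
    n_reactive = sum(calls.values())
    if n_reactive >= 5:
        return "reactive"
    if len(calls) - n_reactive >= 5:
        return "non-reactive"
    return "discordant"

# Example: reactive on five of six platforms -> consensus reactive.
example_s_co = [0.8, 3.2, 7.5, 4.1, 10.0, 2.6]
calls = {p: call_reactive(s) for p, s in zip(PLATFORMS, example_s_co)}
print(consensus_call(calls))  # -> reactive
```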

2.1. Validation using characterized samples

The validation panel consisted of 107 serum or plasma specimens. “Known positive” samples (n = 42) were from 37 COVID-19 patients previously diagnosed by PCR testing and were collected at different time points from symptom onset: one patient contributed two samples collected >14 days post-onset; three patients each contributed one sample from 0–14 days and one from >14 days; and one patient contributed one sample from 0–14 days and two from >14 days. Most samples collected 0–14 days post-onset were from hospitalized patients, while those collected >14 days post-onset were from patients who were outpatients at the time of collection, with no information available on their hospitalization history or clinical course (Table 2a). All >14 day samples were collected within 3 months of either PCR-based diagnosis or symptom onset. Presumed negative samples (n = 65) were leftover frozen serum or plasma samples obtained prior to November 2019: 51 from pre-natal and organ donor testing, which accounts for the higher proportion of females and younger ages in the panel, and 14 potential cross-reactive samples (nine known to be serologically positive for another pathogen and five with a known positive result for another respiratory pathogen, confirmed by PCR, within 12 months of serum collection; Table 2b). All CLIA platforms were evaluated with the same set of samples to facilitate comparability of results.

Table 2a.

Validation study panel composition

N % Male Median age (range)
Known Positive 0-14 days post-onset 10 60 68 (44–86)
>14 days post-onset 32 53 65 (23–89)
Presumed Negative Cross-reactivity panel (Table 2b) 14 36 40 (8–77)
Other (prenatal and organ donor sera obtained prior to COVID-19) 51 2 30 (17–53)

Table 2b.

Cross-reactivity panel composition.

Sample type N
Serum sample known to be serologically positive to another pathogen: Toxoplasma gondii IgM 1
Mumps IgM 3
Chikungunya IgM 1
Syphilis RPR 1:32 2
Hepatitis C Virus 2
Serum sample from a patient positive for another respiratory pathogen (confirmed by PCR) within 12 months prior to serum sample collection: Influenza A 1
Influenza B 1
Coronavirus HKU1 1
Coronavirus NL63 1
Coronavirus 229E (also seropositive for Epstein Barr Virus IgG) 1

Abbreviations: RPR=rapid plasma reagin

2.2. COVID-19 outbreak field study

For the serosurvey of facilities affected by COVID-19 outbreaks, a total of 296 serum samples were collected from consenting residents and staff as part of a Public Health investigation. All samples were tested on the six CLIA platforms and results were compared against the participants' COVID-19 status: known positive if the participant had ever tested PCR positive for SARS-CoV-2 and the sample was collected at least 14 days and less than 2 months post-onset (n = 92); unknown if the participant was PCR negative or never PCR tested (n = 204). Specificity estimates were based on consensus negative serologic results.

2.3. Statistical analysis

Ninety-five percent confidence intervals (95% CI) were calculated for overall agreement, the kappa statistic, sensitivity, and specificity. For sensitivity and specificity comparisons, McNemar's test was used and p values <0.05 were deemed statistically significant. For each platform, median reactive test signals were compared between males and females, and between consensus and non-consensus reactive results, using the Mann-Whitney test. Pearson correlation was used to assess potential differences in test signals by age.
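The sketch below illustrates how such comparisons could be reproduced with standard scipy/statsmodels routines. It is an assumption-laden illustration rather than the study's actual analysis code: the paper does not state which software or interval method was used (Wilson score intervals are assumed here), and the paired 2×2 table and the signal/age values are made-up examples; only the 33/42 count corresponds to a reported figure (the DiaSorin IgG overall sensitivity in Table 3).

```python
# Illustrative statistical comparisons (assumed implementation, not the authors' code).
import numpy as np
from scipy.stats import mannwhitneyu, pearsonr
from statsmodels.stats.contingency_tables import mcnemar
from statsmodels.stats.proportion import proportion_confint

# 95% CI for a proportion, e.g. 33/42 known positives reactive (DiaSorin IgG overall).
ci_low, ci_high = proportion_confint(count=33, nobs=42, alpha=0.05, method="wilson")

# McNemar's test for paired sensitivity of two assays on the same known-positive samples.
# Rows/columns: assay A reactive vs. non-reactive, assay B reactive vs. non-reactive.
paired_table = np.array([[30, 3],
                         [1, 8]])  # hypothetical counts
mcnemar_p = mcnemar(paired_table, exact=True).pvalue

# Mann-Whitney test comparing reactive signal intensities between two groups (e.g. sexes).
signals_m = [2.1, 5.4, 8.0, 3.3]
signals_f = [1.9, 6.1, 7.2, 4.0, 5.5]
_, mw_p = mannwhitneyu(signals_m, signals_f, alternative="two-sided")

# Pearson correlation of signal intensity with age among PCR-positive participants.
ages = [45, 52, 67, 71, 80]
signals = [3.2, 4.1, 6.0, 5.5, 7.8]
r, pearson_p = pearsonr(ages, signals)

print(f"95% CI {ci_low:.3f}-{ci_high:.3f}, McNemar p={mcnemar_p:.3f}, "
      f"Mann-Whitney p={mw_p:.3f}, Pearson r={r:.2f} (p={pearson_p:.3f})")
```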

2.4. Ethics

The study was conducted under the BC Centre for Disease Control's legislated mandate for outbreak investigation. Ethics approval for the study was also obtained from the University of British Columbia Clinical Research Ethics Board (approval #H20-01090).

3. Results

3.1. Validation study

Sensitivity, specificity and estimated positive (PPV) and negative (NPV) predictive values for each assay are shown in Table 3. DiaSorin IgG showed lower sensitivity than the other assays for samples collected 0–14 days post-onset (60% vs. 90–100%), but the difference was not statistically significant (all McNemar's p values ≥0.125). Overall sensitivity was also lowest for DiaSorin IgG (78.6%) and was highest for Ortho T (100%). Specificities were high for all assays (range 95.4% to 100%). Overall sensitivities and specificities did not differ significantly between any of the assays (all McNemar's p values ≥0.09). PPVs for DiaSorin IgG, Ortho IgG and Ortho T were lower than for the other assays, especially in lower prevalence scenarios. NPVs were high for all assays (range 97.6% to 100%).

Table 3.

Validation study results for SARS-CoV-2 CLIA assays.

Assay | DiaSorin IgG | Ortho IgG | Ortho T | Siemens T | Abbott IgG | Roche T
Sensitivity % (95% CI), 0–14 days post-onset (n = 10) | 60 (31.3–83.2) | 100 (72.3–100) | 100 (72.3–100) | 90 (59.6–98.2) | 90 (59.6–98.2) | 90 (59.6–98.2)
Sensitivity % (95% CI), >14 days post-onset (n = 32) | 80 (61.4–92.3) | 93.3 (77.9–99.2) | 100 (88.4–100) | 86.7 (69.3–96.2) | 96.7 (82.8–99.9) | 93.3 (77.9–99.2)
Sensitivity % (95% CI), overall (n = 42) | 78.6 (64.1–88.3) | 97.6 (87.7–99.6) | 100 (91.5–100) | 90.5 (77.9–96.2) | 95.2 (84.2–98.7) | 92.9 (81.0–97.5)
Specificity % (95% CI), presumed negative (n = 65) | 95.4 (87.3–98.4) | 98.5 (91.8–99.7) | 98.5 (91.8–99.7) | 100 (94.4–100) | 100 (94.4–100) | 100 (94.4–100)
McNemar's p values for overall sensitivity and specificity (pairwise comparisons):
Ortho IgG vs. DiaSorin IgG 0.146
Ortho T vs. DiaSorin IgG 0.092 | vs. Ortho IgG 1.0
Siemens T vs. DiaSorin IgG 0.727 | vs. Ortho IgG 0.125 | vs. Ortho T 0.063
Abbott IgG vs. DiaSorin IgG 0.388 | vs. Ortho IgG 0.25 | vs. Ortho T 0.25 | vs. Siemens T 0.625
Roche T vs. DiaSorin IgG 0.549 | vs. Ortho IgG 0.125 | vs. Ortho T 1.0 | vs. Siemens T 1.0 | vs. Abbott IgG 1.0
Estimated PPV%/NPV%*, 1% prevalence | 14.7/99.8 | 39.7/100 | 40.2/100 | 100/99.9 | 100/99.9 | 100/99.9
Estimated PPV%/NPV%*, 5% prevalence | 47.3/97.6 | 77.4/99.9 | 77.8/100 | 100/99.5 | 100/99.7 | 100/99.6
Estimated PPV%/NPV%*, 10% prevalence | 75.5/97.6 | 87.8/99.7 | 88.1/100 | 100/99.0 | 100/99.5 | 100/99.2

Abbreviations: CI=confidence interval; PPV= positive predictive value; NPV=negative predictive value

* PPV/NPV calculations are based on cumulative performance at all time points.
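The prevalence-adjusted predictive values above follow the standard Bayes-theorem relationship between sensitivity, specificity and prevalence. The snippet below is a generic worked example (the paper does not publish its calculation code); with the DiaSorin IgG overall estimates it closely reproduces the tabulated 1% prevalence entry.

```python
# Generic prevalence-adjusted predictive values (standard formulas; illustrative only).
def ppv(sens, spec, prev):
    """Positive predictive value at a given disease prevalence."""
    return (sens * prev) / (sens * prev + (1.0 - spec) * (1.0 - prev))

def npv(sens, spec, prev):
    """Negative predictive value at a given disease prevalence."""
    return (spec * (1.0 - prev)) / (spec * (1.0 - prev) + (1.0 - sens) * prev)

# DiaSorin IgG overall estimates from Table 3: sensitivity 78.6%, specificity 95.4%.
# At 1% prevalence this gives PPV ~14.7% and NPV ~99.8%.
print(round(100 * ppv(0.786, 0.954, 0.01), 1), round(100 * npv(0.786, 0.954, 0.01), 1))
```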

None of the three serum samples collected from patients previously diagnosed with endemic non-SARS-CoV-2 coronaviruses showed cross-reactivity on any of the assays. HKU1 is a Betacoronavirus with higher homology to SARS-CoV-2, while NL63 and 229E are Alphacoronaviruses with lower homology. DiaSorin IgG was reactive for three presumed negative prenatal samples, and both Ortho T and Ortho IgG were reactive with one syphilis positive sample (RPR 1:32). These were presumed to be false reactive SARS-CoV-2 tests. No reactive results were observed for the Siemens T, Abbott IgG and Roche T assays for any of the presumed negative samples.

Overall assay agreements and kappas are shown in Supplementary Table 1; overall agreements were >90% except for DiaSorin IgG vs. Roche T (88.8%). Highest agreement was between Abbott IgG and Roche T (98.1%), both of which are based on nucleocapsid antigen.
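As context for these agreement figures, the following is a small, self-contained sketch of how overall percent agreement and Cohen's kappa between two assays' categorical calls are computed; the example calls are made up and not taken from the study data.

```python
# Illustrative only: overall percent agreement and Cohen's kappa for two assays'
# reactive/non-reactive calls on the same set of samples.
def agreement_and_kappa(calls_a, calls_b):
    """calls_a, calls_b: parallel lists of booleans (reactive=True) from two assays."""
    n = len(calls_a)
    po = sum(a == b for a, b in zip(calls_a, calls_b)) / n      # observed agreement
    pa_pos = sum(calls_a) / n                                   # assay A reactive rate
    pb_pos = sum(calls_b) / n                                   # assay B reactive rate
    pe = pa_pos * pb_pos + (1 - pa_pos) * (1 - pb_pos)          # chance agreement
    kappa = (po - pe) / (1 - pe)
    return po, kappa

a = [True, True, False, False, True, False]
b = [True, True, False, True, True, False]
print(agreement_and_kappa(a, b))  # -> (0.833..., 0.666...)
```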

3.2. Outbreak field study

Of 296 samples in the outbreak investigation, 92 were from confirmed SARS-CoV-2 PCR positive participants, and 204 were from participants who tested either PCR negative or were not PCR-tested. CLIA sensitivities for the PCR-confirmed participants were not significantly different and ranged from 95.7% to 98.9% (Table 4). All but four samples had consensus reactive CLIA results; 84/92 samples were reactive on all six assays and 4/92 on five assays. Of the four non-consensus results, three were reactive on four CLIA assays and one on three. Sensitivities in the field study were similar to those in the validation study for samples collected >14 days post-onset, except for DiaSorin IgG, which had higher sensitivity in the field study (95.7% vs. 78.6%). Estimated specificities of the assays ranged from 96.8% to 98.9%. Reactive results that were deemed to be false positive were distributed randomly among all assays. There were no significant differences in sensitivity or specificity for any of the assays using outbreak samples (all McNemar's p values ≥0.4).

Table 4.

Outbreak field study SARS-CoV-2 serological results.

PCR positive (n = 92): female = 75%; mean age 70.4; median age 76.5 (range 20–102). PCR negative/not done or serology consensus non-reactive (n = 189): female = 71% (1 unknown sex); mean age 59.1; median age 57 (range 22–102).
Reactive Non-Reactive % Sensitivity (95% CI) Reactive Non-Reactive Estimated % Specificity (95% CI)
DiaSorin IgG 88 4 95.7 (89.4–98.3) 6 183 96.8 (93.3–98.5)
Abbott IgG 88 4 95.7 (89.4–98.3) 3 186 98.4 (95.4–99.5)
Ortho IgG 91 1 98.9 (94.1–99.8) 2 187 98.9 (96.2–99.7)
Ortho T 91 1 98.9 (94.1–99.8) 4 185 97.9 (94.7–99.2)
Siemens T 90 2 97.8 (92.4–99.4) 2 187 98.9 (96.2–99.7)
Roche T 91 1 98.9 (94.1–99.8) 3 186 98.4 (95.4–99.5)

McNemar's p values for sensitivity and specificity comparisons between the assays were all ≥0.4.

There were 15 participants of initially unknown status who were consensus reactive (12 reactive on all six assays and 3 on five assays). Thus, the addition of serologic testing to the outbreak investigation identified 16.3% (15/92) additional cases compared with PCR testing alone.

3.3. Field study signal intensities

Median signal intensities did not differ significantly for males vs. females for any of the assays (Supplementary Table 2). In addition, there was no strong correlation between signal intensity and age for PCR positive participants (data not shown).

Median signal intensities for consensus reactive results were 2- to 14-fold higher than for non-consensus reactive results; the differences were statistically significant for the Ortho T, Ortho IgG, Abbott IgG and Roche T assays (Supplementary Table 3). This suggests that falsely reactive tests on a given platform are more likely to have lower signals; however, similarly low signals also occurred on all assays among samples with consensus reactive results, implying that signal intensity alone is not a reliable indicator of a false reactive test.

4. Discussion

This study demonstrated high sensitivity for all the SARS-CoV-2 assays we assessed, although some differences were observed. In the validation study, for samples collected >14 days post-onset, sensitivity was >90% for all assays except DiaSorin IgG, which had a non-significantly lower sensitivity than the other assays. Charlton et al. [5] also reported low sensitivity (48%) for this assay at 0–14 days, which increased to 73% by days 15–21. Low sensitivity early in infection could be a result of an assay detecting only IgG antibodies, but both the Ortho and Abbott IgG-specific assays had higher 0–14 day sensitivities, similar to those of the total antibody assays. Other assay validation studies have reported similarly high sensitivities for samples collected after at least 14 days [6], [7], [8], [9], [10]. In the outbreak field study, where all samples were collected >14 days and within 2 months post-onset, >95% sensitivity was observed for all assays.

Given the lower sensitivity for some of the assays 0–14 days after infection, and the resulting likelihood of missing early infections, current serological assays may not be a useful diagnostic tool for routine clinical use. However, the outbreak investigations revealed an increased diagnostic yield of 16% compared to PCR testing alone, suggesting that serology would be useful as an adjunct to molecular testing in clinical situations where the PCR diagnostic window might have been missed [11]. The additional cases detected by serology are likely an under-estimate as other studies have demonstrated low or absent antibody responses for individuals with mild disease [12,13], while those requiring hospitalization tended to have higher responses [14].

In the field study, there was consensus agreement (at least five assays reactive or at least five assays negative) for the majority of known positive (95.7%) and negative (98.4%) patient samples, but no assay had 100% accuracy. Discordant results may indicate variability in serological responses among patients or in antibody detection by the assays. When using a given assay for clinical diagnostic purposes, especially where PPV is low, orthogonal testing with a second assay, perhaps with a different antigen target, might increase confidence that dual reactive results indicate true antibody positivity [15,16].
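As a rough, purely illustrative example of why an orthogonal algorithm helps at low prevalence: if false positives on two assays with different antigen targets were independent (an assumption real assays may not satisfy), requiring both to be reactive multiplies the sensitivities and multiplies the false-positive rates, which sharply raises PPV. The performance figures below are hypothetical, not taken from this study.

```python
# Hypothetical illustration of an orthogonal (two-assay) testing algorithm at 1% prevalence,
# assuming independent false positives between the two assays.
def ppv(sens, spec, prev):
    return (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))

prev = 0.01
sens1, spec1 = 0.95, 0.97   # hypothetical screening assay
sens2, spec2 = 0.95, 0.98   # hypothetical confirmatory assay with a different antigen target

single = ppv(sens1, spec1, prev)
# Requiring both assays reactive: sensitivities multiply, false-positive rates multiply.
combined = ppv(sens1 * sens2, 1 - (1 - spec1) * (1 - spec2), prev)
print(round(100 * single, 1), round(100 * combined, 1))  # e.g. ~24% -> ~94%
```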

The utility of these serology platforms for estimating vaccine responses and population seroprevalence would be enhanced if the antibodies they detect were known to correlate with neutralizing capacity; it has been reported that anti-spike antibodies correlate more closely with neutralizing antibodies than anti-nucleocapsid antibodies do [17]. A study by Kohmer et al. [18] demonstrated that antibodies in all but one of the COVID-19 PCR-positive patients subsequently tested by serology were neutralizing, but that study evaluated different commercial platforms than those in our study.

Strengths of this study include the large number of assays evaluated and the availability of a large number of samples from SARS-CoV-2 outbreaks. A limitation of the validation study is the small number of samples collected early after infection, which may have affected the sub-group sensitivity analysis; the small number also precluded assessment of antibody kinetics during seroconversion. The large number of reactive Siemens T results that exceeded the maximum range of the assay likely affected the assessment of sex differences in test signals, although there were no significant sex differences for the other assays, which had almost no out-of-range results.

In conclusion, all six SARS-CoV-2 serologic assays evaluated in this study showed high sensitivity, specificity and positive predictive value for samples collected ≥14 days after onset, suggesting that these assays will be useful for assessment of population seroprevalence and response to vaccines. Furthermore, the addition of serology to PCR testing during outbreaks increased the overall case yield by 16%.

Declaration of Competing Interests

MK has received grants/contracts paid to his institution from Roche and Hologic related to human papillomavirus research, and from Siemens, DiaSorin, and Ortho, unrelated to the present work. DC has received travel expenses and speaker honoraria from Hologic, unrelated to the present work. All other authors report no conflicts of interest.

Acknowledgments

Author contributions

IS: Study planning, data acquisition and analysis, study methodology, BCCDC laboratory lead

VEB: Data acquisition and analysis, BC Children's Hospital laboratory lead

JS: Data acquisition and analysis; Providence Health Laboratory lead

DC: Data analysis, drafting and finalizing manuscript

BB: BC Children's Hospital laboratory testing

LB: BC Children's Hospital laboratory test supervision

SM: Planning study, clinical sample and data acquisition, data analysis

MC: Planning study, clinical sample and data acquisition, data analysis

MMc: Field study lead and clinical sample collection

AM: BCCDC laboratory test supervision

NC: BCCDC laboratory test supervision

RV: Field study clinical sample collection and data analysis

AH: Planning study, Vancouver Coastal Health lead

DG: Clinical sample and data acquisition

PNL: Planning study, data analysis, drafting manuscript

MK: Overall study lead at BCCDC laboratory

MM: Conceptualization, study methodology, drafting manuscript

All authors: Critical review and approval of the final submitted manuscript

Funding

This work was supported by Genome BC (grant number COV-050). The study sponsor had no role in the collection and interpretation of data, nor in the decision to submit the article for publication. All test reagents were supplied by the respective manufacturers at no charge. The manufacturers had no input into the study analysis or the decision to publish the results.

Acknowledgements

The authors acknowledge the contributions of Ms. Amanda Yu and Dr. Lovedeep Gondara for providing advice on statistical analyses.

Footnotes

Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.jcv.2021.104914.

Appendix. Supplementary materials

mmc1.docx (101KB, docx)

References

1. Wu A, Peng Y, Huang B, Ding X, Wang X, Niu P, et al. Genome composition and divergence of the novel coronavirus (2019-nCoV) originating in China. Cell Host Microbe. 2020;27(3):325–328. doi: 10.1016/j.chom.2020.02.001.
2. Wu D, Wu T, Liu Q, Yang Z. The SARS-CoV-2 outbreak: what we know. Int. J. Infect. Dis. 2020;94:44–48. doi: 10.1016/j.ijid.2020.03.004.
3. Yan Y, Chang L, Wang L. Laboratory testing of SARS-CoV, MERS-CoV, and SARS-CoV-2 (2019-nCoV): current status, challenges, and countermeasures. Rev. Med. Virol. 2020;30(3):e2106. doi: 10.1002/rmv.2106.
4. Deeks JJ, Dinnes J, Takwoingi Y, Davenport C, Spijker R, Taylor-Phillips S, et al. Antibody tests for identification of current and past infection with SARS-CoV-2. Cochrane Database Syst. Rev. 2020;(6). doi: 10.1002/14651858.CD013652.
5. Charlton CL, Kanji JN, Johal K, Bailey A, Plitt SS, MacDonald C, et al. Evaluation of six commercial mid- to high-volume antibody and six point-of-care lateral flow assays for detection of SARS-CoV-2 antibodies. J. Clin. Microbiol. 2020;58(10). doi: 10.1128/JCM.01361-20.
6. Flinck H, Rauhio A, Luukinen B, Lehtimäki T, Haapala AM, Seiskari T, et al. Comparison of 2 fully automated tests detecting antibodies against nucleocapsid N and spike S1/S2 proteins in COVID-19. Diagn. Microbiol. Infect. Dis. 2021;99(1). doi: 10.1016/j.diagmicrobio.2020.115197.
7. Coste AT, Jaton K, Papadimitriou-Olivgeris M, Greub G, Croxatto A. Comparison of SARS-CoV-2 serological tests with different antigen targets. J. Clin. Virol. 2021;134. doi: 10.1016/j.jcv.2020.104690.
8. Schnurra C, Reiners N, Biemann R, Kaiser T, Trawinski H, Jassoy C. Comparison of the diagnostic sensitivity of SARS-CoV-2 nucleoprotein and glycoprotein-based antibody tests. J. Clin. Virol. 2020;129. doi: 10.1016/j.jcv.2020.104544.
9. Trabaud MA, Icard V, Milon MP, Bal A, Lina B, Escuret V. Comparison of eight commercial, high-throughput, automated or ELISA assays detecting SARS-CoV-2 IgG or total antibody. J. Clin. Virol. 2020;132. doi: 10.1016/j.jcv.2020.104613.
10. Therrien C, Serhir B, Bélanger-Collard M, Skrzypczak J, Shank DK, Renaud C, et al. Multicenter evaluation of the clinical performance and the neutralizing antibody activity prediction properties of ten high throughput serological assays used in clinical laboratories. J. Clin. Microbiol. 2020. doi: 10.1128/JCM.02511-20.
11. Pancrazzi A, Magliocca P, Lorubbio M, Vaggelli G, Galano A, Mafucci M, et al. Comparison of serologic and molecular SARS-CoV 2 results in a large cohort in Southern Tuscany demonstrates a role for serologic testing to increase diagnostic sensitivity. Clin. Biochem. 2020;84:87–92. doi: 10.1016/j.clinbiochem.2020.07.002.
12. Pickering S, Betancor G, Galão RP, Merrick B, Signell AW, Wilson HD, et al. Comparative assessment of multiple COVID-19 serological technologies supports continued evaluation of point-of-care lateral flow assays in hospital and community healthcare settings. PLoS Pathog. 2020;16(9). doi: 10.1371/journal.ppat.1008817.
13. Oved K, Olmer L, Shemer-Avni Y, Wolf T, Supino-Rosin L, Prajgrod G, et al. Multi-center nationwide comparison of seven serology assays reveals a SARS-CoV-2 non-responding seronegative subpopulation. EClinicalMedicine. 2020;29. doi: 10.1016/j.eclinm.2020.100651.
14. Chua KYL, Vogrin S, Bittar I, Horvath JH, Wimaleswaran H, Trubiano JA, et al. Clinical evaluation of four commercial immunoassays for the detection of antibodies against established SARS-CoV-2 infection. Pathology. 2020;52(7):778–782. doi: 10.1016/j.pathol.2020.09.003.
15. Centers for Disease Control and Prevention. Interim Guidelines for COVID-19 Antibody Testing in Clinical and Public Health Settings. 2020. Available from: https://www.cdc.gov/coronavirus/2019-ncov/lab/resources/antibody-tests-guidelines.html.
16. Xu G, Emanuel AJ, Nadig S, Mehrotra S, Caddell BA, Curry SR, et al. Evaluation of orthogonal testing algorithm for detection of SARS-CoV-2 IgG antibodies. Clin. Chem. 2020. doi: 10.1093/clinchem/hvaa210.
17. Grzelak L, Temmam S, Planchais C, Demeret C, Tondeur L, Huon C, et al. A comparison of four serological assays for detecting anti-SARS-CoV-2 antibodies in human serum samples from different populations. Sci. Transl. Med. 2020;12(559). doi: 10.1126/scitranslmed.abc3103.
18. Kohmer N, Westhaus S, Rühl C, Ciesek S, Rabenau HF. Clinical performance of different SARS-CoV-2 IgG antibody tests. J. Med. Virol. 2020;92(10):2243–2247. doi: 10.1002/jmv.26145.

