PLOS Medicine. 2022 May 26;19(5):e1004011. doi: 10.1371/journal.pmed.1004011

Accuracy of rapid point-of-care antigen-based diagnostics for SARS-CoV-2: An updated systematic review and meta-analysis with meta-regression analyzing influencing factors

Lukas E Brümmer 1,#, Stephan Katzenschlager 2,#, Sean McGrath 3, Stephani Schmitz 4, Mary Gaeddert 1, Christian Erdmann 5, Marc Bota 6, Maurizio Grilli 7, Jan Larmann 2, Markus A Weigand 2, Nira R Pollock 8, Aurélien Macé 9, Berra Erkosar 9, Sergio Carmona 9, Jilian A Sacks 9, Stefano Ongarello 9, Claudia M Denkinger 1,10,*
Editor: Amitabh Bipin Suthar
PMCID: PMC9187092  PMID: 35617375

Abstract

Background

Comprehensive information about the accuracy of antigen rapid diagnostic tests (Ag-RDTs) for Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) is essential to guide public health decision makers in choosing the best tests and testing policies. In August 2021, we published a systematic review and meta-analysis about the accuracy of Ag-RDTs. We now update this work and analyze the factors influencing test sensitivity in further detail.

Methods and findings

We registered the review on PROSPERO (registration number: CRD42020225140). We systematically searched preprint and peer-reviewed databases for publications evaluating the accuracy of Ag-RDTs for SARS-CoV-2 until August 31, 2021. Descriptive analyses of all studies were performed, and when more than 4 studies were available, a random-effects meta-analysis was used to estimate pooled sensitivity and specificity with reverse transcription polymerase chain reaction (RT-PCR) testing as a reference. To evaluate factors influencing test sensitivity, we performed 3 different analyses using multivariable mixed-effects meta-regression models. We included 194 studies with 221,878 Ag-RDTs performed. Overall, the pooled estimates of Ag-RDT sensitivity and specificity were 72.0% (95% confidence interval [CI] 69.8 to 74.2) and 98.9% (95% CI 98.6 to 99.1). When manufacturer instructions were followed, sensitivity increased to 76.3% (95% CI 73.7 to 78.7). Sensitivity was markedly better on samples with lower RT-PCR cycle threshold (Ct) values (97.9% [95% CI 96.9 to 98.9] and 90.6% [95% CI 88.3 to 93.0] for Ct-values <20 and <25, compared to 54.4% [95% CI 47.3 to 61.5] and 18.7% [95% CI 13.9 to 23.4] for Ct-values ≥25 and ≥30) and was estimated to increase by 2.9 percentage points (95% CI 1.7 to 4.0) for every unit decrease in mean Ct-value when adjusting for testing procedure and patients’ symptom status. Concordantly, we found the mean Ct-value to be lower for true positive (22.2 [95% CI 21.5 to 22.8]) compared to false negative (30.4 [95% CI 29.7 to 31.1]) results. Testing in the first week from symptom onset resulted in substantially higher sensitivity (81.9% [95% CI 77.7 to 85.5]) compared to testing after 1 week (51.8%, 95% CI 41.5 to 61.9). Similarly, sensitivity was higher in symptomatic (76.2% [95% CI 73.3 to 78.9]) compared to asymptomatic (56.8% [95% CI 50.9 to 62.4]) persons. However, both effects were mainly driven by the Ct-value of the sample. 
With regards to sample type, highest sensitivity was found for nasopharyngeal (NP) and combined NP/oropharyngeal samples (70.8% [95% CI 68.3 to 73.2]), as well as in anterior nasal/mid-turbinate samples (77.3% [95% CI 73.0 to 81.0]). Our analysis was limited by the included studies’ heterogeneity in viral load assessment and sample origination.

Conclusions

Ag-RDTs detect most of the individuals infected with SARS-CoV-2, and almost all (>90%) when high viral loads are present. With viral load, as estimated by Ct-value, being the most influential factor on their sensitivity, they are especially useful to detect persons with high viral load who are most likely to transmit the virus. To further quantify the effects of other factors influencing test sensitivity, standardization of clinical accuracy studies and access to patient level Ct-values and duration of symptoms are needed.


Lukas Brümmer and co-workers report an updated systematic review and meta-analysis on the accuracy of antigen-based rapid diagnostic tests for SARS-CoV-2 infection.

Author summary

Why was this study done?

  • Antigen rapid diagnostic tests (Ag-RDTs) for Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) have proven to be a cornerstone in the fight against the Coronavirus Disease 2019 (COVID-19) pandemic.

  • In an earlier analysis, we found Ag-RDTs to be 76.3% sensitive and 99.1% specific, but with sensitivity varying by test manufacturer, testing procedure, and the population in which the tests were used.

  • We now present an updated analysis and explore the factors influencing Ag-RDTs’ sensitivity and driving heterogeneity in the results in further detail.

What did the researchers do and find?

  • We searched multiple preprint and peer-reviewed databases for clinical accuracy studies evaluating Ag-RDTs for SARS-CoV-2 at the point of care.

  • Ag-RDTs proved to be 76.3% (95% confidence interval (CI) 73.7 to 78.7) sensitive and 99.1% (95% CI 98.8 to 99.3) specific, when performed as per the manufacturer’s instructions.

  • Sensitivity increased by 2.9 percentage points (95% CI 1.7 to 4.0) for each one-unit decrease in the mean cycle threshold (Ct) value, a semiquantitative measure from the real-time polymerase chain reaction test, and was highest in samples with a Ct-value <20 (i.e., a high viral load; sensitivity of 97.9% [95% CI 96.9 to 98.9]).

  • Higher sensitivity was also found in samples originating from symptomatic compared to asymptomatic persons, especially when study participants were still within the first week of symptom onset, but these effects were mainly driven by the sample’s viral load.

What do these findings mean?

  • Compared to our previous analysis, Ag-RDTs continue to show high sensitivity and excellent specificity in detecting SARS-CoV-2.

  • With viral load being the main driver behind test sensitivity, Ag-RDTs detect almost all of the persons with high viral load, who are at the greatest risk of transmitting the virus.

  • While it is unlikely that the overall performance of Ag-RDTs will substantially change, further research is needed to analyze the accuracy of Ag-RDTs for different virus variants and sample types, as well as methods of test performance (e.g., self-performed, instrument based) in more detail.

Introduction

Antigen rapid diagnostic tests (Ag-RDTs) for Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) have proven to be a cornerstone in fighting the Coronavirus Disease 2019 (COVID-19) pandemic, as they provide results quickly and are easy to use [1]. Nevertheless, the Ag-RDTs’ performance differs widely between manufacturers, the way they are performed, and the patients in which they are used [2,3]. Thus, a comprehensive synthesis of evidence on commercially available Ag-RDTs and the factors influencing their accuracy is vital to guide public health decision makers in choosing the right test for their needs [4].

Starting in October 2020, we conducted a living systematic review (available online at www.diagnosticsglobalhealth.org, updated weekly until August 31, 2021), summarizing the accuracy of commercially available Ag-RDTs reported in scientific literature. To equip public health decision makers with the latest findings, we published the results of our first review as soon as possible in February 2021 (including literature until December 15, 2020) [5]. After peer review and inclusion of studies from 4 further months (until April 30, 2021), we published an updated review. In that update, when testing was performed as per the manufacturer’s instructions, pooled estimates of Ag-RDT sensitivity and specificity were 76.3% (95% confidence interval (CI) 73.1% to 79.2%) and 99.1% (95% CI 98.8% to 99.4%). The most sensitive test was the SARS-CoV-2 Antigen Test (LumiraDx, United Kingdom; henceforth called LumiraDx) [4].

Since our last update, many additional studies have been published, with a substantial increase in studies assessing asymptomatic participants, allowing for further sub-analysis of findings [3,6]. In addition, we and others found Ag-RDT sensitivity to decrease significantly in persons with lower viral load. Viral load is usually estimated through the cycle threshold (Ct) value, i.e., the number of amplification cycles a reverse transcription polymerase chain reaction (RT-PCR) must run before viral RNA can be detected, with a low Ct-value indicating a high viral load. Furthermore, sensitivity decreased in asymptomatic persons and in persons with more than 7 days since symptom onset (DOS > 7) [4]. However, studies of symptomatic patients typically enroll persons within a few days of symptom onset [7], when viral load is highest [8,9]. In contrast, studies including only asymptomatic persons have a higher chance of enrolling persons at a later stage of disease and thus with lower viral load. Therefore, the decrease in Ag-RDT sensitivity might be driven by viral load alone, irrespective of persons’ symptom status.

With the present work, we aim not only to give an updated overview on the accuracy of commercially available Ag-RDTs, but also to further explore the impact of viral load, the presence of symptoms, and testing procedure on the accuracy of Ag-RDTs.

Methods

We developed a study protocol following standard guidelines for systematic reviews [10,11], which is available in the Supporting information (S1 Text). We also completed the PRISMA checklist (S1 PRISMA Checklist) and registered the review on PROSPERO (registration number: CRD42020225140).

Search strategy

We performed a search of the databases PubMed, Web of Science, medRxiv, and bioRxiv. The search terms were developed with an experienced medical librarian (MG), using combinations of subject headings (when applicable) and text-words for the concepts of the search question, and checked against an expert assembled list of relevant papers. The main search terms were “Severe Acute Respiratory Syndrome Corona-virus 2,” “COVID-19,” “Betacoronavirus,” “Coronavirus,” and “Point of Care Testing” with no language restrictions. The full list of search terms is available in S2 Text. Also, 1 author (LEB) manually searched the website of FIND, the global alliance for diagnostics (https://www.finddx.org/sarscov2-eval-antigen/), for additional relevant studies, and search results were checked by a second author (SK). We performed the search biweekly through August 31, 2021. The last manual search of the FIND website was performed on September 10, 2021. In addition to conducting the present review, we updated our website www.diagnosticsglobalhealth.org weekly with the latest search results based on the methods outlined below.

Inclusion criteria

We included studies evaluating the accuracy of commercially available Ag-RDTs to establish a diagnosis of SARS-CoV-2 infection at the point-of-care (POC), against RT-PCR or cell culture as reference standard. We included all study populations irrespective of age, presence of symptoms, or study location. No language restrictions were applied. We considered cohort studies, nested cohort studies, case–control or cross-sectional studies, and randomized studies. We included both peer-reviewed publications and preprints.

We excluded studies in which patients were tested for the purpose of monitoring or ending quarantine. Also, publications with a population size smaller than 10 were excluded (although the size threshold of 10 is arbitrary, such small studies are more likely to give unreliable estimates of sensitivity and specificity). Analytical accuracy studies, where tests are performed on spiked samples with a known quantity of virus, were also excluded.

Index tests

Ag-RDTs for SARS-CoV-2 aim to detect infection by recognizing viral proteins (typically the SARS-CoV-2 nucleoprotein). Most Ag-RDTs dedicated for POC deployment use specific labeled antibodies attached to a nitrocellulose matrix strip (lateral flow assay) to capture and detect the viral antigen. Successful binding of the antigen to the antibodies is either detected visually by the appearance of a line on the matrix strip or through a specific reader instrument for fluorescence detection. Other POC instrument-based tests use chips or cartridges that enable an automated immunoassay testing procedure. Ag-RDTs typically provide results within 10 to 30 minutes [3].

Reference standard

Viral culture detects viable virus that is relevant for transmission but is only available in research settings. Since RT-PCR tests are more widely available and SARS-CoV-2 RNA (as reflected by RT-PCR Ct-value) highly correlates with SARS-CoV-2 antigen quantities [12], we considered RT-PCR an acceptable reference standard for the purposes of this systematic review. Where an international standard for the correlation of the viral load to the Ct-values was used, we also report the viral load [13].

Study selection and data extraction

Two reviewers (LEB and SS, LEB and CE, or LEB and MB) reviewed the titles and abstracts of all publications identified by the search algorithm independently, followed by a full-text review of those eligible, to select the articles for inclusion in the systematic review. Any disputes were solved by discussion or by a third reviewer (CMD).

Studies that assessed multiple Ag-RDTs or presented results based on differing parameters (e.g., various sample types) were considered as individual data sets. At first, 4 authors (SK, CE, SS, and MB) extracted 5 randomly selected papers in parallel to align data extraction methods. Afterwards, data extraction and the assessment of methodological quality and independence from test manufacturers (see below) were performed by 1 author per paper (LEB, SK, CE, SS, or MB) and reviewed by a second (LEB, SK, SS, or MB). Any differences were resolved by discussion or by consulting a third author (CMD). The data items extracted can be found in the Supporting information (S1 Table).

Assessment of methodological quality

The quality of the clinical accuracy studies was assessed by applying the QUADAS-2 tool [14]. The tool evaluates 4 domains: study participant selection, index test, reference standard, and flow and timing. For each domain, the risk of bias is analyzed using different signaling questions. Beyond the risk of bias, the tool also evaluates the applicability of each included study to the research question for every domain. We prepared a QUADAS-2 assessment guide specific to the needs of this review, which can be found in the Supporting information (S3 Text).

Assessment of independence from manufacturers

We examined whether a study received financial support from a test manufacturer (including the free provision of Ag-RDTs), whether any study author was affiliated with a test manufacturer, and whether a respective conflict of interest was declared. Studies were judged not to be independent from the test manufacturer if at least 1 of these aspects was present; otherwise, they were considered to be independent.

Statistical analysis and data synthesis

We extracted raw data from the studies and recalculated performance estimates where possible based on the extracted data. Also, some primary studies reported the median Ct-value along with the first and third quartiles (interquartile range [IQR]) and/or minimum and maximum values rather than the sample mean and standard deviation. To incorporate these studies in our analyses, we applied the quantile estimation approach [15] to estimate the mean and standard deviation of the Ct-values. In an effort to use as much of the heterogeneous data as possible, the cutoffs for the Ct-value groups were relaxed by 2 to 3 points within each range. The <20 group included values reported up to ≤20, the <25 group included values reported as ≤24 or <25 or 20 to 25, and the <30 group included values from ≤29 to ≤33 and 25 to 30. The ≥25 group included values reported as ≥25 or 25 to 30, and the ≥30 group included values from ≥30 to ≥35. For the same reason, when categorizing by age, the age group <18 years (children) included samples from persons whose age was reported as <16 or <18 years, whereas the age group ≥18 years (adults) included samples from persons whose age was reported as ≥16 or ≥18 years. Also, for the symptom duration groups, the ≤7 days group included ≤4, ≤5, ≤6, 6 to 7, ≤7, and ≤9 days, and the >7 days group included >5, 6 to 10, 6 to 21, >7, and 8 to 14 days. Relaxing the boundaries for the Ct-value, age, and duration of symptoms subgroups resulted in some overlap within the respective groups. The predominant variants of concern (VoC) for each study were identified using the online tool CoVariants [16] with respect to the stated study period. The respective VoCs were classified according to the current WHO listing [17]. The raw data can be found in the Supporting information (S2 Table) and with more details online (https://doi.org/10.11588/data/T3MIB0).
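The quantile estimation step can be illustrated with a short sketch. The review used the approach of McGrath and colleagues [15]; the cruder quartile-based approximation below is not the authors' method and uses made-up numbers, but it conveys the idea of recovering an approximate mean and standard deviation from a reported median and IQR.

```python
# Illustrative sketch only: the review applied the quantile estimation
# approach of McGrath et al. [15]; the simpler normal-quantile approximation
# here is a stand-in, and the example numbers are hypothetical.

def approx_mean_sd_from_quartiles(q1, median, q3):
    """Approximate the sample mean and SD from the median and quartiles.

    mean ~ (q1 + median + q3) / 3; SD ~ IQR / 1.35 (normal-quantile spacing).
    """
    mean = (q1 + median + q3) / 3.0
    sd = (q3 - q1) / 1.35
    return mean, sd

# Hypothetical study reporting a median Ct of 24 with an IQR of 20 to 30:
mean_ct, sd_ct = approx_mean_sd_from_quartiles(20, 24, 30)
```

Under a roughly symmetric Ct distribution, this recovers a mean near the median (here about 24.7) and an SD of roughly IQR/1.35 (about 7.4).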

If 4 or more data sets were available with at least 20 RT-PCR-positive samples per data set for a predefined analysis, a meta-analysis was performed. We report pooled estimates of sensitivity and specificity for SARS-CoV-2 detection along with 95% CIs using a bivariate model (implemented with the “reitsma” command from the R package “mada,” version 0.5.10). Summary receiver operating characteristic (sROC) curves were created for the 2 Ag-RDTs with the highest sensitivity. In subgroup analyses (below), where papers presented data only on sensitivity, a univariate random effects inverse variance meta-analysis was performed (using the “metagen” command from the R package “meta,” version 5.1-1, and the “rma” command from the R package “metafor,” version 3.0-2). When there were fewer than 4 studies for an index test, only a descriptive analysis was performed, and accuracy ranges are reported.
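For intuition, the univariate fallback can be sketched as a DerSimonian-Laird random-effects pooling of sensitivities on the logit scale. This Python sketch with hypothetical counts is only a simplified stand-in for the "metagen" analysis; the primary analysis used the bivariate "reitsma" model in R, which additionally models the correlation between sensitivity and specificity.

```python
import math

def pool_logit_sensitivity(tp_fn_pairs):
    """DerSimonian-Laird random-effects pooling of study sensitivities on
    the logit scale -- a simplified sketch of a univariate meta-analysis.
    Input: (true positives, false negatives) per study; hypothetical data."""
    # Per-study logit sensitivity and within-study variance (0.5 correction
    # guards against zero cells).
    y, v = [], []
    for tp, fn in tp_fn_pairs:
        a, b = tp + 0.5, fn + 0.5
        y.append(math.log(a / b))
        v.append(1 / a + 1 / b)
    w = [1 / vi for vi in v]                      # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    # DerSimonian-Laird estimate of the between-study variance tau^2
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)
    wstar = [1 / (vi + tau2) for vi in v]         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(wstar, y)) / sum(wstar)
    return 1 / (1 + math.exp(-pooled))            # back-transform to a proportion

# Three hypothetical studies with 100 RT-PCR positives each:
sens = pool_logit_sensitivity([(80, 20), (70, 30), (90, 10)])
```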

We prepared forest plots for the sensitivity and specificity of each test and visually evaluated the heterogeneity between studies. In addition, heterogeneity was assessed by calculating Cochran’s Q and I2 indices. Because there is no standard method taking into account the correlation between the sensitivity and specificity in bivariate models, we calculated these indices from a pooled diagnostic odds ratio using the “madauni” function from the “mada” package. However, as this was the only feasible approach and we do not view it as fully statistically rigorous, we present the resulting Cochran’s Q and I2 only in the Supporting information (S3 Table). For the univariate models, the heterogeneity measures were obtained from the “metagen” model output directly and are reported in the results section.
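As an illustration of the heterogeneity measures, Cochran's Q and I2 can be computed from study-level effect estimates (e.g., log diagnostic odds ratios) and their variances. The sketch below uses hypothetical inputs and simple fixed-effect weights, not the "madauni" implementation.

```python
def cochran_q_i2(effects, variances):
    """Cochran's Q and the I^2 index (%) from study-level effect estimates
    and their variances -- an illustrative sketch with fixed-effect weights.
    """
    w = [1 / v for v in variances]
    mean = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - mean) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    # I^2: share of total variability attributable to between-study heterogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical log diagnostic odds ratios and variances from 4 studies:
q, i2 = cochran_q_i2([1.2, 2.0, 3.1, 1.5], [0.05, 0.08, 0.06, 0.07])
```

With these made-up inputs, Q far exceeds its degrees of freedom and I2 lands above 85%, the kind of substantial heterogeneity that motivates random-effects models and subgroup analyses.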

We predefined subgroups for meta-analysis based on the following characteristics: Ct-value range, testing procedure in accordance with manufacturer’s instructions as detailed in the instructions for use (IFU) (henceforth called IFU-conforming) versus not IFU-conforming, age (<18 versus ≥18 years), sample type, presence or absence of symptoms, symptom duration (≤7 days versus >7 days), viral load, and predominant SARS-CoV-2 variant. We also provide mean Ct-value across true positive (TP) and false negative (FN) test results. For categorization by sample type, we assessed (1) nasopharyngeal (NP) alone or combined with other (e.g., oropharyngeal [OP]); (2) OP alone; (3) anterior nasal (AN) or mid-turbinate (MT); (4) a combination of bronchoalveolar lavage and throat wash (BAL/TW); or (5) saliva.

We applied multivariable linear mixed-effects meta-regression models to explore factors that affect diagnostic test sensitivity. Based on our previous analysis [4], we a priori defined an individual’s time since infection and sample type and condition as underlying factors, influencing test sensitivity through an individual’s symptom status (symptomatic versus asymptomatic), the sample’s viral load (estimated by the mean Ct-value as presented in the study for the subcohort of interest), and the testing procedure (IFU- versus not IFU-conforming). We performed 3 different analyses, each of which obtained unadjusted and adjusted estimates (i.e., an estimate of the association between a factor and test sensitivity, holding the other covariates in the model constant) of the effect of factors on test sensitivity.

In the first analysis, we estimated the direct effect of symptom status, viral load, and testing procedure on test sensitivity. For the second and third analysis, we restricted the meta-regression models to data sets of symptomatic persons due to a lack of data. Specifically, the second analysis assessed the effect of time since infection (estimated as the sample mean of symptom duration), viral load, and testing procedure on test sensitivity. The third analysis also assessed the effect of time since infection, viral load, and testing procedure on test sensitivity, but depicted the time since infection as a binary covariate of the symptom duration subgroup (≤7 versus >7 days). Further details on the implementation of the meta-regression models and the underlying causal diagrams are available in the Supporting information (Figs A and B in S4 Text). Data sets with fewer than 5 RT-PCR positives were excluded. We considered an effect to be statistically significant when the regression coefficient’s 95% CI did not include 0. The analyses were performed using the “metafor” R package, version 3.0-2 [18].
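The core idea of the meta-regression, relating study-level sensitivity to the study's mean Ct-value, can be sketched as a weighted least-squares slope. The data below are hypothetical and the model is deliberately minimal; the actual analyses were multivariable mixed-effects models fitted with "metafor".

```python
# Minimal sketch of the meta-regression idea: the slope of sensitivity on
# mean Ct-value across studies. Hypothetical data; the review's models were
# multivariable mixed-effects meta-regressions, not this simple WLS fit.

def weighted_slope(x, y, w):
    """Weighted least-squares slope of y on x."""
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    num = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    return num / den

# Hypothetical studies: mean Ct-value vs. observed sensitivity, equal weights
mean_ct = [20, 24, 28, 32]
sens = [0.95, 0.85, 0.65, 0.40]
slope = weighted_slope(mean_ct, sens, [1, 1, 1, 1])
```

Here the fitted slope is negative (about -0.046, i.e., sensitivity falls as mean Ct rises), the same direction as the reported 2.9 percentage-point gain per unit decrease in mean Ct-value.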

As recommended to investigate publication bias for diagnostic test accuracy meta-analyses, we performed the Deeks’ test for funnel-plot asymmetry [19] (using the “midas” command in Stata, version 15); a p-value < 0.10 for the slope coefficient indicates significant asymmetry.
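In outline, Deeks' test regresses the log diagnostic odds ratio (lnDOR) on the inverse square root of the effective sample size (ESS), weighted by ESS; funnel-plot asymmetry corresponds to a slope significantly different from zero (p < 0.10). A minimal Python sketch with hypothetical 2x2 counts (the review used the "midas" command in Stata, which also supplies the significance test omitted here):

```python
import math

def deeks_regression_slope(studies):
    """Slope of lnDOR on 1/sqrt(ESS), weighted by ESS -- the regression at
    the heart of Deeks' funnel-plot asymmetry test. Inputs are hypothetical
    (tp, fp, fn, tn) tuples; the p-value computation is omitted."""
    x, y, w = [], [], []
    for tp, fp, fn, tn in studies:
        a, b, c, d = (v + 0.5 for v in (tp, fp, fn, tn))  # zero-cell correction
        lndor = math.log((a * d) / (b * c))               # log diagnostic OR
        n_pos, n_neg = a + c, b + d
        ess = 4 * n_pos * n_neg / (n_pos + n_neg)         # effective sample size
        x.append(1 / math.sqrt(ess))
        y.append(lndor)
        w.append(ess)
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    num = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    return num / den

slope = deeks_regression_slope([(80, 5, 20, 95), (40, 3, 10, 47), (15, 2, 5, 18)])
```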

Sensitivity analysis

Three sensitivity analyses were performed: estimation of sensitivity and specificity excluding case–control studies, estimation of sensitivity and specificity excluding not peer-reviewed studies, and estimation of sensitivity and specificity excluding studies that were potentially influenced through test manufacturers. We compared the results of each sensitivity analysis against the overall results to assess the potential bias introduced by case–control, not peer-reviewed, and manufacturer-influenced studies.

Results

Summary of studies

The systematic search resulted in 31,254 articles. After removing duplicates, 11,462 articles were screened, and 433 papers were considered eligible for full-text review. Of these, 259 were excluded because they did not present primary data or the Ag-RDT was not commercially available. For similar reasons, we also excluded 4 studies from the FIND website. A list of the studies excluded and their reason for exclusion can be found in the Supporting information (S5 Text). This left 174 studies from the systematic search [20–193] as well as a further 20 studies from the FIND website [194–213] to be included in the review (Fig 1).

Fig 1. PRISMA flow diagram. Based on Page and colleagues [214]. Ag-RDT, antigen rapid diagnostic tests; IFU, instructions for use.

At the end of the data extraction process, 21 studies were still in preprint form [20,21,25,51,54,59,62,69,73,78,88,104,120,125,133,164,171,172,177,178,190]. All studies included were written in English, except for 3 in Spanish [57,66,138], 1 in Turkish [99], and 1 in French [157]. Out of the 194 studies, 26 conducted a case–control study [25,36,38,70,71,76,85,88,92–94,97,98,100,107,112,139,144,148,149,155,160,170,172,186,188], while the remaining 168 were cross-sectional or cohort studies. The reference method was RT-PCR in all except 1 study, which used viral culture [139].

The 194 studies were divided into 333 data sets. Across these, 76 different Ag-RDTs were evaluated (75 lateral flow assays, of which 63 are interpreted visually and 12 required an automated, proprietary reader; 1 assay is an automated immunoassay). The most common reasons for testing were the occurrence of symptoms (98 data sets, 29.4% of data sets) and screening of asymptomatic persons with (3; 0.9%) or without (22; 6.6%) close contact to a SARS-CoV-2 confirmed case. In 142 (42.6%) of the data sets, individuals were tested due to more than 1 of the reasons mentioned and for 68 (20.4%) the reason for testing was unclear.

In total, 221,878 Ag-RDTs were performed, with a mean number of samples per data set of 666 (range 15 to 22,994). The age of the individuals tested was specified for only 90,981 samples, of which 84,119 (92.5%) were from adults (age group ≥18) and 6,862 (7.5%) from children (age group <18). Symptomatic persons comprised 74,118 (33.4%) samples, while 97,982 (44.2%) samples originated from asymptomatic persons, and for 49,778 (22.4%) samples, the participant’s symptom status was not stated by the authors. The most common sample type evaluated was NP and mixed NP/OP (117,187 samples, 52.8%), followed by AN/MT (86,354 samples, 38.9%). There was substantially less testing done on the other sample types, with 3,586 (1.6%) tests done from OP samples, 1,256 (0.6%) from saliva, 219 (0.1%) from BAL/TW, and for 13,276 (6.0%) tests the type of sample was not specified in the respective studies.

A summary of the tests evaluated in clinical accuracy studies, including study author and sample size, as well as study design aspects that could potentially influence test performance, such as sample type, sample condition, IFU conformity, and symptom status, can be found in the Supporting information (S2 Table). The Standard Q test (SD Biosensor, South Korea; distributed in Europe by Roche, Germany; henceforth called Standard Q) was the most frequently used with 57 (17.1%) data sets and 36,246 (16.3%) tests, while the Panbio test (Abbott Rapid Diagnostics, Germany; henceforth called Panbio) was assessed in 55 (16.5%) data sets with 38,620 (17.4%) tests performed. Detailed results for each clinical accuracy study are available in the Supporting information (S1 Fig).

Methodological quality of studies

The findings on study quality using the QUADAS-2 tool are presented in Fig 2A and 2B. In 294 (88.3%) data sets, a relevant study population was assessed. However, for only 68 (20.4%) of the data sets, the selection of study participants was considered representative of the setting and population chosen (i.e., they avoided inappropriate exclusions or a case–control design, and enrollment occurred consecutively or randomly).

Fig 2. (a) Methodological quality of the clinical accuracy studies (risk of bias). (b) Methodological quality of the clinical accuracy studies (applicability).

The conduct and interpretation of the index tests were considered to have low risk of bias in 176 (52.9%) data sets (e.g., through appropriate blinding of persons interpreting the visual read-out). However, for 155 (46.5%) data sets, sufficient information to clearly judge the risk of bias was not provided. In only 151 (45.3%) data sets, the Ag-RDTs were performed according to IFU, while 138 (41.4%) were not IFU-conforming, potentially impacting the diagnostic accuracy; for 44 (13.2%) data sets, the IFU status was unclear. The most common deviations from the IFU were (1) use of samples that were prediluted in transport media not recommended by the manufacturer (113 data sets, 12 unclear); (2) use of banked samples (103 data sets, 12 unclear); and (3) a sample type that was not recommended for Ag-RDTs (8 data sets, 11 unclear).

In 126 (37.8%) data sets, the reference standard was performed before the Ag-RDT, or the operator conducting the reference standard was blinded to the Ag-RDT results, resulting in a low risk of bias. In almost all other data sets (206; 61.9%), this risk could not be assessed, due to missing information and for 1 data set (0.3%) intermediate concern was raised. The applicability of the reference test was judged to be of low concern for all data sets, as viral culture or RT-PCR are considered to adequately define the target condition for the purpose of this study.

In 327 (98.2%) data sets, the samples for the index test and reference test were obtained at the same time, while this was unclear in 6 (1.8%). In 227 (68.2%) data sets, the same RT-PCR assay was used as the reference for all included samples, while in 85 (25.5%) data sets, multiple RT-PCR assays were used as the reference. The RT-PCR systems used most frequently were the Cobas SARS-CoV-2 Test (Roche, Germany; used in 79 data sets [23.7%]), the Allplex 2019-nCoV Assay (Seegene, South Korea; used in 61 data sets [18.3%]), and the GeneXpert (Cepheid, United States, CA; used in 34 data sets [10.2%]). For 21 (6.3%) data sets, the RT-PCR used as reference standard was unclear. The RT-PCR system, its limit of detection (if publicly available from the manufacturer), and the sample type used in each data set can be found in the Supporting information (S2 Table). Furthermore, for 19 (5.7%) data sets, there was a concern that not all selected study participants were included in the analysis.

Finally, 45 (23.2%) of the studies received financial support from the Ag-RDT manufacturer. In 13 of these as well as in 2 others (in total 7.7% of all studies), employment of the authors by the manufacturer of the Ag-RDT studied was indicated. The respective studies are listed in the Supporting information (S6 Text). Overall, a competing interest was found in 47 (24.2%) of the studies. Detailed assessment of each QUADAS domain can be found in the Supporting information (S2 Fig).

Detection of SARS-CoV-2 infection

Overall, 38 data sets were excluded from the meta-analysis, as they included fewer than 20 RT-PCR positive samples. An additional 28 data sets were missing either sensitivity or specificity and were only considered for univariate analyses. The remaining 267 data sets, evaluating 198,584 tests, provided sufficient data for bivariate analysis. The results are presented in Fig 3A–3E. Detailed results for the subgroup analysis are available in the Supporting information (S3–S7 Figs).

Fig 3. (a–f) Pooled sensitivity and specificity by IFU conformity, Ct-value*, sample type, symptom status, duration of symptoms, and age. *Low Ct-values are the RT-PCR semiquantitative correlate for a high virus concentration; only sensitivity calculated. AN, anterior nasal; CI, confidence interval; IFU, instructions for use; MT, mid-turbinate; N, number of; NP, nasopharyngeal; RT-PCR, reverse transcription polymerase chain reaction.

Including any test and type of sample, the pooled estimates of sensitivity and specificity were 72.0% (95% CI 69.8 to 74.2) and 98.9% (95% CI 98.6 to 99.1), respectively. When comparing IFU-conforming and non-IFU-conforming testing, sensitivity differed markedly: 76.3% (95% CI 73.7 to 78.7) compared to 66.7% (95% CI 62.6 to 70.6), respectively. Pooled specificity was similar in both groups: 99.1% (95% CI 98.8 to 99.3) and 98.4% (95% CI 97.8 to 98.8), respectively (Fig 3A).
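For reference, each data set's sensitivity and specificity derive from a 2x2 table against the RT-PCR reference standard. The sketch below uses hypothetical counts chosen to mirror the pooled estimates, not data from any included study.

```python
# Sketch: sensitivity and specificity of a single data set from its 2x2
# table against the RT-PCR reference. Counts are hypothetical, chosen to
# mirror the pooled estimates of 72.0% sensitivity / 98.9% specificity.

def accuracy_from_2x2(tp, fp, fn, tn):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

sens, spec = accuracy_from_2x2(tp=72, fp=1, fn=28, tn=99)  # 0.72, 0.99
```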

Subgroup analysis by Ct-value

We use Ct-value as a semiquantitative correlate for the sample’s viral load [12]. As a point of reference, we assume as a median conversion that a Ct-value of 25 corresponds to a viral load of 1.5 × 10^6 RNA copies per milliliter of transport media, but this varies between the types of RT-PCRs used for measuring viral load [144,215].
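The stated conversion can be extended to other Ct-values under the common idealized assumption that each RT-PCR cycle doubles the amplified RNA, so a one-unit decrease in Ct corresponds to roughly a doubling of viral load. This is an order-of-magnitude sketch only; real conversions differ between assays.

```python
# Rough arithmetic behind the Ct/viral-load relationship: assuming 100% PCR
# efficiency, each one-unit decrease in Ct doubles the RNA concentration.
# Anchored to the reference point in the text (Ct 25 ~ 1.5e6 copies/mL);
# actual conversions vary between RT-PCR assays.

def viral_load_from_ct(ct, ref_ct=25, ref_load=1.5e6):
    """Estimate RNA copies/mL from a Ct-value by doubling per cycle."""
    return ref_load * 2 ** (ref_ct - ct)

load_ct20 = viral_load_from_ct(20)  # 4.8e7 copies/mL
load_ct30 = viral_load_from_ct(30)  # ~4.7e4 copies/mL
```

Under this assumption, the <20 and ≥30 Ct subgroups differ by more than three orders of magnitude in viral load, which is consistent with the large sensitivity gap between them.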

In samples with Ct-values <20, a very high estimate of sensitivity was found (97.9% [95% CI 96.9 to 98.9]). The pooled sensitivity for Ct-values <25 was markedly better at 90.6% (95% CI 88.3 to 93.0) compared to the group with Ct ≥ 25 at 54.4% (95% CI 47.3 to 61.5). A similar pattern was observed when the Ct-values were analyzed using cutoffs <30 or ≥30, resulting in an estimated sensitivity of 76.8% (95% CI 73.1 to 80.4) and 18.7% (95% CI 13.9 to 23.4), respectively (Fig 3B).

When pooling Ct-value estimates for TP Ag-RDT results (TP; 5,083 samples, 69 data sets) and FN (2,390 samples, 76 data sets) Ag-RDT results, the mean Ct-values were 22.2 (95% CI 21.5 to 22.8) and 30.2 (95% CI 29.6 to 30.9), respectively (S8 Fig). Across both TP and FN samples, mean Ct-value was 26.3 (95% CI 25.5 to 27.1). This demonstrates that RT-PCR positive samples missed by Ag-RDT have a substantially lower viral load (higher Ct-value) compared to those that were detected. Individual forest plots for each data set with mean Ct-values are presented in the Supporting information (S9 Fig).

Subgroup analysis by sample type

Most data sets evaluated NP or combined NP/OP swabs (197 data sets and 104,341 samples) as the sample type for the Ag-RDT. NP or combined NP/OP swabs achieved a pooled sensitivity of 70.8% (95% CI 68.3 to 73.2) and specificity of 98.8% (95% CI 98.6 to 99.1). Data sets that used AN/MT swabs for Ag-RDTs (52 data sets and 84,020 samples) showed a summary estimate for sensitivity of 77.3% (95% CI 73.0 to 81.0) and specificity of 99.1% (95% CI 98.6 to 99.4). However, 2 studies that reported direct head-to-head comparison of NP and AN/MT samples from the same participants using the same Ag-RDT (Standard Q) reported equivalent performance [116,117]. In contrast, saliva samples (4 data sets, 1,216 samples) showed the lowest pooled sensitivity with only 50.1% (95% CI 7.7 to 92.3) (Fig 3C). In 3 of the data sets utilizing a saliva sample, saliva was collected as whole mouth fluid (sensitivity from 8.1% [95% CI 2.7 to 17.8] to 55.6% [95% CI 35.3 to 74.5]) [24,92,154]. The fourth used a cheek swab for sample collection (sensitivity 100% [95% CI 90.3 to 100]) [55].

With only 3 data sets comprising 3,586 samples, we were not able to estimate pooled sensitivity and specificity for OP samples. Median sensitivity and specificity were 59.4% (range 50.0% to 81.0%) and 99.1% (range 99.0% to 100.0%), respectively. We were also not able to perform a subgroup meta-analysis for BAL/TW due to insufficient data: only 1 study with 73 samples evaluating the Biocredit Covid-19 Antigen rapid test kit (RapiGEN, South Korea; henceforth called Rapigen), Panbio, and Standard Q was available, with sensitivity ranging between 33.3% and 88.1% [155]. Of note, BAL/TW sampling would not be considered IFU-conforming.

Subgroup analysis in symptomatic and asymptomatic participants

Within the data sets possible to meta-analyze, 55,186 (43.2%) samples were from symptomatic and 72,457 (56.8%) from asymptomatic persons. The pooled sensitivity for symptomatic persons was markedly higher compared to asymptomatic persons with 76.2% (95% CI 73.3 to 78.9) versus 56.8% (95% CI 50.9 to 62.4). Specificity was above 98.6% for both groups (Fig 3D).

Subgroup analysis comparing symptom duration

Data were analyzed for 9,470 persons from 26 data sets with symptoms less than 7 days, while for persons with symptoms ≥7 days, fewer data were available (620 persons, 13 data sets). The pooled sensitivity estimate for individuals with symptoms <7 days was 81.9% (95% CI 77.7 to 85.5), which is markedly higher than the 51.8% (95% CI 41.5 to 61.9) sensitivity for individuals tested ≥7 days from onset of symptoms (Fig 3D).

Subgroup analysis by virus variant

Most data sets (188, with 153,522 samples) were conducted in settings where the SARS-CoV-2 wild type was dominant. Here, sensitivity was 72.3% (95% CI 69.7 to 74.7) and specificity was 99.0% (95% CI 98.7 to 99.2). When the alpha variant (26 data sets, 19,512 samples) was the main variant, sensitivity slightly decreased to 67.0% (95% CI 58.5 to 74.5), but with overlapping CIs, and specificity remained similar (99.3% [95% CI 98.7 to 99.6]). In settings where the wild type and the alpha variant were codominant (6 data sets, 8,753 samples), sensitivity and specificity were 72.0% (95% CI 57.9 to 82.8) and 99.6% (95% CI 98.4 to 99.9), respectively.

Data were also available for the Beta, Gamma, Delta, Epsilon, Eta, and Kappa variants, but were too limited to meta-analyze. Of these, most data were available for the Gamma variant, with sensitivity ranging from 84.6% to 89.9% (3 data sets, 886 samples) [202,209,213]. The main virus variant for each data set is listed in the Supporting information (S2 Table). All studies included in this review were conducted before the emergence of the Omicron variant.

Subgroup analysis by age

For adults (age group ≥18), it was possible to pool estimates across 62,433 samples, whereas the pediatric group (age group <18) included 5,137 samples. There was only a small difference with overlapping CIs in sensitivity with 74.8% (95% CI 71.5 to 77.8) and 69.8% (95% CI 61.0 to 77.3) for the adult and pediatric group, respectively. For those data sets that reported a median Ct-value per age group, the Ct-value was slightly lower in the adult (median 22.6, Q1 = 20.5, Q3 = 24.6, 48 data sets) compared to the pediatric group (median 23.2, Q1 = 20.3, Q3 = 25.2, 3 data sets). Specificity was similar in both groups with over 99% (Fig 3E).

Meta-regression

The first analysis, assessing all variables that could influence sensitivity (symptom status, testing procedure [IFU-conforming versus not IFU-conforming], and mean Ct-value), included 65 data sets of symptomatic and 18 of asymptomatic persons. The second and third analyses assessed only symptomatic persons, with 28 and 50 data sets, respectively. The full list of data sets for each analysis and detailed results are available in the Supporting information (Tables A–D in S4 Text).

In the first analysis, we found viral load (as estimated by Ct-value) to be the driving factor of sensitivity. Sensitivity was estimated to increase by 2.9 percentage points (95% CI 1.7 to 4.0) for every unit the mean Ct-value decreased (Table B in S4 Text), after adjusting for symptom status and testing procedure (Fig 4). In addition, sensitivity was estimated to be 20.0 percentage points (95% CI 13.7 to 26.3) higher for samples from symptomatic compared to asymptomatic participants. However, when controlling for testing procedure and mean Ct-value, this difference declined to only 11.1 percentage points (95% CI 4.8 to 17.4). The difference between IFU-conforming versus not IFU-conforming testing procedure was not significant (5.2 percentage points [95% CI ‒2.6 to 13.0] higher for IFU-conforming) after controlling for symptom status and mean Ct-value.

Fig 4. Pooled estimate of sensitivity across mean Ct-values holding symptom status and IFU-status constant at their respective means.

Fig 4

Dotted lines are the corresponding 95% CIs. The size of each point is a function of the weight of the data set in the model, where larger data sets have larger points. CI, confidence interval; Ct, cycle threshold; IFU, instructions for use.
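The fitted relationship shown in Fig 4 can be sketched as a simple linear predictor. The anchor point below (the overall mean Ct of 26.3 across TP and FN samples and the overall pooled sensitivity of 72.0%) is our illustrative assumption, not a reported model intercept, and the actual model additionally adjusts for symptom status and IFU conformity.

```python
SLOPE_PP = 2.9  # percentage-point gain per unit decrease in mean Ct (reported estimate)
CT_REF, SENS_REF = 26.3, 72.0  # illustrative anchor chosen from overall means in the text

def predicted_sensitivity(mean_ct):
    """Linear sketch of the meta-regression relationship, clamped to [0, 100].
    The anchor point is an assumption for illustration, not a fitted intercept."""
    return min(100.0, max(0.0, SENS_REF - SLOPE_PP * (mean_ct - CT_REF)))

# Sensitivity falls as the mean Ct-value of a data set rises (viral load falls).
for ct in (22, 26, 30):
    print(ct, round(predicted_sensitivity(ct), 1))
```

Read as a rough guide only: a data set whose mean Ct is 4 units lower than another would be expected to show roughly 12 percentage points higher sensitivity, all else equal.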

When assessing only symptomatic participants, test sensitivity was estimated to decrease by 3.2 percentage points (95% CI ‒1.5 to 7.9) for every 1 day increase in average duration of symptoms (mean duration of symptoms ranged from 2.75 to 6.47 days). However, with the CI including the value 0, this effect was not statistically significant. When controlling for mean Ct-value and testing procedure, the estimated effect of the average duration of symptoms was close to 0 (0.7 percentage points [95% CI ‒5.0 to 6.4], Table C in S4 Text).

Concordantly, for samples collected after 7 days of symptom onset, sensitivity was estimated to be 22.9 percentage points (95% CI 10.3 to 35.4) lower compared to those collected within 7 days. When controlling for mean Ct-value and testing procedure, the model still showed a decrease in sensitivity for samples collected after 7 days of symptom onset, but the effect was again closer to 0 and no longer statistically significant (‒13.8 percentage points [95% CI ‒27.7 to 0.1], Table D in S4 Text).

Analysis of individual tests

Based on 179 data sets with 143,803 tests performed, we were able to perform bivariate meta-analysis of the sensitivity and specificity for 12 different Ag-RDTs (Fig 5). Across these, pooled estimates of sensitivity and specificity on all samples were 71.6% (95% CI 69.0 to 74.1) and 99.0% (95% CI 98.8 to 99.2), which were very similar to the overall pooled estimate across all meta-analyzed data sets (72.0% and 98.9%, above).

Fig 5. Bivariate analysis of 12 Ag-RDTs.

Fig 5

Pooled sensitivity and specificity were calculated based on reported sample size, true positives, true negatives, false positives, and false negatives. Ag-RDT, antigen rapid diagnostic test; CI, confidence interval; N, number of.

The highest pooled sensitivity was found for the SARS-CoV-2 Antigen Test (LumiraDx, UK; henceforth called LumiraDx) and the Standard Q nasal test (SD Biosensor, South Korea; distributed in Europe by Roche, Germany; henceforth called Standard Q nasal) with 82.7% (95% CI 73.2 to 89.4) and 81.4% (95% CI 73.8 to 87.2), respectively. However, all tests except the COVID-19 Ag Respi-Strip (Coris BioConcept, Belgium; henceforth called Coris; sensitivity 48.4% [95% CI 36.1 to 61.0]) had overlapping CIs. The pooled specificity was above 98% for all of the tests, except for the Standard F test (SD Biosensor, South Korea; henceforth called Standard F) and LumiraDx with specificities of 97.9% (95% CI 96.9 to 98.5) and 96.9% (95% CI 94.4 to 98.3), respectively. Hierarchical summary receiver operating characteristic curves for LumiraDx and Standard Q nasal are available in the Supporting information (S10 Fig).

For 2 Ag-RDTs, we were only able to perform a univariate analysis due to insufficient data. Sensitivities for the COVID-19 Rapid Antigen Test Cassette (SureScreen, UK; henceforth called SureScreen V) and the Nadal COVID-19 Ag Test (Nal von Minden, Germany; henceforth called Nadal) were similar with 57.7% (95% CI 40.9 to 74.4) and 56.6% (95% CI 26.9 to 86.3), respectively (S11 Fig). Specificity could only be calculated for the Nadal and, at 91.1% (95% CI 80.2 to 100), was the lowest across the per-test analysis. For the remaining 62 Ag-RDTs, there were insufficient numbers of data sets for a uni- or bivariate meta-analysis. However, performance estimates and factors potentially influencing them are descriptively analyzed in the Supporting information (S4 Table) for each of the 62 tests.

For Panbio and Standard Q, it was also possible to pool sensitivity per Ct-value subgroup for each individual test. Panbio and Standard Q reached sensitivities of 97.2% (95% CI 95.3 to 99.2) and 98.1% (95% CI 96.3 to 99.9) for Ct-values <20, 89.8% (95% CI 85.4 to 94.3) and 92.6% (95% CI 88.5 to 96.7) for Ct-values <25, and 73.7% (95% CI 66.0 to 81.3) and 75.7% (95% CI 67.9 to 83.4) for Ct-values <30, respectively. For Ct-values ≥20, sensitivities for Panbio and Standard Q were 89.2% (95% CI 82.1 to 96.3) and 89.0% (95% CI 81.0 to 96.9), 51.2% (95% CI 39.4 to 63.0) and 56.4% (95% CI 45.1 to 67.8) for Ct-values ≥25, and 22.8% (95% CI 12.2 to 33.4) and 20.4% (95% CI 10.5 to 30.3) for Ct-values ≥30, respectively (S4A–S4F Fig). For BinaxNow (Abbott Rapid Diagnostics, Germany), LumiraDx, SD Biosensor, Standard F, Coris, and the INNOVA SARS-CoV-2 Antigen Rapid Qualitative Test (Innova Medical Group, United States of America; henceforth called Innova), sufficient data to pool sensitivity were only available for certain Ct-value subgroups, which are likewise available in the Supporting information (S4A–S4F Fig). In addition, for 8 tests it was possible to calculate pooled sensitivity and specificity estimates including only data sets that conformed to the IFU; these are listed in the Supporting information (S5 Table).

In total, 31 studies accounting for 106 data sets conducted head-to-head clinical accuracy evaluations of different tests using the same sample(s) from the same participant. These data sets are outlined in the Supporting information (S2 Table). Nine studies performed their head-to-head evaluation as per IFU and on symptomatic individuals. Across 4 studies, the Standard Q nasal (sensitivity 80.5% to 91.2%) and the Standard Q (sensitivity 73.2% to 91.2%) showed a similar range of sensitivity [116,130,216]. One study reported a sensitivity of 60.4% (95% CI 54.9 to 65.6) for the Standard Q and 56.8% (95% CI 51.3 to 62.2) for the Panbio in a mixed study population of symptomatic, asymptomatic, and high-risk contact persons [190]. Another study described a sensitivity of 56.4% (95% CI 44.7 to 67.6) for the Rapigen and 52.6% (95% CI 40.9 to 64.0) for the SGTi-flex COVID-19 Ag (Sugentech, South Korea) [164]. One study included only very few samples and used a non-IFU-conforming sample type (BAL), limiting the ability to draw conclusions from its results [155].

Publication bias

The results of the Deeks’ test for all data sets with complete results (p = 0.24), Standard Q publications (p = 0.39), Panbio publications (p = 0.81), and LumiraDx publications (p = 0.61) demonstrate no significant asymmetry in the funnel plots, suggesting no publication bias. All funnel plots are shown in the Supporting information (S12 Fig).
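For readers unfamiliar with the procedure, Deeks’ test regresses the log diagnostic odds ratio on the inverse square root of the effective sample size, weighted by effective sample size; a significant slope indicates funnel-plot asymmetry. The implementation below is a simplified sketch with made-up counts, using a normal approximation for the slope’s p-value where the original test uses a t-distribution.

```python
import math
from statistics import NormalDist

def deeks_test(tp, fp, fn, tn):
    """Sketch of Deeks' funnel-plot asymmetry test (normal approximation).
    Each argument is a list of per-study 2x2 cell counts."""
    x, y, w = [], [], []
    for a, b, c, d in zip(tp, fp, fn, tn):
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5  # continuity correction
        ess = 4 * (a + c) * (b + d) / (a + b + c + d)    # effective sample size
        x.append(1.0 / math.sqrt(ess))
        y.append(math.log((a * d) / (b * c)))            # log diagnostic odds ratio
        w.append(ess)
    # Weighted least-squares slope of ln(DOR) on 1/sqrt(ESS).
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    slope = sxy / sxx
    resid = [yi - my - slope * (xi - mx) for xi, yi in zip(x, y)]
    s2 = sum(wi * ri ** 2 for wi, ri in zip(w, resid)) / (len(x) - 2)
    se = math.sqrt(s2 / sxx)
    return 2 * (1 - NormalDist().cdf(abs(slope / se)))  # two-sided p-value

# Toy example with 4 hypothetical studies of differing sizes.
p = deeks_test([70, 40, 120, 90], [5, 4, 10, 9], [30, 15, 60, 10], [95, 50, 190, 91])
print(round(p, 3))
```

A large p-value, as in the review’s analyses above, is consistent with a symmetric funnel plot and thus no detectable publication bias.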

Sensitivity analysis

We performed 3 sensitivity analyses: one excluding case–control studies (213 data sets), one including only peer-reviewed studies (216 data sets), and one including only data sets without any manufacturer influence (190 data sets). When excluding case–control studies, sensitivity and specificity remained at 71.9% (95% CI 69.4 to 74.2) and 99.0% (95% CI 98.8 to 99.2), respectively. Similarly, when assessing only peer-reviewed studies, sensitivity and specificity did not change significantly, at 71.1% (95% CI 68.5 to 73.6) and 98.9% (95% CI 98.6 to 99.1), respectively. When studies that could have potentially been influenced by test manufacturers were excluded, sensitivity decreased marginally, but with overlapping CIs (sensitivity of 70.3% [95% CI 67.6 to 72.9] and specificity of 99.0% [95% CI 98.7 to 99.2]).

Discussion

After reviewing 194 clinical accuracy studies, we found Ag-RDTs to be 76.3% (95% CI 73.7 to 78.7) sensitive and 99.1% (95% CI 98.8 to 99.3) specific in detecting SARS-CoV-2 compared to RT-PCR when performed according to manufacturers’ instructions. While sensitivity was higher in symptomatic compared to asymptomatic persons, especially when persons were still within the first week of symptom onset, the main driver behind test sensitivity was a sample’s viral load. LumiraDx and Standard Q nasal were the most accurate tests, but heterogeneity in the design of the studies evaluating these tests potentially favored test-specific estimates.

Using Ct-value as a semiquantitative correlate for viral load, there was a significant correlation between test sensitivity and viral load, with sensitivity increasing by 2.9 percentage points for every unit decrease in mean Ct-value when controlling for symptom status and testing procedure. The pooled mean Ct-value for TP results was on average 8 points lower than for FN results (22.2 for TP compared to 30.2 for FN). That viral load is the deciding factor for test sensitivity confirms prior work [12].

Furthermore, sensitivity was found to be higher for samples from symptomatic (76.2% sensitivity) compared to asymptomatic participants (56.8% sensitivity). This was confirmed in the regression model, which estimated sensitivity to be 20.0 percentage points higher in samples from symptomatic participants. In our previous analysis, we assumed that this increase in sensitivity is not due to symptom status as such, but results from the fact that symptomatic study populations are more likely to include participants at the beginning of the disease, when viral load is high [4]. The present analysis shows this assumption to be largely true: when controlling for Ct-value, the RT-PCR correlate for viral load, the effect of symptomatic versus asymptomatic status on test sensitivity decreased strongly, to 11.1 percentage points. As others found symptomatic and asymptomatic individuals to have the same viral load when at the same stage of the disease [8], we would have expected the regression coefficient to decrease even further, to 0. This nonzero difference in sensitivity between symptomatic and asymptomatic participants may be due to the lack of access to individual participant Ct-values, which required our analyses to control for the mean Ct-value across all participants in a data set rather than individual Ct-values. Furthermore, some variability is likely introduced when the Ag-RDT and the RT-PCR are not performed on the same sample. Therefore, some degree of residual confounding is likely present.

We also found sensitivity to be higher when participants were tested within 7 days of symptom onset (81.9% sensitivity) compared to >7 days (51.8% sensitivity). Concordantly, our regression model estimated that sensitivity decreases by 3.2 percentage points for every 1-day increase in mean symptom duration. Again, this decrease in sensitivity is driven by viral load, as seen when controlling for Ct-value. Importantly, it is not yet clear how the emergence of new SARS-CoV-2 variants of concern (VoCs) and growing vaccination coverage will affect Ag-RDT sensitivity in the early days after symptom onset. Most of the studies included in this analysis were performed at a time when the wild type and Alpha variant were circulating. Test sensitivity was slightly lower for the Alpha variant compared to the wild type (67.0% [95% CI 58.5 to 74.5] versus 72.3% [95% CI 69.7 to 74.7]). However, conclusions on differences in performance between variants are difficult to draw, as between-study heterogeneity was substantial and, while this does not preclude a difference between groups, CIs were widely overlapping. Furthermore, pooled sensitivity for studies where Alpha and wild type were codominant (72.0% [95% CI 57.9 to 82.8]) was similar to that of the wild type alone. Similar Ag-RDT sensitivity was also found for the Delta variant compared to wild type, and for the Omicron variant initial data suggest similar clinical performance as well, although analytical performance pointed toward potentially lower sensitivity [217–220]. Vaccination did not affect viral kinetics in the first week [221] and is unlikely to do so for the Omicron variant [222]. To further inform public health decision makers on the best strategy to apply Ag-RDTs, clinical accuracy studies in settings with high prevalence of the Omicron variant are urgently needed.

Looking at specific tests, LumiraDx and Standard Q nasal showed the highest sensitivity, performing above the 80% sensitivity target defined by WHO. However, while the Standard Q nasal was 99.1% (95% CI 98.4 to 99.5) specific, the LumiraDx only had a specificity of 96.9% (95% CI 94.4 to 98.3), just below the WHO target of 97%. The reason for the lower specificity is unclear, particularly as independent analytical studies confirmed the test had no cross-reactivity [106]. Sample-to-sample variability must be considered, particularly as the sensitivity of the index test approaches that of the reference test. The 2 most often evaluated tests, namely Panbio (32,370 samples, sensitivity of 71.9%) and Standard Q (35,664 samples, sensitivity of 70.9%), performed slightly below the overall average. Panbio and Standard Q were also the most extensively evaluated Ag-RDTs in the prior analysis, then with a sensitivity slightly above average [4]. Nonetheless, this updated analysis indicates that limited added value is to be expected from further analysis of Ag-RDTs’ overall sensitivity or of the sensitivity of the most widely evaluated tests. However, it will be important to continue to reassess tests’ analytical sensitivity for the detection of new specific variants (e.g., Omicron). In addition, with a recent WHO guideline on self-performed Ag-RDTs having laid the scientific foundation [223], it would be of interest to further evaluate the accuracy and ease of use of self-performed Ag-RDTs, or specific characteristics of instrument-based Ag-RDTs.

Furthermore, sensitivity strongly differed between studies that conducted the Ag-RDTs as per manufacturer’s instructions and those that did not (sensitivity of 66.7% for not IFU-conforming versus 76.3% for IFU-conforming). This was also reflected in our regression model, where test performance decreased when manufacturer’s instructions were not followed; however, the effect was not significant (‒5.2 percentage points [95% CI ‒13.0 to 2.6]). Regarding sample types, saliva showed a markedly lower sensitivity of 50.1% compared to NP or AN/MT samples, confirming what we found in our previous analysis [4]. Especially in light of the current debate on whether saliva or throat swabs might be a more sensitive sample than NP or AN/MT samples for detecting the SARS-CoV-2 Omicron variant [224–226], further research is urgently needed to quantify the difference in viral load resulting from different sample types and thus the effect of sample type on test sensitivity.

In concordance with the above, many studies reporting an unusually low sensitivity did not perform the Ag-RDT as per IFU [30,32,44,70,101,112,137,188] or used saliva samples [24,154,159,227]. However, 2 studies with an IFU-conforming testing procedure on NP or AN/MT samples still showed low sensitivity. This quite likely results from the low average viral load in 1 study [53] and the asymptomatic study population in the other [179]. Conversely, unusually high sensitivity compared to the other studies was found in studies where the average viral load was high [49,88,148,149] or participants were mainly within the first week of symptom onset [46,58,139].

The main strength of our study lies in its living approach. The ability to update our methodology as the body of evidence grows has enabled an improved analysis. For example, while data were too heterogeneous for a meta-regression during the prior analysis, with additional data sets we are now able to analyze the relationship between an Ag-RDT’s sensitivity, the sample’s Ct-value, and the participants’ symptom status in depth. Similarly, we decided to focus on clinical accuracy studies for POC Ag-RDTs in this current review, as analytical accuracy studies require a dedicated approach to be comparable. Furthermore, the main results of our latest extractions are publicly available on our website. This has not only equipped public health professionals with an up-to-date overview of the current research landscape [228,229], but also led other researchers and test manufacturers to check our data, improving the quality of our report through continuous peer review.

Nonetheless, our study is limited in that we use RT-PCR as the reference standard to assess the accuracy of Ag-RDTs; RT-PCR is generally much more sensitive than Ag-RDTs [230] and might be a less appropriate reference standard than viral culture [139,231,232]. However, viral culture is available in research settings only, and its validity as a true proxy of actual transmissibility is not proven; therefore, we find RT-PCR a suitable reference standard for the clinical accuracy studies included in this review. Furthermore, we fully acknowledge that Ct-value is only an estimate of viral load, and that the correlation between Ct-value and viral load varies between RT-PCR assays, potentially affecting the sensitivity and specificity of the evaluated Ag-RDTs [215]; nonetheless, we believe that the analysis of pooled Ct-value data across a very large data set is a useful strategy to understand the overall contribution of viral load to Ag-RDT performance. Moreover, we are aware that test-specific sensitivities and specificities can be influenced by differences in study design. However, we aimed to counterbalance this effect by assessing relevant aspects of study design for each study and analyzing outliers. To enhance comparability between clinical accuracy studies, future studies should include individuals at a similar stage of disease, use the same sample types, and adhere to the WHO standard for measuring SARS-CoV-2 viral load [13]. Finally, our study only includes literature up until August 31, 2021. Thus, we were not able to analyze information on the Delta or Omicron variants and look to future research to close this gap in the literature.

Conclusions

In summary, Ag-RDTs detect most of the persons infected with SARS-CoV-2 when performed according to the manufacturers’ instructions. While this confirms the results of our previous analysis, the present analysis highlights that the sample’s viral load is the most influential factor underlying test sensitivity. Thus, Ag-RDTs can play a vital role in detecting persons with high viral load, who are therefore likely to be at highest risk of transmitting the virus. This holds true even in the absence of patient symptoms or differences in the duration of symptoms. To foster further research analyzing specific Ag-RDTs and the factors influencing their sensitivity in more detail, standardization of clinical accuracy studies and access to patient-level Ct-values and duration of symptoms are essential.

Supporting information

S1 PRISMA Checklist. PRISMA checklist.

(DOCX)

S1 Fig. Forest plots of all Ag-RDTs.

Ag-RDT, antigen rapid diagnostic test; CI, confidence interval; FN, false negative; FP, false positive; TN, true negative; TP, true positive.

(PDF)

S2 Fig. Details of QUADAS assessment.

(PDF)

S3 Fig. Forest plots for subgroup analysis by Ct-values.

CI, confidence interval; Ct, cycle threshold.

(PDF)

S4 Fig. Forest plots for subgroup analysis by Ct-values per test.

CI, confidence interval; Ct, cycle threshold.

(PDF)

S5 Fig. Forest plots for subgroup analysis by IFU versus non-IFU.

CI, confidence interval; FN, false negative; FP, false positive; IFU, instructions for use; TN, true negative; TP, true positive.

(PDF)

S6 Fig. Forest plots for subgroup analysis by sample type.

CI, confidence interval; FN, false negative; FP, false positive; TN, true negative; TP, true positive.

(PDF)

S7 Fig. Forest plots for subgroup analysis by symptomatic versus asymptomatic.

CI, confidence interval.

(PDF)

S8 Fig. Forest plot for subgroup analysis by mean Ct-values for TP and FN samples.

CI, confidence interval; Ct, cycle threshold; FN, false negative; TP, true positive.

(PDF)

S9 Fig. Forest plots for subgroup analysis by mean Ct-values for TP and FN samples.

CI, confidence interval; Ct, cycle threshold; FN, false negative; TP, true positive.

(PDF)

S10 Fig. HSROC curve Standard Q nasal and LumiraDx Ag-RDT.

Ag-RDT, antigen rapid diagnostic test; HSROC, Hierarchical summary receiver-operating characteristic.

(PDF)

S11 Fig. Forest plot for univariate analysis for Nadal and SureScreen V.

CI, confidence interval.

(PDF)

S12 Fig. Funnel plots for all, LumiraDx, Panbio, and Standard Q studies.

(PDF)

S1 Table. List of parameters extracted from studies.

(XLSX)

S2 Table. Summary of tests.

(XLSX)

S3 Table. Overall and sensitivity analysis.

(XLSX)

S4 Table. Tests analyzed descriptively not included in meta-analysis.

(XLSX)

S5 Table. Test specific IFU analysis.

(XLSX)

S1 Text. Study protocol submitted to PROSPERO.

(DOCX)

S2 Text. Search strategy.

(DOCX)

S3 Text. QUADAS-2 assessment interpretation guide.

(DOCX)

S4 Text. Details meta-regression.

(DOCX)

S5 Text. List of studies excluded.

(DOCX)

S6 Text. Studies potentially influenced by the test manufacturer.

(DOCX)

Abbreviations:

Ag-RDT, antigen rapid diagnostic test; AN, anterior nasal; CI, confidence interval; COVID-19, Coronavirus Disease 2019; Ct, cycle threshold; FN, false negative; IFU, instructions for use; IQR, interquartile range; MT, mid-turbinate; NP, nasopharyngeal; OP, oropharyngeal; POC, point-of-care; RT-PCR, reverse transcription polymerase chain reaction; SARS-CoV-2, Severe Acute Respiratory Syndrome Coronavirus 2; sROC, summary receiver operating characteristic; TP, true positive; VoC, variants of concern

Data Availability

All data are available from https://doi.org/10.11588/data/T3MIB0.

Funding Statement

The study was supported by the Ministry of Science, Research and Arts of the State of Baden-Wuerttemberg, Germany (no grant number; https://mwk.badenwuerttemberg.de/de/startseite/) and internal funds from the Heidelberg University Hospital (no grant number; https://www.heidelberg-university-hospital.com/de/) to CMD. Further, this project was funded by United Kingdom (UK) aid from the British people (grant number: 300341-102; Foreign, Commonwealth & Development Office (FCMO), former UK Department of International Development (DFID); www.gov.uk/fcdo), and supported by a grant from the World Health Organization (WHO; no grant number; https://www.who.int) and a grant from Unitaid (grant number: 2019-32-FIND MDR; https://unitaid.org) to Foundation of New Diagnostics (FIND; JAS, SC, SO, AM, BE). This study was also funded by the National Science Foundation GRFP (grant number DGE1745303) to SM. For the publication fee we acknowledge financial support by Deutsche Forschungsgemeinschaft within the funding programme „Open Access Publikationskosten” (no grant number; https://www.dfg.de/en/index.jsp), as well as by Heidelberg University (no grant number; https://www.uni-heidelberg.de/en). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.World Health Organization. Recommendations for national SARS-CoV-2 testing strategies and diagnostic capacities. 2021. [Google Scholar]
  • 2.Dinnes J, Deeks JJ, Berhane S, Taylor M, Adriano A, Davenport C, et al. Rapid, point-of-care antigen and molecular-based tests for diagnosis of SARS-CoV-2 infection. Cochrane Database Syst Rev. 2021:3:CD013705. doi: 10.1002/14651858.CD013705.pub2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.World Health Organization. Antigen-detection in the diagnosis of SARS-CoV-2 infection: interim guidance. 2021. [Google Scholar]
  • 4.Brümmer LE, Katzenschlager S, Gaeddert M, Erdmann C, Schmitz S, Bota M, et al. Accuracy of novel antigen rapid diagnostics for SARS-CoV-2: A living systematic review and meta-analysis. PLoS Med. 2021;18(8):e1003735. doi: 10.1371/journal.pmed.1003735 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Brümmer LE, Katzenschlager S, Gaeddert M, Erdmann C, Schmitz S, Bota M, et al. The accuracy of novel antigen rapid diagnostics for SARS-CoV-2: a living systematic review and meta-analysis. medRxiv [Preprint, updated version published in PLoS Medicine]; published March 01, 2021. doi: 10.1371/journal.pmed.1003735 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.World Health Organization. Antigen-detection in the diagnosis of SARS-CoV-2 infection. WHO Reference Number: WHO/2019-nCoV/Antigen_Detection/20211, 2021.
  • 7.Nehme M, Braillard O, Alcoba G, Aebischer Perone S, Courvoisier D, Chappuis F, et al. COVID-19 Symptoms: Longitudinal Evolution and Persistence in Outpatient Settings. Ann Intern Med. 2021;174(5):723–5. doi: 10.7326/M20-5926 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Kissler SM, Fauver JR, Mack C, Olesen SW, Tai C, Shiue KY, et al. Viral dynamics of acute SARS-CoV-2 infection and applications to diagnostic and public health strategies. PLoS Biol. 2021;19(7):e3001333. doi: 10.1371/journal.pbio.3001333 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Wang Y, Zhang L, Sang L, Ye F, Ruan S, Zhong B, et al. Kinetics of viral load and antibody response in relation to COVID-19 severity. J Clin Investig. 2020;130(10):5235–44. doi: 10.1172/JCI138759 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Leeflang MM, Deeks JJ, Gatsonis C, Bossuyt PM. Cochrane Diagnostic Test Accuracy Working Group. Systematic reviews of diagnostic test accuracy. Ann Intern Med. 2008;149(12):889–97. doi: 10.7326/0003-4819-149-12-200812160-00008 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Moher D, Liberati A, Tetzlaff J, Altman DG, Prisma Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151(4):264–9, W64. doi: 10.7326/0003-4819-151-4-200908180-00135 [DOI] [PubMed] [Google Scholar]
  • 12. Pollock N, Savage T, Wardell H, Lee R, Mathew A, Stengelin M, et al. Correlation of SARS-CoV-2 Nucleocapsid Antigen and RNA Concentrations in Nasopharyngeal Samples from Children and Adults Using an Ultrasensitive and Quantitative Antigen Assay. J Clin Microbiol. 2020;59:e03077–20. doi: 10.1128/JCM.03077-20
  • 13. World Health Organization. First WHO International Standard for SARS-CoV-2 RNA. 2021.
  • 14. Whiting PF, Rutjes AW, Westwood ME, Mallett S, Deeks JJ, Reitsma JB, et al. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med. 2011;155(8):529–36. doi: 10.7326/0003-4819-155-8-201110180-00009
  • 15. McGrath S, Zhao X, Steele R, Thombs BD, Benedetti A; DEPRESsion Screening Data Collaboration. Estimating the sample mean and standard deviation from commonly reported quantiles in meta-analysis. Stat Methods Med Res. 2020:962280219889080. doi: 10.1177/0962280219889080
  • 16. Hodcroft EB. CoVariants: SARS-CoV-2 Mutations and Variants of Interest. 2021 [cited 2022 March 20]. Available from: https://covariants.org/.
  • 17. World Health Organization. Tracking SARS-CoV-2 variants. 2022.
  • 18. Viechtbauer W. Conducting Meta-Analyses in R with the metafor Package. J Stat Softw. 2010;36(3):1–48. doi: 10.18637/jss.v036.i03
  • 19. van Enst WA, Ochodo E, Scholten RJ, Hooft L, Leeflang MM. Investigation of publication bias in meta-analyses of diagnostic test accuracy: a meta-epidemiological study. BMC Med Res Methodol. 2014;14:70. doi: 10.1186/1471-2288-14-70
  • 20. Abdul-Mumin A, Abubakari A, Agbozo F, Abdul-Karim A, Nuertey BD, Mumuni K, et al. Field evaluation of specificity and sensitivity of a standard SARS-CoV-2 antigen rapid diagnostic test: A prospective study at a teaching hospital in Northern Ghana. medRxiv [Preprint]; published September 17, 2021. doi: 10.1101/2021.06.03.21258300
  • 21. Abdulrahman A, Mustafa F, Al Awadhi AI, Al Ansari Q, Al Alawi B, Al Qahtani M. Comparison of SARS-COV-2 nasal antigen test to nasopharyngeal RT-PCR in mildly symptomatic patients. medRxiv [Preprint]; published December 08, 2020. doi: 10.1101/2020.11.10.20228973
  • 22. Abusrewil Z, Alhudiri I, Kaal H, Edin El Meshri S, Ebrahim F, Dalyoum T, et al. Time scale performance of rapid antigen testing for SARS-COV-2: evaluation of ten rapid antigen assays. J Med Virol. 2021:1–7. doi: 10.1002/jmv.27186
  • 23. Agarwal J, Das A, Pandey P, Sen M, Garg J. “David vs. Goliath”: A simple antigen detection test with potential to change diagnostic strategy for SARS-CoV-2. J Infect Dev Ctries. 2021;15(7):904–909. doi: 10.3855/jidc.13925
  • 24. Agulló V, Fernández-González M, Ortiz de la Tabla V, Gonzalo-Jiménez N, García JA, Masiá M, et al. Evaluation of the rapid antigen test Panbio COVID-19 in saliva and nasal swabs: A population-based point-of-care study. J Infect. 2020;82(5):186–230. doi: 10.1016/j.jinf.2020.12.007
  • 25. Akashi Y, Kiyasu Y, Takeuchi Y, Kato D, Kuwahara M, Muramatsu S, et al. Evaluation and clinical implications of the time to a positive results of antigen testing for SARS-CoV-2. medRxiv [Preprint]; published June 13, 2021. doi: 10.1016/j.jiac.2021.10.026
  • 26. Akingba OL, Sprong K, Hardie DR. Field performance evaluation of the PanBio rapid SARS-CoV-2 antigen assay in an epidemic driven by 501Y.v2 (lineage B.1.351) in the Eastern Cape, South Africa. J Clin Virol Plus. 2021;1(1–2):100013. doi: 10.1016/j.jcvp.2021.100013
  • 27. Albert E, Torres I, Bueno F, Huntley D, Molla E, Fernandez-Fuentes MA, et al. Field evaluation of a rapid antigen test (Panbio COVID-19 Ag Rapid Test Device) for COVID-19 diagnosis in primary healthcare centres. Clin Microbiol Infect. 2020;27(3):472.e7–472.e10. doi: 10.1016/j.cmi.2020.11.004
  • 28. Alemany A, Baro B, Ouchi D, Ubals M, Corbacho-Monné M, Vergara-Alert J, et al. Analytical and Clinical Performance of the Panbio COVID-19 Antigen-Detecting Rapid Diagnostic Test. J Infect. 2020;82(5):186–230. doi: 10.1016/j.jinf.2020.12.033
  • 29. Allan-Blitz LT, Klausner JD. A Real-World Comparison of SARS-CoV-2 Rapid Antigen vs. Polymerase Chain Reaction Testing in Florida. J Clin Microbiol. 2021;59(10):e0110721. doi: 10.1128/JCM.01107-21
  • 30. Baccani I, Morecchiato F, Chilleri C, Cervini C, Gori E, Matarrese D, et al. Evaluation of Three Immunoassays for the Rapid Detection of SARS-CoV-2 Antigens. Diagn Microbiol Infect Dis. 2021;101(2):115434. doi: 10.1016/j.diagmicrobio.2021.115434
  • 31. Bachman CM, Grant BD, Anderson CE, Alonzo LF, Garing S, Byrnes SA, et al. Clinical validation of an open-access SARS-COV-2 antigen detection lateral flow assay, compared to commercially available assays. PLoS ONE. 2021;16(8):e0256352. doi: 10.1371/journal.pone.0256352
  • 32. Baro B, Rodo P, Ouchi D, Bordoy A, Saya Amaro E, Salsench S, et al. Performance characteristics of five antigen-detecting rapid diagnostic test (Ag-RDT) for SARS-CoV-2 asymptomatic infection: a head-to-head benchmark comparison. J Infect. 2021;82(6):269–75. doi: 10.1016/j.jinf.2021.04.009
  • 33. Beck ET, Paar W, Fojut L, Serwe J, Jahnke RR. Comparison of Quidel Sofia SARS FIA Test to Hologic Aptima SARS-CoV-2 TMA Test for Diagnosis of COVID-19 in Symptomatic Outpatients. J Clin Microbiol. 2020;59(2):e02727–20. doi: 10.1128/jcm.02727-20
  • 34. Berger A, Ngo Nsoga M-T, Perez Rodriguez FJ, Abi Aad Y, Sattonnet P, Gayet-Ageron A, et al. Diagnostic accuracy of two commercial SARS-CoV-2 Antigen-detecting rapid tests at the point of care in community-based testing centers. PLoS ONE. 2020;16(3):e0248921. doi: 10.1371/journal.pone.0248921
  • 35. Bianco G, Boattini M, Barbui AM, Scozzari G, Riccardini F, Coggiola M, et al. Evaluation of an antigen-based test for hospital point-of-care diagnosis of SARS-CoV-2 infection. J Clin Virol. 2021;139:104838. doi: 10.1016/j.jcv.2021.104838
  • 36. Blairon L, Cupaiolo R, Thomas I, Piteüs S, Wilmet A, Beukinga I, et al. Efficacy comparison of three rapid antigen tests for SARS-CoV-2 and how viral load impact their performance. J Med Virol. 2021;93:5783–8. doi: 10.1002/jmv.27108
  • 37. Bornemann L, Kaup O, Kleideiter J, Panning M, Ruprecht B, Wehmeier M. Real-life evaluation of the Sofia SARS-CoV-2 antigen assay in a large tertiary care hospital. J Clin Virol. 2021;140:104854. doi: 10.1016/j.jcv.2021.104854
  • 38. Bouassa MRS, Veyer D, Péré H, Bélec L. Analytical performances of the point-of-care SIENNA COVID-19 Antigen Rapid Test for the detection of SARS-CoV-2 nucleocapsid protein in nasopharyngeal swabs: A prospective evaluation during the COVID-19 second wave in France. Int J Infect Dis. 2021;106:8–12. doi: 10.1016/j.ijid.2021.03.051
  • 39. Brihn A, Chang J, OYong K, Balter S, Terashita D, Rubin Z, et al. Diagnostic Performance of an Antigen Test with RT-PCR for the Detection of SARS-CoV-2 in a Hospital Setting—Los Angeles County, California, June–August 2020. Morb Mortal Wkly Rep. 2021;70(19):702–6. doi: 10.15585/mmwr.mm7019a3
  • 40. Bruzzone B, De Pace V, Caligiuri P, Ricucci V, Guarona G, Pennati BM, et al. Comparative diagnostic performance of different rapid antigen detection tests for COVID-19 in the real-world hospital setting. Int J Infect Dis. 2021;107:215–8. doi: 10.1016/j.ijid.2021.04.072
  • 41. Bulilete O, Lorente P, Leiva A, Carandell E, Oliver A, Rojo E, et al. Panbio rapid antigen test for SARS-CoV-2 has acceptable accuracy in symptomatic patients in primary health care. J Infect. 2021;82:391–8. doi: 10.1016/j.jinf.2021.02.014
  • 42. Caramello V, Boccuzzi A, Basile V, Ferraro A, Macciotta A, Catalano A, et al. Are antigenic tests useful for detecting SARS-CoV-2 infections in patients accessing to emergency departments? Results from a North-West Italy Hospital. J Infect. 2021;83:237–79. doi: 10.1016/j.jinf.2021.05.012
  • 43. Carbonell-Sahuquillo S, Lázaro-Carreño MI, Camacho J, Barrés-Fernández A, Albert E, Torres I, et al. Evaluation of a rapid antigen detection test (Panbio COVID-19 Ag Rapid Test Device) as a point-of-care diagnostic tool for COVID-19 in a Pediatric Emergency Department. J Med Virol. 2021:1–5. doi: 10.1002/jmv.27220
  • 44. Caruana G, Croxatto A, Kampouri E, Kritikos A, Opota O, Foerster M, et al. ImplemeNting SARS-CoV-2 Rapid antigen testing in the Emergency wArd of a Swiss univErsity hospital: the INCREASE study. Microorganisms. 2021;9(4):798. doi: 10.3390/microorganisms9040798
  • 45. Caruana G, Lebrun LL, Aebischer O, Opota O, Urbano L, de Rham M, et al. The dark side of SARS-CoV-2 rapid antigen testing: screening asymptomatic patients. New Microbes New Infect. 2021;42:100899. doi: 10.1016/j.nmni.2021.100899
  • 46. Cassuto NG, Gravier A, Colin M, Theillay A, Pires-Roteira D, Pallay S, et al. Evaluation of a SARS-CoV-2 antigen-detecting rapid diagnostic test as a self-test: diagnostic performance and usability. J Med Virol. 2021:1–7. doi: 10.1002/jmv.27249
  • 47. Cento V, Renica S, Matarazzo E, Antonello M, Colagrossi L, Di Ruscio F, et al. Frontline Screening for SARS-CoV-2 Infection at Emergency Department Admission by Third Generation Rapid Antigen Test: Can We Spare RT-qPCR? Viruses. 2021;13(5):818. doi: 10.3390/v13050818
  • 48. Cerutti F, Burdino E, Milia MG, Allice T, Gregori G, Bruzzone B, et al. Urgent need of rapid tests for SARS CoV-2 antigen detection: Evaluation of the SD-Biosensor antigen test for SARS-CoV-2. J Clin Virol. 2020;132:104654. doi: 10.1016/j.jcv.2020.104654
  • 49. Chaimayo C, Kaewnaphan B, Tanlieng N, Athipanyasilp N, Sirijatuphat R, Chayakulkeeree M, et al. Rapid SARS-CoV-2 antigen detection assay in comparison with real-time RT-PCR assay for laboratory diagnosis of COVID-19 in Thailand. Virol J. 2020;17(1):177. doi: 10.1186/s12985-020-01452-5
  • 50. Chiu R, Kojima N, Mosley G, Cheng KK, Pereira D, Brobeck M, et al. Evaluation of the INDICAID COVID-19 Rapid Antigen Test in symptomatic populations and asymptomatic community testing. Microbiol Spectr. 2021;9(1):e0034221. doi: 10.1128/Spectrum.00342-21
  • 51. Christensen K, Ren H, Chen S, Cooper C, Young S. Clinical evaluation of BD Veritor SARS-CoV-2 and Flu A+B Assay for point-of-care (POC) System. medRxiv [Preprint]; published May 05, 2021. doi: 10.1101/2021.05.04.21256323
  • 52. Ciotti M, Maurici M, Pieri M, Andreoni M, Bernardini S. Performance of a rapid antigen test in the diagnosis of SARS-CoV-2 infection. J Med Virol. 2021;93:2988–91. doi: 10.1002/jmv.26830
  • 53. Dankova Z, Novakova E, Skerenova M, Holubekova V, Lucansky V, Dvorska D, et al. Comparison of SARS-CoV-2 Detection by Rapid Antigen and by Three Commercial RT-qPCR Tests: A Study from Martin University Hospital in Slovakia. Int J Environ Res Public Health. 2021;18(13). doi: 10.3390/ijerph18137037
  • 54. Del Vecchio C, Brancaccio G, Brazzale AR, Lavezzo E, Onelia F, Franchin E, et al. Emergence of N antigen SARS-CoV-2 genetic variants escaping detection of antigenic tests. medRxiv [Preprint]; published March 26, 2021. doi: 10.1101/2021.03.25.21253802
  • 55. Di Domenico M, De Rosa A, Di Gaudio F, Internicola P, Bettini C, Salzano N, et al. Diagnostic Accuracy of a New Antigen Test for SARS-CoV-2 Detection. Int J Environ Res Public Health. 2021;18(12):6310. doi: 10.3390/ijerph18126310
  • 56. Dierks S, Bader O, Schwanbeck J, Gross U, Weig MS, Mese K, et al. Diagnosing SARS-CoV-2 with Antigen Testing, Transcription-Mediated Amplification and Real-Time PCR. J Clin Med. 2021;10(11):2404. doi: 10.3390/jcm10112404
  • 57. Domínguez Fernández M, Peña Rodríguez MF, Lamelo Alfonsín F, Bou AG. Experience with Panbio rapid antigens test device for the detection of SARS-CoV-2 in nursing homes. Enferm Infecc Microbiol Clin. 2021. doi: 10.1016/j.eimc.2020.12.008
  • 58. Drain PK, Ampajwala M, Chappel C, Gvozden AB, Hoppers M, Wang M, et al. A Rapid, High-Sensitivity SARS-CoV-2 Nucleocapsid Immunoassay to Aid Diagnosis of Acute COVID-19 at the Point of Care: A Clinical Performance Study. Infect Dis Ther. 2021;10(2):753–61. doi: 10.1007/s40121-021-00413-x
  • 59. Drevinek P, Hurych J, Kepka Z, Briksi A, Kulich M, Zajac M, et al. The sensitivity of SARS-CoV-2 antigen tests in the view of large-scale testing. medRxiv [Preprint]; published November 24, 2020. doi: 10.1101/2020.11.23.20237198
  • 60. Eleftheriou I, Dasoula F, Dimopoulou D, Lebessi E, Serafi E, Spyridis N, et al. Real-life evaluation of a COVID-19 rapid antigen detection test in hospitalized children. J Med Virol. 2021. doi: 10.1002/jmv.27149
  • 61. Escrivá BF, Mochón MDO, González RM, García CS, Pla AT, Ricart AS, et al. The effectiveness of rapid antigen test-based for SARS-CoV-2 detection in nursing homes in Valencia, Spain. J Clin Virol. 2021;143:104941. doi: 10.1016/j.jcv.2021.104941
  • 62. Faíco-Filho KS, Finamor Júnior FE, Moreira LVL, Lins PRG, Justo AFO, Bellei N. Evaluation of the Panbio COVID-19 Ag Rapid Test at an Emergency Room in a Hospital in São Paulo, Brazil. medRxiv [Preprint]; published March 24, 2021. doi: 10.1101/2021.03.15.21253313
  • 63. Favresse J, Gillot C, Oliveira M, Cadrobbi J, Elsen M, Eucher C, et al. Head-to-Head Comparison of Rapid and Automated Antigen Detection Tests for the Diagnosis of SARS-CoV-2 Infection. J Clin Med. 2021;10(2):265. doi: 10.3390/jcm10020265
  • 64. Fenollar F, Bouam A, Ballouche M, Fuster L, Prudent E, Colson P, et al. Evaluation of the Panbio Covid-19 rapid antigen detection test device for the screening of patients with Covid-19. J Clin Microbiol. 2020;59:e02589–20. doi: 10.1128/JCM.02589-20
  • 65. Ferguson J, Dunn S, Best A, Mirza J, Percival B, Mayhew M, et al. Validation testing to determine the sensitivity of lateral flow testing for asymptomatic SARS-CoV-2 detection in low prevalence settings: Testing frequency and public health messaging is key. PLoS Biol. 2021;19(4):e3001216. doi: 10.1371/journal.pbio.3001216
  • 66. Fernández MD, Estévez AS, Alfonsín FL, Arevalo GB. Usefulness of the LumiraDx SARS-COV-2 antigen test in nursing home. Enferm Infecc Microbiol Clin. 2021. doi: 10.1016/j.eimc.2021.06.006
  • 67. Fernandez-Montero A, Argemi J, Rodríguez JA, Ariño AH, Moreno-Galarraga L. Validation of a rapid antigen test as a screening tool for SARS-CoV-2 infection in asymptomatic populations. Sensitivity, specificity and predictive values. EClinicalMedicine. 2021;37:100954. doi: 10.1016/j.eclinm.2021.100954
  • 68. Ferté T, Ramel V, Cazanave C, Lafon ME, Bébéar C, Malvy D, et al. Accuracy of COVID-19 rapid antigenic tests compared to RT-PCR in a student population: The StudyCov study. J Clin Virol. 2021;141:104878. doi: 10.1016/j.jcv.2021.104878
  • 69. Filgueiras P, Corsini C, Almeida NBF, Assis J, Pedrosa ML, de Oliveira A, et al. COVID-19 Rapid Antigen Test at hospital admission associated to the knowledge of individual risk factors allow overcoming the difficulty of managing suspected patients in hospitals. medRxiv [Preprint]; published January 08, 2021. doi: 10.1101/2021.01.06.21249282
  • 70. Fourati S, Langendorf C, Audureau E, Challine D, Michel J, Soulier A, et al. Performance of six rapid diagnostic tests for SARS-CoV-2 antigen detection and implications for practical use. J Clin Virol. 2021;142:104930. doi: 10.1016/j.jcv.2021.104930
  • 71. Frediani JK, Levy JM, Rao A, Bassit L, Figueroa J, Vos MB, et al. Multidisciplinary assessment of the Abbott BinaxNOW SARS-CoV-2 point-of-care antigen test in the context of emerging viral variants and self-administration. Sci Rep. 2021;11(1):14604. doi: 10.1038/s41598-021-94055-1
  • 72. García-Fiñana M, Hughes DM, Cheyne CP, Burnside G, Stockbridge M, Fowler TA, et al. Performance of the Innova SARS-CoV-2 antigen rapid lateral flow test in the Liverpool asymptomatic testing pilot: population based cohort study. Br Med J. 2021;374:n1637. doi: 10.1136/bmj.n1637
  • 73. Gomez Marti JL, Gribschaw J, McCullough M, Mallon A, Acero J, Kinzler A, et al. Differences in detected viral loads guide use of SARS-CoV-2 antigen-detection assays towards symptomatic college students and children. medRxiv [Preprint]; published February 01, 2021. doi: 10.1101/2021.01.28.21250365
  • 74. Gremmels H, Winkel BMF, Schuurman R, Rosingh A, Rigter NAM, Rodriguez O, et al. Real-life validation of the Panbio COVID-19 Antigen Rapid Test (Abbott) in community-dwelling subjects with symptoms of potential SARS-CoV-2 infection. EClinicalMedicine. 2020;31:100677. doi: 10.1016/j.eclinm.2020.100677
  • 75. Gupta A, Khurana S, Das R, Srigyan D, Singh A, Mittal A, et al. Rapid chromatographic immunoassay-based evaluation of COVID-19: A cross-sectional, diagnostic test accuracy study & its implications for COVID-19 management in India. Indian J Med Res. 2020;153(1):126. doi: 10.4103/ijmr.IJMR_3305_20
  • 76. Halfon P, Penaranda G, Khiri H, Garcia V, Drouet H, Philibert P, et al. An optimized stepwise algorithm combining rapid antigen and RT-qPCR for screening of COVID-19 patients. PLoS ONE. 2021;16(9):e0257817. doi: 10.1371/journal.pone.0257817
  • 77. Harris DT, Badowski M, Jernigan B, Sprissler R, Edwards T, Cohen R, et al. SARS-CoV-2 Rapid Antigen Testing of Symptomatic and Asymptomatic Individuals on the University of Arizona Campus. Biomedicines. 2021;9(5):539. doi: 10.3390/biomedicines9050539
  • 78. Herrera V, Hsu V, Adewale A, Hendrix T, Johnson L, Kuhlman J, et al. Testing of Healthcare Workers Exposed to COVID19 with Rapid Antigen Detection. medRxiv [Preprint]; published August 18, 2020. doi: 10.1101/2020.08.12.20172726
  • 79. Holzner C, Pabst D, Anastasiou OE, Dittmer U, Manegold RK, Risse J, et al. SARS-CoV-2 rapid antigen test: Fast-safe or dangerous? An analysis in the emergency department of an university hospital. J Med Virol. 2021;93:5323–7. doi: 10.1002/jmv.27033
  • 80. Homza M, Zelena H, Janosek J, Tomaskova H, Jezo E, Kloudova A, et al. Five Antigen Tests for SARS-CoV-2: Virus Viability Matters. Viruses. 2021;13(4). doi: 10.3390/v13040684
  • 81. Homza M, Zelena H, Janosek J, Tomaskova H, Jezo E, Kloudova A, et al. Covid-19 antigen testing: better than we know? A test accuracy study. Infect Dis (Lond). 2021;53(9):661–8. doi: 10.1080/23744235.2021.1914857
  • 82. Houston H, Gupta-Wright A, Toke-Bjolgerud E, Biggin-Lamming J, John L. Diagnostic accuracy and utility of SARS-CoV-2 antigen lateral flow assays in medical admissions with possible COVID-19. J Hosp Infect. 2021;110:203–5. doi: 10.1016/j.jhin.2021.01.018
  • 83. Ifko M, Skvarc M. Use of Immunochromatographic SARS-CoV-2 Antigen Testing in Eight Long-Term Care Facilities for the Elderly. Healthcare (Basel). 2021;9(7):868. doi: 10.3390/healthcare9070868
  • 84. Iglói Z, Velzing J, van Beek J, van de Vijver D, Aron G, Ensing R, et al. Clinical evaluation of the Roche/SD Biosensor rapid antigen test with symptomatic, non-hospitalized patients in a municipal health service drive-through testing site. Emerg Infect Dis. 2020;27(5):1323–9. doi: 10.3201/eid2705.204688
  • 85. Jääskeläinen AE, Ahava MJ, Jokela P, Szirovicza L, Pohjala S, Vapalahti O, et al. Evaluation of three rapid lateral flow antigen detection tests for the diagnosis of SARS-CoV-2 infection. J Clin Virol. 2021;137:104785. doi: 10.1016/j.jcv.2021.104785
  • 86. James AE, Gulley T, Kothari A, Holder K, Garner K, Patil N. Performance of the BinaxNOW COVID-19 Antigen Card test relative to the SARS-CoV-2 real-time reverse transcriptase polymerase chain reaction assay among symptomatic and asymptomatic healthcare employees. Infect Control Hosp Epidemiol. 2021:1–3. doi: 10.1017/ice.2021.20
  • 87. Jegerlehner S, Suter-Riniker F, Jent P, Bittel P, Nagler M. Diagnostic accuracy of a SARS-CoV-2 rapid antigen test in real-life clinical settings. Int J Infect Dis. 2021;109:118–22. doi: 10.1016/j.ijid.2021.07.010
  • 88. Johnson C, Ferguson K, Smith T, Wallace D, McConnell K, Conserve D, et al. Evaluation of the Panbio SARS-CoV-2 rapid antigen detection test in the Bahamas. medRxiv [Preprint]; published July 15, 2021. doi: 10.1101/2021.07.13.21260402
  • 89. Jung C, Levy C, Varon E, Biscardi S, Batard C, Wollner A, et al. Diagnostic Accuracy of SARS-CoV-2 Antigen Detection Test in Children: A Real-Life Study. Front Pediatr. 2021;9:647274. doi: 10.3389/fped.2021.647274
  • 90. Kahn M, Schuierer L, Bartenschlager C, Zellmer S, Frey R, Freitag M, et al. Performance of antigen testing for diagnosis of COVID-19: a direct comparison of a lateral flow device to nucleic acid amplification based tests. BMC Infect Dis. 2021;21(1):798. doi: 10.1186/s12879-021-06524-7
  • 91. Kanaujia R, Ghosh A, Mohindra R, Singla V, Goyal K, Gudisa R, et al. Rapid antigen detection kit for the diagnosis of SARS-CoV-2—are we missing asymptomatic patients? Indian J Med Microbiol. 2021. doi: 10.1016/j.ijmmb.2021.07.003
  • 92. Kannian P, Lavanya C, Ravichandran K, Gita JB, Mahanathi P, Ashwini V, et al. SARS-CoV2 antigen in whole mouth fluid may be a reliable rapid detection tool. Oral Dis. 2021;00:1–2. doi: 10.1111/odi.13793
  • 93. Karon BS, Donato L, Bridgeman AR, Blommel JH, Kipp B, Maus A, et al. Analytical sensitivity and specificity of four point of care rapid antigen diagnostic tests for SARS-CoV-2 using real-time quantitative PCR, quantitative droplet digital PCR, and a mass spectrometric antigen assay as comparator methods. Clin Chem. 2021:hvab138. doi: 10.1093/clinchem/hvab138
  • 94. Kenyeres B, Ánosi N, Bányai K, Mátyus M, Orosz L, Kiss A, et al. Comparison of four PCR and two point of care assays used in the laboratory detection of SARS-CoV-2. J Virol Methods. 2021;293:114165. doi: 10.1016/j.jviromet.2021.114165
  • 95. Kernéis S, Elie C, Fourgeaud J, Choupeaux L, Delarue SM, Alby ML, et al. Accuracy of saliva and nasopharyngeal sampling for detection of SARS-CoV-2 in community screening: a multicentric cohort study. Eur J Clin Microbiol Infect Dis. 2021:1–10. doi: 10.1007/s10096-021-04327-x
  • 96. Kilic A, Hiestand B, Palavecino E. Evaluation of Performance of the BD Veritor SARS-CoV-2 Chromatographic Immunoassay Test in Patients with Symptoms of COVID-19. J Clin Microbiol. 2021;59(5). doi: 10.1128/JCM.00260-21
  • 97. Kim D, Lee J, Bal J, Seo SK, Chong CK, Lee JH, et al. Development and Clinical Evaluation of an Immunochromatography-Based Rapid Antigen Test (GenBody COVAG025) for COVID-19 Diagnosis. Viruses. 2021;13(5). doi: 10.3390/v13050796
  • 98. Kim HW, Park M, Lee JH. Clinical Evaluation of the Rapid STANDARD Q COVID-19 Ag Test for the Screening of Severe Acute Respiratory Syndrome Coronavirus 2. Ann Lab Med. 2022;42(1):100–4. doi: 10.3343/alm.2022.42.1.100
  • 99. Kipritci Z, Keskin AU, Ciragil P, Topkaya AE. Evaluation of a Visually-Read Rapid Antigen Test Kit (SGA V-Chek) for Detection of SARS-CoV-2 Virus. Mikrobiyol Bul. 2021;55(3):461–4. doi: 10.5578/mb.20219815
  • 100. Koeleman JGM, Brand H, de Man SJ, Ong DSY. Clinical evaluation of rapid point-of-care antigen tests for diagnosis of SARS-CoV-2 infection. Eur J Clin Microbiol Infect Dis. 2021:1–7. doi: 10.1007/s10096-021-04274-7
  • 101. Kohmer N, Toptan T, Pallas C, Karaca O, Pfeiffer A, Westhaus S, et al. The Comparative Clinical Performance of Four SARS-CoV-2 Rapid Antigen Tests and Their Correlation to Infectivity In Vitro. J Clin Med. 2021;10(2). doi: 10.3390/jcm10020328
  • 102. Kolwijck E, Brouwers-Boers M, Broertjes J, van Heeswijk K, Runderkamp N, Meijer A, et al. Validation and implementation of the Panbio COVID-19 Ag rapid test for the diagnosis of SARS-CoV-2 infection in symptomatic hospital healthcare workers. Infect Prev Pract. 2021;3(2):100142. doi: 10.1016/j.infpip.2021.100142
  • 103. Korenkov M, Poopalasingam N, Madler M, Vanshylla K, Eggeling R, Wirtz M, et al. Evaluation of a rapid antigen test to detect SARS-CoV-2 infection and identify potentially infectious individuals. J Clin Microbiol. 2021;59(9):e0089621. doi: 10.1128/JCM.00896-21
  • 104. Krüger LJ, Gaeddert M, Köppel L, Brümmer LE, Gottschalk C, Miranda IB, et al. Evaluation of the accuracy, ease of use and limit of detection of novel, rapid, antigen-detecting point-of-care diagnostics for SARS-CoV-2. medRxiv [Preprint]; published October 04, 2020. doi: 10.1101/2020.10.01.20203836
  • 105. Krüger LJ, Gaeddert M, Tobian F, Lainati F, Gottschalk C, Klein JAF, et al. The Abbott PanBio WHO emergency use listed, rapid, antigen-detecting point-of-care diagnostic test for SARS-CoV-2-Evaluation of the accuracy and ease-of-use. PLoS ONE. 2021;16(5):e0247918. doi: 10.1371/journal.pone.0247918
  • 106. Krüger LJ, Klein JAF, Tobian F, Gaeddert M, Lainati F, Klemm S, et al. Evaluation of accuracy, exclusivity, limit-of-detection and ease-of-use of LumiraDx: An antigen-detecting point-of-care device for SARS-CoV-2. Infection. 2021:1–12. doi: 10.1007/s15010-021-01681-y
  • 107. Krüttgen A, Cornelissen CG, Dreher M, Hornef MW, Imöhl M, Kleines M. Comparison of the SARS-CoV-2 Rapid antigen test to the real star Sars-CoV-2 RT PCR kit. J Virol Methods. 2020;288:114024. doi: 10.1016/j.jviromet.2020.114024
  • 108. Kumar KK, Sampritha UC, Maganty V, Prakash AA, Basumatary J, Adappa K, et al. Pre-Operative SARS CoV-2 Rapid Antigen Test and Reverse Transcription Polymerase Chain Reaction: A conundrum in surgical decision making. Indian J Ophthalmol. 2021;69(6):1560–2. doi: 10.4103/ijo.IJO_430_21
  • 109. Kurihara Y, Kiyasu Y, Akashi Y, Takeuchi Y, Narahara K, Mori S, et al. The evaluation of a novel digital immunochromatographic assay with silver amplification to detect SARS-CoV-2. J Infect Chemother. 2021;27(10):1493–7. doi: 10.1016/j.jiac.2021.07.006
  • 110. L’Huillier A, Lacour M, Sadiku D, Gadiri M, De Siebenthal L, Schibler M, et al. Diagnostic accuracy of SARS-CoV-2 rapid antigen detection testing in symptomatic and asymptomatic children in the clinical setting. J Clin Microbiol. 2021;59(9):e00991–21. doi: 10.1128/JCM.00991-21
  • 111. Lambert-Niclot S, Cuffel A, Le Pape S, Vauloup-Fellous C, Morand-Joubert L, Roque-Afonso AM, et al. Evaluation of a Rapid Diagnostic Assay for Detection of SARS-CoV-2 Antigen in Nasopharyngeal Swabs. J Clin Microbiol. 2020;58(8):e00977–20. doi: 10.1128/JCM.00977-20
  • 112. Lee J, Kim SY, Huh HJ, Kim N, Sung H, Lee H, et al. Clinical Performance of the Standard Q COVID-19 Rapid Antigen Test and Simulation of its Real-World Application in Korea. Ann Lab Med. 2021;41(6):588–92. doi: 10.3343/alm.2021.41.6.588
  • 113. Leixner G, Voill-Glaninger A, Bonner E, Kreil A, Zadnikar R, Viveiros A. Evaluation of the AMP SARS-CoV-2 rapid antigen test in a hospital setting. Int J Infect Dis. 2021;108:353–6. doi: 10.1016/j.ijid.2021.05.063
  • 114. Leli C, Matteo LD, Gotta F, Cornaglia E, Vay D, Megna I, et al. Performance of a SARS CoV-2 antigen rapid immunoassay in patients admitted to the Emergency Department. Int J Infect Dis. 2021;110:135–40. doi: 10.1016/j.ijid.2021.07.043
  • 115. Linares M, Pérez-Tanoira R, Carrero A, Romanyk J, Pérez-García F, Gómez-Herruz P, et al. Panbio antigen rapid test is reliable to diagnose SARS-CoV-2 infection in the first 7 days after the onset of symptoms. J Clin Virol. 2020;133:104659. doi: 10.1016/j.jcv.2020.104659
  • 116. Lindner A, Nikolai O, Rohardt C, Burock S, Hülso C, Bölke A, et al. Head-to-head comparison of SARS-CoV-2 antigen-detecting rapid test with professional-collected nasal versus nasopharyngeal swab. Eur Respir J. 2020;57(5):2004430.
  • 117. Lindner AK, Nikolai O, Kausch F, Wintel M, Hommes F, Gertler M, et al. Head-to-head comparison of SARS-CoV-2 antigen-detecting rapid test with self-collected nasal swab versus professional-collected nasopharyngeal swab. Eur Respir J. 2021;57(4).
  • 118. Lindner AK, Nikolai O, Rohardt C, Kausch F, Wintel M, Gertler M, et al. Diagnostic accuracy and feasibility of patient self-testing with a SARS-CoV-2 antigen-detecting rapid test. J Clin Virol. 2021;141:104874. doi: 10.1016/j.jcv.2021.104874
  • 119. Liotti FM, Menchinelli G, Lalle E, Palucci I, Marchetti S, Colavita F, et al. Performance of a novel diagnostic assay for rapid SARS-CoV-2 antigen detection in nasopharynx samples. Clin Microbiol Infect. 2020;27:487–8. doi: 10.1016/j.cmi.2020.09.030
  • 120. Lunca C, Cojocaru C, Gurzu IL, Petrariu FD, Cojocaru E. Performance of antigenic detection of SARS-CoV-2 in nasopharyngeal samples. medRxiv [Preprint]; published July 16, 2021. doi: 10.1101/2021.07.12.21260263
  • 121. Menchinelli G, De Angelis G, Cacaci M, Liotti FM, Candelli M, Palucci I, et al. SARS-CoV-2 Antigen Detection to Expand Testing Capacity for COVID-19: Results from a Hospital Emergency Department Testing Site. Diagnostics. 2021;11(7). doi: 10.3390/diagnostics11071211
  • 122. Merino-Amador P, González-Donapetry P, Domínguez-Fernández M, González-Romo F, Sánchez-Castellano M, Seoane-Estevez A, et al. Clinitest rapid COVID-19 antigen test for the diagnosis of SARS-CoV-2 infection: A multicenter evaluation study. J Clin Virol. 2021;143:104961. doi: 10.1016/j.jcv.2021.104961
  • 123. Merino-Amador P, Guinea J, Muñoz-Gallego I, González-Donapetry P, Galán J-C, Antona N, et al. Multicenter evaluation of the Panbio COVID-19 Rapid Antigen-Detection Test for the diagnosis of SARS-CoV-2 infection. Clin Microbiol Infect. 2020;27(5):758–61. doi: 10.1016/j.cmi.2021.02.001
  • 124. Mertens P, De Vos N, Martiny D, Jassoy C, Mirazimi A, Cuypers L, et al. Development and Potential Usefulness of the COVID-19 Ag Respi-Strip Diagnostic Assay in a Pandemic Context. Front Med. 2020;7:225. doi: 10.3389/fmed.2020.00225
  • 125. Micocci M, Buckle P, Hayward G, Allen J, Davies K, Kierkegaard P, et al. Point of Care Testing using rapid automated Antigen Testing for SARS-COV-2 in Care Homes–an exploratory safety, usability and diagnostic agreement evaluation. medRxiv [Preprint]; published April 26, 2021. doi: 10.1101/2021.04.22.21255948
  • 126.Möckel M, Corman VM, Stegemann MS, Hofmann J, Stein A, Jones TC, et al. SARS-CoV-2 Antigen Rapid Immunoassay for Diagnosis of COVID-19 in the Emergency Department. Biomarkers. 2021;26(3):213–20. doi: 10.1080/1354750X.2021.1876769 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 127.Muhi S, Tayler N, Hoang T, Ballard SA, Graham M, Rojek A, et al. Multi-site assessment of rapid, point-of-care antigen testing for the diagnosis of SARS-CoV-2 infection in a low-prevalence setting: A validation and implementation study. Lancet Reg Health West Pac. 2021;9:100115. doi: 10.1016/j.lanwpc.2021.100115 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 128.Nalumansi A, Lutalo T, Kayiwa J, Watera C, Balinandi S, Kiconco J, et al. Field Evaluation of the Performance of a SARS-CoV-2 Antigen Rapid Diagnostic Test in Uganda using Nasopharyngeal Samples. Int J Infect Dis. 2020;104:282–6. doi: 10.1016/j.ijid.2020.10.073 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 129.Ngo Nsoga MT, Kronig I, Perez Rodriguez FJ, Sattonnet-Roche P, Da Silva D, Helbling J, et al. Diagnostic accuracy of Panbio rapid antigen tests on oropharyngeal swabs for detection of SARS-CoV-2. PLoS ONE. 2021;16(6):e0253321. doi: 10.1371/journal.pone.0253321 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 130.Nikolai O, Rohardt C, Tobian F, Junge A, Corman V, Jones T, et al. Anterior nasal versus nasal mid-turbinate sampling for a SARS-CoV-2 antigen-detecting rapid test: does localisation or professional collection matter? Infect Dis (Lond). 2021:1–6. doi: 10.1080/23744235.2021.1969426 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 131.Nordgren J, Sharma S, Olsson H, Jämtberg M, Falkeborn T, Svensson L, et al. SARS-CoV-2 rapid antigen test: High sensitivity to detect infectious virus. J Clin Virol. 2021;140:104846. doi: 10.1016/j.jcv.2021.104846 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 132.Okoye NC, Barker AP, Curtis K, Orlandi RR, Snavely EA, Wright C, et al. Performance Characteristics of BinaxNOW COVID-19 Antigen Card for Screening Asymptomatic Individuals in a University Setting. J Clin Microbiol. 2021;59(4):e03282–20. doi: 10.1128/JCM.03282-20 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 133.Onsongo SN, Otieno K, van Duijn S, Adams E, Omollo M, Odero IA, et al. Field performance of NowCheck rapid antigen test for SARS-CoV-2 in Kisumu County, western Kenya. medRxiv [Preprint]; published August 13, 2021. doi: 10.1101/2021.08.12.21261462 [DOI] [Google Scholar]
  • 134.Orsi A, Pennati BM, Bruzzone B, Ricucci V, Ferone D, Barbera P, et al. On-field evaluation of a ultra-rapid fluorescence immunoassay as a frontline test for SARS-COV-2 diagnostic. J Virol Methods. 2021;295:114201. doi: 10.1016/j.jviromet.2021.114201 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 135.Osmanodja B, Budde K, Zickler D, Naik MG, Hofmann J, Gertler M, et al. Accuracy of a Novel SARS-CoV-2 Antigen-Detecting Rapid Diagnostic Test from Standardized Self-Collected Anterior Nasal Swabs. J Clin Med. 2021;10(10). doi: 10.3390/jcm10102099 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 136.Osterman A, Baldauf HM, Eletreby M, Wettengel JM, Afridi SQ, Fuchs T, et al. Evaluation of two rapid antigen tests to detect SARS-CoV-2 in a hospital setting. Med Microbiol Immunol. 2021;210(1):65–72. doi: 10.1007/s00430-020-00698-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 137.Osterman A, Iglhaut M, Lehner A, Späth P, Stern M, Autenrieth H, et al. Comparison of four commercial, automated antigen tests to detect SARS-CoV-2 variants of concern. Med Microbiol Immunol. 2021:1–13. doi: 10.1007/s00430-021-00719-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 138.Parada-Ricart E, Gomez-Bertomeu F, Picó-Plana E, Olona-Cabases M. Usefulness of the antigen for diagnosing SARS-CoV-2 infection in patients with and without symptoms. Enferm Infecc Microbiol Clin. 2020;39:357–8. doi: 10.1016/j.eimc.2020.09.009 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 139.Pekosz A, Cooper C, Parvu V, Li M, Andrews J, Manabe YCC, et al. Antigen-based testing but not real-time PCR correlates with SARS-CoV-2 virus culture. Clin Infect Dis. 2020:ciaa1706. doi: 10.1093/cid/ciaa1706 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 140.Pena M, Ampuero M, Garces C, Gaggero A, Garcia P, Velasquez MS, et al. Performance of SARS-CoV-2 rapid antigen test compared with real-time RT-PCR in asymptomatic individuals. Int J Infect Dis. 2021;107:201–4. doi: 10.1016/j.ijid.2021.04.087 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 141.Pérez-García F, Romanyk J, Gómez-Herruz P, Arroyo T, Pérez-Tanoira R, Linares M, et al. Diagnostic performance of CerTest and Panbio antigen rapid diagnostic tests to diagnose SARS-CoV-2 infection. J Clin Virol. 2021;137:104781. doi: 10.1016/j.jcv.2021.104781 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 142.Perez-Garcia F, Romanyk J, Gutierrez HM, Ballestero AL, Ranz IP, Arroyo JG, et al. Comparative evaluation of Panbio and SD Biosensor antigen rapid diagnostic tests for COVID-19 diagnosis. J Med Virol. 2021;93(9):5650–4. doi: 10.1002/jmv.27089 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 143.Peto T. COVID-19: Rapid antigen detection for SARS-CoV-2 by lateral flow assay: A national systematic evaluation of sensitivity and specificity for mass-testing. EClinicalMedicine. 2021;36:100924. doi: 10.1016/j.eclinm.2021.100924 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 144.Pickering S, Batra R, Merrick B, Snell LB, Nebbia G, Douthwaite S, et al. Comparative performance of SARS-CoV-2 lateral flow antigen tests and association with detection of infectious virus in clinical specimens: a single-centre laboratory evaluation study. Lancet Microbe. 2021;2(9):E461–71. doi: 10.1016/S2666-5247(21)00143-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 145.Pilarowski G, Lebel P, Sunshine S, Liu J, Crawford E, Marquez C, et al. Performance characteristics of a rapid SARS-CoV-2 antigen detection assay at a public plaza testing site in San Francisco. J Infect Dis. 2020;223(7):1139–44. doi: 10.1101/2020.11.02.20223891 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 146.Pollock N, Jacobs J, Tran K, Cranston A, Smith S, O’Kane C, et al. Performance and Implementation Evaluation of the Abbott BinaxNOW Rapid Antigen Test in a High-throughput Drive-through Community Testing Site in Massachusetts. J Clin Microbiol. 2021;59(5):e00083–21. doi: 10.1128/JCM.00083-21 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 147.Pollock NR, Tran K, Jacobs JR, Cranston AE, Smith S, O’Kane CY, et al. Performance and Operational Evaluation of the Access Bio CareStart Rapid Antigen Test in a High-Throughput Drive-Through Community Testing Site in Massachusetts. Open Forum Infect Dis. 2021;8(7):ofab243. doi: 10.1093/ofid/ofab243 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 148.Porte L, Legarraga P, Iruretagoyena M, Vollrath V, Pizarro G, Munita J, et al. Evaluation of two fluorescence immunoassays for the rapid detection of SARS-CoV-2 antigen—new tool to detect infective COVID-19 patients. PeerJ. 2020;9:e10801. doi: 10.1101/2020.10.04.20206466 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 149.Porte L, Legarraga P, Vollrath V, Aguilera X, Munita JM, Araos R, et al. Evaluation of a novel antigen-based rapid detection test for the diagnosis of SARS-CoV-2 in respiratory samples. Int J Infect Dis. 2020;99:328–33. doi: 10.1016/j.ijid.2020.05.098 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 150.Qahtani MA, Yang CWT, Lazosky L, Li X, D’Cruz J, Romney MG, et al. SARS-CoV-2 rapid antigen testing for departing passengers at Vancouver international airport. J Travel Med. 2021:taab085. doi: 10.1093/jtm/taab085 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 151.Ristić M, Nikolić N, Čabarkapa V, Turkulov V, Petrović V. Validation of the STANDARD Q COVID-19 antigen test in Vojvodina, Serbia. PLoS ONE. 2021;16(2):e0247606. doi: 10.1371/journal.pone.0247606 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 152.Rottenstreich A, Zarbiv G, Kabiri D, Porat S, Sompolinsky Y, Reubinoff B, et al. Rapid antigen detection testing for universal screening for severe acute respiratory syndrome coronavirus 2 in women admitted for delivery. Am J Obstet Gynecol. 2021;224(5):539–40. doi: 10.1016/j.ajog.2021.01.002 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 153.Salvagno GL, Gianfilippi G, Bragantini D, Henry BM, Lippi G. Clinical assessment of the Roche SARS-CoV-2 rapid antigen test. Diagnosis. 2021;8(3):322–6. doi: 10.1515/dx-2020-0154 [DOI] [PubMed] [Google Scholar]
  • 154.Sberna G, Lalle E, Capobianchi MR, Bordi L, Amendola A. Letter of concern re: "Immunochromatographic test for the detection of SARS-CoV-2 in saliva. J Infect Chemother. 2021 Feb;27(2):384–386. doi: 10.1016/j.jiac.2020.11.016". J Infect Chemother. 2021;27:1129–30. doi: 10.1016/j.jiac.2021.04.003 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 155.Schildgen V, Demuth S, Lüsebrink J, Schildgen O. Limits and opportunities of SARS-CoV-2 antigen rapid tests–an experience based perspective. Pathogens. 2020;10(1):38. doi: 10.3390/pathogens10010038 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 156.Schuit E, Veldhuijzen IK, Venekamp RP, van den Bijllaardt W, Pas SD, Lodder EB, et al. Diagnostic accuracy of rapid antigen tests in asymptomatic and presymptomatic close contacts of individuals with confirmed SARS-CoV-2 infection: cross sectional study. Br Med J. 2021;374:n1676. doi: 10.1136/bmj.n1676 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 157.Schwob J-M, Miauton A, Petrovic D, Perdrix J, Senn N, Jaton K, et al. Diagnostic du Covid-19 en milieu ambulatoire [Diagnosis of COVID-19 in the outpatient setting]. Rev Med Suisse. 2020;17(737):862–5. doi: 10.1101/2020.11.23.20237057 [DOI] [PubMed] [Google Scholar]
  • 158.Scohy A, Anantharajah A, Bodeus M, Kabamba-Mukadi B, Verroken A, Rodriguez-Villalobos H. Low performance of rapid antigen detection test as frontline testing for COVID-19 diagnosis. J Clin Virol. 2020;129:104455. doi: 10.1016/j.jcv.2020.104455 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 159.Seitz T, Schindler S, Winkelmeyer P, Zach B, Wenisch C, Zoufaly A, et al. Evaluation of rapid antigen tests based on saliva for the detection of SARS-CoV-2. J Med Virol. 2021;93:4161–2. doi: 10.1002/jmv.26983 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 160.Seynaeve Y, Heylen J, Fontaine C, Maclot F, Meex C, Diep AN, et al. Evaluation of Two Rapid Antigenic Tests for the Detection of SARS-CoV-2 in Nasopharyngeal Swabs. J Clin Med. 2021;10(13):2774. doi: 10.3390/jcm10132774 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 161.Shah M, Salvatore P, Ford L, Kamitani E, Whaley M, Mitchell K, et al. Performance of Repeat BinaxNOW SARS-CoV-2 Antigen Testing in a Community Setting, Wisconsin, November-December 2020. Clin Infect Dis. 2021;73:S54–7. doi: 10.1093/cid/ciab309 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 162.Shaikh N, Friedlander EJ, Tate PJ, Liu H, Chang CH, Wells A, et al. Performance of a Rapid SARS-CoV-2 Antigen Detection Assay in Symptomatic Children. Pediatrics. 2021;148(3):e2021050832. doi: 10.1542/peds.2021-050832 [DOI] [PubMed] [Google Scholar]
  • 163.Shaw JLV, Deslandes V, Smith J, Desjardins M. Evaluation of the Abbott Panbio COVID-19 Ag rapid antigen test for the detection of SARS-CoV-2 in asymptomatic Canadians. Diagn Microbiol Infect Dis. 2021;101(4):115514. doi: 10.1016/j.diagmicrobio.2021.115514 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 164.Shidlovskaya E, Kuznetsova N, Divisenko E, Nikiforova M, Siniavin A, Ogarkova D, et al. The Value of Rapid Antigen Tests to Identify Carriers of Viable SARS-CoV-2. medRxiv [Preprint]; published March 12, 2021. doi: 10.1101/2021.03.10.21252667 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 165.Shrestha B, Neupane A, Pant S, Shrestha A, Bastola A, Rajbhandari B, et al. Sensitivity and specificity of lateral flow antigen test kits for COVID-19 in asymptomatic population of quarantine centre of Province 3. Kathmandu Univ Med J. 2020;18(2):36–9. [PubMed] [Google Scholar]
  • 166.Smith RD, Johnson JK, Clay C, Girio-Herrera L, Stevens D, Abraham M, et al. Clinical Evaluation of Sofia Rapid Antigen Assay for Detection of Severe Acute Respiratory Syndrome Coronavirus 2 among Emergency Department to Hospital Admissions. Infect Control Hosp Epidemiol. 2021:1–6. doi: 10.1017/ice.2021.281 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 167.Šterbenc A, Tomič V, Bidovec Stojković U, Vrankar K, Rozman A, Zidarn M. Usefulness of rapid antigen testing for SARS-CoV-2 screening of healthcare workers: a pilot study. Clin Exp Med. 2021:1–4. doi: 10.1007/s10238-021-00722-y [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 168.Stohr J, Zwart VF, Goderski G, Meijer A, Nagel-Imming CRS, Kluytmans-van den Bergh MFQ, et al. Self-testing for the detection of SARS-CoV-2 infection with rapid antigen tests for people with suspected COVID-19 in the community. Clin Microbiol Infect. 2021. doi: 10.1016/j.cmi.2021.07.039 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 169.Stokes W, Berenger BM, Portnoy D, Scott B, Szelewicki J, Singh T, et al. Clinical performance of the Abbott Panbio with nasopharyngeal, throat, and saliva swabs among symptomatic individuals with COVID-19. Eur J Clin Microbiol Infect Dis. 2021;40(8):1721–6. doi: 10.1007/s10096-021-04202-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 170.Strömer A, Rose R, Schäfer M, Schön F, Vollersen A, Lorentz T, et al. Performance of a Point-of-Care Test for the Rapid Detection of SARS-CoV-2 Antigen. Microorganisms. 2020;9(1). doi: 10.3390/microorganisms9010058 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 171.Suzuki H, Akashi Y, Ueda A, Kiyasu Y, Takeuchi Y, Maehara Y, et al. Diagnostic performance of a novel digital immunoassay (RapidTesta SARS-CoV-2): a prospective observational study with 1,127 nasopharyngeal samples. medRxiv [Preprint]; published August 04, 2021. doi: 10.1101/2021.07.26.21261162 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 172.Takeda Y, Mori M, Omi K. SARS-CoV-2 qRT-PCR Ct value distribution in Japan and possible utility of rapid antigen testing kit. medRxiv [Preprint]; published June 19, 2020. doi: 10.1101/2020.06.16.20131243 [DOI] [Google Scholar]
  • 173.Takeuchi Y, Akashi Y, Kato D, Kuwahara M, Muramatsu S, Ueda A, et al. The evaluation of a newly developed antigen test (QuickNavi-COVID19 Ag) for SARS-CoV-2: A prospective observational study in Japan. J Infect Chemother. 2021;27(6):890–4. doi: 10.1016/j.jiac.2021.02.029 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 174.Takeuchi Y, Akashi Y, Kato D, Kuwahara M, Muramatsu S, Ueda A, et al. Diagnostic performance and characteristics of anterior nasal collection for the SARS-CoV-2 antigen test: a prospective study. Sci Rep. 2021;11(1):10519. doi: 10.1038/s41598-021-90026-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 175.Terpos E, Ntanasis-Stathopoulos I, Skvarc M. Clinical Application of a New SARS-CoV-2 Antigen Detection Kit (Colloidal Gold) in the Detection of COVID-19. Diagnostics. 2021;11(6):995. doi: 10.3390/diagnostics11060995 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 176.Thakur P, Saxena S, Manchanda V, Rana N, Goel R, Arora R. Utility of Antigen-Based Rapid Diagnostic Test for Detection of SARS-CoV-2 Virus in Routine Hospital Settings. Lab Med. 2021; Online ahead of print. doi: 10.1093/labmed/lmab033 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 177.Thell R, Kallab V, Weinhappel W, Mueckstein W, Heschl L, Heschl M, et al. Evaluation of a novel, rapid antigen detection test for the diagnosis of SARS-CoV-2. medRxiv [Preprint]; published April 22, 2021. doi: 10.1371/journal.pone.0259527 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 178.Thirion-Romero I, Guerrero-Zuniga S, Arias-Mendoza A, Cornejo-Juarez DP, Meza-Meneses P, Torres-Erazo DS, et al. Evaluation of a rapid antigen test for SARS-CoV-2 in symptomatic patients and their contacts: a multicenter study. medRxiv [Preprint]; published May 24, 2021. doi: 10.1016/j.ijid.2021.10.027 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 179.Tinker SC, Szablewski CM, Litvintseva AP, Drenzek C, Voccio GE, Hunter MA, et al. Point-of-Care Antigen Test for SARS-CoV-2 in Asymptomatic College Students. Emerg Infect Dis. 2021;27(10). doi: 10.3201/eid2710.210080 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 180.Toptan T, Eckermann L, Pfeiffer A, Hoehl S, Ciesek S, Drosten C, et al. Evaluation of a SARS-CoV-2 rapid antigen test: potential to help reduce community spread? J Clin Virol. 2020;135:104713. doi: 10.1016/j.jcv.2020.104713 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 181.Torres I, Poujois S, Albert E, Álvarez G, Colomina J, Navarro D. Point-of-care evaluation of a rapid antigen test (CLINITEST Rapid COVID-19 Antigen Test) for diagnosis of SARS-CoV-2 infection in symptomatic and asymptomatic individuals. J Infect. 2021;82(5):e11–2. doi: 10.1016/j.jinf.2021.02.010 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 182.Torres I, Poujois S, Albert E, Colomina J, Navarro D. Real-life evaluation of a rapid antigen test (Panbio COVID-19 Ag Rapid Test Device) for SARS-CoV-2 detection in asymptomatic close contacts of COVID-19 patients. Clin Microbiol Infect. 2020;27(4):636.e1–636.e4. doi: 10.1016/j.cmi.2020.12.022 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 183.Tsai SC, Lee WS, Chen PY, Hung SH. Real world clinical performance of SARS-CoV-2 rapid antigen tests in suspected COVID-19 cases in Taiwan. J Formos Med Assoc. 2021;120(10):2042–3. doi: 10.1016/j.jfma.2021.07.011 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 184.Turcato G, Zaboli A, Pfeifer N, Ciccariello L, Sibilio S, Tezza G, et al. Clinical application of a rapid antigen test for the detection of SARS-CoV-2 infection in symptomatic and asymptomatic patients evaluated in the emergency department: a preliminary report. J Infect. 2020;82:e14–6. doi: 10.1016/j.jinf.2020.12.012 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 185.Van der Moeren N, Zwart VF, Lodder EB, Van den Bijllaardt W, Van Esch H, Stohr J, et al. Evaluation of the test accuracy of a SARS-CoV-2 rapid antigen test in symptomatic community dwelling individuals in the Netherlands. PLoS ONE. 2021;16(5):e0250886. doi: 10.1371/journal.pone.0250886 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 186.Van Honacker E, Van Vaerenbergh K, Boel A, De Beenhouwer H, Leroux-Roels I, Cattoir L. Comparison of five SARS-CoV-2 rapid antigen detection tests in a hospital setting and performance of one antigen assay in routine practice: a useful tool to guide isolation precautions? J Hosp Infect. 2021;114:144–52. doi: 10.1016/j.jhin.2021.03.021 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 187.Villaverde S, Domínguez-Rodríguez S, Sabrido G, Pérez-Jorge C, Plata M, Romero MP, et al. Diagnostic Accuracy of the Panbio SARS-CoV-2 Antigen Rapid Test Compared with RT-PCR Testing of Nasopharyngeal Samples in the Pediatric Population. J Pediatr. 2021;232:287–289.e4. doi: 10.1016/j.jpeds.2021.01.027 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 188.Weitzel T, Legarraga P, Iruretagoyena M, Pizarro G, Vollrath V, Araos R, et al. Comparative evaluation of four rapid SARS-CoV-2 antigen detection tests using universal transport medium. Travel Med Infect Dis. 2020;39:101942. doi: 10.1016/j.tmaid.2020.101942 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 189.Weitzel T, Pérez C, Tapia D, Legarraga P, Porte L. SARS-CoV-2 rapid antigen detection tests. Lancet Infect Dis. 2021;21(8):1067–8. doi: 10.1016/S1473-3099(21)00249-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 190.Wertenauer C, Michael GB, Dressel A, Pfeifer C, Hauser U, Wieland E, et al. Diagnostic Efficacy of Rapid Antigen Testing for SARS-CoV-2: The COVid-19 AntiGen (COVAG) study. medRxiv [Preprint]; published August 07, 2021. doi: 10.1101/2021.08.04.21261609 [DOI] [Google Scholar]
  • 191.Yin N, Debuysschere C, Decroly M, Bouazza FZ, Collot V, Martin C, et al. SARS-CoV-2 Diagnostic Tests: Algorithm and Field Evaluation From the Near Patient Testing to the Automated Diagnostic Platform. Front Med. 2021;8:380. doi: 10.3389/fmed.2021.650581 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 192.Young BC, Eyre DW, Jeffery K. Use of lateral flow devices allows rapid triage of patients with SARS-CoV-2 on admission to hospital. J Infect. 2021;82:276–316. doi: 10.1016/j.jinf.2021.02.025 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 193.Young S, Taylor SN, Cammarata CL, Varnado KG, Roger-Dalbert C, Montano A, et al. Clinical evaluation of BD Veritor SARS-CoV-2 point-of-care test performance compared to PCR-based testing and versus the Sofia 2 SARS Antigen point-of-care test. J Clin Microbiol. 2020;59(1). doi: 10.1128/JCM.02338-20 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 194.Foundation for Innovative New Diagnostics. FIND Evaluation of Bionote, Inc. NowCheck COVID-19 Ag Test. External Report Version 1.5, 20 April 2021.
  • 195.Foundation for Innovative New Diagnostics. FIND Evaluation of RapiGEN Inc. BIOCREDIT COVID-19 Ag. External Report Version 2.1, 10 December 2020.
  • 196.Foundation for Innovative New Diagnostics. FIND Evaluation of SD Biosensor, Inc. STANDARD F COVID-19 Ag FIA. External Report Version 2.1, 10 December 2020.
  • 197.Foundation for Innovative New Diagnostics. FIND Evaluation of SD Biosensor, Inc. STANDARD Q COVID-19 Ag Test. External Report Version 2.1, 10 December 2020.
  • 198.Foundation for Innovative New Diagnostics. FIND Evaluation of Boditech Medical, Inc. iChroma COVID-19 Ag Test. External Report Version 1.0, 23 February 2021.
  • 199.Foundation for Innovative New Diagnostics. FIND Evaluation of Joysbio (Tianjin) Biotechnology Co., Ltd. SARS-CoV-2 Antigen Rapid Test Kit (Colloidal Gold). External Report Version 1.0, 11 February 2021.
  • 200.Foundation for Innovative New Diagnostics. FIND Evaluation of Guangzhou Wondfo Biotech Co., Ltd Wondfo 2019-nCoV Antigen Test (Lateral Flow Method). Public Report Version 1.0, 25 February 2021.
  • 201.Foundation for Innovative New Diagnostics. FIND Evaluation of Abbott Panbio COVID-19 Ag Rapid Test Device (NASAL). External Report Version 1.0, 11 February 2021.
  • 202.Foundation for Innovative New Diagnostics. FIND Evaluation of Bionote, Inc. NowCheck COVID-19 Ag Test, nasal swab. External Report Version 1.0, 30 March 2021.
  • 203.Foundation for Innovative New Diagnostics. FIND Evaluation of Fujirebio Inc. Espline SARS-CoV-2. External Report Version 1.0, 29 March 2021.
  • 204.Foundation for Innovative New Diagnostics. FIND Evaluation of Mologic Ltd, COVID 19 RAPID ANTIGEN TEST. External Report Version 1.0, 23 April 2021.
  • 205.Foundation for Innovative New Diagnostics. FIND Evaluation of NADAL COVID-19 Ag Rapid Test. External Report Version 1.0, 26 April 2021.
  • 206.Foundation for Innovative New Diagnostics. FIND Evaluation of Acon Biotech (Hangzhou) Co. Ltd; Flowflex SARS-CoV-2 Antigen Rapid Test. External Report Version 1.0, 9 June 2021.
  • 207.Foundation for Innovative New Diagnostics. FIND Evaluation of Edinburgh Genetics; ActivXpress+ COVID-19 Antigen Complete Testing Kit. External Report Version 1.0, 26 April 2021.
  • 208.Foundation for Innovative New Diagnostics. FIND Evaluation of Green Cross Medical Sciences Corp.; Genedia W COVID-19 Ag. External Report Version 1.0, 25 April 2021.
  • 209.Foundation for Innovative New Diagnostics. FIND Evaluation of Hotgen; Novel Coronavirus 2019-nCoV Antigen Test (Colloidal Gold). External Report Version 2.0, 15 September 2021.
  • 210.Foundation for Innovative New Diagnostics. FIND Evaluation of Abbott; Panbio COVID-19 Ag Rapid Test Device. Country-Specific External Report Version 1.0, 28 April 2021.
  • 211.Foundation for Innovative New Diagnostics. FIND Evaluation of Premier Medical Corporation Pvt. Ltd; Sure Status COVID-19 Antigen Card Test. External Report Version 1.1, 18 August 2021.
  • 212.Foundation for Innovative New Diagnostics. FIND Evaluation of SD Biosensor, Inc.; STANDARD Q COVID-19 Ag Test. External Report (continued from Version 2.1) Version 1, 22 April 2021.
  • 213.Foundation for Innovative New Diagnostics. FIND Evaluation of SD Biosensor, Inc.; STANDARD Q COVID-19 Ag Test, nasal swab. External Report Version 2.0, 12 April 2021.
  • 214.Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Br Med J. 2021;372:n71. doi: 10.1136/bmj.n71 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 215.Callahan C, Lee RA, Lee GR, Zulauf K, Kirby JE, Arnaout R. Nasal Swab Performance by Collection Timing, Procedure, and Method of Transport for Patients with SARS-CoV-2. J Clin Microbiol. 2021;59(9):e0056921. doi: 10.1128/JCM.00569-21 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 216.Lindner A, Nikolai O, Rohardt C, Kausch F, Wintel M, Gertler M, et al. SARS-CoV-2 patient self-testing with an antigen-detecting rapid test: a head-to-head comparison with professional testing. medRxiv [Preprint]; published January 08, 2021. doi: 10.1101/2021.01.06.20249009 [DOI] [Google Scholar]
  • 217.Deerain J, Druce J, Tran T, Batty M, Yoga Y, Fennell M, et al. Assessment of the analytical sensitivity of ten lateral flow devices against the SARS-CoV-2 omicron variant. J Clin Microbiol. 2021:jcm0247921. doi: 10.1128/jcm.02479-21 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 218.Bekliz M, Adea K, Alvarez C, Essaidi-Laziosi M, Escadafal C, Kaiser L, et al. Analytical sensitivity of seven SARS-CoV-2 antigen-detecting rapid tests for Omicron variant. medRxiv [Preprint]; published December 22, 2021. doi: 10.1101/2021.12.18.21268018 [DOI] [Google Scholar]
  • 219.Bayart J-L, Degosserie J, Favresse J, Gillot C, Didembourg M, Djokoto HP, et al. Analytical Sensitivity of Six SARS-CoV-2 Rapid Antigen Tests for Omicron versus Delta Variant. Viruses. 2022;14(4):654. doi: 10.3390/v14040654 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 220.de Michelena P, Torres I, Ramos-García Á, Gozalbes V, Ruiz N, Sanmartín A, et al. Real-life performance of a COVID-19 rapid antigen detection test targeting the SARS-CoV-2 nucleoprotein for diagnosis of COVID-19 due to the Omicron variant. J Infect. 2022; Online ahead of print. doi: 10.1016/j.jinf.2022.02.022 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 221.Kissler SM, Fauver JR, Mack C, Tai CG, Breban MI, Watkins AE, et al. Viral Dynamics of SARS-CoV-2 Variants in Vaccinated and Unvaccinated Persons. N Engl J Med. 2021;385(26):2489–91. doi: 10.1056/NEJMc2102507 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 222.Hay JA, Kissler SM, Fauver JR, Mack C, Tai CG, Samant RM, et al. Viral dynamics and duration of PCR positivity of the SARS-CoV-2 Omicron variant. medRxiv [Preprint]; published January 14, 2022. doi: 10.1101/2022.01.13.22269257 [DOI] [Google Scholar]
  • 223.World Health Organization. Use of SARS-CoV-2 antigen-detection rapid diagnostic tests for COVID-19 self-testing. 2022. [Google Scholar]
  • 224.Adamson B, Sikka R, Wyllie AL, Premsrirut P. Discordant SARS-CoV-2 PCR and Rapid Antigen Test Results When Infectious: A December 2021 Occupational Case Series. medRxiv [Preprint]; published January 05, 2022. doi: 10.1101/2022.01.04.22268770 [DOI] [Google Scholar]
  • 225.Marais G, Hsiao N-y, Iranzadeh A, Doolabh D, Enoch A, Chu C-y, et al. Saliva swabs are the preferred sample for Omicron detection. medRxiv [Preprint]; published December 24, 2021. doi: 10.1101/2021.12.22.21268246 [DOI] [Google Scholar]
  • 226.Schrom J, Marquez C, Pilarowski G, Wang G, Mitchell A, Puccinelli R, et al. Direct Comparison of SARS CoV-2 Nasal RT- PCR and Rapid Antigen Test (BinaxNOW(TM)) at a Community Testing Site During an Omicron Surge. medRxiv [Preprint]; published January 12, 2022. doi: 10.1101/2022.01.08.22268954 [DOI] [Google Scholar]
  • 227.Audigé A, Böni J, Schreiber PW, Scheier T, Buonomano R, Rudiger A, et al. Reduced Relative Sensitivity of the Elecsys SARS-CoV-2 Antigen Assay in Saliva Compared to Nasopharyngeal Swabs. Microorganisms. 2021;9(8):1700. doi: 10.3390/microorganisms9081700 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 228.Dye C, Bartolomeos K, Moorthy V, Kieny MP. Data sharing in public health emergencies: a call to researchers. Bull World Health Organ. 2016;94(3):158. doi: 10.2471/BLT.16.170860 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 229.Modjarrad K, Moorthy VS, Millett P, Gsell PS, Roth C, Kieny MP. Developing Global Norms for Sharing Data and Results during Public Health Emergencies. PLoS Med. 2016;13(1):e1001935. doi: 10.1371/journal.pmed.1001935 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 230.Arnaout R, Lee RA, Lee GR, Callahan C, Cheng A, Yen CF, et al. The Limit of Detection Matters: The Case for Benchmarking Severe Acute Respiratory Syndrome Coronavirus 2 Testing. Clin Infect Dis. 2021;73(9):e3042–6. doi: 10.1093/cid/ciaa1382 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 231.Mina MJ, Parker R, Larremore DB. Rethinking Covid-19 Test Sensitivity—A Strategy for Containment. N Engl J Med. 2020;383(22). doi: 10.1056/NEJMp2025631 [DOI] [PubMed] [Google Scholar]
  • 232.Kirby JE, Riedel S, Dutta S, Arnaout R, Cheng A, Ditelberg S, et al. SARS-CoV-2 Antigen Tests Predict Infectivity Based on Viral Culture: Comparison of Antigen, PCR Viral Load, and Viral Culture Testing on a Large Sample Cohort. medRxiv [Preprint]; published December 23, 2021. doi: 10.1101/2021.12.22.21268274 [DOI] [PMC free article] [PubMed] [Google Scholar]

Decision Letter 0

Richard Turner

10 Feb 2022

Dear Dr Denkinger,

Thank you for submitting your manuscript entitled "Accuracy of rapid point-of-care antigen-based diagnostics for SARS-CoV-2: an updated systematic review and meta-analysis with meta regression analyzing influencing factors" for consideration by PLOS Medicine.

Your manuscript has now been evaluated by the PLOS Medicine editorial staff and I am writing to let you know that we would like to send your submission out for external assessment.

However, we first need you to complete your submission by providing the metadata that is required for full assessment. To this end, please login to Editorial Manager where you will find the paper in the 'Submissions Needing Revisions' folder on your homepage. Please click 'Revise Submission' from the Action Links and complete all additional questions in the submission questionnaire.

Please re-submit your manuscript within two working days, i.e. by Feb 14 2022 11:59PM.

Login to Editorial Manager here: https://www.editorialmanager.com/pmedicine

Once your full submission is complete, your paper will undergo a series of checks in preparation for full assessment.

Feel free to email us at plosmedicine@plos.org if you have any queries relating to your submission.

Kind regards,

Richard Turner, PhD

Senior Editor, PLOS Medicine

plosmedicine@plos.org

Decision Letter 1

Richard Turner

8 Mar 2022

Dear Dr. Denkinger,

Thank you very much for submitting your manuscript "Accuracy of rapid point-of-care antigen-based diagnostics for SARS-CoV-2: an updated systematic review and meta-analysis with meta regression analyzing influencing factors" (PMEDICINE-D-22-00454R1) for consideration at PLOS Medicine.

Your paper was discussed with an academic editor with relevant expertise and sent to independent reviewers, including a statistical reviewer. The reviews are appended at the bottom of this email and any accompanying reviewer attachments can be seen via the link below:

[LINK]

In light of these reviews, we will not be able to accept the manuscript for publication in the journal in its current form, but we would like to invite you to submit a revised version that addresses the reviewers' and editors' comments fully. You will appreciate that we cannot make a decision about publication until we have seen the revised manuscript and your response, and we expect to seek re-review by one or more of the reviewers.

In revising the manuscript for further consideration, your revisions should address the specific points made by each reviewer and the editors. Please also check the guidelines for revised papers at http://journals.plos.org/plosmedicine/s/revising-your-manuscript for any that apply to your paper. In your rebuttal letter you should indicate your response to the reviewers' and editors' comments, the changes you have made in the manuscript, and include either an excerpt of the revised text or the location (eg: page and line number) where each change can be found. Please submit a clean version of the paper as the main article file; a version with changes marked should be uploaded as a marked up manuscript.

In addition, we request that you upload any figures associated with your paper as individual TIF or EPS files with 300dpi resolution at resubmission; please read our figure guidelines for more information on our requirements: http://journals.plos.org/plosmedicine/s/figures. While revising your submission, please upload your figure files to the PACE digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at PLOSMedicine@plos.org.

We hope to receive your revised manuscript by Mar 29 2022 11:59PM. Please email us (plosmedicine@plos.org) if you have any questions or concerns.

***Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.***

We ask every co-author listed on the manuscript to fill in a contributing author statement, making sure to declare all competing interests. If any of the co-authors have not filled in the statement, we will remind them to do so when the paper is revised. If all statements are not completed in a timely fashion this could hold up the re-review process. If new competing interests are declared later in the revision process, this may also hold up the submission. Should there be a problem getting one of your co-authors to fill in a statement we will be in contact. YOU MUST NOT ADD OR REMOVE AUTHORS UNLESS YOU HAVE ALERTED THE EDITOR HANDLING THE MANUSCRIPT TO THE CHANGE AND THEY SPECIFICALLY HAVE AGREED TO IT. You can see our competing interests policy here: http://journals.plos.org/plosmedicine/s/competing-interests.

Please use the following link to submit the revised manuscript:

https://www.editorialmanager.com/pmedicine/

Your article can be found in the "Submissions Needing Revision" folder.

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

Please ensure that the paper adheres to the PLOS Data Availability Policy (see http://journals.plos.org/plosmedicine/s/data-availability), which requires that all data underlying the study's findings be provided in a repository or as Supporting Information. For data residing with a third party, authors are required to provide instructions with contact information for obtaining the data. PLOS journals do not allow statements supported by "data not shown" or "unpublished results." For such statements, authors must provide supporting data or cite public sources that include it.

Please let me know if you have any questions, and we look forward to receiving your revised manuscript.

Sincerely,

Richard Turner, PhD

Senior editor, PLOS Medicine

rturner@plos.org

-----------------------------------------------------------

Requests from the editors:

Please update the search to the most recent date feasible.

Our academic editor commented: "the variant specific analyses one of the reviewers mentioned may be plausible with alpha, delta, and omicron estimates if up to date data are available (would be interesting to see whether there is effect modification depending on what was the predominant variant at the time of testing)".

Please add a new final sentence to the "Methods and findings" subsection of your abstract, which should begin "Study limitations include ..." or similar and should quote 2-3 of the study's main limitations.

In the abstract and throughout the paper, please quote p values alongside 95% CI, where available.

After the abstract, please add a new and accessible "Author summary" section in non-identical prose. You may find it helpful to consult one or two recently-published research papers in PLOS Medicine to get a sense of the preferred style.

Please add some additional detail in the Methods section (main text) regarding the methods for extracting studies and how disagreement between researchers was resolved, for example.

At line 162, we note that heterogeneity was estimated "visually". Can statistical analyses be reported?

Please restructure the start of the Discussion section (main text), as the first paragraph should summarize the study's main findings.

Throughout the text, please remove spaces from within reference call-outs (e.g., "... in which they are used [2,3].").

In the reference list, please convert all italic and boldface text into plain text.

Please list 6 authors names followed by "et al.", where appropriate.

Noting reference 181, please use the journal name abbreviation "PLoS ONE".

Noting reference 222 and others, please ensure that all citations have full access details.

Comments from the reviewers:

*** Reviewer #1:

The authors describe an update/publication of their living systematic review of antigen tests, with particular emphasis on summarizing/clarifying results on subsets of patient populations. This work is important, comprehensive, and well written, and the technical details of the meta-analysis are appropriate (I caution that while I have a fairly strong grasp of the mathematics and techniques described, I would not consider myself a meta-analysis expert). The sensitivity analysis to exclude papers that might have been un/intentionally "bought" by manufacturers is a nice touch. In my opinion it is thus worthy of publication. I recommend acceptance with only minor revisions (below).

Major points

--------------

The only substantive comment regards viral load. As the authors state, and as is well known, viral load ranges widely over the course of infection and among infected individuals. Unfortunately, and to some consternation, even after two-plus years, viral loads are still not reported clinically, as they are for numerous other infections (for example, HIV, HCV), for various reasons, or in studies, where the quantitative measure used is the Ct value. This is problematic, as Ct value ranges can differ from one reference test to another, sometimes markedly.

To their credit, the authors of this study are well aware of this issue, discuss it ably, and use viral loads in the cases where they are available (I feel their discussion in 554-555 is perhaps a bit more defensive-sounding than it needs to be: Ct value is essentially the only available quantitation at present, so however imperfect, it is completely reasonable to use; I would be surprised if there were pushback to that).

However:

--it would be of great utility to name the specific RT-qPCR assay/platform used in each study, where available, in a supplemental document. (Apologies if this was done and I missed it; I looked ca. L283-286.) I suspect this would explain some of the outliers in Fig. 4. Ideally, the authors would add platform as a regression variable, but I respect that this might be too much to ask, so would not require redoing the regression with platform as a condition of publication. The effect of differences in limits of detection of these platforms on sensitivity/specificity/concordance between tests should be mentioned (illustrated well in Fig. 2 of https://pubmed.ncbi.nlm.nih.gov/34076471/ , which should be cited).

--it would be useful (ca. L308-319) to give the reader even a rough sense of what viral loads these Ct value cutoffs most likely correspond to, on average, if possible. If not possible, add a sentence in this section declaring that to be the case, explaining that Ct scales vary (their reference 3 is a suitable citation). My hope is the field will at some point move to viral loads, and it would be wonderful to "future-proof" this excellent review, simply by adding such a Ct-to-viral-load conversion anchor (even a rough one, for example, plus-or-minus two or three orders of magnitude).

Hopefully these (and the minor comments below) will be straightforward points for the authors to address, further strengthening this exceptional study ahead of publication.

Minor comments

------------------

Given that it is known what the predominant strain was in each geography (for example https://github.com/hodcroftlab/covariants), is it not possible to list the likely strain for each study? That would offer the opportunity to regress by strain in the future.

L99-110, PRISMA checklist: I missed whether the search was done independently by two individuals. This may be a requirement of MOOSE or other journals, but not of PRISMA/PLoS Medicine. If it was done, please mention. If it was not, that is also fine. (I am *not* suggesting it be done, if it wasn't, as a condition for publication.)

L228-230: Were the non-English-language papers included? I assume yes, but please clarify.

L151-155: I did not quite follow whether this "relaxing" allowed/resulted in overlap, or just aligned bin edges. Perhaps the authors could clarify.

L302-303: the 95% CIs overlap. Would the authors comment on the probability that this increase is real? I ask especially since the 95% CIs do not overlap for the specificities, yet the authors do not claim that the specificity was lower when using the IFU (which would not be unexpected, given a lower sensitivity). The comparator adjective should be used similarly: I would suggest either these are both comparable, or the first is comparable (since the 95% CIs overlap) and the second is lower. I would suggest they are both comparable.

L328-331: It is interesting that saliva swabs showed the lowest sensitivity, since saliva has been shown to be comparable to NP as a specimen type (https://pubmed.ncbi.nlm.nih.gov/34406838/). The 95%CI range for saliva in Fig. 3 raises the same question (5% to 95%!). Was there/was it possible to detect a difference between the swabbed saliva and the whole-mouth saliva samples?

Fig. 5: Given the narrow range of specificities, might the x-axis be usefully contracted to 0.85 or 0.9-1.0, so that differences might be visualized?

L551-552: Would also cite https://www.medrxiv.org/content/10.1101/2021.12.22.21268274v1

*** Reviewer #2:

Brümmer and colleagues present a systematic review and meta-analysis of the diagnostic accuracy of rapid antigen tests for SARS-CoV-2 compared to a gold standard RT-PCR. Across 194 studies that could be meta-analyzed, the overall sensitivity was 72% and specificity was 98.9%. The study builds upon a previous living systematic review by adding additional studies, though overall findings remain near identical. The added value of this study is in its meta-regression and stratified analyses, which help clarify the impact of various factors on Ag-RDT sensitivity/specificity. The methods used are rigorous and the search thorough. I do have some suggestions for the authors to better describe the performance of Ag-RDTs:

* I believe the IFU evaluation is valuable, but think there is likely more that can be done, if the data is available. The value of Ag-RDTs can only be fully realized if they can be performed without aid or observation of health workers. The authors specifically mention on lines 523-524 that future research could evaluate "Ag-RDTs that are self-performed or those that are instrument-based." It would be interesting to know if any of the 194 studies were self-performed and/or self-read, and if further stratified analyses could be done to evaluate if sensitivity is impacted (as compared to health care professional performed and read). This would better clarify the role and impact of Ag-RDTs as a public health tool.

* In the same vein, performance of Ag-RDT may be impacted by circulating variants. Were data available to evaluate this (e.g., for wild type, alpha, delta) within the existing cohorts in the review (understanding this study was done pre-omicron)? From the discussion section, it is clear that most studies were done during wild-type or alpha dominance. Proxies could be used, such as the dominant variant circulating in the study setting during the study period (which I believe is defensible, as study mean age and Ct were used in meta-regression), and would permit comparison.

* With respect to specific brands of Ag-RDT being the best performers, I was left wondering if studies that used these Ag-RDT also contained participants with characteristics that increased sensitivity (e.g., IFU-conforming, symptomatic patients, early in symptoms, lower Ct), which would limit our ability to conclude which test may have highest sensitivity. I think more could be done to contextualize these findings, particularly in the discussion. Forgive me if I missed this information.

* What were the specimens collected for RT-PCR for the reference standard across studies? I was unable to find this information, however I may have missed it in the dense supplement. There are differences in sensitivity between specimen types for RT-PCR, so if there were different specimens used for the reference standard across studies this should be clear and evaluated.

* I believe there may be typos with respect to number of included studies and data sets throughout the manuscript (e.g., Fig 1 states 194 studies included, line 216 says 174). Please review.

*** Reviewer #3:

Alex McConnachie, Statistical Review

Brummer et al present the latest update from the living review of rapid antigen tests for SARS-Cov-2. The analysis now includes meta regression, looking at factors associated with diagnostic performance. This review considers the statistical element of the paper.

Generally, the statistical methods are good, with the results presented and interpreted appropriately. I have a few comments, which are fairly minor.

As a non-clinical reader, I would have appreciated an explanation of Ct values, in the sense that a low Ct value indicates a higher viral load, earlier in the paper, or even in the abstract.

The statistical methods section of the paper is quite good, but does not mention the use of summary ROC curves, when perhaps it should. Also, line 181 mentions multivariate regression of factors associated with sensitivity; "multivariable" is a more accurate term.

There is mention of the assessment of publication bias using Deeks' test for funnel plot asymmetry. However, I could not find any funnel plots in the paper or the supplement, nor any results of these tests. Why are these not reported? Also, assessment of publication bias is usually not recommended when the number of studies is small; did the authors have specific criteria in terms of the number of studies, when it came to the assessment of publication bias?

Most research studies require a justification for the sample size of the study. For meta-analyses, this is generally not an issue. However, in this case, since this is a living review, the authors have been able to see the data accumulating over time, so perhaps some sort of sample size justification is warranted. Why do the analysis now? Were there pre-defined criteria that triggered this analysis?

Three sensitivity analyses are reported. What is the rationale for doing these as sensitivity analyses (i.e. repeating the analysis after excluding a group of studies) rather than as subgroup analyses (i.e. estimating the difference in sensitivity and specificity estimates obtained from studies of different types)?

In Figures 3 and 5, it is noticeable that the pooled specificity estimates are all very high. So high, in fact, that it is very difficult to see what is going on, as all the points and confidence intervals are bunched up to the right of the figure. I can understand that there is a need to keep the axes the same, for comparability, but would it help to truncate the x-axis in these figures, e.g. with a lower range of 90%? A lower limit can be used in the supplement, but for the main body of the paper, a higher value perhaps makes sense.

*** Reviewer #4:

Estimated Authors,

I've read the present study with great interest. Following a living review of POC antigen-based tests for SARS-CoV-2 (updated until August 2021), Authors were able to summarise available evidence on the sensitivity and specificity of such instruments. Because of the reduced turnaround time, as well as the option to perform such tests in settings other than hospitals and medical laboratories, POC antigen-based tests represent a valuable asset in our global efforts against the SARS-CoV-2 pandemic. Unfortunately, as previously stressed by several studies, the actual sensitivity of these POC tests may be inadequate to fulfil the target of promptly and accurately identifying incident cases of SARS-CoV-2 infection. With a sensitivity of around 70%, a large share of cases may fail to be diagnosed, particularly in cases with low to moderate replication of the pathogen, and in early stages of infection.

Authors, through an appropriate study design and an accurate and diligent application of a proper methodology, were able to provide a valuable piece of information for all professionals interested in this specific topic.

Briefly, I have neither requests nor recommendations for improving this paper, whose acceptance "as it is" I strongly recommend.

***

Any attachments provided with reviews can be seen via the following link:

[LINK]

Decision Letter 2

Richard Turner

22 Apr 2022

Dear Dr. Denkinger,

Thank you very much for re-submitting your manuscript "Accuracy of rapid point-of-care antigen-based diagnostics for SARS-CoV-2: an updated systematic review and meta-analysis with meta regression analyzing influencing factors" (PMEDICINE-D-22-00454R2) for consideration at PLOS Medicine.

I have discussed the paper with our academic editor and it was also seen again by three reviewers. I am pleased to tell you that, once the remaining editorial and production issues are fully dealt with, we expect to be able to accept the paper for publication in the journal.

The remaining issues that need to be addressed are listed at the end of this email. Any accompanying reviewer attachments can be seen via the link below. Please take these into account before resubmitting your manuscript:

[LINK]

***Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.***

In revising the manuscript for further consideration here, please ensure you address the specific points made by each reviewer and the editors. In your rebuttal letter you should indicate your response to the reviewers' and editors' comments and the changes you have made in the manuscript. Please submit a clean version of the paper as the main article file. A version with changes marked must also be uploaded as a marked up manuscript file.

Please also check the guidelines for revised papers at http://journals.plos.org/plosmedicine/s/revising-your-manuscript for any that apply to your paper. If you haven't already, we ask that you provide a short, non-technical Author Summary of your research to make findings accessible to a wide audience that includes both scientists and non-scientists. The Author Summary should immediately follow the Abstract in your revised manuscript. This text is subject to editorial change and should be distinct from the scientific abstract.

We hope to receive your revised manuscript within 1 week. Please email us (plosmedicine@plos.org) if you have any questions or concerns.

We ask every co-author listed on the manuscript to fill in a contributing author statement. If any of the co-authors have not filled in the statement, we will remind them to do so when the paper is revised. If all statements are not completed in a timely fashion this could hold up the re-review process. Should there be a problem getting one of your co-authors to fill in a statement we will be in contact. YOU MUST NOT ADD OR REMOVE AUTHORS UNLESS YOU HAVE ALERTED THE EDITOR HANDLING THE MANUSCRIPT TO THE CHANGE AND THEY SPECIFICALLY HAVE AGREED TO IT.

Please ensure that the paper adheres to the PLOS Data Availability Policy (see http://journals.plos.org/plosmedicine/s/data-availability), which requires that all data underlying the study's findings be provided in a repository or as Supporting Information. For data residing with a third party, authors are required to provide instructions with contact information for obtaining the data. PLOS journals do not allow statements supported by "data not shown" or "unpublished results." For such statements, authors must provide supporting data or cite public sources that include it.

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript.

Please note, when your manuscript is accepted, an uncorrected proof of your manuscript will be published online ahead of the final version, unless you've already opted out via the online submission form. If, for any reason, you do not want an earlier version of your manuscript published online or are unsure if you have already indicated as such, please let the journal staff know immediately at plosmedicine@plos.org.

Please let me know if you have any questions, and we look forward to receiving the revised manuscript.   

Sincerely,

Richard Turner, PhD

Senior Editor, PLOS Medicine

rturner@plos.org

------------------------------------------------------------

Requests from Editors:

Our academic editor requests that you state in the paper, as a limitation, perhaps, that the absence of literature post-August 2021 means that information on testing for the Delta and Omicron variants, for example, is lacking.

At line 81, please make that "... proved to be" or similar.

At line 649, we suggest "... enabled an improved analysis".

Please add an institutional author name to references 1 & 3, and any others.

Please spell out the group author name in reference 10 and any other relevant references.

Comments from Reviewers:

*** Reviewer #1:

Overall the authors have addressed all the points brought up in my review, further strengthening this major contribution to the literature. The variant discussion in particular was excellent. I have now just the following minor points; if the authors make these, I recommend acceptance. In the interest of time, I would not need to see a revision again before publication (I trust the authors/editors).

L385: Would suggest the following change (additions in ALL CAPS): "Can be assumed to be" --> "AS A POINT OF REFERENCE, we assume AS A MEDIAN CONVERSION that Ct value of 25 corresponds to a viral load of 1.5 * 10^6 RNA copies per milliliter OF TRANSPORT MEDIA, but [rest of sentence and refs are fine]". Gives a better idea of why the assumption was made.
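The log-linear conversion implied by this suggested anchor can be sketched as follows. This is purely illustrative and not part of the manuscript: it assumes perfect amplification efficiency (template doubling each cycle, hence roughly 3.32 Ct units per order of magnitude) and uses the reviewer's suggested median reference point; real RT-qPCR assays differ in efficiency and calibration.

```python
import math

def ct_to_viral_load(ct, ref_ct=25.0, ref_load=1.5e6, efficiency=2.0):
    """Rough Ct-to-viral-load conversion on a log-linear standard curve.

    Anchored at the reviewer's suggested median reference point
    (Ct 25 ~ 1.5e6 RNA copies per mL of transport medium). With
    perfect efficiency (2.0, i.e., doubling per cycle), every
    log2(10) ~ 3.32 Ct units correspond to one order of magnitude.
    """
    return ref_load * efficiency ** (ref_ct - ct)

# Lower Ct means more starting template, hence higher viral load
print(f"Ct 25 -> {ct_to_viral_load(25):.2e} copies/mL")
print(f"Ct 30 -> {ct_to_viral_load(30):.2e} copies/mL")
```

Under these assumptions, shifting Ct up by 5 cycles divides the implied viral load by 2^5 = 32, which is why the uncertainty of such an anchor easily spans the plus-or-minus two to three orders of magnitude the reviewer mentions.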

L222-223: please cite Covariants.org as follows, per the author's recommendation (https://covariants.org/faq#how-should-i-cite-or-acknowledge-this-work): Emma B. Hodcroft. 2021. "CoVariants: SARS-CoV-2 Mutations and Variants of Interest." https://covariants.org/. This will help the author rationalize to funding agencies the value of this work, keeping it available to the scientific community going forward.

L662 (or possibly L175-177): It is worth noting explicitly that RT-PCR tests are generally much more sensitive than antigen tests, which fits nicely with the better correlation at lower Ct values (most COVID-19-experienced readers will know this, but safer to not assume that future/all readers will). Please cite https://academic.oup.com/cid/article/73/9/e3042/6127024 as to the importance of LoD.

Table S2: Would add a column for the limit of detection of the comparator and a column for the limit of detection of the Ag-RDT (I went through all the supplements; apologies if I missed this). These are also usefully mentioned where appropriate in L360-362.

*** Reviewer #2:

I'd like to thank the authors for thoroughly revising their manuscript based on the editor and reviewer comments. My only remaining comment is for the authors to potentially re-consider how they speak about overlapping confidence intervals - which are not a perfect method to state two measures are not significantly different (see: https://www.tandfonline.com/doi/abs/10.1198/000313001317097960 and https://cscu.cornell.edu/wp-content/uploads/73_ci.pdf). The authors can maintain their messaging even with these statements on overlapping CIs removed. Currently, there is an implication that differences in sensitivity do not exist between tests since confidence intervals overlap - though I think the authors intend to say there is heterogeneity.
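The reviewer's statistical point can be illustrated numerically (hypothetical numbers, not taken from the paper; a normal-approximation sketch): two estimates whose 95% CIs overlap can still differ significantly, because the standard error of the difference is sqrt(se1^2 + se2^2), which is smaller than the se1 + se2 that CI overlap implicitly compares against.

```python
from math import sqrt
from statistics import NormalDist

z95 = NormalDist().inv_cdf(0.975)  # ~1.96

def ci(mean, se):
    """95% confidence interval under a normal approximation."""
    return mean - z95 * se, mean + z95 * se

# Hypothetical pooled sensitivity estimates (percentage points)
m1, se1 = 70.0, 1.0
m2, se2 = 73.0, 1.0

lo1, hi1 = ci(m1, se1)  # approx (68.04, 71.96)
lo2, hi2 = ci(m2, se2)  # approx (71.04, 74.96)

overlap = hi1 > lo2                    # True: the 95% CIs overlap ...
z = (m2 - m1) / sqrt(se1**2 + se2**2)  # z ~ 2.12
significant = abs(z) > z95             # ... yet the difference has p < 0.05
```

This is why judging significance by CI overlap is conservative: with equal standard errors, overlap compares the difference against 2 * 1.96 * se, while the proper two-sample test uses sqrt(2) * 1.96 * se.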

*** Reviewer #3:

Alex McConnachie, Statistical Review

I thank the authors for their consideration of my previous comments. I am happy with the responses, and have no further comments to make.

***

Any attachments provided with reviews can be seen via the following link:

[LINK]

Decision Letter 3

Richard Turner

4 May 2022

Dear Dr Denkinger, 

On behalf of my colleagues and the Academic Editor, Dr Suthar, I am pleased to inform you that we have agreed to publish your manuscript "Accuracy of rapid point-of-care antigen-based diagnostics for SARS-CoV-2: an updated systematic review and meta-analysis with meta regression analyzing influencing factors" (PMEDICINE-D-22-00454R3) in PLOS Medicine.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow up email. Please be aware that it may take several days for you to receive this email; during this time no action is required by you. Once you have received these formatting requests, please note that your manuscript will not be scheduled for publication until you have made the required changes.

In the meantime, please log into Editorial Manager at http://www.editorialmanager.com/pmedicine/, click the "Update My Information" link at the top of the page, and update your user information to ensure an efficient production process. 

PRESS

We frequently collaborate with press offices. If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximise its impact. If the press office is planning to promote your findings, we would be grateful if they could coordinate with medicinepress@plos.org. If you have not yet opted out of the early version process, we ask that you notify us immediately of any press plans so that we may do so on your behalf.

We also ask that you take this opportunity to read our Embargo Policy regarding the discussion, promotion and media coverage of work that is yet to be published by PLOS. As your manuscript is not yet published, it is bound by the conditions of our Embargo Policy. Please be aware that this policy is in place both to ensure that any press coverage of your article is fully substantiated and to provide a direct link between such coverage and the published work. For full details of our Embargo Policy, please visit http://www.plos.org/about/media-inquiries/embargo-policy/.

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

Thank you again for submitting to PLOS Medicine. We look forward to publishing your paper. 

Sincerely, 

Richard Turner, PhD 

Senior Editor, PLOS Medicine

rturner@plos.org

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 PRISMA Checklist. PRISMA checklist.

    (DOCX)

    S1 Fig. Forest plots of all Ag-RDTs.

    Ag-RDT, antigen rapid diagnostic test; CI, confidence interval; FN, false negative; FP, false positive; TN, true negative; TP, true positive.

    (PDF)

    S2 Fig. Details of QUADAS assessment.

    (PDF)

    S3 Fig. Forest plots for subgroup analysis by Ct-values.

    CI, confidence interval; Ct, cycle threshold.

    (PDF)

    S4 Fig. Forest plots for subgroup analysis by Ct-values per test.

    CI, confidence interval; Ct, cycle threshold.

    (PDF)

    S5 Fig. Forest plots for subgroup analysis by IFU versus non-IFU.

    CI, confidence interval; FN, false negative; FP, false positive; IFU, instructions for use; TN, true negative; TP, true positive.

    (PDF)

    S6 Fig. Forest plots for subgroup analysis by sample type.

    CI, confidence interval; FN, false negative; FP, false positive; TN, true negative; TP, true positive.

    (PDF)

    S7 Fig. Forest plots for subgroup analysis by symptomatic versus asymptomatic.

    CI, confidence interval.

    (PDF)

    S8 Fig. Forest plot for subgroup analysis by mean Ct-values for TP and FN samples.

    CI, confidence interval; Ct, cycle threshold; FN, false negative; TP, true positive.

    (PDF)

    S9 Fig. Forest plots for subgroup analysis by mean Ct-values for TP and FN samples.

    CI, confidence interval; Ct, cycle threshold; FN, false negative; TP, true positive.

    (PDF)

    S10 Fig. HSROC curves for the Standard Q nasal and LumiraDx Ag-RDTs.

    Ag-RDT, antigen rapid diagnostic test; HSROC, Hierarchical summary receiver-operating characteristic.

    (PDF)

    S11 Fig. Forest plots for univariate analysis of Nadal and SureScreen V.

    CI, confidence interval.

    (PDF)

    S12 Fig. Funnel plots for all, LumiraDx, Panbio, and Standard Q studies.

    (PDF)

    S1 Table. List of parameters extracted from studies.

    (XLSX)

    S2 Table. Summary of tests.

    (XLSX)

    S3 Table. Overall and sensitivity analyses.

    (XLSX)

    S4 Table. Tests analyzed descriptively but not included in the meta-analysis.

    (XLSX)

    S5 Table. Test-specific IFU analysis.

    (XLSX)

    S1 Text. Study protocol submitted to PROSPERO.

    (DOCX)

    S2 Text. Search strategy.

    (DOCX)

    S3 Text. QUADAS-2 assessment interpretation guide.

    (DOCX)

    S4 Text. Details of the meta-regression.

    (DOCX)

    S5 Text. List of excluded studies.

    (DOCX)

    S6 Text. Studies potentially influenced by the test manufacturer.

    (DOCX)

    Data Availability Statement

    All data are available from https://doi.org/10.11588/data/T3MIB0.
