PLOS One. 2023 Jul 21;18(7):e0288612. doi: 10.1371/journal.pone.0288612

Evaluation of the diagnostic accuracy of two point-of-care tests for COVID-19 when used in symptomatic patients in community settings in the UK primary care COVID diagnostic accuracy platform trial (RAPTOR-C19)

Brian D Nicholson 1,*,#, Philip J Turner 1,#, Thomas R Fanshawe 1,#, Alice J Williams 1, Gayatri Amirthalingam 2, Sharon Tonner 1, Maria Zambon 3,4, Richard Body 5,6,7, Kerrie Davies 8,9, Rafael Perera 1, Simon de Lusignan 1, Gail N Hayward 1, FD Richard Hobbs 1; on behalf of the RAPTOR-C19 Study Group and the CONDOR Steering Committee
Editor: Vittorio Sambri
PMCID: PMC10361479  PMID: 37478103

Abstract

Background and objective

Point-of-care lateral flow device antigen testing has been used extensively to identify individuals with active SARS-CoV-2 infection in the community. This study aimed to evaluate the diagnostic accuracy of two point-of-care tests (POCTs) for SARS-CoV-2 in routine community care.

Methods

Adults and children with symptoms consistent with suspected current COVID-19 infection were prospectively recruited from 19 UK general practices and two COVID-19 testing centres between October 2020 and October 2021. Participants were tested by trained healthcare workers using at least one of two index POCTs (Roche-branded SD Biosensor Standard™ Q SARS-CoV-2 Rapid Antigen Test and/or BD Veritor™ System for Rapid Detection of SARS-CoV-2). The reference standard was laboratory triplex reverse transcription quantitative PCR (RT-PCR) using a combined nasal/oropharyngeal swab. Diagnostic accuracy parameters were estimated, with 95% confidence intervals (CIs), overall, in relation to RT-PCR cycle threshold and in pre-specified subgroups.

Results

Of 663 participants included in the primary analysis, 39.2% (260/663, 95% CI 35.5% to 43.0%) had a positive RT-PCR result. The SD Biosensor POCT had sensitivity 84.0% (178/212, 78.3% to 88.6%) and specificity 98.5% (328/333, 96.5% to 99.5%), and the BD Veritor POCT had sensitivity 76.5% (127/166, 69.3% to 82.7%) and specificity 98.8% (249/252, 96.6% to 99.8%) compared with RT-PCR. Sensitivity of both devices dropped substantially at cycle thresholds ≥30 and in participants more than 7 days after onset of symptoms.

Conclusions

Both POCTs assessed exceed the Medicines and Healthcare products Regulatory Agency target product profile’s minimum acceptable specificity of 95%. Confidence intervals for both tests include the minimum acceptable sensitivity of 80%. In symptomatic patients, negative results on these two POCTs do not preclude the possibility of infection. Tests should not be expected to reliably detect disease more than a week after symptom onset, when viral load may be reduced.

Registration

ISRCTN142269.

Introduction

As point-of-care tests (POCTs), lateral flow device antigen (LFD-Ag) tests provide rapid results that avoid the delays and costs associated with laboratory testing [1] and may be used for community testing for SARS-CoV-2. They provide decentralised, near-real-time information to guide individual decisions about self-isolation and treatment, enabling enhanced surveillance of health and social care staff with potential to reduce community transmission through early detection. For use in primary care, the ideal test would be simple to use with minimal training required, give rapid but accurate results, and present a low biosafety risk. As many countries are reducing or withdrawing community testing at dedicated testing centres, testing for SARS-CoV-2 is falling to community-based healthcare workers, such as those working in General Practice, and to patients.

There are concerns about the diagnostic accuracy of LFD-Ag devices, in particular their performance when used by front-line community-based healthcare workers in usual care settings. False negatives are more damaging in the community, as ambulatory patients can potentially propel community transmission, whilst false positives in otherwise healthy individuals could hamper efforts to maintain employment and education and result in inappropriate management [2].

Whilst the evidence base for LFD-Ag SARS-CoV-2 testing has steadily increased since early 2020 [3], community settings, where most tests take place, are less well studied. Extrapolating results from one clinical setting or population to another risks spectrum bias and is not recommended [4]. In-context evaluations reflect the dynamics of disease transmission, the capabilities of those performing the test and the circumstances under which they are operating. Community populations have a relatively low prevalence and severity of disease, there is overlap in symptomatic presentation with other common clinical syndromes, and the population includes many elderly and frail patients who may mount weaker immune responses to circulating respiratory viruses. This contrasts with studies of selective patient samples tested within laboratories by highly trained staff, or of hospital populations, who are likely to be more severely unwell, to have differing viral loads, and to undergo invasive procedures that increase the yield of respiratory tract sampling. Community staff performing POCTs often have little or no laboratory experience and no ready access to technical support. Therefore, data on the performance of diagnostics in the community are important to inform clinical decisions in the main area of use for these tests.

We aimed to conduct a community-based prospective diagnostic accuracy study of POCTs for SARS-CoV-2 infection in symptomatic patients, performed by front-line healthcare workers.

Methods

Design

RAPTOR-C19 (RApid community Point-of-care Testing fOR COVID-19) is the community testbed for diagnostic testing for SARS-CoV-2 within the UK’s COVID-19 National DiagnOstic Research and Evaluation Platform (CONDOR) [5]. It was designed as a prospective platform study, conducted in the community, to assess the diagnostic accuracy of point-of-care tests (POCTs) for SARS-CoV-2 infection. RAPTOR-C19 allows for POCTs that test for either active or past infection; the present paper relates to the first two POCTs assessed via this study, both of which test for active infection. Further diagnostic tests are undergoing assessment within the platform. The published protocol gives full details of the study design [6], and a summary is provided here.

Ethical approval

This study was approved by the North West-Liverpool Central Research Ethics Committee (20/NW/0282). Participants were provided with information about the study via electronic participant information accessible online. All participants (or their parent or guardian, where applicable) gave informed consent via an e-consent process conducted online to minimise the risk of disease transmission, with the completed consent form emailed to the participant.

Recruitment and participant eligibility

The main setting for this study was UK primary care. Nineteen general practices were recruited after email invitation for expressions of interest, following the sharing of Research Information Sheets for GP surgeries to practices identified through the Oxford-Royal College of General Practitioners (RCGP) Research and Surveillance Centre (RSC) and the National Institute for Health and Care Research (NIHR) Clinical Research Network (CRN). To increase recruitment, two COVID-19 community testing centres for symptomatic individuals were added as additional recruitment sites in the spring of 2021. Participants were adults and children presenting with symptoms of active infection consistent with suspected current COVID-19 (see [6] for a list of specific symptoms). Within these criteria, practices may have differed in their approaches to recruitment (for example, some may not have had capacity to recruit participants on certain days of the week), and so included participants cannot necessarily be considered as a consecutive series among those eligible and willing to participate.

Baseline and follow-up assessments

In addition to undergoing testing for the index test(s) and reference standard described below, further participant information was collected at the time of recruitment using an electronic case report form (eCRF). Variables included age, sex, ethnicity, presence and duration of specified symptoms within the preceding 14 days, vaccine status, household contacts diagnosed with SARS-CoV-2, and the timing and results of previous tests for SARS-CoV-2. Adult participants (age ≥ 16 years) recruited from general practices were asked to provide a venous blood sample for antibody testing, collected by appropriately trained staff. These participants were also invited to attend a second visit, or be visited at home by a research nurse, after 28 days to provide a second sample for repeat antibody testing. Adult participants were asked to complete an online daily symptom diary for 28 days after recruitment, but as completion rates were low, no diary data are reported here. Linked electronic health records provided collateral information about subsequent hospitalisation, SARS-CoV-2 test results (in addition to those performed for this study) and mortality within 28 days. Serious adverse events related to test usage were reported by study sites to the RAPTOR-C19 coordination centre via adverse events reporting forms, which were evaluated by clinical staff.

Index tests

This paper presents results from two POCTs. The SD Biosensor Standard™ Q SARS-CoV-2 Rapid Antigen Test (REF 9901-NCOV-01G, branded and distributed by Roche Diagnostics GmbH, Mannheim, Germany) was used from the start of the study. This test incorporates an internal quality control and is read and interpreted manually by the user following the prescribed assay incubation period [7]. The BD Veritor™ System for Rapid Detection of SARS-CoV-2 coupled with the BD Veritor™ Plus Analyzer (REF 256089, Becton Dickinson and Company, Maryland, USA) was used from January 2021 onwards. This assay also incorporates an internal quality control but differs from the SD Biosensor test, as results are read by the associated analyser following either a manually timed incubation process (Analyze Now mode) or in an automated manner (Walk Away mode) which times incubation and reads automatically [8]. Both assays consist of individually packaged LFD cassettes with associated swabbing and sample extraction materials. The RAPTOR-C19 and CONDOR teams deemed that both tests could feasibly be used by community healthcare workers, including those without clinical qualifications, with minimal training. Both tests had a buffer with SARS-CoV-2 inactivation capacity and a process with no associated aerosol generating procedure for use away from the laboratory. Recruitment sites used either one or both POCTs, depending on availability. Some participants were tested using both POCTs, and as each candidate POCT used a different sampling site (nasopharyngeal for SD Biosensor, nasal for BD Veritor), order of sampling was judged unlikely to disadvantage either POCT. Index test results were neither shared with the patient nor used as a basis for clinical decision-making. Clinical site staff, including general practitioners, nurses and healthcare assistants, took the samples. They received training via a webinar before recruiting to the study, and were asked to adhere to the manufacturers’ instructions for use.
Only manufacturer-issued swabs and materials were used to collect and process samples. Index test results were recorded in the eCRF by site staff as ‘Positive’, ‘Negative’, or ‘Unknown/No result’. Further details of testing procedures are provided as Supplemental Material.

Reference standard

The reference test for active infection was an in-house validated reverse transcription quantitative PCR (RT-PCR) for the detection of ORF1ab and E gene regions of SARS-CoV-2. The assay incorporated ORF1ab primers and probes as published by the Chinese Center for Disease Control and Prevention and E gene primers and probe published by Corman et al. [9, 10]. The assay used the ThermoFisher TaqPath 1-Step Multiplex Master Mix (ThermoFisher Scientific, Waltham, Massachusetts, United States) carried out on the ABI QuantStudio 7 flex real-time PCR system (Applied Biosystems Corporation, Waltham, Massachusetts, United States). Testing was performed at the same Public Health England (latterly UK Health Security Agency) laboratory, using a combined nasal/oropharyngeal swab taken during the same visit as when the index test(s) were performed. Results were reported as positive or negative for SARS-CoV-2, with RT-PCR cycle threshold (Ct) values provided for each assay target [11]. The reference test was conducted blind to the results of the index tests. Reference results were not available for at least 24 hours after recruitment. The reference sample was also tested for respiratory syncytial virus (RSV), human metapneumovirus (hMPV), seasonal coronavirus, and influenza. We linked baseline and POCT data to date-matched reference standard data using a unique patient identifier. During the earlier phase of the study, delivery delays and recording and administrative errors meant that for some participants no reference swab was analysed. For some others the swab used for RT-PCR could not be reliably date-matched with the recruitment date of the participant. These participants were excluded from the primary analysis but included in a sensitivity analysis.

Sample size

The sample size was based on a target of 150 reference standard positive individuals for each index test. If the true sensitivity of an index test were 90% or higher, a similar number of positive samples (144) would yield a standard error of the estimated sensitivity of ≤2.5%, and a 95% confidence interval width of ±5%. This would have ≥90% power to detect a difference from a level of 80% sensitivity (the level specified as “desired” by the MHRA Target Product Profile [12]), at a 5% significance level. Based on an assumed prevalence of 10%, the original target total sample size was 1500 (see Statistical Analysis Plan in Supplemental Materials and published protocol for full details). In the event, the observed prevalence was higher than expected and the study was terminated when the 150 positive sample target was met. For the SD Biosensor POCT, an interim analysis for futility (i.e. to test if sensitivity and specificity estimates fell below pre-specified thresholds) was performed at the end of July 2021 using data from the first 331 participants recruited. As this did not lead to discontinuation for futility, recruitment continued until the full target sample size was exceeded. Details are available as Supplemental Materials.
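The arithmetic behind this justification can be checked directly: with 144 positives and a true sensitivity of 90%, the binomial standard error is √(0.9 × 0.1 / 144) = 0.025, giving a 95% CI half-width of about 1.96 × 0.025 ≈ 5%. A minimal sketch (Python standard library only; the normal-approximation power calculation is our own illustration, not the authors' exact method):

```python
import math

def sample_size_check(p_true=0.90, n_pos=144, p_null=0.80):
    """Normal-approximation check of the sample size justification."""
    se = math.sqrt(p_true * (1 - p_true) / n_pos)   # SE of the sensitivity estimate
    half_width = 1.96 * se                          # 95% CI half-width
    # Approximate power to detect a difference from the 80% threshold
    # (one-sample z-test, SE evaluated at the alternative)
    z = (p_true - p_null) / se
    power = 0.5 * (1 + math.erf((z - 1.96) / math.sqrt(2)))  # Phi(z - 1.96)
    return se, half_width, power

se, hw, power = sample_size_check()
print(f"SE = {se:.3f}, CI half-width = {hw:.3f}, power vs 80% = {power:.2f}")
```

With these inputs the standard error is exactly 0.025 and the approximate power exceeds the 90% quoted in the text.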

Statistical methods

A Statistical Analysis Plan was written before the analysis was performed and is available in Supplemental Material. We calculated the prevalence of positive RT-PCR results, and the sensitivity, specificity and predictive values for each index test alongside exact 95% confidence intervals. Index test results were presented graphically in relation to the RT-PCR cycle threshold. Prespecified subgroup analyses split results by participant characteristics including age, sex, ethnicity, spectrum of disease, recruitment method and recruiting practice. We also performed a post-hoc analysis of diagnostic performance against time since symptom onset. We summarised recruitment rates, variation in disease prevalence over time, and baseline characteristics and symptom progression using appropriate summary statistics and graphs, with the number of participants with missing data reported separately.
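As a concrete illustration of the primary accuracy calculation, sensitivity and its confidence interval can be computed from true-positive and false-negative counts. The paper reports exact (Clopper-Pearson) intervals via the epiR package; the sketch below instead uses the Wilson score interval, a standard-library approximation that gives very similar limits at these sample sizes:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# SD Biosensor: 178 true positives out of 212 RT-PCR positives
sens = 178 / 212
lo, hi = wilson_ci(178, 212)
print(f"sensitivity {sens:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

The Wilson limits (about 78.4% to 88.3%) closely match the exact interval of 78.3% to 88.6% reported in the Results.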

We used two methods to allow for imperfect reference standard bias. Firstly, for individuals with discordant results between either index test and the reference standard, we created an enhanced composite reference standard using a combination of antibody testing results, additional RT-PCR test results, and linked hospitalisation and mortality records [6]. Secondly, we applied a statistical adjustment to the sensitivity and specificity estimates using a Bayesian approach [13, 14], assuming Beta prior distributions for the sensitivity (prior mean 97%) and specificity (prior mean 99%) of the reference standard, derived from performance characteristics of the RT-PCR test in operation during the study period.
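The second adjustment is a full Bayesian model [13, 14]. As a rough standard-library illustration of the same idea, a method-of-moments correction (assuming the index and reference tests err independently, with the reference sensitivity and specificity fixed at the 97% and 99% prior means quoted above) recovers a similar adjusted estimate. This is our own simplified analogue, not the authors' model:

```python
def adjust_for_imperfect_reference(tp, fp, fn, tn, se_ref=0.97, sp_ref=0.99):
    """Method-of-moments correction for an imperfect reference standard,
    assuming conditional independence of index and reference errors."""
    n = tp + fp + fn + tn
    p_ref_pos = (tp + fn) / n            # observed P(reference positive)
    p_test_pos = (tp + fp) / n           # observed P(index test positive)
    p_both_pos = tp / n                  # observed P(both positive)
    j = se_ref + sp_ref - 1              # Youden index of the reference test
    prev = (p_ref_pos + sp_ref - 1) / j  # Rogan-Gladen prevalence estimate
    x = (p_both_pos - p_test_pos * (1 - sp_ref)) / j  # = prev * true sensitivity
    # Moment estimates can stray slightly outside [0, 1]; clamp them
    se_index = min(1.0, max(0.0, x / prev))
    sp_index = min(1.0, max(0.0, 1 - (p_test_pos - x) / (1 - prev)))
    return se_index, sp_index

# SD Biosensor vs RT-PCR counts (178 TP, 5 FP, 34 FN, 328 TN)
se_adj, sp_adj = adjust_for_imperfect_reference(tp=178, fp=5, fn=34, tn=328)
print(f"adjusted sensitivity {se_adj:.1%}, specificity {sp_adj:.1%}")
```

For these counts the corrected sensitivity comes out near 85%, in the same direction and of similar magnitude to the Bayesian posterior medians reported in the Results; the specificity estimate hits the upper bound and is clamped at 1.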

To allow for discrepancies between recorded recruitment date and recorded swab dates from some participants, two sensitivity analyses were performed: firstly, a stricter scenario excluding all individuals for whom these dates did not match exactly, and secondly, a less strict scenario in which date discrepancies of up to a week were allowed (as discrepancies may have reflected date recording errors).

Statistical analysis was performed using RStudio 2022.02.3 including the epiR package [15], RStan [16] and WinBUGS14.

Patient and public involvement

RAPTOR was supported from inception by the CONDOR steering committee public co-chairs for Patient and Public Involvement and Engagement (PPIE) who co-developed the CONDOR platform and its PPIE strategy. Additional PPIE contributors provided pre-funding feedback on the value of the study, reviewed plain language text, and reviewed and co-developed patient information materials. Contributors favourably reviewed the potential burden on patients of involvement in the study.

Results

Recruitment

A total of 763 participants consented to recruitment between 29th October 2020 and 12th October 2021 (Fig 1). Two additional potential participants were excluded because they withdrew from consent procedures. Recruitment at the testing centre began on 15th July 2021. At least one POCT result was reported for 738 participants; for the other 25, no reason for the missing POCT result was provided. In the primary analysis of 663 participants, a reference test result could be matched to 245 samples tested with SD Biosensor only, 118 tested with BD Veritor only, and 300 tested with both POCTs.

Fig 1. Recruitment flow chart.


Fig 2 shows recruitment over the course of the study. Peaks occurred in early 2021 and at the end of summer 2021, the second of which coincided with the start of recruitment at the testing centre. The proportion of the participants with positive RT-PCR results varied throughout the study period, and was highest during periods of elevated recruitment rate.

Fig 2.


Top: monthly recruitment over time (blue), with the number of participants who had a positive reference standard test for SARS-CoV-2 infection (red). Bottom: estimated monthly prevalence of SARS-CoV-2 infection (calculated as the proportion of positive RT-PCR study results) and 95% confidence interval. Data shown refer to the end of the month indicated. The time when the testing centre started recruitment is indicated by the dotted black line.

Participant characteristics

Of the participants recruited, 42% were male, mean age was 41 years, the majority (81%) were white, and 26% were contacts of a household member who had tested positive for SARS-CoV-2 (Table 1). Just over half of the participants had received at least one vaccination dose, with around half of those having received the Oxford-AstraZeneca vaccine. About 21% of those recruited reported a previous SARS-CoV-2 infection. Cough, fatigue, headache and fever were among the most frequently reported baseline symptoms.

Table 1. Baseline characteristics (number (%) or mean (standard deviation)).

                                            All participants   Participants included in
                                            (n = 763)          primary analysis (n = 663)
Male sex                                    317 (42%)          272 (41%)
Age (years), mean (SD)                      41 (19)            41 (19)
 • < 16                                     65 (9%)            52 (8%)
 • 16–39                                    294 (39%)          252 (38%)
 • 40–59                                    263 (34%)          237 (36%)
 • 60+                                      141 (18%)          122 (18%)
Ethnicity
 • White                                    613 (81%)          569 (86%)
 • Asian                                    119 (16%)          69 (10%)
 • Black                                    5 (1%)             5 (1%)
 • Mixed-White and Black Caribbean          4 (1%)             4 (1%)
 • Mixed-White and Asian                    5 (1%)             5 (1%)
 • Mixed-Other                              8 (1%)             5 (1%)
 • Other/not reported                       9 (1%)             6 (1%)
Previous episode of COVID-19 infection
 • Positive RT-PCR test reported            163 (21%)          92 (14%)
 • Number of days since last positive
   antigen test                             42 (120)           45 (125)
Vaccinated against COVID-19*                411 (54%)          389 (59%)
 • Oxford-AstraZeneca                       210 (51%)          197 (51%)
 • Pfizer                                   183 (45%)          175 (45%)
 • Moderna                                  9 (2%)             8 (2%)
 • Other/type unknown                       9 (2%)             9 (2%)
Number of days since first symptom          3.8 (2.7)          3.7 (2.6)
Symptoms
 • Any symptom                              717 (94%)          626 (94%)
 • Fever                                    297 (39%)          256 (39%)
 • Cough                                    461 (60%)          410 (62%)
 • Fatigue                                  327 (43%)          308 (46%)
 • Shortness of breath                      173 (23%)          164 (25%)
 • Sputum                                   136 (18%)          132 (20%)
 • Loss of smell or change in taste         167 (22%)          160 (24%)
 • Muscle ache                              255 (33%)          241 (36%)
 • Chills                                   193 (25%)          184 (28%)
 • Dizziness                                93 (12%)           92 (14%)
 • Headache                                 303 (40%)          288 (43%)
 • Sore throat                              253 (33%)          239 (36%)
 • Hoarseness                               140 (18%)          139 (21%)
 • Nausea or vomiting                       114 (15%)          110 (17%)
 • Diarrhoea                                62 (8%)            59 (9%)
 • Nasal congestion                         244 (32%)          234 (35%)
 • Other                                    113 (15%)          110 (17%)
Household contact diagnosed with COVID-19   201 (26%)          174 (26%)

* Vaccinated with at least one dose (booster doses were not recorded)

† Calculated among participants who reported at least one specific symptom within the preceding 14 days

‡ For 6% of participants, no specific symptoms were recorded in the eCRF

Diagnostic accuracy (primary outcome)

The prevalence of SARS-CoV-2 positive RT-PCR tests among participants included in the primary analysis was 39.2% (260/663, 95% CI 35.5% to 43.0%). The SD Biosensor POCT had a sensitivity of 84.0% (178/212, 95% CI 78.3% to 88.6%) and specificity of 98.5% (328/333, 96.5% to 99.5%), and the BD Veritor POCT had a sensitivity of 76.5% (127/166, 69.3% to 82.7%) and specificity of 98.8% (249/252, 96.6% to 99.8%) (Table 2 and S1 Table 1 in S1 File). The positive and negative predictive values were 97.3% (178/183, 93.7% to 99.1%) and 90.6% (328/362, 87.1% to 93.4%) respectively for SD Biosensor, and 97.7% (127/130, 93.4% to 99.5%) and 86.5% (249/288, 82.0% to 90.2%) respectively for BD Veritor.

Table 2. Summary of results for each POCT compared to the reference test result.

SD Biosensor test result         Reference standard
                                 Positive  Negative  Total reported  Not reported  Date mismatched
 Positive                        178       5         183             7             0
 Negative                        34        328       362             33            35
 Total                           212       333       545             40            35

BD Veritor test result           Reference standard
                                 Positive  Negative  Total reported  Not reported  Date mismatched
 Positive                        127       3         130             3             0
 Negative                        39        249       288             1             2
 Total                           166       252       418             4             2

Both POCT results                Reference standard
                                 Positive  Negative  Total reported  Not reported  Date mismatched
 SDB positive, BDV positive      85        0         85              2             0
 SDB positive, BDV negative      12        1         13              1             0
 SDB negative, BDV positive      4         2         6               1             0
 SDB negative, BDV negative      17        179       196             0             2
 Total                           118       182       300             4             2

SDB = SD Biosensor, BDV = BD Veritor.
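The headline accuracy estimates can be reproduced directly from the 2×2 counts in Table 2. A minimal sketch (note that the predictive values are computed at the study prevalence and would differ in lower-prevalence settings):

```python
def accuracy_from_counts(tp, fp, fn, tn):
    """Point estimates of the four standard accuracy measures from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),   # true positives / all reference positives
        "specificity": tn / (tn + fp),   # true negatives / all reference negatives
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

sd_biosensor = accuracy_from_counts(tp=178, fp=5, fn=34, tn=328)
bd_veritor = accuracy_from_counts(tp=127, fp=3, fn=39, tn=249)
for name, res in [("SD Biosensor", sd_biosensor), ("BD Veritor", bd_veritor)]:
    print(name, {k: f"{v:.1%}" for k, v in res.items()})
```

These counts recover the reported figures: for example, SD Biosensor sensitivity 178/212 = 84.0% and NPV 328/362 = 90.6%.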

Of the 300 participants who had results for both POCTs and the reference test, 85 had concordant positive results from both POCTs and RT-PCR and 179 had concordant negative results. Patterns of discordance are shown in Table 2. In 17 of the 196 cases when both POCTs gave negative results, the RT-PCR was positive. Among these 17 participants, average time since symptom onset (3.9 days) was similar to that in the full cohort (3.8 days).

Pre-specified subgroup analyses found some variation in test performance according to certain participant characteristics reported (S1 Table 1 in S1 File), with both POCTs having higher sensitivity in males and in participants who reported at least two key symptoms (fever, cough, or change in taste/smell) at baseline. Disease prevalence in the primary analysis cohort was higher in males (47%, 128/272) than in females (34%, 132/391) and much higher in those who reported at least two key symptoms (59%, 151/257).

Among sites that recruited at least 10 participants, sensitivity and specificity estimates were largely similar, although the prevalence of disease varied substantially between sites (S1 Fig 1 in S1 File).

There were two main circulating variants of SARS-CoV-2 during the study period in the UK, VOC Alpha GRY (B.1.1.7+Q.) then VOC Delta GK (B.1.617.2+AY.) (S1 Fig 2 in S1 File). We tracked the performance of both POCTs over time; the diagnostic performance of neither test shifted during the transition from one dominant variant to the other (S1 Figs 3 and 4 in S1 File). Table 3 summarises diagnostic performance in relation to RT-PCR cycle threshold. Both POCTs show a clear reduction in performance with increasing cycle threshold (reflecting reduced viral load). In an exploratory analysis, the sex differences in diagnostic sensitivity remained when broken down by cycle threshold (S1 Table 2 in S1 File). S1 Fig 5 in S1 File shows the trend in mean cycle threshold across the duration of the study.

Table 3. Summary of diagnostic performance in relation to RT-PCR cycle threshold.

ORF1ab target
Ct value        ≤ 20               20–25              25–30              ≥ 30
SD Biosensor
 Positive       58                 81                 33                 6
 Negative       0                  6                  11                 15
 Sensitivity    1.00 (0.94, 1.00)  0.93 (0.86, 0.97)  0.75 (0.60, 0.87)  0.29 (0.11, 0.52)
BD Veritor
 Positive       59                 56                 10                 1
 Negative       1                  6                  17                 14
 Sensitivity    0.98 (0.91, 1.00)  0.90 (0.80, 0.96)  0.37 (0.19, 0.58)  0.07 (0.00, 0.32)

E gene target
Ct value        ≤ 20               20–25              25–30              ≥ 30
SD Biosensor
 Positive       68                 72                 33                 5
 Negative       0                  6                  12                 16
 Sensitivity    1.00 (0.95, 1.00)  0.92 (0.84, 0.97)  0.73 (0.58, 0.85)  0.24 (0.08, 0.47)
BD Veritor
 Positive       69                 46                 10                 2
 Negative       1                  6                  18                 14
 Sensitivity    0.99 (0.92, 1.00)  0.88 (0.77, 0.96)  0.36 (0.19, 0.56)  0.12 (0.02, 0.38)

There was no clear trend in diagnostic performance in relation to the number of days since first reported symptom among those who commenced participation less than a week after symptom onset, but there was some indication of a decrease in the sensitivity of both index tests among the small number of participants with positive RT-PCR results and a longer symptom duration (Fig 3).

Fig 3.


Estimated sensitivity (upper two panels) and specificity (lower two panels), with 95% confidence intervals, by number of days since first reported symptom (x-axis), for SD Biosensor (left two panels) and BD Veritor (right two panels). The number of individuals correctly diagnosed by the POCT out of the total are shown towards the bottom of each plot. Participants who did not report specific symptoms and those for whom the timing of symptom onset was unclear are excluded.

Diagnostic accuracy (enhanced reference standard)

A summary of findings using the composite enhanced reference standard among individuals with discordant results between either index test and the RT-PCR reference standard is provided in S1 Table 3 in S1 File. For participants for whom additional information, such as follow-up serology or additional RT-PCR results, was available, this information generally (in 12/14 cases) supported the original reference standard diagnosis.

Results of the statistical adjustment method for imperfect reference standard bias are shown in S1 Fig 6 and S1 Table 4 in S1 File. This adjustment yielded a small increase, of approximately one percentage point, in the estimated sensitivity and specificity of both index tests (SD Biosensor posterior median sensitivity 85.0%, specificity 99.4%, BD Veritor sensitivity 77.4%, specificity 99.4%).

Secondary outcomes

Of the 403 participants who had a negative RT-PCR for SARS-CoV-2, RSV was detected in 12 participants (7 subtype A, 5 subtype B), hMPV in 1 participant and seasonal coronavirus in 8 participants (4 NL63, 3 OC43, 1 229E). No participants tested positive for influenza (subtypes A or B). Among the 260 participants who had a positive RT-PCR for SARS-CoV-2, co-infection with RSV was detected in 2 participants (both subtype B), seasonal coronavirus in 2 (both OC43) and hMPV in 1.

There were no serious adverse events related to study procedures. Three participants were recorded as having been hospitalised within 28 days of a positive COVID-19 test at recruitment with COVID-19 the primary reason for admission, and all were discharged within two weeks. One participant was hospitalised within 28 days of recruitment for an unrelated injury.

Sensitivity analysis

Sensitivity analyses allowing for different date mismatching scenarios did not show a large impact on estimated diagnostic accuracy measures (S1 Table 5 in S1 File).

Discussion

The results of this prospective diagnostic accuracy evaluation of two POCTs for the detection of SARS-CoV-2 in symptomatic patients in primary care fall within the wide range of previous studies in other settings [3, 17–22]. A living systematic review found widely varying estimates of the sensitivity of the BD Veritor system (between 41.2% and 96.2% in different studies), and similarly varying estimates for the SD Biosensor system (between 28.6% and 98.3%) [3]. In the primary analysis, we estimate the sensitivities of BD Veritor and SD Biosensor to be 76.5% (95% CI 69.3% to 82.7%) and 84.0% (95% CI 78.3% to 88.6%) respectively. Both devices were found to have specificities close to 99%, which is consistent with most previous studies [23–26].

The minimum target for acceptable performance in the target product profile of the Medicines and Healthcare products Regulatory Agency is a sensitivity of 80% and specificity of 95% [12]. The World Health Organisation target product profile stipulates sensitivity ≥80% and specificity ≥97% [27]. Our results indicate that performance is likely to exceed the specificity threshold, but there remains doubt over performance in relation to sensitivity. Allied to high positive predictive values, this suggests that the most appropriate use of these POCTs may be as rule-in tests, while negative test results do not preclude infection.

Diagnostic performance was strongly associated with RT-PCR cycle threshold. Performance declined at higher cycle thresholds, which indicate lower levels of intact viral RNA in the sample, a proxy for viral load. Test sensitivity declined among individuals whose symptoms began more than one week before recruitment. A correlation has been proposed between higher viral load distributions, positive LFD results and the infectiousness of individuals [28], but others have suggested that substantial numbers of infections may be missed by LFDs due to their limited sensitivity [29]. Without an agreed reference standard for infectiousness, we were unable to assess the value of these tests for identifying infectious individuals [30].

Venekamp et al.’s community-based study in the Netherlands of tests including SD Biosensor and BD Veritor recruited until June 2021, before the Delta variant became dominant [23]. Our recruitment continued for one year from October 2020, covering the periods of the two dominant SARS-CoV-2 variants circulating in the UK during this time (Alpha and Delta). We demonstrated sustained diagnostic performance for both variants, with sensitivities slightly higher than those reported by Venekamp et al.

Other studies have shown substantial decreases in test sensitivity in asymptomatic individuals, including those recruited as close contacts of cases [25, 31, 32]. Our study demonstrates reduced sensitivity in individuals with fewer core symptoms but does not provide evidence about the performance of the two assays in the asymptomatic population. Based on these findings, patients with a single main symptom (fever, cough or anosmia) could be advised to repeat a negative test if their symptoms persist, or if more symptoms develop.

This study prospectively recruited a large cohort of symptomatic participants attending primary healthcare and two COVID-19 testing centres, and therefore reflects real-world diagnostic accuracy. Understanding performance in primary healthcare is likely to be increasingly important as we cope with waves of endemic infection, and this is one of the few studies to report performance for any POCT in this setting and to our knowledge the only one based in UK primary care.

Recruitment met recommended sample sizes. Further, this study benefitted from contemporaneous swabbing for all tests, and blinding of index test results from those who were performing the reference test (and vice versa), as recommended in diagnostic accuracy studies [33]. A single site performed all reference standard testing to ensure consistency. Our results were adjusted, using two methods, for possible reference standard misclassification and were robust to this adjustment. Paired sampling and use of two index diagnostic tests gave greater scope for direct comparison than previous evaluations.

This study also has some limitations. Because of low recruitment from some sites, a testing centre was added as a recruitment site, so the population tested overall may be less unwell than those who would contact the GP surgery. Prevalence of SARS-CoV-2 infection varied substantially by practice, suggesting there may have been differences in the way practices identified participants for recruitment. However, throughout the study recruited patients were required to be symptomatic, and diagnostic performance did not change when restricted to participants recruited via the testing centre. This study does not assess diagnostic performance in asymptomatic patients, in whom viral load may be lower, with a consequent effect on diagnostic performance. It assesses performance when testing was carried out by clinical staff rather than via self-swabbing, and performance might decline if testing is not always carried out by a trained operator according to the manufacturers’ instructions. Consistent with other studies, we used RT-PCR cycle threshold as a proxy for viral load and did not apply a calibration and conversion to provide absolute estimates of viral load. Fully quantitative assays require a calibrated standard curve, which was not incorporated into this study because the results were intended to be binary, in recognition of how diagnostic decisions are made in the real world.
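The calibration and conversion referred to above would typically rest on a qPCR standard curve: Ct values measured on serial dilutions of a quantified standard, fitted by linear regression and inverted to estimate viral load. The sketch below illustrates that standard procedure with entirely hypothetical calibration points; no such curve was generated in this study.

```python
# Hypothetical qPCR standard curve: Ct measured on serial dilutions of a
# quantified standard. All numbers are illustrative only.
# (log10 copies/mL, observed Ct) calibration points -- hypothetical
standard_curve = [(7, 17.1), (6, 20.5), (5, 23.9), (4, 27.3), (3, 30.7), (2, 34.1)]

# Ordinary least squares fit: Ct = slope * log10(copies/mL) + intercept
n = len(standard_curve)
mx = sum(x for x, _ in standard_curve) / n
my = sum(y for _, y in standard_curve) / n
slope = sum((x - mx) * (y - my) for x, y in standard_curve) / sum(
    (x - mx) ** 2 for x, _ in standard_curve
)
intercept = my - slope * mx

# Amplification efficiency implied by the slope (1.0 = perfect doubling per cycle)
efficiency = 10 ** (-1 / slope) - 1

def ct_to_log10_load(ct):
    """Invert the fitted standard curve to estimate log10 viral load from a Ct value."""
    return (ct - intercept) / slope

print(f"slope={slope:.2f}, efficiency={efficiency:.1%}")
print(f"Ct 30 -> ~10^{ct_to_log10_load(30.0):.1f} copies/mL under this hypothetical curve")
```

A conversion like this would have allowed absolute viral-load estimates, but requires running calibrated standards alongside every assay batch, which the binary, decision-oriented design of the study did not include.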

This study represents 12 months of recruitment, during which time the prevalence of SARS-CoV-2 fluctuated, and results cannot necessarily be extrapolated to future variants should they emerge. For example, some studies have suggested that some assays may have impaired detection for Omicron variants [34].

The number of missing test results was higher than anticipated, and RT-PCR results were unobtainable for 40 samples, most of which were from participants who received the SD Biosensor POCT during the early period of recruitment. The effect of this was explored in sensitivity analyses, which did not show substantial changes in the major results. The principal reason for missing RT-PCR data was postal delays, during the pandemic period, in the early set-up of the study. As such, we consider these data to be missing completely at random and do not expect this to bias the results.

In a population with symptoms of COVID-19 presenting to community settings, SD Biosensor and BD Veritor POCTs performed by healthcare professionals are highly specific and so could be used to rule in COVID-19. However, the proportion of patients with positive RT-PCR test results who received false negative POCT results was 16.0% for SD Biosensor and 23.5% for BD Veritor, which could result in onward transmission and inappropriate management unless population prevalence of disease is very low. Performance was improved in patients with more symptoms and in those with low RT-PCR Ct values; tests should be interpreted with more caution outside of this clinical phenotype. Though this strategy was not tested, it may be sensible to repeat the POCT after 12 or 24 hours in patients with a clinical phenotype of COVID-19 who test negative, since viral loads may rise over time. This strategy should be studied, since identifying true negatives as well as true positives remains important as waves of this virus continue.
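The dependence of a negative result's interpretation on prevalence follows directly from Bayes' rule. As a minimal sketch (not part of the study's analysis), the point estimates of sensitivity and specificity reported in the abstract can be combined with illustrative prevalence values to give the probability that a patient testing negative is nevertheless infected:

```python
def post_test_probs(sens, spec, prev):
    """Positive and negative predictive values via Bayes' rule,
    given test sensitivity, specificity, and pre-test prevalence."""
    tp = sens * prev              # true positives (as a proportion of all tested)
    fp = (1 - spec) * (1 - prev)  # false positives
    fn = (1 - sens) * prev        # false negatives
    tn = spec * (1 - prev)        # true negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Point estimates reported in this study (sensitivity, specificity)
devices = {"SD Biosensor": (0.840, 0.985), "BD Veritor": (0.765, 0.988)}

# Prevalence values are illustrative; 0.39 matches the RT-PCR positivity in this cohort
for name, (sens, spec) in devices.items():
    for prev in (0.01, 0.10, 0.39):
        ppv, npv = post_test_probs(sens, spec, prev)
        print(f"{name}: prev={prev:.2f}  PPV={ppv:.3f}  P(infected | negative)={1 - npv:.3f}")
```

At low prevalence a negative result leaves only a small residual probability of infection, whereas at the prevalence observed in this symptomatic cohort an appreciable proportion of negatives are false, consistent with the caution above.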

Supporting information

S1 File

(DOCX)

S2 File

(DOCX)

S1 Data

(DOCX)

Acknowledgments

We would like to thank the study participants, practice staff at all participating general practice and testing centre sites, and staff of the NIHR Clinical Research Network: Thames Valley & South Midlands (Jithen Benjamin, Joanne Carter, Helen Collins, Mark Dolman, Ross Downes, Kelly Fricker, Kate Hannaby, Heather Kenyon, Kathryn Lucas, Sophie Maslen, Lydia Owen, Cate Wills, Olga Zolle). We also acknowledge the support of Dr Jason Oke and Dr Constantinos Koshiaris (University of Oxford) at the planning stage of the study; Helen Bohan and Julian Sherlock (University of Oxford); Kevin Brown, Joanna Ellis, Jamie Lopez Bernal and Tim Brooks (UK Health Security Agency) for assistance interpreting RT-PCR and serology results; Gary Howsam and Victoria Tzortziou-Brown (Royal College of General Practitioners); Dr Matt Wilson, Abi Dhillon and Sian Organ (uMed); patients and practices in the Oxford-Royal College of General Practitioners Research and Surveillance Centre (RSC) who share pseudonymised data to support research and surveillance (UKHSA is the principal sponsor of the RSC); EMIS, TPP, Vision and Wellbeing for assistance with pseudonymised data extraction. We would like to acknowledge GISAID (https://gisaid.org/) and their contributing laboratories for the provision of the SARS-CoV-2 variant epidemiological data displayed in S1 Fig 5 in S1 File.

Here we name the members of the RAPTOR-C19 Study Group% and COVID-19 National DiagnOstic Research and Evaluation Platform (CONDOR) Steering Committee$ as follows: Rachel C. Byford%1, Alexandra S. Deeks%1, George Edwards%1, Jennifer Hirst%1, Uy Hoang%1, F. D. Richard Hobbs%1 (Chief Investigator RAPTOR-C19 Study Group; richard.hobbs@phc.ox.ac.uk), Kirsty Jackson%1, Heather Kenyon%*, Joseph J. Lee%1, Ezra Linley%2, Mary Logan%1, Kathryn Lucas%*, Abigail A. Moore%1, Lazaro Mwandigha%1, Meriel Raymond%2, Praveen Sebastianpillai%2, Anna E. Seeley%1, Sharon Tonner%1, Richard Body$3 (Co-Chief Investigator CONDOR; richard.body@manchester.ac.uk), Paul Dark$3, Eloïse Cook$3, Colette Inkson$3, Charles Reynard$3, Gail N. Hayward%$1 (Co-Chief Investigator CONDOR; gail.hayward@phc.ox.ac.uk), Rafael Perera%$1, Brian D. Nicholson%$1, Philip J. Turner%$1, Peter Buckle$4, Naoko Jones$4, Mark Wilcox$5, Kerrie Davies$5, Beverley Riley$5, Adam Gordon$6, Clare Lendrem$7, Will Jones$7, Anna Halstead$7, A Joy Allen$7, D Ashley Price$8, Amanda Winter$8, Julian Braybrook$9, Emily Adams$10, Valerie Tate$, Graham Prestwich$11.

1Nuffield Department of Primary Care Health Sciences, University of Oxford, UK

2UK Health Security Agency, UK

3University of Manchester, UK

4Imperial College London, UK

5Leeds Teaching Hospitals NHS Trust and University of Leeds, UK

6University of Nottingham, UK

7University of Newcastle upon Tyne, UK

8Newcastle upon Tyne Hospitals NHS Foundation Trust, UK

9National Measurement Laboratory, UK

10Liverpool School of Tropical Medicine, UK

11York and Humber AHSN, UK

* NIHR Clinical Research Network: Thames Valley & South Midlands, UK

Data Availability

Data cannot be shared publicly because of participant confidentiality considerations. Researchers who meet the criteria for access to confidential data may submit research data access requests to the Nuffield Department of Primary Care Health Sciences Information Guardian for consideration (contact via information.guardian@phc.ox.ac.uk).

Funding Statement

This study was funded by the following grants: University of Oxford Medical Sciences Division Benefactors Urgent COVID-19 Fund (COVID-19 Research Response Fund Grant 0009325 - https://researchsupport.admin.ox.ac.uk/funding/internal?filter-566-funding%20opportunity%20type-451761=4021&filter-1686-status-451761=9876) to BDN, SdL, FDRH, JJL, TRF, PJT, GNH, GA, MZ, AD, UH; the National Institute for Health and Care Research (NIHR) School for Primary Care Research (SPCR grant 495 - https://www.spcr.nihr.ac.uk/career-development/funding) to BDN, SdL, FDRH, GNH, PJT, JJL, TRF, UH; and Urgent Public Health funding received by the CONDOR platform from the NIHR and Asthma + Lung UK (NIHR UPH grant COV0051 - https://www.nihr.ac.uk/researchers/funding-opportunities/) to BDN, PJT, RB, KD, GNH, RP, PB, DAP, PD, MW, AJA. The three sources above specifically funded the study. The following funds did not specifically fund the study, but part-supported some of the staff who took part and thus need to be declared: TRF, LM, GNH, PJT, RP, GE and FDRH received funding from the NIHR Community Healthcare MedTech and In Vitro Diagnostics Co-operative at Oxford Health NHS Foundation Trust (MIC-2016-018). FDRH, RP and TRF received funding from the NIHR Applied Research Collaboration Oxford and Thames Valley at Oxford Health NHS Foundation Trust. RP acknowledges part support from the Oxford Martin School. JJL is funded by the NIHR (Doctoral Research Fellowship NIHR300738). AAM is funded by a Wellcome Trust Doctoral Fellowship. AES is funded by an NIHR Academic Clinical Fellowship (ACF-2019-13-009). AJW acknowledges an Enriching Engagement grant from the Wellcome Trust for PPI work on RSC general surveillance.

References

  • 1.Leber W, Lammel O, Siebenhofer A, Redlberger-Fritz M, Panovska-Griffiths J, Czypionka T. Comparing the diagnostic accuracy of point-of-care lateral flow antigen testing for SARS-CoV-2 with RT-PCR in primary care (REAP-2). EClinicalMedicine. 2021;38:101011. doi: 10.1016/j.eclinm.2021.101011
  • 2.Mouliou DS, Gourgoulianis KI. False-positive and false-negative COVID-19 cases: respiratory prevention and management strategies, vaccination, and further perspectives. Expert Rev Resp Med. 2021;15(8):993–1002. doi: 10.1080/17476348.2021.1917389
  • 3.Brümmer LE, Katzenschlager S, Gaeddert M, Erdmann C, Schmitz S, Bota M, et al. Accuracy of novel antigen rapid diagnostics for SARS-CoV-2: A living systematic review and meta-analysis. PLoS Med. 2021;18(8):e1003735. doi: 10.1371/journal.pmed.1003735
  • 4.Einhauser S, Peterhoff D, Niller HH, Beileke S, Günther F, Steininger P, et al. Spectrum bias and individual strengths of SARS-CoV-2 serological tests—a population-based evaluation. Diagnostics. 2021;11(10):1843. doi: 10.3390/diagnostics11101843
  • 5.CONDOR: COVID-19 National DiagnOstic Research and Evaluation Platform [21 October 2022]. Available from: https://www.condor-platform.org/.
  • 6.Nicholson BD, Hayward G, Turner PJ, Lee JJ, Deeks A, Logan M, et al. Rapid community point-of-care testing for COVID-19 (RAPTOR-C19): protocol for a platform diagnostic study. Diagn Progn Res. 2021;5(1):1–13.
  • 7.SARS-CoV-2 Rapid Antigen Test 2022 [21 October 2022]. Available from: https://diagnostics.roche.com/global/en/products/params/sars-cov-2-rapid-antigen-test.html.
  • 8.BD Veritor™ System for Rapid Detection of SARS-CoV-2 2022 [21 October 2022]. Available from: https://www.bd.com/en-uk/products/diagnostics-systems/point-of-care-testing/bd-veritor-system-for-rapid-detection-of-sars-cov-2.
  • 9.Corman VM, Landt O, Kaiser M, Molenkamp R, Meijer A, Chu DK, et al. Detection of 2019 novel coronavirus (2019-nCoV) by real-time RT-PCR. Eurosurveillance. 2020;25(3):2000045. doi: 10.2807/1560-7917.ES.2020.25.3.2000045
  • 10.Niu P, Lu R, Zhao L, Wang H, Huang B, Ye F, et al. Three novel real-time RT-PCR assays for detection of COVID-19 virus. China CDC Weekly. 2020;2(25):453. doi: 10.46234/ccdcw2020.116
  • 11.Vierbaum L, Wojtalewicz N, Grunert H-P, Lindig V, Duehring U, Drosten C, et al. RNA reference materials with defined viral RNA loads of SARS-CoV-2—A useful tool towards a better PCR assay harmonization. PLoS One. 2022;17(1):e0262656. doi: 10.1371/journal.pone.0262656
  • 12.Medicines & Healthcare products Regulatory Agency [MHRA]. Target Product Profile: Point of Care SARS-CoV-2 Detection Tests. 2020.
  • 13.Lu Y, Dendukuri N, Schiller I, Joseph L. A Bayesian approach to simultaneously adjusting for verification and reference standard bias in diagnostic test studies. Stat Med. 2010;29(24):2532–43. doi: 10.1002/sim.4018
  • 14.Zhao Z. Early stopping clinical trials of binomial response with an exact group sequential method. Stat Med. 2007;26(8):1724–9. doi: 10.1002/sim.2807
  • 15.Stevenson M, Nunes T, Heuer C, Marshall J, Sanchez J, Thornton R, et al. epiR: Tools for the analysis of epidemiological data. R package version 2.0.38. CRAN. 2017.
  • 16.Stan Development Team. RStan: the R interface to Stan. R package version 2.26.13. 2021.
  • 17.Scheiblauer H, Filomena A, Nitsche A, Puyskens A, Corman VM, Drosten C, et al. Comparative sensitivity evaluation for 122 CE-marked rapid diagnostic tests for SARS-CoV-2 antigen, Germany, September 2020 to April 2021. Eurosurveillance. 2021;26(44):2100441. doi: 10.2807/1560-7917.ES.2021.26.44.2100441
  • 18.Caruana G, Croxatto A, Kampouri E, Kritikos A, Opota O, Foerster M, et al. ImplemeNting SARS-CoV-2 Rapid antigen testing in the Emergency wArd of a Swiss univErsity hospital: the INCREASE study. Microorganisms. 2021;9(4):798. doi: 10.3390/microorganisms9040798
  • 19.Ghasemi S, Harmooshi NN, Rahim F. Diagnostic utility of antigen detection rapid diagnostic tests for Covid-19: a systematic review and meta-analysis. Diagn Pathol. 2022;17(1):36. doi: 10.1186/s13000-022-01215-6
  • 20.Khalid MF, Selvam K, Jeffry AJN, Salmi MF, Najib MA, Norhayati MN, et al. Performance of rapid antigen tests for COVID-19 diagnosis: a systematic review and meta-analysis. Diagnostics. 2022;12(1):110. doi: 10.3390/diagnostics12010110
  • 21.Arshadi M, Fardsanei F, Deihim B, Farshadzadeh Z, Nikkhahi F, Khalili F, et al. Diagnostic accuracy of rapid antigen tests for COVID-19 detection: a systematic review with meta-analysis. Front Med. 2022;9. doi: 10.3389/fmed.2022.870738
  • 22.Hayer J, Kasapic D, Zemmrich C. Real-world clinical performance of commercial SARS-CoV-2 rapid antigen tests in suspected COVID-19: A systematic meta-analysis of available data as of November 20, 2020. Int J Infect Dis. 2021;108:592–602. doi: 10.1016/j.ijid.2021.05.029
  • 23.Venekamp RP, Veldhuijzen IK, Moons KGM, van den Bijllaardt W, Pas SD, Lodder EB, et al. Detection of SARS-CoV-2 infection in the general population by three prevailing rapid antigen tests: cross-sectional diagnostic accuracy study. BMC Med. 2022;20(1):97. doi: 10.1186/s12916-022-02300-9
  • 24.Van der Moeren N, Zwart VF, Lodder EB, Van den Bijllaardt W, Van Esch HR, Stohr JJ, et al. Evaluation of the test accuracy of a SARS-CoV-2 rapid antigen test in symptomatic community dwelling individuals in the Netherlands. PLoS One. 2021;16(5):e0250886. doi: 10.1371/journal.pone.0250886
  • 25.Berger A, Nsoga MTN, Perez-Rodriguez FJ, Aad YA, Sattonnet-Roche P, Gayet-Ageron A, et al. Diagnostic accuracy of two commercial SARS-CoV-2 antigen-detecting rapid tests at the point of care in community-based testing centers. PLoS One. 2021;16(3):e0248921. doi: 10.1371/journal.pone.0248921
  • 26.Yin N, Debuysschere C, Decroly M, Bouazza F-Z, Collot V, Martin C, et al. SARS-CoV-2 diagnostic tests: algorithm and field evaluation from the near patient testing to the automated diagnostic platform. Front Med. 2021;8. doi: 10.3389/fmed.2021.650581
  • 27.World Health Organisation. Antigen-detection in the diagnosis of SARS-CoV-2 infection 2021 [21 October 2022]. Available from: https://www.who.int/publications/i/item/antigen-detection-in-the-diagnosis-of-sars-cov-2infection-using-rapid-immunoassays.
  • 28.Lee LYW, Rozmanowski S, Pang M, Charlett A, Anderson C, Hughes GJ, et al. Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) infectivity by viral load, S gene variants and demographic factors, and the utility of lateral flow devices to prevent transmission. Clin Infect Dis. 2021;74(3):407–15. doi: 10.1093/cid/ciab421
  • 29.Deeks JJ, Singanayagam A, Houston H, Sitch AJ, Hakki S, Dunning J, et al. SARS-CoV-2 antigen lateral flow tests for detecting infectious people: linked data analysis. BMJ. 2022;376:e066871. doi: 10.1136/bmj-2021-066871
  • 30.Drain PK. Rapid diagnostic testing for SARS-CoV-2. N Engl J Med. 2022;386(3):264–72. doi: 10.1056/NEJMcp2117115
  • 31.Jegerlehner S, Suter-Riniker F, Jent P, Bittel P, Nagler M. Diagnostic accuracy of a SARS-CoV-2 rapid antigen test in real-life clinical settings. Int J Infect Dis. 2021;109:118–22. doi: 10.1016/j.ijid.2021.07.010
  • 32.Iglói Z, Velzing J, Van Beek J, Van de Vijver D, Aron G, Ensing R, et al. Clinical evaluation of Roche SD Biosensor rapid antigen test for SARS-CoV-2 in municipal health service testing site, the Netherlands. Emerg Infect Dis. 2021;27(5):1323. doi: 10.3201/eid2705.204688
  • 33.Lijmer JG, Mol BW, Heisterkamp S, Bonsel GJ, Prins MH, Van Der Meulen JH, et al. Empirical evidence of design-related bias in studies of diagnostic tests. JAMA. 1999;282(11):1061–6. doi: 10.1001/jama.282.11.1061
  • 34.Osterman A, Badell I, Dächert C, Schneider N, Kaufmann A-Y, Öztan GN, et al. Variable detection of Omicron-BA.1 and -BA.2 by SARS-CoV-2 rapid antigen tests. Med Microbiol Immunol. 2023;212(1):13–23. doi: 10.1007/s00430-022-00752-7

Decision Letter 0

Vittorio Sambri

Transfer Alert

This paper was transferred from another journal. As a result, its full editorial history (including decision letters, peer reviews and author responses) may not be present.

20 Mar 2023

PONE-D-23-03034 Evaluation of the diagnostic accuracy of two point-of-care tests for COVID-19 when used in symptomatic patients in community settings in the UK primary care COVID diagnostic accuracy platform trial (RAPTOR-C19) PLOS ONE

Dear Dr. Nicholson,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Your manuscript has been reviewed by three experts in the field of POCT use for the detection of SARS-CoV-2, two of whom suggest that the manuscript should be revised in order to be acceptable for publication. One of the reviewers suggests rejecting the manuscript because of poor novelty in a landscape filled with many similar reports: I do not agree with this evaluation, since I believe that the quality of your study is above the level of many other similar published papers, and consequently I suggest that you undertake a revision procedure considering the points raised by the reviewers. One of the most relevant issues is the lack of Omicron-related variants among the viruses included; if you feel that addressing this would not be possible, please open a wide discussion on that. I also believe that the fact that only symptomatic patients have been included should be widely and deeply discussed.

Please submit your revised manuscript by May 04 2023 11:59PM.  If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Vittorio Sambri, M.D., Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please provide additional details regarding participant consent. In the ethics statement in the Methods and online submission information, please ensure that you have specified what type you obtained (for instance, written or verbal, and if verbal, how it was documented and witnessed). If your study included minors, state whether you obtained consent from parents or guardians. If the need for consent was waived by the ethics committee, please include this information.

3. Thank you for stating the following in the Competing Interests section: 

"We have read the journal's policy and the authors of this manuscript have the following competing interests:

The authorship declares funding support for this study from the University of Oxford Medical Sciences Division Benefactors Urgent COVID-19 Fund, the National Institute for Health and Care (NIHR) School of Primary Care Research, and Urgent Public Health funding for the CONDOR Platform from the NIHR and Asthma+Lung UK. The RAPTOR-C19 study team received analysers and assays free of charge from Becton Dickinson for evaluation in this study.

GH declares funding from the National Institute for Health and Care Research (NIHR) paid to the University of Oxford.

KD declares grant funding from Alere Inc and Cepheid Inc paid to her institution for unrelated research. TF declares NIHR support from the NIHR Community Healthcare MIC for diagnostic evaluation research. AJW declares grant funding received by the University of Oxford through a Wellcome Trust Enriching Engagement grant which has supported unrelated patient participation work carried out by the Royal College of General Practitioners Research Surveillance Centre (based at the University of Oxford) for surveillance work. GE declares funding support from the NIHR Community Healthcare MIC received by the University of Oxford. AM declares the support of a Wellcome Trust Doctoral Research Fellowship and an NIHR In-practice Fellowship unrelated to this research. PJT declares support from the NIHR Community Healthcare MIC for diagnostic evaluation research. PJT has provided expert support to the Longitude Prize AMR competition administration which is unrelated to this project and for which the University of Oxford received an honorarium. RB declares grant funding for this project from the NIHR and Asthma+Lung UK, with additional funding from the Department of Health and Social Care paid to his host institution. RB declares grants from Siemens Healthineers, Abbott Point-of-Care and Ancon, all paid to his institution for unrelated research. He declares consulting fees received by his institution from Roche, Siemens, Aptamer Group, LumiraDx, Beckman Coulter and Radiometer, with personal fees received from Psyros Diagnostics. RB has received support for attending meetings / travel from Roche and EMCREG International. RB has participated on data safety monitoring boards or advisory boards for the unrelated FORCE Trial, REWIRE Trial, TARGET-CTA, and Magnetocardiography study (MAGNETIC - sponsored by Creavo). RB is the Deputy National Specialty Lead for Trauma & Emergency Care, National Institute for Health and Care Research Clinical Research Network. 
RB declares receipt of donated reagents for research not detailed in this paper from Roche, LumiraDx, BD, iXensor, Abbott Point-of-Care, Randox, Avacta, Menarini, loan of analysers from Randox and Menarini, and assays run free of charge for research purposes by Chronomics, My110, and Ancon. JJL declares funding from an NIHR Doctoral Research Fellowship which is unrelated to this research. LM declares support from the NIHR Community Healthcare MIC and other NIHR grants to the University of Oxford in support of this work. EL declares unrelated project funding received by the UKHSA Vaccine Evaluation Unit for contract research from GSK, Pfizer and Sanofi. MZ declares her unpaid activities as the Chair of the charitable organisation ISIRV and her membership of the UK SAGE, NERVTAG and JCVI groups."

Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, by including the following statement: "This does not alter our adherence to  PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests).  If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared. 

Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf.

4. One of the noted authors is a group or consortium [Listed in MS - insufficient characters to complete here.]. In addition to naming the author group, please list the individual authors and affiliations within this group in the acknowledgments section of your manuscript. Please also indicate clearly a lead author for this group along with a contact email address.

5. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. 

In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: No

Reviewer #3: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

Reviewer #3: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

5. Review Comments to the Author

Reviewer #1: The authors analyse the performance of two POCT tests in a clinical setting on symptomatic individuals with high viral load compared to RT-PCR. The study is planned and carried out thoroughly. The study has a high quality compared to the multitude of studies on this topic but is limited by the restricted sample size, the missing viral load calculation, and the lack of Omicron VOC samples as well as of asymptomatic individuals.

Major comments:

1. Figure 1: Is there a reason why PCR results are missing only for participants tested with SD Biosensor (including participants tested with both tests)? Please discuss in the limitations.

2. The information contained in the figures in the supplement seems much more interesting to the reader compared to the figures in the manuscript (especially figure 2). Consider creating interesting figures from the supplement data and move the figures from the manuscript to the supplement.

3. Ct value is just a gross measure of viral load. I suggest calculating viral loads from the PCR data as they are comparable to other high quality studies.

4. The study only includes symptomatic persons. This fact should be strengthened during the whole manuscript.

Minor comments:

1. Abstract (only in the Submission form): “(260/663, 95% CI 35.5% to43.0%)” – a space is missing

2. p3, Abstract, Results: why is 95% CI included two times? Consider removing the second or adding 95% for all confidence intervals.

3. p9, 168-171: Please describe the reference test more in detail. Was it a commercial assay, different assays, …?

4. p10, 190: If you do a sample size calculation, please do it exactly. 1500 is quite crude. In my calculation using your assumptions I get the result of 1347.

5. p23, 366-370: The data presented are from symptomatic individuals with high viral loads only. Other studies show big differences between symptomatic and asymptomatic individuals. Consider removing the statements on asymptomatic individuals or discussing them together with relevant literature (e.g. https://doi.org/10.1016/j.jinf.2022.12.017 https://doi.org/10.1128/jcm.00991-21 ).

6. Limitations: A limited sensitivity has been reported for the Omicron variants (e.g. https://doi.org/10.1016/j.cmi.2022.08.006 https://doi.org/10.1007/s00430-022-00730-z https://doi.org/10.1007/s00430-022-00752-7 ). Consider adding this fact to the limitations or another place in the discussion.

Reviewer #2: THE POOR NOVELTY OF THE PAPER SUGGESTS ITS REJECTION

THERE ARE MANY PAPERS ALREADY PUBLISHED ON POCT TESTING FOR COVID-19

THE STUDY DESIGN IS FINE AND THIS IS A WELL-WRITTEN PAPER BUT, IN MY OPINION, IT DOES NOT ADD VALUABLE INFORMATION TO CURRENT KNOWLEDGE

Reviewer #3: General comments:

The manuscript thoroughly describes the performance of two different antigen tests (one with visual reading of results and one with machine reading of results) in an outpatient setting. Although the paper could be considered post festum (the current SARS-CoV-2 variants were not present during the testing period), the manuscript highlights, documents and adequately discusses the inferior diagnostic accuracy of antigen testing compared to gold-standard qPCR testing. The manuscript is well written, with huge amounts of clinical data and adequate statistics.

Specific comments:

Line 68: I do not think LFD-Ag are commonplace for community testing at the present time?

Methods: A description of the two assays is warranted – e.g. the difference between manual and machine reading of results.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: MARIO PLEBANI

Reviewer #3: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2023 Jul 21;18(7):e0288612. doi: 10.1371/journal.pone.0288612.r002

Author response to Decision Letter 0


5 Jun 2023

5 June 2023

Dear Professor Sambri,

We hope that this letter finds you well. We were very grateful to you for your editorial comments and guidance and to the three reviewers for their considered and constructive reviews of our manuscript.

We are delighted to respond to your comments and those of the reviewers in the following document and through submission of a revised manuscript and associated material. We trust that our revisions will be acceptable to you and the reviewers.

Yours sincerely,

Brian Nicholson, Tom Fanshawe & Phil Turner, on behalf of the authorship

Response to the editor and reviewers

PONE-D-23-03034

Evaluation of the diagnostic accuracy of two point-of-care tests for COVID-19 when used in community settings in the UK primary care COVID diagnostic accuracy platform trial (RAPTOR-C19)

One of the most relevant issues is the lack of Omicron-related variants among the viruses included: if you feel that addressing this would not be possible, please open a wide discussion on that. I also believe that the fact that only symptomatic patients have been included should be widely and deeply discussed.

Response: We are grateful to Professor Sambri for these observations and have amended the manuscript title to make clear from the outset that the evaluation focused on symptomatic participants. The symptomatic focus of the study is also referenced in the abstract and discussion. This is further discussed in response to Reviewer #1 point 4 below.

We have made specific reference to a publication in the Discussion which describes variable detection of Omicron by rapid antigen tests (Osterman A, Badell I, Dächert C, Schneider N, Kaufmann A-Y, Öztan GN, et al. Variable detection of Omicron-BA.1 and -BA.2 by SARS-CoV-2 rapid antigen tests. Medical Microbiology and Immunology. 2023;212(1):13-23. doi: 10.1007/s00430-022-00752-7.), at the point where we describe the possibility that it may not be possible to extrapolate our results to Omicron or future variants of SARS-CoV-2.

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

Response: We have checked our resubmission against PLOS ONE’s style guide and are confident that it conforms. We have applied the PLOS ONE EndNote style to in-text citations and the reference list, with in-text references now appearing in square brackets throughout the manuscript.

2. Please provide additional details regarding participant consent. In the ethics statement in the Methods and online submission information, please ensure that you have specified what type you obtained (for instance, written or verbal, and if verbal, how it was documented and witnessed). If your study included minors, state whether you obtained consent from parents or guardians. If the need for consent was waived by the ethics committee, please include this information.

Response: We have moved this information to the ethics paragraph and provided more details.

3. Thank you for stating the following in the Competing Interests section:

"We have read the journal's policy and the authors of this manuscript have the following competing interests:

The authorship declares funding support for this study from the University of Oxford Medical Sciences Division Benefactors Urgent COVID-19 Fund, the National Institute for Health and Care Research (NIHR) School of Primary Care Research, and Urgent Public Health funding for the CONDOR Platform from the NIHR and Asthma+Lung UK. The RAPTOR-C19 study team received analysers and assays free of charge from Becton Dickinson for evaluation in this study.

GH declares funding from the National Institute for Health and Care Research (NIHR) paid to the University of Oxford.

KD declares grant funding from Alere Inc and Cepheid Inc paid to her institution for unrelated research. TF declares NIHR support from the NIHR Community Healthcare MIC for diagnostic evaluation research. AJW declares grant funding received by the University of Oxford through a Wellcome Trust Enriching Engagement grant which has supported unrelated patient participation work carried out by the Royal College of General Practitioners Research Surveillance Centre (based at the University of Oxford) for surveillance work. GE declares funding support from the NIHR Community Healthcare MIC received by the University of Oxford. AM declares the support of a Wellcome Trust Doctoral Research Fellowship and an NIHR In-practice Fellowship unrelated to this research. PJT declares support from the NIHR Community Healthcare MIC for diagnostic evaluation research. PJT has provided expert support to the Longitude Prize AMR competition administration which is unrelated to this project and for which the University of Oxford received an honorarium. RB declares grant funding for this project from the NIHR and Asthma+Lung UK, with additional funding from the Department of Health and Social Care paid to his host institution. RB declares grants from Siemens Healthineers, Abbott Point-of-Care and Ancon, all paid to his institution for unrelated research. He declares consulting fees received by his institution from Roche, Siemens, Aptamer Group, LumiraDx, Beckman Coulter and Radiometer, with personal fees received from Psyros Diagnostics. RB has received support for attending meetings / travel from Roche and EMCREG International. RB has participated on data safety monitoring boards or advisory boards for the unrelated FORCE Trial, REWIRE Trial, TARGET-CTA, and Magnetocardiography study (MAGNETIC - sponsored by Creavo). RB is the Deputy National Specialty Lead for Trauma & Emergency Care, National Institute for Health and Care Research Clinical Research Network.
RB declares receipt of donated reagents for research not detailed in this paper from Roche, LumiraDx, BD, iXensor, Abbott Point-of-Care, Randox, Avacta, Menarini, loan of analysers from Randox and Menarini, and assays run free of charge for research purposes by Chronomics, My110, and Ancon. JJL declares funding from an NIHR Doctoral Research Fellowship which is unrelated to this research. LM declares support from the NIHR Community Healthcare MIC and other NIHR grants to the University of Oxford in support of this work. EL declares unrelated project funding received by the UKHSA Vaccine Evaluation Unit for contract research from GSK, Pfizer and Sanofi. MZ declares her unpaid activities as the Chair of the charitable organisation ISIRV and her membership of the UK SAGE, NERVTAG and JCVI groups. This does not alter our adherence to PLOS ONE policies on sharing data and materials."

Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests). If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.

Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf.

Response: Please see our amended ‘Competing Interests Statement’ above; we have also included this text in the covering letter. We are not aware of any impediments associated with the declaration which would prevent us from complying with PLOS ONE policies on sharing data and materials.

4. One of the noted authors is a group or consortium [Listed in MS - insufficient characters to complete here.]. In addition to naming the author group, please list the individual authors and affiliations within this group in the acknowledgments section of your manuscript. Please also indicate clearly a lead author for this group along with a contact email address.

Response: The original submission listed group/consortium membership and institutional affiliation in the acknowledgements section. We have amended this section, as we had inadvertently omitted Dr Sharon Tonner and Professor F.D. Richard Hobbs from the RAPTOR-C19 Study Group and had failed to assign an affiliation to Mary Logan. We have also marked the leads of these groups as ‘Chief Investigator’ and ‘Co-chief Investigator’ for the RAPTOR-C19 Study Group and CONDOR respectively, and have provided email addresses for each.

5. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.

Response: The datasets used and analysed during the current study contain potentially sensitive and identifiable patient information under the definitions of UK data protection legislation. Requests for de-identified participant-level data collected during this study should be made to the Nuffield Department of Primary Care hosted Datasets Independent Scientific Committee (PrimDISC): primdisc@phc.ox.ac.uk. Data will be released following PrimDISC review and approval of a protocol and statistical analysis plan, and the signing of a suitable data sharing agreement.

Review Comments to the Author

Reviewer #1: The authors analyse the performance of two POCT tests in a clinical setting on symptomatic individuals with high viral load, compared with RT-PCR. The study is planned and carried out thoroughly. It is of high quality compared with the multitude of studies on this topic, but is limited by the restricted sample size, the missing viral load calculation, and the lack of both Omicron VOC samples and asymptomatic individuals.

Major comments:

1. Figure 1: Is there a reason why PCR results are missing only in participants tested with SD Biosensor (including participants tested with both tests)? Please discuss in the limitations.

Response: Thank you for this comment. The main reason that more RT-PCR results were missing among participants tested with the SD Biosensor POCT than the BD Veritor POCT was postal issues in the delivery of samples for RT-PCR testing during the pandemic period in the early part of recruitment (SD Biosensor was the first POCT to come on board the study). Some samples were inadvertently delivered to the wrong location by the postal service and others were lost in transit. As this is the main reason for the missing data, we do not expect it to be related to the RT-PCR result, and so consider these missing results unlikely to bias estimates of diagnostic accuracy. We have explained this in the relevant paragraph in the limitations part of the Discussion.

2. The information contained in the figures in the supplement seems much more interesting to the reader than the figures in the manuscript (especially figure 2). Consider creating figures from the supplement data and moving the figures from the manuscript to the supplement.

Response: We feel it is somewhat subjective as to which figures readers would find more interesting, but we agree that the previous S1 Fig 6 (diagnostic accuracy in relation to time since first reported symptom) would have been better in the main manuscript (now Fig 3).

3. Ct value is just a gross measure of viral load. I suggest calculating viral loads from the PCR data so that they are comparable to other high-quality studies.

Response: We thank the reviewer for making this point. We prefer that our data remain in the format of Ct values, which are the common lexicon for these kinds of studies. Fully quantitative assays require a calibrated standard curve, which was not included in this study, as the results were intended to be binary. The comparator of positive/negative is indicative of how diagnostic decisions are made in the real world, where absolute viral load and even Ct values are not used by clinicians. Our study is consistent with many published reports. Most other studies that have evaluated POCTs in other settings, and/or systematic reviews, report by Ct value and not directly by viral load measures (such as the living systematic review, Brümmer et al., 2021. Accuracy of novel antigen rapid diagnostics for SARS-CoV-2: PLoS Med, 18(8), p.e1003735). We have added a note in the limitation paragraph of the Discussion to highlight these points.
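As background to this response, expressing a Ct value as a viral load requires exactly the calibrated standard curve the response notes was not run; a minimal illustrative sketch of the standard log-linear conversion (the slope and intercept below are hypothetical calibration constants, not values from this study):

```python
def ct_to_copies_per_ml(ct, slope=-3.32, intercept=40.0):
    """Hypothetical copies/mL from a Ct value via a standard curve of the
    usual log-linear form: ct = slope * log10(copies) + intercept.
    The slope and intercept here are illustrative, not calibrated values."""
    log10_copies = (ct - intercept) / slope
    return 10 ** log10_copies

# Without a real calibration, these numbers are only indicative:
# a Ct of 30 maps to roughly 1,000 copies/mL under these made-up constants.
print(ct_to_copies_per_ml(30.0))
```

Because the slope and intercept must come from a per-assay calibration, a study that only recorded binary positive/negative results (as here) cannot recover viral loads retrospectively.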

4. The study only includes symptomatic persons. This fact should be emphasised throughout the whole manuscript.

Response: Participants with symptoms consistent with SARS-CoV-2 were the target patient group for this study. We had previously noted this in several places of the manuscript (including the Abstract Methods section, final sentence of Introduction, Recruitment and participant eligibility section of Methods, several places in Discussion including a comparison of our results with other studies performed in asymptomatic individuals, and in the limitations paragraph of the Discussion). However, we agree that it is important to emphasise further so have now made this additionally clear in the title of the manuscript, the Conclusions section of the Abstract and the first sentence of the Discussion.

Minor comments:

1. Abstract (only in the Submission form): “(260/663, 95% CI 35.5% to43.0%)” – a space is missing

Response: We have corrected this in the Submission form.

2. p3, Abstract, Results: why is "95% CI" included twice? Consider removing the second, or adding "95%" for all confidence intervals.

Response: We have corrected this.

3. p9, 168-171: Please describe the reference test more in detail. Was it a commercial assay, different assays, …?

Response: We have expanded the ‘Reference standard’ sub-section of the ‘Methods’ to provide more detail on the RT-PCR assay used for reference testing in the study.

4. p10, 190: If you do a sample size calculation, please do it exactly; 1500 is quite crude. In my calculation using your assumptions I get 1347.

Response: The primary target sample size was 150 positive cases (in line with the UK Medicines and Healthcare products Regulatory Agency (MHRA) Target Product Profile) and the 1500 figure was only ever intended as an approximate total target sample size based on an assumed prevalence of 10%. For the reasons outlined, this total in any case needed to be adjusted as a result of the fluctuating and unpredictable prevalence of SARS-CoV-2 infection over the period of the study. Full details of the sample size calculation are available in the (cited) published protocol and the Statistical Analysis Plan that was provided as a supplementary file.
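For readers checking the arithmetic in this response, the approximate total follows directly from dividing the target number of RT-PCR-positive cases by the assumed prevalence; a minimal sketch using the figures stated in the response itself:

```python
# Approximate total recruitment target, as described in the response:
# 150 RT-PCR-positive cases (the MHRA Target Product Profile target)
# at an assumed SARS-CoV-2 prevalence of 10%.
target_positives = 150
assumed_prevalence = 0.10

approx_total = target_positives / assumed_prevalence
print(approx_total)  # 1500.0
```

This is deliberately an approximate planning figure rather than a formal power calculation, which is why it differs from the reviewer's exact computation.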

5. p23, 366-370: The data presented are from symptomatic individuals with high viral loads only. Other studies show big differences between symptomatic and asymptomatic individuals. Consider removing the statements on asymptomatic individuals or discussing them together with relevant literature (e.g. https://doi.org/10.1016/j.jinf.2022.12.017 https://doi.org/10.1128/jcm.00991-21 ).

Response: We agree that this study does not demonstrate performance in asymptomatic individuals, in whom performance may differ from symptomatic individuals, and had already stated in the limitations section of the Discussion: “This study does not assess diagnostic performance in asymptomatic patients, in whom viral load may be lower and there may be a consequent effect on diagnostic performance.”

Also in the Discussion we state “Other studies have shown substantial decreases in test sensitivity in asymptomatic individuals, including those recruited as close contacts of cases” and cite three papers to support this. To make this additionally clear we now also state “Our study demonstrates reduced sensitivity in individuals with fewer core symptoms but does not provide evidence about the performance of the two assays in the asymptomatic population.”

However, we do not feel the two papers indicated by the reviewer are directly relevant, as they report results of other assays which were not evaluated in our study.

6. Limitations: A limited sensitivity has been reported for the Omicron variants (e.g. https://doi.org/10.1016/j.cmi.2022.08.006 https://doi.org/10.1007/s00430-022-00730-z https://doi.org/10.1007/s00430-022-00752-7 ). Consider adding this fact to the limitations or another place in the discussion.

Response: We are grateful to the reviewer for this suggestion and have added this statement and references about possible impaired performance for Omicron variants to the limitations part of the Discussion that mentions future SARS-CoV-2 variants.

Reviewer #2: THE POOR NOVELTY OF THE PAPER SUGGESTS ITS REJECTION

THERE ARE MANY PAPERS ALREADY PUBLISHED ON POCT TESTING FOR COVID-19

THE STUDY DESIGN IS FINE AND THIS IS A WELL-WRITTEN PAPER BUT, IN MY OPINION, IT DOES NOT ADD VALUABLE INFORMATION TO CURRENT KNOWLEDGE

Response: Thank you for your review. We respectfully disagree as there are no other published studies of this size, and with the methodological strengths of this study, conducted in community settings.

Reviewer #3: General comments:

The manuscript thoroughly describes the performance of two different antigen tests (one with visual reading of results and one with machine reading of results) in an outpatient setting. Although the paper could be considered post festum (the current SARS-CoV-2 variants were not present during the testing period), the manuscript highlights, documents and adequately discusses the inferior diagnostic accuracy of antigen testing compared to gold-standard qPCR testing. The manuscript is well written, with huge amounts of clinical data and adequate statistics.

Response: Thank you for your considered review of the manuscript.

Specific comments:

Line 68: I do not think LFD-Ag are commonplace for community testing at the present time?

Response: We have rewritten the opening sentences of the manuscript to reflect changes in community testing practices and have removed original reference [1] accordingly.

Methods: A description of the two assays is warranted – e.g. the difference between manual and machine reading of results.

Response: We have added additional descriptive detail with respect to the two index tests under the ‘Index tests’ heading, highlighting the key differences in test result interpretation i.e. manual/user vs machine.

Attachment

Submitted filename: Response to reviewers.docx

Decision Letter 1

Vittorio Sambri

2 Jul 2023

Evaluation of the diagnostic accuracy of two point-of-care tests for COVID-19 when used in symptomatic patients in community settings in the UK primary care COVID diagnostic accuracy platform trial (RAPTOR-C19)

PONE-D-23-03034R1

Dear Dr. Nicholson,

We’re pleased to inform you that your revised manuscript has now been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Thank you for the efforts made to revise your work according to the suggestions made by the reviewers.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Vittorio Sambri, M.D., Ph.D.

Academic Editor

PLOS ONE

Acceptance letter

Vittorio Sambri

13 Jul 2023

PONE-D-23-03034R1

Evaluation of the diagnostic accuracy of two point-of-care tests for COVID-19 when used in symptomatic patients in community settings in the UK primary care COVID diagnostic accuracy platform trial (RAPTOR-C19)

Dear Dr. Nicholson:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Professor Vittorio Sambri

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File

    (DOCX)

    S2 File

    (DOCX)

    S1 Data

    (DOCX)

    Attachment

    Submitted filename: Response to reviewers.docx

    Data Availability Statement

    Data cannot be shared publicly because of participant confidentiality considerations. Research data access requests should be submitted to the Nuffield Department of Primary Care Health Sciences Information Guardian for consideration (contact via information.guardian@phc.ox.ac.uk) for researchers who meet the criteria for access to confidential data.

