PLOS ONE. 2021 Apr 7;16(4):e0249791. doi: 10.1371/journal.pone.0249791

Clinical utility of targeted SARS-CoV-2 serology testing to aid the diagnosis and management of suspected missed, late or post-COVID-19 infection syndromes: Results from a pilot service implemented during the first pandemic wave

Nicola Sweeney 1, Blair Merrick 1,2,*, Rui Pedro Galão 3, Suzanne Pickering 3, Alina Botgros 1, Harry D Wilson 3, Adrian W Signell 3, Gilberto Betancor 3, Mark Kia Ik Tan 2, John Ramble 4, Neophytos Kouphou 3, Sam Acors 3, Carl Graham 3, Jeffrey Seow 3, Eithne MacMahon 1,2, Stuart J D Neil 3, Michael H Malim 3, Katie Doores 3, Sam Douthwaite 1,2, Rahul Batra 1,2, Gaia Nebbia 1,2, Jonathan D Edgeworth 1,2
Editor: Michael Nagler
PMCID: PMC8026061  PMID: 33826651

Abstract

During the first wave of the global COVID-19 pandemic the clinical utility of, and indications for, SARS-CoV-2 serological testing were not clearly defined. The urgency to deploy serological assays required rapid evaluation of their performance characteristics. We undertook an internal validation of a CE marked lateral flow immunoassay (LFIA) (SureScreen Diagnostics) using serum from SARS-CoV-2 RNA positive individuals and pre-pandemic samples. This was followed by the delivery of a same-day, named-patient SARS-CoV-2 serology service using the LFIA on vetted referrals at a central London teaching hospital, with clinical interpretation of the result provided to the direct care team. Assay performance, the source and nature of referrals, and the feasibility and clinical utility of the service, particularly its benefit in clinical decision-making, were recorded. Sensitivity and specificity of the LFIA were 96.1% and 99.3% respectively. 113 tests were performed on 108 participants during the three-week pilot. 44% of participants (n = 48) had detectable antibodies. Three main indications for serological testing were identified: new acute presentations potentially triggered by recent COVID-19, e.g. pulmonary embolism (n = 5); potential missed diagnoses in the context of a recent COVID-19 compatible illness (n = 40); and making infection control or immunosuppression management decisions in persistently SARS-CoV-2 RNA PCR positive individuals (n = 6). We demonstrate acceptable performance characteristics, feasibility and clinical utility of using an LFIA that detects anti-spike antibodies to deliver a SARS-CoV-2 serology service in adults and children. The greatest benefit was seen where there was a reasonable pre-test probability and results could be linked with clinical advice or intervention. Experience from this pilot can help inform the practicalities and benefits of rapidly implementing new tests such as LFIAs into clinical service as the pandemic evolves.

Introduction

Infection with SARS-CoV-2 stimulates a detectable antibody response in most people; however, the clinical utility of routine serological testing has been questioned [1, 2]. There has been uncertainty about what proportion of infected individuals produce serum antibodies, how long they persist, what role they have in diagnosis, and whether their detection indicates protection against reinfection or disease manifestations upon re-exposure to the virus. These uncertainties, coupled with the fact that antibody testing for other respiratory viral infections is not standard practice, and concerns regarding the production and validation of rapidly developed new tests [1, 3], led to hesitancy in introducing them into widespread clinical practice.

From late May 2020 the UK government prioritised serological testing for NHS staff, reserving patient testing for those interested and already undergoing other blood tests, with a requirement for written consent. By that time our virology department had received many enquiries from different specialties asking whether SARS-CoV-2 infection might be contributing to a patient’s presentation despite negative or absent conventional PCR testing.

We had recently completed parallel validation of eight lateral flow immunoassay (LFIA) devices and two commercial ELISA platforms against an ELISA developed at King’s College London (KCL) measuring IgG, IgA and IgM against several SARS-CoV-2 antigens (the nucleocapsid (N) and spike (S) proteins and the S receptor binding domain (RBD)). Viral neutralisation assays were also established alongside the in-house ELISA to correlate antibody titres with functional activity. Validation was initially performed on a cohort of patients presenting to Guy’s and St Thomas’ NHS Foundation Trust and showed that the accuracy of some of the lateral flow devices was comparable to our ELISA [4].

We therefore submitted a formal request to the hospital risk & assurance board sub-committee to provide a pilot clinical SARS-CoV-2 serology service for children and adults. Pilot approval was obtained on May 29th 2020 following review of protocols and laboratory data including a further pilot validation set reported here.

Methods

SureScreen diagnostics LFIA validation

The CE marked SureScreen Diagnostics LFIA was selected for further validation based on results from previous head-to-head analyses [4], the provision of additional proprietary information on the antigen target by the manufacturer, and confidence in procurement. Tests were performed according to the manufacturer’s instructions by two independent operators, evaluating the result as negative (0: no visible band), borderline (0.5: visible band in ideal lighting conditions, unable to photograph/scan), positive (1: visible band in all lighting conditions), or strong positive (2: visible band at the intensity of the control band, or 3: visible band of greater intensity than the control line) (S1 Fig). Sensitivity and specificity experiments were performed to meet MHRA validation guidance published on 19th May 2020 [5]. Serum samples were obtained from SARS-CoV-2 RNA positive (AusDiagnostics) [6] patients, taken 14 or more (n = 301) and 20 or more (n = 204) days post onset of symptoms (POS), alongside 300 pre-pandemic samples. The latter comprised 200 stored serum samples and a panel of 100 stored acute and convalescent confounder samples taken from individuals with EBV, CMV, HIV and a range of other viral, bacterial and fungal pathogens. 95% confidence intervals were determined using the Wilson/Brown binomial method. Sera from individuals diagnosed with seasonal coronaviruses were not available for testing. The research reagent for anti-SARS-CoV-2 Ab (NIBSC 20/130), obtained from the National Institute for Biological Standards and Control (NIBSC), UK, was used as a positive control for reproducibility and limit of detection experiments (IgG only) [7].
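The Wilson/Brown method used for these confidence intervals is the Wilson score interval (the name GraphPad Prism gives it). A minimal sketch in Python, using illustrative counts consistent with the reported 99.3% specificity (298 of 300 pre-pandemic samples negative; the exact counts are our assumption):

```python
import math

def wilson_ci(successes: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / total
    denom = 1 + z ** 2 / total
    centre = (p + z ** 2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z ** 2 / (4 * total ** 2))
    return centre - half, centre + half

# Illustrative: a specificity of 298/300 gives an interval of roughly 97.6% to 99.8%
lo, hi = wilson_ci(298, 300)
```

Unlike the naive Wald interval, the Wilson interval behaves sensibly for proportions near 0 or 1, which is exactly the regime a high-specificity assay operates in.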

Service delivery

Internal governance approval for service delivery was obtained based on the laboratory validation data, clinical oversight, confirmation of an ability to request and report tests on electronic systems, a review of risks and their mitigation and agreement to report back on completion of the pilot. Service commenced on 29th May 2020 and was delivered by scientists from the KCL Department of Infectious Diseases who had conducted all the LFIA validations. Tests were performed in and provided by the Guy’s and St Thomas’ Hospital Centre for Clinical Infection and Diagnostics Research (CIDR), located adjacent to hospital routine diagnostic virology and blood sciences laboratories on the St Thomas’ Hospital site.

Availability of the SARS-CoV-2 serology service was communicated through clinical networks, with requests vetted by the clinical virology team. Samples were requested via the routine laboratory testing route and serology was performed once daily, Monday to Friday. A positive band for either IgM or IgG (or a borderline band in both) was reported to the clinician as “antibodies detected”. Results were uploaded onto hospital electronic patient records as a scanned image of the lateral flow cassette with a written comment alongside, and were telephoned through where appropriate. Differential detection of IgM and IgG was not considered as part of verbal or written advice. Repeat testing was recommended when there was a high index of clinical suspicion and no antibodies were detected, or when a borderline result in IgM or IgG was the only observed band (S2 Fig). A standard set of demographics, clinical information, and SARS-CoV-2 PCR results was recorded for each participant and stored in a clinical database (S3 Fig). Informal verbal or written feedback from clinicians about their views on utility was also recorded.
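The reporting and repeat-testing rules above can be summarised as a small decision function. This is a sketch (the function and its name are ours); band scores follow the 0/0.5/1/2/3 scheme described in the validation section:

```python
def interpret_lfia(igm: float, igg: float, high_suspicion: bool = False) -> tuple[str, bool]:
    """Return (report, repeat_recommended) from semi-quantitative band scores.

    Scores: 0 = negative, 0.5 = borderline, 1 to 3 = positive band intensities.
    """
    # A positive band in either IgM or IgG, or borderline bands in both,
    # was reported as "antibodies detected".
    if igm >= 1 or igg >= 1 or (igm == 0.5 and igg == 0.5):
        return "antibodies detected", False
    # Repeat testing was recommended when a lone borderline band was the only
    # finding, or when clinical suspicion remained high despite a negative result.
    lone_borderline = (igm == 0.5) != (igg == 0.5)
    return "antibodies not detected", lone_borderline or high_suspicion
```

For example, `interpret_lfia(0, 0.5)` reports no antibodies detected but flags the lone borderline IgG band for repeat testing.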

ELISA

ELISA testing was performed on 168 stored samples (where sufficient sample was available) from patients with all severities of COVID-19, for comparison with the LFIA validation cohort (n = 301). All serum samples from the pilot service (where sufficient sample was available) were also batched for comparative testing at a time remote from clinical decision making.

High-binding ELISA plates (Corning, 3690) were coated with antigen (N, S) at 3 μg/mL (25 μL per well) in PBS. Wells were washed with PBS-T (PBS with 0.05% Tween-20) and then blocked with 100 μL 5% milk in PBS-T for 1 hr at room temperature. Wells were emptied and sera diluted 1:50 in milk were added and incubated for 2 hr at room temperature. Control reagents included CR3009 (2 μg/mL), CR3022 (0.2 μg/mL), negative control plasma (1:25 dilution), positive control plasma (1:50) and blank wells. Wells were washed with PBS-T. Secondary antibody was added and incubated for 1 hr at room temperature. IgM was detected using goat-anti-human-IgM-HRP (1:1,000) (Sigma: A6907), and IgG was detected using goat-anti-human-Fc-AP (1:1,000) (Jackson: 109-055-043-JIR). Wells were washed with PBS-T and either Alkaline Phosphatase (AP) substrate (Sigma) was added and read at 405 nm (AP), or 1-step TMB substrate (Thermo Scientific) was added and quenched with 0.5 M H2SO4 before reading at 450 nm (HRP). Antibodies were considered detected if OD values were 4-fold or greater above background.
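The positivity rule at the end of this protocol (OD 4-fold or greater above background) reduces to a one-line check. A sketch with hypothetical OD readings (the numbers are ours, for illustration only):

```python
def elisa_positive(od_sample: float, od_background: float, fold: float = 4.0) -> bool:
    """Antibody considered detected when the sample OD is >= fold x the background OD."""
    return od_sample >= fold * od_background

# Hypothetical readings: with a background OD of 0.08, a sample OD of 0.41
# clears the 4-fold threshold (0.32) while 0.20 does not.
```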

Neutralising antibody assay

Neutralising antibody testing was performed on six patients (all in the infection control/immunosuppression management group; Fig 3). Neutralisation assays were conducted as previously described [8]. Serial dilutions of serum samples were prepared in DMEM media and incubated with pseudotyped HIV virus incorporating the SARS-CoV-2 spike protein [9] for 1 hour at 37°C in 96-well plates. Next, HeLa cells stably expressing the ACE2 receptor (provided by Dr James Voss, The Scripps Research Institute) were added and the plates were incubated for 72 hours. Infection level was assessed in lysed cells with the Bright-Glo luciferase kit (Promega), using a Victor™ X3 multilabel reader (Perkin Elmer). The ID50 for each serum was calculated using GraphPad Prism. Neutralisation titres were classified as low (50–200), moderate (201–500), high (501–2000), or potent (2001+).
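The titre bands can be encoded directly (a sketch using the cut-offs stated above; the ID50 values are the six pilot sera reported in the Results):

```python
def classify_titre(id50: float) -> str:
    """Map a neutralising antibody ID50 to the bands used in this study."""
    if id50 >= 2001:
        return "potent"
    if id50 >= 501:
        return "high"
    if id50 >= 201:
        return "moderate"
    if id50 >= 50:
        return "low"
    return "below the low band"

# The six pilot sera: one moderate, one high, and four potent titres.
bands = [classify_titre(t) for t in (277, 1135, 2333, 4130, 5164, 5248)]
```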

Fig 3. Referral characteristics and RNA results of individuals having SARS-CoV-2 serology testing performed during the pilot.


Patient and public involvement

Patients were not involved in the development of the study or its outcome measures, conduct of the research, or preparation of the manuscript.

Ethical approval

All work was performed in accordance with the UK Policy Framework for Health and Social Care Research, and approved by the Risk and Assurance Committee at Guy’s and St Thomas’ NHS Foundation Trust. Informed consents were not required from participants in this study as per the guidelines set out in the UK Policy Framework for Health and Social Care Research and by the registration with, and express consent of the host institution’s review board.

Results

LFIA validation was performed, in accordance with the MHRA guidance published at the time, using serum samples from 301 PCR-confirmed SARS-CoV-2 positive individuals collected 14 or more days POS, and 300 pre-pandemic serum samples including 100 (acute and convalescent) from patients with a range of other infections that could give rise to a false positive result (Fig 2A). A random selection of 168 (of the 301) samples from patients with a range of disease severities (and where sufficient sample for analysis was available) was compared head-to-head with an in-house ELISA for IgM and IgG to N, S and RBD (Fig 1). Sensitivity at 14 and 20 or more days POS was 94.4% and 96.1% respectively, and specificity was 99.3% (Fig 2B). The limit of detection, based on visual inspection of LFIA bands by two operators, was determined using the NIBSC reference standard to a dilution of 1 in 500. This was consistent with the expected limit of detection of the NIBSC in-house assay (S4 Fig) [9].

Fig 2.


a: Testing of samples that were pre-pandemic from patients with other infectious diseases and known confounders to estimate specificity of the SureScreen lateral flow immunoassay. b: Sensitivity estimates of SureScreen lateral flow immunoassay using serum samples obtained from SARS-CoV-2 PCR positive patients at greater than 14 and 20 days post reported onset of symptoms. POS = post onset of symptoms.

Fig 1. Comparative assessment of 168 serum samples from SARS-CoV-2-infected individuals by ELISA and lateral flow immunoassay.


168 serum samples from individuals with confirmed SARS-CoV-2 infection were tested for the presence of antibody by ELISA to the full spike (S), receptor binding domain (RBD) and nucleocapsid (N), and by SureScreen lateral flow immunoassay. Detection of IgG is shown in the top panel, and IgM in the bottom panel. Samples are arranged according to days post onset of symptoms, ranging from 14 to 40 days. Results are displayed as a heatmap, with white indicating a negative result, and gradations of orange indicating the magnitude of response detected.

The pilot service commenced on May 29th 2020 and lasted three weeks. 48/108 (44%) participants had detectable IgG and/or IgM SARS-CoV-2 antibodies on their first serum sample, which was communicated to referring clinicians as “antibodies detected” (Fig 3). 38/48 (79%) had IgM and 47/48 (98%) had IgG bands. Five participants with a high index of suspicion but no detectable antibodies had a further serum sample tested at least one week after initial testing; all repeat samples had no detectable antibodies. The rationale for testing broadly fell into three referral categories. First, acute presentations with new symptoms potentially triggered by SARS-CoV-2 infection. This included suspected cases of Paediatric Inflammatory Multisystem Syndrome Temporally associated with SARS-CoV-2 (PIMS-TS) (n = 30), plus adults (n = 27) and children (n = 5) presenting with other clinical syndromes, including thrombotic events such as strokes and pulmonary emboli (collectively called COVID-19 syndromes). Second, suspected “missed” diagnoses in individuals with a (recent) COVID-19 compatible illness who either never had an RNA test performed (n = 19) or in whom viral RNA was not detected in respiratory specimens (n = 21). Third, those for whom antibody detection made a significant contribution to decisions on infection control management or immunosuppressive treatment (n = 6).

Of 30 children with suspected PIMS-TS, 11 (37%) had detectable antibodies. Reviewing the clinical history of the 19 with no detectable antibodies, seven had an alternative plausible diagnosis or did not fulfil PIMS-TS diagnostic criteria at the time of discharge, and 12 had ongoing high clinical suspicion of PIMS-TS. Two children (participants 018 and 034) had repeat testing at least seven days later; neither had detectable antibodies at this stage. For the remaining 32 PCR negative/not done individuals presenting with a potential post-COVID syndrome, seven (21.2%) had antibodies detected. This included two with a diagnosis of pulmonary embolism (PE), one with a new diagnosis of interstitial lung disease (ILD), two with a hyperinflammatory syndrome (akin to PIMS-TS), one patient with a relapse of HSV encephalitis, and one patient with paracentral acute middle maculopathy.

40 individuals were tested to identify potential missed COVID-19 diagnoses, comprising nine presenting to hospital with ongoing compatible symptoms but negative SARS-CoV-2 RNA tests, and 31 who had recovered from a recent compatible illness in the community, including 15 individuals with end-stage renal failure who had been advised to shield, and 12 patients attending the respiratory-led post-COVID clinic due to failure to return to their baseline level of function. Overall, 24/40 (60%) had detectable antibodies, including two patients admitted to ITU but with repeatedly negative PCR results on upper and lower respiratory sampling.

Of the six individuals with persistent SARS-CoV-2 RNA on nose and throat swabs tested to guide infection control or immunosuppression decisions, all had detectable antibodies on the SureScreen LFIA and, when tested, moderate (n = 1, ID50 = 277), high (n = 1, ID50 = 1135), or potent (n = 4, ID50 = 2333, 4130, 5164, 5248) neutralising antibody titres. This implied, when considered with other factors such as time from first positive PCR test and threshold cycle for RNA detection, that they were no longer infectious and had a degree of protection from reinfection.

Serological testing was performed no earlier than 21 days post onset of symptoms (POS), and up to approximately 90 days POS (where symptom onset data were available), for all participants.

When comparing the combined (IgM and IgG) anti-spike ELISA data with the SureScreen LFIA result, there were 13 discrepant results. Reviewing the ELISA IgG anti-spike data only, there was greater concordance (four discrepant results). The ELISA did not detect antibodies in two cases, participants 055 and 098 (where the SureScreen LFIA did), and in two cases antibodies were detected by the ELISA, participants 019 and 046 (where the SureScreen LFIA detected none). There was one individual (participant 045) with detectable anti-nucleocapsid IgG by ELISA who did not have anti-spike IgG (S3 Fig). The SureScreen LFIA did not detect antibodies in this participant, an expected result as the device only detects anti-spike antibodies. The IgM anti-nucleocapsid ELISA data have not been considered, as previous work recognised the low specificity of this test [4]. ELISA testing was not performed on three participants due to lack of sample availability (participants 002, 036 and 096).

Discussion

This pilot SARS-CoV-2 serology service was introduced two months after the peak of acute UK COVID-19 admissions and provided results on 108 patients over a three-week period. It included a large number of children presenting with a new hyperinflammatory, Kawasaki-like syndrome, termed PIMS-TS [10], to the on-site Evelina London Children’s Hospital, which provides regional specialist services. 37% of these children had antibodies detected, lower than previously reported [10, 11], potentially due to increased awareness and a broadening of clinical evaluation criteria; this is supported by a number of children having the diagnosis removed from discharge coding.

Serology was particularly helpful in aiding the diagnosis and management of an increasing range of assumed COVID-19 triggered conditions [12–17]. For example, antibodies were detected in two patients presenting with a PE, which was therefore considered a provoked event, limiting the need for additional investigations and reducing the period of anticoagulation. Negative serology also helped to discount COVID-19 as a potential trigger for newly presenting conditions, although, recognising the limitations of testing, it could not fully exclude it. These conditions included acquired haemophilia A and a range of unusual dermatological presentations, e.g. ‘Covid toes’.

Detecting antibodies in patients with persistently positive SARS-CoV-2 PCR tests despite symptom resolution, a phenomenon reported elsewhere [18], enabled important decisions on infection control and immunosuppression. These decisions were supported by data showing that antibodies against the spike protein (personal communication with SureScreen Diagnostics Ltd) correlate with neutralisation [19], and by published guidance that neutralisation can be used as a proxy for reduced risk of transmission [20, 21]. Since neutralisation experiments are time-consuming and complex, rapid tests that detect antibodies against spike, such as the SureScreen LFIA and some, but not all, other technologies [22–24], are a practical alternative [25] when considered alongside other factors including timing from symptom onset, ongoing symptoms, and cycle threshold or take-off values of PCR results.

The strengths of this study include the extensive prior comparison of multiple technologies using a large panel of serum samples to inform the choice and validation of the selected LFIA for clinical service. Results were also consistent with recommendations from a Cochrane review published after completion of our pilot, which suggested a benefit for serology in confirming a COVID-19 diagnosis in patients who did not have SARS-CoV-2 RNA testing performed, or who had a negative result despite an ongoing high index of clinical suspicion [3].

The service was also offered across the hospital to assess its broad potential clinical utility. With a high pre-test probability (e.g. 45%), the positive predictive value (PPV) is 99.2%, with an acceptable negative predictive value (NPV) of 96.9%. However, if testing were extended to a population where prevalence is low (e.g. 5%), the PPV falls to <90%. This reinforces the importance of providing serology for defined patient cohorts where the pre-test probability is high and the potential clinical utility is understood [26, 27].
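These predictive values follow from Bayes’ theorem applied to the assay’s sensitivity (96.1%) and specificity (99.3%). A sketch that approximately reproduces the quoted figures (small differences reflect rounding of the inputs):

```python
def predictive_values(sens: float, spec: float, prev: float) -> tuple[float, float]:
    """Positive and negative predictive values at a given pre-test probability."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# High pre-test probability (45%) versus low-prevalence screening (5%)
ppv_high, npv_high = predictive_values(0.961, 0.993, 0.45)
ppv_low, _ = predictive_values(0.961, 0.993, 0.05)
```

At 45% prevalence the PPV is around 99% with an NPV near 97%, while at 5% prevalence the PPV drops below 90%, illustrating why the test was restricted to vetted, high pre-test probability referrals.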

The main limitation of this study is that it was performed at a single centre at a discrete time-point in the COVID-19 pandemic. Since that time there have been many changes in the epidemiology of, and approach to, COVID-19 testing. Most countries are in the midst of a second wave, and vaccination will change the utility and interpretation of antibody detection. PCR tests are also more widely available to patients in the community (pillar 2), and hospital laboratories have higher capacity and more rapid PCR tests (pillar 1), which could reduce the number of missed or delayed diagnoses. There are also more accurate laboratory serology technologies, including the ability to assess dynamic responses [28–30] alongside T cell assays [31, 32], which could reduce the utility of LFIAs in many settings.

The discrepant ELISA and LFIA data illustrate the challenges for any single technology employed to detect specific antibodies induced in response to infection rather than cross-reactivity or anamnestic responses, particularly for IgM. Ten participants with no antibodies detected using the SureScreen LFIA had low-level anti-spike IgM antibodies detected by ELISA (but not IgG). At 21 or more days POS one would expect the vast majority of individuals to have seroconverted to IgG (only one study participant had IgM alone identified on the SureScreen LFIA). When taken into consideration with previous validation work using this ELISA [4], these results could merely represent non-specific reactivity. The explanation for the four SureScreen results that were discrepant with the ELISA anti-spike IgG would require further investigation, including repeat sampling and testing using other technologies. All LFIA results in this pilot were communicated in the context of the clinical history and the decisions being made, and where the limitations of serological results in general, and these technologies in particular, were understood. Technologies for confirmatory testing, alongside participation in external quality assurance schemes, would be required to extend delivery of such a service.

Nevertheless, LFIAs are quick (a 10-minute test), inexpensive, and already used in diagnostic laboratories, for example in detecting pneumococcal and Legionella urinary antigens. It is hard to predict where future clinical need for serology LFIAs might lie. They could be developed further for deployment in settings with limited laboratory facilities, enabled by methods to collect capillary blood (although this will require further validation work), or used to assess waning responses to mass population vaccination campaigns where rapid, high-volume longitudinal assessments might be required. There are also now technologies available for electronic reading of bands that would remove the subjectivity of reading band signal strength by eye, which we recorded here in a semi-quantitative way by two independent observers. This experience may help inform the approach to reading and communicating SARS-CoV-2 antigen lateral flow devices that are now being used by healthcare staff, patients and the public [33], although the significance of band strength for both antibody and antigen lateral flow assays still requires further investigation.

This study also represents the final phase of a translational research pathway completed in three months, from the basic science, through comparative evaluation, to this pilot service study. The diagnostic response to this pandemic will come under continued scrutiny, and there are lessons to be learnt about the ability of diagnostic laboratories and translational research teams to work together in the response to rapidly emerging infections [34]. The strengths and limitations of conducting this study at this time therefore also provide useful data to inform discussion on the requirements for academia to respond in a pandemic setting.

Supporting information

S1 Fig. Scanned images of LFIA cassettes for participants 084 (left) and 086 (right) labelled with band intensities.

Negative = 0: No visible band, borderline = 0.5: A visible band in ideal lighting conditions, positive = 1: A visible band in all lighting conditions, strong positive = 2: A visible band at the intensity of the control line or 3: A visible band of greater intensity than the control line. NB: Bands of 0.5 intensity are unable to be scanned/photographed and therefore appear blank on the scanned image below.

(DOCX)

S2 Fig. Flowchart of service delivery, same-day service, Monday to Friday.

(DOCX)

S3 Fig. Cohort demographics including age, sex, category, direct care team, RNA result (if performed), SureScreen LFIA results (band intensity recorded from 0.5–3) and ELISA data (results expressed as fold change above background, ≥4 fold above background in either IgM or IgG is reported as positive).

(DOCX)

S4 Fig. IgG limit of detection for the SureScreen LFIA.

Defined dilutions of the NIBSC research reference reagent for anti-SARS-CoV-2 antibody (20/130) were tested in triplicate on the SureScreen LFIA. Results are displayed as a heat map, with white indicating a negative result and gradations of orange representing the magnitude of response detected.

(DOCX)

Acknowledgments

We are extremely grateful to all staff in Viapath Infection Sciences and Department of Infectious Diseases based at St Thomas’ Hospital who helped deliver this service.

Data Availability

All relevant data are within the paper and its Supporting Information files.

Funding Statement

King’s Together Rapid COVID-19 Call awards to KJD, SJDN and RMN. MRC Discovery Award MC/PC/15068 to SJDN, KJD and MHM. National Institute for Health Research (NIHR) Biomedical Research Centre based at Guy's and St Thomas' NHS Foundation Trust and King's College London, programme of Infection and Immunity (RJ112/N027) to MHM and JE. BM was supported by an NIHR Academic Clinical Fellowship in Combined Infection Training. AWS and CG were supported by the MRC-KCL Doctoral Training Partnership in Biomedical Sciences (MR/N013700/1). GB was supported by the Wellcome Trust (106223/Z/14/Z to MHM). SA was supported by an MRC-KCL Doctoral Training Partnership in Biomedical Sciences industrial Collaborative Award in Science & Engineering (iCASE) in partnership with Orchard Therapeutics (MR/R015643/1). NK was supported by the Medical Research Council (MR/S023747/1 to MHM). SP, HDW and SJDN were supported by a Wellcome Trust Senior Fellowship (WT098049AIA). Fondation Dormeur, Vaduz provided funding for equipment (KJD). Development of SARS-CoV-2 reagents (RBD) was partially supported by the NIAID Centers of Excellence for Influenza Research and Surveillance (CEIRS) contract HHSN272201400008C. Viapath LLP provided support in the form of salaries for author JR, but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific role of this author is articulated in the ‘author contributions’ section.

References

Decision Letter 0

Michael Nagler

20 Nov 2020

PONE-D-20-30782

Clinical utility of targeted SARS-CoV-2 serology testing to aid the diagnosis and management of suspected missed, late or post-COVID-19 infection syndromes: results from a pilot service

PLOS ONE

Dear Dr. Merrick,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

One issue must be clarified to meet the publication requirements of PLOS ONE: are these data new or did the authors already report them in a previous publication? Besides, the conclusions are not fully supported by the data. We invite the authors to carefully address the comments of the reviewers.

Please submit your revised manuscript by Jan 04 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Michael Nagler, M.D., Ph.D., MSc

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please include your tables as part of your main manuscript and remove the individual files. Please note that supplementary tables should remain uploaded as separate "supporting information" files.

3. Thank you for stating the following in the Financial Disclosure section:

[King’s Together Rapid COVID-19 Call awards to KJD, SJDN and RMN.

MRC Discovery Award MC/PC/15068 to SJDN, KJD and MHM.

National Institute for Health Research (NIHR) Biomedical Research Centre based at Guy's and St Thomas' NHS Foundation Trust and King's College London, programme of Infection and Immunity (RJ112/N027) to MHM and JE.

AWS and CG were supported by the MRC-KCL Doctoral Training Partnership in Biomedical Sciences (MR/N013700/1).

GB was supported by the Wellcome Trust (106223/Z/14/Z to MHM).

SA was supported by an MRC-KCL Doctoral Training Partnership in Biomedical Sciences industrial Collaborative Award in Science & Engineering (iCASE) in partnership with Orchard Therapeutics (MR/R015643/1).

NK was supported by the Medical Research Council (MR/S023747/1 to MHM).

SP, HDW and SJDN were supported by a Wellcome Trust Senior Fellowship (WT098049AIA).

Fondation Dormeur, Vaduz for funding equipment (KJD).

Development of SARS-CoV-2 reagents (RBD) was partially supported by the NIAID Centers of Excellence for Influenza Research and Surveillance (CEIRS) contract HHSN272201400008C.

No sponsors or funders played a role in the study design, data collection or analysis, decision to publish, or preparation of the manuscript.].   

We note that one or more of the authors are employed by a commercial company: Viapath Group LLP

  1. Please provide an amended Funding Statement declaring this commercial affiliation, as well as a statement regarding the Role of Funders in your study. If the funding organization did not play a role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript and only provided financial support in the form of authors' salaries and/or research materials, please review your statements relating to the author contributions, and ensure you have specifically and accurately indicated the role(s) that these authors had in your study. You can update author roles in the Author Contributions section of the online submission form.

Please also include the following statement within your amended Funding Statement.

“The funder provided support in the form of salaries for authors [insert relevant initials], but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific roles of these authors are articulated in the ‘author contributions’ section.”

If your commercial affiliation did play a role in your study, please state and explain this role within your updated Funding Statement.

2. Please also provide an updated Competing Interests Statement declaring this commercial affiliation along with any other relevant declarations relating to employment, consultancy, patents, products in development, or marketed products, etc.  

Within your Competing Interests Statement, please confirm that this commercial affiliation does not alter your adherence to all PLOS ONE policies on sharing data and materials by including the following statement: "This does not alter our adherence to  PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests) . If this adherence statement is not accurate and  there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.

Please include both an updated Funding Statement and Competing Interests Statement in your cover letter. We will change the online submission form on your behalf.

Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency. PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests

4. Your ethics statement should only appear in the Methods section of your manuscript. If your ethics statement is written in any section besides the Methods, please delete it from any other section.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: I Don't Know

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The submitted manuscript of Sweeney et al. highlights the potential clinical benefit of a targeted SARS-CoV-2 serology testing service to aid the diagnosis and management of suspected missed, late or post-COVID-19 infection syndromes, based on a self-validated LFIA test. In general, studies that investigate the performance and application of serological tests for COVID-19 are, for several reasons, strongly encouraged. However, given the tremendous improvements in state-of-the-art serology tests (ELISAs, flow cytometry-based assays for SARS-CoV-2-binding antibodies, antigen tests) over the last couple of months, the rapidly processed LFIA test has lost its advantages in accuracy, quantification and speed compared to other tests. These limitations have also been discussed in the manuscript. Rapid and accurate serology can therefore be performed in a short time with tests more sophisticated than the LFIA. Clinical advice, interventions and management based on the LFIA data summarized in this study are intended only for a defined patient cohort and need further investigation on solid, statistically unbiased data sets to be competitive with and attractive over other tests. The following major and minor concerns have to be addressed in order to warrant publication of the study.

Main concerns:

1 In a previously published study (PLOS Pathogens (2020), Pickering et al.) the authors already extensively validated/compared LFIA devices, including the commercially available SureScreen LFIA. Therefore, the comprehensive LFIA validation shown in Figure 1 reproduces their own previously published data but is not new. This also raises the question of whether another patient cohort was used for the analysis shown in this study. Additionally, raw data points and the group clustering strategy are missing.

2 The sensitivity of the LFIA test in Table 1b has also already been described in the earlier study (PLOS Pathogens, Pickering et al. 2020, Figure 5a).

3 The high sensitivity (accuracy) of the test might derive from the biased sample group, which is 14 and 20 days POS (around the peak of the antibody response) and would therefore not reflect the "real-life" situation in clinics. Clinical interpretation of results might therefore be difficult (there is no quantification of the antibody response) if random samples should/will be tested and interpreted. This is further highlighted in the study of Pickering et al. (PLOS Pathogens, 2020), where the sensitivity (of all tests, including SureScreen) is highly dependent on the days after onset of symptoms.

4 Quantification and interpretation of antibody dynamics are not possible with this assay. Furthermore, additional data on IgA levels in nasopharyngeal swabs would be helpful and informative and would strengthen the proposed serology service in clinics.

5 In addition to the LFIA limitations described above, serology only identifies a proportion of recently infected patients within a short time frame. Thus, seronegative results (more than 60% of the cases, Table 2) are difficult to interpret, and the resulting interventions and management would be extremely difficult and speculative with this test. CD4+/CD8+ T cell memory assays might close the gap (Peng et al. 2020, Nature Immunology).

6 The split of patient samples into different pathologies (Table 1a) is fairly interesting but loses statistical power (small sample sizes).

7 Neutralising antibody titers are missing from the neutralising antibody assay described in the Methods and Results.

8 Within this small cohort, the rationale for testing and the resulting interpretation are highly speculative and read more like a case report than a study.

Minor concerns:

1 A study design flow chart would help the reader follow the study.

2 Age characteristics of individual patients would be helpful.

3 Based on which criteria was the SureScreen LFIA selected?

4 How exactly did you correlate/categorize LFIA results with ELISA data into the different categories of "negative", "borderline", "positive" and "strong positive"?

5 Neutralising antibody titers for the 6 individuals with persistent SARS-CoV-2 are missing.

6 Can you run whole blood samples on the LFIA (which would be much faster than processing serum), and do those results correlate with serum samples on the LFIA? Which dilution is required (serum/blood)?

7 The rapid decline in antibody titers of SARS-CoV-2-infected patients remains controversial and should be interpreted carefully.

Reviewer #2: In this manuscript, the authors describe an extended validation of an LFIA for the rapid serological assessment of anti-SARS-CoV-2 antibodies, as well as a pilot study using this test in the clinical setting to decide whether PCR-negative patients with COVID-19-like symptoms may have been infected.

In my opinion, this manuscript may be accepted with major revisions.

Here, I raise some questions that should be addressed in the revised manuscript:

1. In the Methods and Results sections, the authors mention the assessment of the limit of detection, but no results are shown. Visualization of these experiments would help to interpret the heat-map results shown in Figure 1. Do the LoD dilutions mentioned in the Results refer to IgM or IgG?

2. How were the 168 samples in Fig. 1 selected?

3. Table 1b: are the values for sensitivity and specificity for IgG, IgM or a combination of both?

4. Are borderline results calculated as positive or negative?

5. It is difficult to understand the sensitivity and specificity values, as 5 of 19 sera at d14 are negative. Are these samples representative?

6. I propose that the results of the pilot study be presented in the same manner as the results in Figure 1, i.e. strength of reaction for IgG and IgM, along with the time point after POS.

7. Paragraph 3 of the Results: the percentages in brackets are misleading, as the 29 pediatric patients with PIMS-TS represent 100%; the following numbers should be adapted accordingly.

8. In the same paragraph, 7 patients are mentioned but only 6 are described – what about the last patient?

9. Paragraph 5 of the Results: the implication drawn by the authors is not very robust; there is no evidence for it, as results are lacking. Is there a correlation between the threshold cycle of RNA detection and the strength of the antibody signal in the LFIA? Furthermore, the results of the neutralization experiments should be included in the comparison of RT-PCR and LFIA results for these follow-up patients.

10. The authors claim that this pilot should address the question of whether the rapid serological antibody test may help with decisions in clinical routine. As mentioned in the Methods, the clinicians using the service described were contacted for informal feedback and their view of its utility. These feedbacks were not systematically evaluated.

11. Paragraph 5 of the Discussion: the first sentence does not add much.

12. Paragraph 5 of the Discussion: how were the inclusion criteria set to obtain a high pre-test probability? Could you explain how the PPV and NPV with the different pre-test probabilities are calculated?
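[Editorial illustration: the PPV/NPV calculation the reviewer asks about follows from Bayes' theorem. A minimal sketch, using the sensitivity (96.1%) and specificity (99.3%) reported in the abstract; the pre-test probabilities and the function name are arbitrary examples, not values from the manuscript.]

```python
def predictive_values(sensitivity, specificity, pretest_prob):
    """Return (PPV, NPV) for a test with given sensitivity/specificity
    applied at a given pre-test probability of disease."""
    tp = sensitivity * pretest_prob              # true positives
    fp = (1 - specificity) * (1 - pretest_prob)  # false positives
    fn = (1 - sensitivity) * pretest_prob        # false negatives
    tn = specificity * (1 - pretest_prob)        # true negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Illustrative pre-test probabilities only:
for p in (0.05, 0.30, 0.70):
    ppv, npv = predictive_values(0.961, 0.993, p)
    print(f"pre-test {p:.0%}: PPV {ppv:.1%}, NPV {npv:.1%}")
```

This makes the reviewer's point concrete: NPV stays high at low pre-test probability, while PPV degrades, which is why the service restricted testing to referrals with a reasonable pre-test probability.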

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Apr 7;16(4):e0249791. doi: 10.1371/journal.pone.0249791.r002

Author response to Decision Letter 0


21 Dec 2020

We have uploaded a response to the Reviewers as a separate file entitled 'Response to Reviewers' as requested by the Editor.

Attachment

Submitted filename: Response to Reviewers 1.docx

Decision Letter 1

Michael Nagler

14 Jan 2021

PONE-D-20-30782R1

Clinical utility of targeted SARS-CoV-2 serology testing to aid the diagnosis and management of suspected missed, late or post-COVID-19 infection syndromes: results from a pilot service

PLOS ONE

Dear Dr. Merrick,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

The reviewers still raise major concerns, and I would like to encourage the authors to address these issues.

Please submit your revised manuscript by Feb 28 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Michael Nagler, M.D., Ph.D., MSc

Academic Editor

PLOS ONE


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know

Reviewer #2: N/A

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I carefully read the manuscript and the corresponding responses to the concerns raised by both reviewers, and noted (at least for my section) that these points are not sufficiently addressed and that important raw data sets are still missing.

I therefore do not support direct publication in PLOS ONE, based on the following concerns.

Concern 1: The LFIA validation in Figure 1, as discussed in the first revision, has not been satisfactorily addressed. In general, the "Pickering study" validated approximately 50 fewer samples. Therefore, the reasoning of a limited dataset in the "Pickering study", or an "extended" dataset in this study, is not valid in this case.

Furthermore, the requested raw data (ELISA data, LFIA pictures, neutralization assay setup with appropriate controls) are missing from the manuscript.

Concern 2: The responses to concerns 2/3 have only been partially addressed. Even though the manuscript includes "a bigger dataset", the current test validation strategy (samples 14+ days POS) still does not, from my point of view, reflect the "real-life" test sensitivity. The authors claim that they consider this limitation before agreeing to perform the test. Can you include the potential POS of these patients in the manuscript, to give a better sense of how to interpret the negative LFIA tests (+/- 60%) in Figure 2?

Concern 3: The response to concern 4 has only been partially addressed, since comments on antibody quantification and antibody dynamics have not been discussed in the manuscript. Antibody dynamics might represent an important and useful addition when included in the service (not only for borderline results) and should be included in the manuscript.

Concern 4: Concern 5 has only been partially addressed in the discussion.

Concern 5: Concern 6 has not been addressed, since the specificity (95% CI) describes a statistical estimate and the data are at least linked/interpreted as such (title of the table: "Specificity of SureScreen LFIA"). The authors should tone down their statements or adjust the table.
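[Editorial illustration: one common way to attach the 95% CI the reviewer mentions to an estimated specificity is the Wilson score interval. A minimal sketch; the 148/149 count is hypothetical, chosen only because it is consistent with a specificity near the 99.3% reported in the abstract, and is not a count taken from the manuscript.]

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion,
    e.g. specificity = true negatives / (true negatives + false positives)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

lo, hi = wilson_ci(148, 149)  # hypothetical: 148 of 149 pre-pandemic sera negative
print(f"specificity {148/149:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
```

The Wilson interval behaves better than the naive normal approximation when the proportion is close to 1, which is exactly the regime of a high-specificity assay.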

Concern 6: Concern 7 has not been addressed. Even though ID50 values are now reported in the Results, neither an experimental layout with appropriate controls nor plaque pictures or raw data from these assays are included in the manuscript. Inclusion of this experiment is necessary to interpret the results and would strengthen the manuscript.

Concern 7: Minor concern 4 is partially addressed. It is hard to see how those bands can be grouped by eye into the corresponding categories. Moreover, I have not yet seen any pictures of those bands in the manuscript. Can you please provide those LFIA images? Quantification of these pictures would help to categorize the samples into the corresponding groups.

Concern 8: Minor concern 4 is not addressed. See above

Concern 9: Minor concern 6 has not been addressed. If there is one potential niche for LFIA tests, it is at locations with limited laboratory resources. Capillary blood instead of serum preparation could dramatically simplify the test procedure. Therefore, including those blood/plasma validations on the LFIA would indeed strengthen the manuscript.

Reviewer #2: The following questions have not been addressed properly.

Q2. It is not evident from the Results section that these 168 samples were randomly selected.

Q8. The authors still mention 7 of 33 PCR-negative patients but describe only 6 patients. This point has not been clarified.

Q11. This question has not been addressed in the revised manuscript; the changes are not visible.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Decision Letter 2

Michael Nagler

25 Mar 2021

Clinical utility of targeted SARS-CoV-2 serology testing to aid the diagnosis and management of suspected missed, late or post-COVID-19 infection syndromes: results from a pilot service implemented during the first pandemic wave

PONE-D-20-30782R2

Dear Dr. Merrick,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Michael Nagler, M.D., Ph.D., MSc

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Michael Nagler

29 Mar 2021

PONE-D-20-30782R2

Clinical utility of targeted SARS-CoV-2 serology testing to aid the diagnosis and management of suspected missed, late or post-COVID-19 infection syndromes: results from a pilot service implemented during the first pandemic wave

Dear Dr. Merrick:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Prof. Dr. Michael Nagler

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Fig. Scanned images of LFIA cassettes for participants 084 (left) and 086 (right) labelled with band intensities.

    Negative = 0: no visible band; borderline = 0.5: a band visible in ideal lighting conditions; positive = 1: a band visible in all lighting conditions; strong positive = 2: a band at the intensity of the control line, or 3: a band of greater intensity than the control line. NB: Bands of 0.5 intensity are unable to be scanned/photographed and therefore appear blank on the scanned image below.

    (DOCX)

    S2 Fig. Flowchart of service delivery, same-day service, Monday to Friday.

    (DOCX)

    S3 Fig. Cohort demographics including age, sex, category, direct care team, RNA result (if performed), SureScreen LFIA results (band intensity recorded from 0.5–3) and ELISA data (results expressed as fold change above background, ≥4 fold above background in either IgM or IgG is reported as positive).

    (DOCX)

    S4 Fig. IgG limit of detection for the SureScreen LFIA.

    Defined dilutions of the NIBSC research reference reagent for anti-SARS-CoV-2 antibody (20/130) were tested in triplicate on the SureScreen LFIA. Results are displayed as a heat map, with white indicating a negative result and gradations of orange representing the magnitude of response detected.

    (DOCX)

    Attachment

    Submitted filename: Response to Reviewers 1.docx

    Attachment

    Submitted filename: Reponse to Reviewers.docx

    Data Availability Statement

    All relevant data are within the paper and its Supporting Information files.

