The Lancet Regional Health: Western Pacific. 2021 Mar 2;9:100115. doi: 10.1016/j.lanwpc.2021.100115

Multi-site assessment of rapid, point-of-care antigen testing for the diagnosis of SARS-CoV-2 infection in a low-prevalence setting: A validation and implementation study

Stephen Muhi a,b,c,#, Nick Tayler c,d,#, Tuyet Hoang c,#, Susan A Ballard c,#, Maryza Graham c,e, Amanda Rojek d, Jason C Kwong b,f, Jason A Trubiano f, Olivia Smibert f, George Drewett f, Fiona James f, Emma Gardiner d, Socheata Chea c, Nicole Isles c, Michelle Sait c, Shivani Pasricha b, George Taiaroa c, Julie McAuley b, Eloise Williams g, Katherine B Gibney a,h, Timothy P Stinear b, Katherine Bond b,g, Sharon R Lewin a,h,i, Mark Putland d, Benjamin P Howden b,c,f,#, Deborah A Williamson b,c,g,#,
PMCID: PMC8076656  PMID: 33937887

Abstract

Background

In Australia, COVID-19 diagnosis relies on RT-PCR testing, which is relatively costly and time-consuming. To date, few studies have assessed the performance and implementation of rapid antigen-based SARS-CoV-2 testing in a setting with a low prevalence of COVID-19, such as Australia.

Methods

This study recruited participants presenting for COVID-19 testing at three Melbourne metropolitan hospitals during a period of low COVID-19 prevalence. The Abbott PanBio™ COVID-19 Ag point-of-care test was performed alongside RT-PCR. In addition, participants with COVID-19 notified to the Victorian Government were invited to provide additional swabs to aid validation. Implementation challenges were also documented.

Findings

The specificity of the Abbott PanBio™ COVID-19 Ag test was 99.96% (95% CI 99.73-100%). Sensitivity amongst participants with RT-PCR-confirmed infection was dependent upon the duration of symptoms reported, ranging from 77.3% across all participants (symptom duration 1 to 33 days) to 100% in those within seven days of symptom onset. A range of implementation challenges were identified which may inform future COVID-19 testing strategies in a low-prevalence setting.

Interpretation

Given the high specificity, antigen-based tests may be most useful in rapidly triaging public health and hospital resources while expediting confirmatory RT-PCR testing. Considering the limitations in test sensitivity and the potential for rapid transmission in susceptible populations, particularly in hospital settings, careful consideration is required for implementation of antigen testing in a low prevalence setting.

Funding

This work was funded by the Victorian Department of Health and Human Services. The funder was not involved in data analysis or manuscript preparation.


Research in Context.

Evidence before this study

The sensitivity and specificity of the Abbott PanBio™ COVID-19 Ag test, as reported by the manufacturer, are 91.4% and 99.8%, respectively. A small number of studies in high-prevalence settings have demonstrated similar or reduced sensitivity but comparable specificity. Few studies have reported the use of SARS-CoV-2 antigen tests in a very low-prevalence setting, and the obstacles to implementation in this setting require exploration.

Added value of this study

This study reports the findings of antigen-based SARS-CoV-2 testing in a low-prevalence, hospital-based setting. We observed high specificity, but owing to the low prevalence, no cases of RT-PCR-confirmed COVID-19 were newly diagnosed in the hospital arm of the study. Sensitivity, as determined from participants with known COVID-19, depended on the duration of symptoms. Numerous implementation challenges were identified, the solutions to which may inform future point-of-care testing strategies in a low-prevalence setting.

Implications of all the available evidence

In a low-prevalence setting, all positive SARS-CoV-2 antigen-based tests should be confirmed with RT-PCR. Antigen-based tests may assist resource allocation while confirmatory testing is awaited. Considering the ability of the virus to spread rapidly in susceptible populations, particularly in hospitals, careful consideration is required for implementation in a low-prevalence setting.


1. Introduction

The scale and speed of the coronavirus disease (COVID-19) pandemic caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is unprecedented [1]. In the current absence of a vaccine, public health responses have relied largely on a combination of population-level non-pharmaceutical interventions (NPIs) and individual-level diagnostic testing [2]. To date, diagnostic testing for SARS-CoV-2 has relied on highly sensitive reverse-transcriptase PCR (RT-PCR) assays performed in clinical laboratories. However, RT-PCR is relatively expensive and, depending on the setting, results may take 24-48 hours to return. Importantly, such delays in testing and contact tracing may lead to further transmission of disease [3]. Recently, rapid antigen tests have been proposed as a means of increasing population-level surveillance testing and enabling testing at, or near, the point of care (POC) [4]. As antigen tests detect viral protein rather than amplified nucleic acid, they are inherently less sensitive than RT-PCR assays. To offset this reduced sensitivity, it has been suggested that increasing the frequency of testing may enable rapid identification and isolation of infected individuals [4,5].

Australia has one of the highest COVID-19 testing rates in the world; to date, all diagnostic testing for COVID-19 in Australia has been performed using laboratory-based RT-PCR. These high testing rates, coupled with early and aggressive NPIs, including closure of the Australian border, have contributed to Australia having one of the lowest COVID-19 infection rates globally [6]. However, as societies begin to interact and international travel resumes, the requirement for rapid, scalable population-level testing may not be fully met by laboratory-based RT-PCR testing. Although antigen testing may have potential benefits in enabling widespread testing, the feasibility and utility of implementing such testing in a low-prevalence setting such as Australia have not been assessed. Here, we undertook a laboratory and multi-site clinical validation study of a commercially available rapid antigen test, the Abbott PanBio™ COVID-19 Ag test. At the time of study initiation, it was one of two rapid POC tests with emergency use approval from the World Health Organization and was approved for supply by the Australian Therapeutic Goods Administration [7,8]. Specifically, the aims of our study were to: (i) determine the performance characteristics of the Abbott PanBio™ COVID-19 Ag test against an RT-PCR reference standard; (ii) identify the implementation challenges associated with rapid antigen point-of-care testing; and (iii) develop a framework for the implementation of rapid antigen testing in a low-prevalence setting.

2. Methods

2.1. Study setting and patient populations

We undertook a prospective study across three academic hospitals located in Melbourne (population 4.97 million), the capital of the Australian state of Victoria. In Victoria, the COVID-19 pandemic has been characterised by two peaks of transmission, the first occurring between March and April 2020 (maximum 622 active cases) and the second between July and September 2020 (maximum 7,880 active cases) [9]. This study commenced in late September, after significant public health interventions had controlled transmission, during which time the 14-day average number of new cases per day in metropolitan Melbourne decreased from 20.3 to zero [10]. The three participating hospitals were Monash Health (located in Melbourne's south-eastern suburbs, including Monash Medical Centre, Casey Hospital and Dandenong Hospital, and servicing one quarter of Melbourne's population), Austin Hospital (located in Melbourne's north-eastern suburbs) and the Royal Melbourne Hospital (RMH) City Campus, with a catchment extending to Melbourne's north-western suburbs. The study recruited participants at the RMH in a pre-pilot phase from 28 September 2020 and at all three sites from 19 October 2020 to 14 November 2020. Individuals presented for COVID-19 testing at each of these hospitals mainly because of: (i) the presence of symptoms consistent with COVID-19 or (ii) contact tracing / outbreak management responses. Participants underwent standard-of-care RT-PCR testing using a combined throat and deep nasal swab, plus simultaneous antigen testing using the Abbott PanBio™ COVID-19 Ag test if consent was provided (Supplementary Figure 1). All participants provided information about the presence, absence and duration of clinical symptoms, and possible exposures to SARS-CoV-2, which were recorded in a secure REDCap database hosted at the RMH [11]. For testing using RT-PCR, patients were swabbed in accordance with their local hospital procedure; all sites performed a single combined throat and deep nasal swab, of either one or both nasal cavities. For the Abbott PanBio™ COVID-19 Ag test, all sites performed a single deep nasal swab. Site investigators from the three sites met virtually each day to identify emerging logistical issues.

In addition to individuals presenting to screening clinics, RT-PCR and antigen testing were performed simultaneously on individuals with known COVID-19 infection notified to the Victorian Department of Health and Human Services (DHHS) who provided consent for additional sample collection. These samples provided additional material for validation of the Abbott PanBio™ COVID-19 Ag test and were tested at the Microbiological Diagnostic Unit Public Health Laboratory (MDU PHL), The University of Melbourne at the Doherty Institute, Australia. For these participants, RT-PCR and Abbott PanBio™ COVID-19 Ag tests were performed using a single deep nasal swab.

2.2. RT-PCR testing

Swabs were collected using either a flocked Copan ESwab in 1 mL liquid Amies or a flocked Kang Jian swab in 3 mL universal transport medium and tested using the preferred RT-PCR assay at each of the pilot sites. In brief, testing at RMH was performed using the Coronavirus Typing (8-well) panel (AusDiagnostics, Mascot, Australia), as previously described [12], or the Xpert® Xpress SARS-CoV-2 assay (Cepheid, Sunnyvale, USA) [13]. Testing at Monash Health was performed using the Respiratory Pathogens 12-well assay (AusDiagnostics, Mascot, Australia) [14] or the Xpert® Xpress SARS-CoV-2 assay; testing at Austin Health was performed using the Coronavirus Typing (8-well) panel (AusDiagnostics, Mascot, Australia) [14] or the Xpert® Xpress SARS-CoV-2 assay; and testing at MDU PHL was performed using the Aptima SARS-CoV-2 assay and the Panther Fusion SARS-CoV-2 assay (Hologic, Marlborough, Massachusetts, USA) according to the manufacturer's instructions [15].

A workflow was developed to plan for the event of a positive antigen test result; in that event, the swab collected for RT-PCR would be expedited using the Xpert® Xpress SARS-CoV-2 RT-PCR assay. The purpose of this strategy was to provide rapid confirmatory RT-PCR results to enable clinical and public health action during the validation phase.

2.3. Antigen testing

The Abbott PanBio™ COVID-19 Ag test is a lateral flow immunoassay that detects the SARS-CoV-2 nucleocapsid protein in nasopharyngeal swabs. In this study, deep nasal swabs were collected, in accordance with national testing guidelines [16]. All swabs were collected by trained healthcare professionals and swabs were placed into the accompanying sample elution buffer and tested immediately after sample collection. Results were read by two independent observers and interpreted according to manufacturer's instructions; RT-PCR results were not available to observers. The result was recorded in a REDCap database as either “positive”, “negative” or “invalid” and a photograph of the result was also uploaded. In addition, qualitative data were collected regarding the attitudes towards ease-of-use and potential management implications of the Abbott PanBio™ COVID-19 Ag test in clinical practice.

2.4. In-vitro testing

To assess the analytical sensitivity of the Abbott PanBio™ COVID-19 Ag test, the limit of detection (LoD) was determined using heat-inactivated SARS-CoV-2. A SARS-CoV-2 (VIC01) isolate from a patient in Melbourne [17] was grown in Vero cells, quantified based on infective dose (50% Tissue Culture Infectious Dose [TCID50]) using the method of Reed and Muench [18], then heat inactivated at 60 °C for 15 min. All processing was performed at Biosafety Level 3 in the Department of Microbiology and Immunology at the University of Melbourne. Serial dilutions were prepared in triplicate in saline and 50 µL spiked into the Abbott PanBio™ COVID-19 Ag test collection tubes containing sample elution buffer (supplied with the assay). Testing was performed as described by the manufacturer. Analytical sensitivity was quantified in TCID50 per mL. All testing was performed in triplicate to give nine replicates in total for determination of the LoD. In addition, a total of twenty replicates were tested at 0.5x, 1x and 2x LoD to determine assay precision and 95% confidence limits for the LoD. The ability of the Abbott PanBio™ COVID-19 Ag test proprietary extraction buffer to inactivate SARS-CoV-2 was also assessed. To do this, 30 µL of undiluted stocks of SARS-CoV-2 (concentration 10^5.3 TCID50/mL) was spiked into 300 µL of proprietary extraction buffer, or infection media (Minimal Essential Media [MEM] supplemented with 10 μM HEPES, 2 mM glutamine and antibiotics). One aliquot of extraction buffer did not have virus added and was used as a control for cell cytotoxicity. The presence and quantity of infectious virus was assessed using a TCID50 assay after 30 s, five minutes and ten minutes incubation, with all timepoints at room temperature, as previously described [19].
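For readers less familiar with endpoint titration, the following is a minimal sketch of the Reed and Muench 50% endpoint calculation cited above [18]; the dilution series, well counts and inoculum volume are illustrative placeholders, not data from this study.

```python
# A minimal sketch of the Reed and Muench 50% endpoint calculation [18].
# All numbers in the example call are hypothetical, not study data.
from itertools import accumulate

def reed_muench_tcid50_per_ml(dilution_exponents, infected, total, inoculum_ml=0.1):
    """Return a TCID50/mL titre from an endpoint dilution series.

    dilution_exponents: log10 of each dilution, most concentrated first
                        (e.g. [-4, -5, -6, -7] for 10^-4 to 10^-7)
    infected, total:    wells showing CPE and wells inoculated per dilution
    inoculum_ml:        volume inoculated per well, in mL
    """
    uninfected = [t - i for i, t in zip(infected, total)]
    # A well infected at a given dose is assumed infected at any larger dose,
    # so infected counts accumulate from the most dilute end upwards...
    cum_inf = list(accumulate(infected[::-1]))[::-1]
    # ...and uninfected counts accumulate from the most concentrated end down.
    cum_uninf = list(accumulate(uninfected))
    pct = [100 * i / (i + u) for i, u in zip(cum_inf, cum_uninf)]

    # Find the adjacent dilutions whose cumulative percentages bracket 50%.
    for k in range(len(pct) - 1):
        if pct[k] >= 50 > pct[k + 1]:
            prop_dist = (pct[k] - 50) / (pct[k] - pct[k + 1])
            log_step = dilution_exponents[k] - dilution_exponents[k + 1]
            endpoint = dilution_exponents[k] - prop_dist * log_step
            # The endpoint dilution contains one TCID50 per inoculum volume.
            return 10 ** (-endpoint) / inoculum_ml
    raise ValueError("50% endpoint not bracketed by the dilution series")

# Example: 10-fold dilutions, 8 wells per dilution, 0.1 mL inoculum per well
# -> 10^6.5 (about 3.2 x 10^6) TCID50/mL.
print(reed_muench_tcid50_per_ml([-4, -5, -6, -7], [8, 6, 2, 0], [8, 8, 8, 8]))
```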

2.5. Statistical analysis

Sensitivity, specificity and positive and negative predictive values (PPV and NPV) were calculated by comparing the results of the Abbott PanBio™ COVID-19 Ag test with RT-PCR. Where appropriate, results were reported with 95% confidence intervals. Statistical analyses were performed using SPSS statistical software package version 27 (SPSS Inc., Chicago, IL, USA), or GraphPad Prism, version 9.0 (San Diego, CA, USA).
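As an illustration of the underlying calculation (not the study's SPSS/Prism workflow), the sketch below derives the four performance measures with exact Clopper-Pearson 95% confidence intervals from a two-by-two table of antigen results against the RT-PCR reference standard; the exact-interval method is an assumption, since the paper does not state which interval was used, and the counts mirror the study's headline figures but are pooled across the two study arms purely for illustration.

```python
# Illustrative sketch only: Clopper-Pearson intervals are an assumption, and
# pooling hospital-arm negatives with the known-positive samples is not how
# the study reported its results.
from scipy.stats import beta

def exact_ci(x, n, alpha=0.05):
    """Clopper-Pearson interval for a binomial proportion x/n."""
    lo = 0.0 if x == 0 else beta.ppf(alpha / 2, x, n - x + 1)
    hi = 1.0 if x == n else beta.ppf(1 - alpha / 2, x + 1, n - x)
    return lo, hi

def diagnostic_performance(tp, fp, fn, tn):
    measures = {
        "Sensitivity": (tp, tp + fn),
        "Specificity": (tn, tn + fp),
        "PPV": (tp, tp + fp),
        "NPV": (tn, tn + fn),
    }
    for name, (x, n) in measures.items():
        lo, hi = exact_ci(x, n)
        print(f"{name}: {x}/{n} = {x / n:.2%} (95% CI {lo:.2%}-{hi:.2%})")

# Counts mirror the reported figures: 1 antigen-positive/RT-PCR-negative result
# among 2,413 hospital participants, and 17/22 antigen-positive among
# RT-PCR-positive known cases.
diagnostic_performance(tp=17, fp=1, fn=5, tn=2412)
```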

2.6. Ethics approval

Ethics review and study approval were provided by the Monash Health Human Research and Ethics Committee (RES-20-0000-678A), and local governance approval was provided by the Melbourne Health and Austin Health Offices for Research.

2.7. Role of the funding source

This work was funded by the Victorian Government, Victoria, Australia. The funder was not involved in data collection, analysis or manuscript preparation.

3. Results

3.1. Clinical studies

In total, 2,418 individuals underwent both antigen testing and RT-PCR testing at the hospital sites (899 at the RMH; 528 at Monash Health; 991 at Austin Health), and 26 individuals provided additional samples as part of testing of patients with known COVID-19 (Table 1; Supplementary Table 2). Five patients were excluded (two at RMH due to missing data during the pre-pilot phase, and one each at RMH, Monash Health and Austin Health with no RT-PCR result available), leaving 2,413 for analysis. The median age of patients across the three clinical sites was 35 years (range 1 to 93 years), and 44.2% were male. A total of 124 (5.1%) were asymptomatic, and of those who were symptomatic, the median symptom duration was 2 days (range 0 to 36 days) (Table 1).

Table 1.

Clinical data from all hospital sites and additional samples collected from 26 patients with confirmed COVID-19 infection as notified to DHHS.

| | RMH | Monash | Austin | Across all hospitals | Additional samples* |
|---|---|---|---|---|---|
| Staff performing test (n) | 101 | 6 | 16 | 123 | N/A |
| Total participants (n) | 899 | 528 | 991 | 2,418 | 26 |
| Excluded (n) | 3 | 1 | 1 | 5 | 4 |
| Total included (n) | 896 | 527 | 990 | 2,413 | 22 |
| Median age (range) | 32 (16-97) | 35 (5-93) | 36 (1-93) | 35 (1-97) | 40 (18-73) |
| Male sex (%) | 397 (44.3%) | 268 (50.9%) | 401 (40.5%) | 1,066 (44.2%) | 8 (36.4%) |
| Asymptomatic (%) | 10 (1.1%) | 36 (6.8%) | 78 (7.9%) | 124 (5.1%) | 0 (0%) |
| Median days of symptoms (range) | 2 (0-22) | 2 (0-14) | 1 (0-36) | 2 (0-36) | 5 (1-33) |
| Positive RT-PCR result (%) | 0 (0%) | 0 (0%) | 0 (0%) | 0 (0%) | N/A |
| Positive Abbott PanBio™ COVID-19 Ag result (%) | 0 (0%) | 1 (0.2%) | 0 (0%) | 1 (0.04%) | 17 (77.3%) |

* Additional clinical samples collected from participants with known COVID-19, as reported to the Victorian DHHS, to assist with test validation. Note that 4/26 participants were negative by RT-PCR at the time of Abbott PanBio™ testing. Abbreviations: n, number; RMH, Royal Melbourne Hospital; RT-PCR, reverse-transcription polymerase chain reaction.

Of the 2,413 individuals tested in hospitals (or associated screening clinics), none tested positive using RT-PCR, and one individual tested positive using the Abbott PanBio™ COVID-19 Ag test, giving a clinical specificity of 99.96% (95% CI 99.73-100%). This participant was asymptomatic and tested negative via RT-PCR on a swab collected simultaneously. No interobserver variability was reported, and 81% of staff reported the test to be “very easy” or “easy” to use. Nearly 40% of staff reported that, even if the result were accurate, it would not change clinical management. This is likely because all symptomatic patients are advised to return to their residence and isolate until complete symptom resolution regardless of the result, although over one quarter of test users reported they would send the patient home with different advice (Supplementary Table 1).

Of the 26 individuals with notified COVID-19, the time from symptom onset ranged from 1 to 33 days. Seventeen individuals tested positive using the Abbott PanBio™ COVID-19 Ag test, and 22 tested positive using both the Aptima SARS-CoV-2 assay and the Panther Fusion SARS-CoV-2 assay. True positive and negative samples were defined based on the results of the Hologic Panther assays, giving a positive percent agreement (PPA) of 77.3% (95% CI 54.6-92.2%) for all participants regardless of symptom duration and 100% (95% CI 78.2-100%) for participants within 7 days of symptom onset (Supplementary Table 2).

3.2. Identification of logistical and implementation challenges

A framework was iteratively developed by investigators to enable implementation of antigen testing in a low prevalence setting (Supplementary Figure 2). Specifically, this framework was designed to mitigate the clinical and public health impact of false positive and false negative results and ensure the safety of testing staff. Further, a list of key logistical and implementation challenges related to the use of antigen tests was developed by investigators (Table 2).

Table 2.

Pre-analytical, analytical and post-analytical issues identified in this study that should be addressed when implementing SARS-CoV-2 antigen testing.

| Pre-analytical issues | Pre-analytical solutions | Analytical issues | Analytical solutions | Post-analytical issues | Post-analytical solutions |
|---|---|---|---|---|---|
| Purpose | Clearly define the purpose of testing, e.g., screening or diagnostic testing | Quality control | Perform negative and positive controls on new batches | Waste | Provide biohazardous waste disposal and ensure disposable antigen test devices are discarded appropriately |
| Target group | Clearly define the appropriate population, e.g., based on symptoms or epidemiological risk factors | Test validation | Clearly identify the reference standard and the limitations of the reference standard | Confirmation | Define pathways to confirm positive results with RT-PCR, especially in low-prevalence settings, when a positive result has a higher likelihood of being false positive |
| Resourcing | Identify the testing location and adequate space for registration, swabbing and donning/doffing, and ensure resourcing of PPE/equipment | Impact: Disease prevalence | Understand the impact of disease prevalence on the positive and negative predictive value of the test | Exclusion | Define pathways to perform RT-PCR if the result is negative and clinical/epidemiological suspicion remains high, especially in high-prevalence settings |
| Specimen collection | Ensure staff are available and trained in PPE and infection control, in addition to training staff in the use of novel diagnostic tests | Impact: Assay characteristics | Understand the sensitivity and specificity of the antigen assay in the target group of interest | Reporting | Communicate results in a timely and accurate manner, e.g., text message / telephone call if no cellular device |
| Leadership | Ensure adequate clinical supervision is available to identify issues, escalate and communicate with key stakeholders | Impact: Clinical characteristics | Understand the performance of the test based on the clinical characteristics of the patient, e.g., duration of symptoms | Quality assurance | Consider the role of an external quality assurance program and incorporate it into the quality management system |
| Data management | Ensure data are captured, accurate, confidential and stored securely (e.g., photograph the result of the lateral flow Ag assay) | Impact: Public health | Consider public health implications in the context of the above, e.g., a false negative result in an aged care worker | | |
| Specimen handling | Handle all specimens using appropriate PPE and consider testing as close to the patient as feasible to reduce transport needs | | | | |

3.3. Laboratory validation

Using serial dilutions of heat-inactivated SARS-CoV-2, the analytical sensitivity of the Abbott PanBio™ COVID-19 Ag test was 250 TCID50/mL (equivalent to 175 plaque-forming units [pfu]/mL, using the conventional approximation of ~0.7 pfu per TCID50). In contrast, the analytical sensitivity of the Aptima and Panther Fusion SARS-CoV-2 assays was 0.01 TCID50/mL (0.007 pfu/mL). The elution buffer was not shown to be completely virucidal following ten minutes of exposure at the concentrations examined, although there was a significant reduction in infectious SARS-CoV-2 titre (Supplementary Figure 3).

4. Discussion

Here, we describe the implementation of a rapid antigen test for COVID-19 in a low prevalence setting in Melbourne, Australia. Although the relatively small number of cases limited our analysis of clinical sensitivity, we identified a range of logistical and implementation challenges that will inform future roll-outs of antigen testing, particularly in low-prevalence settings.

As expected, the analytical sensitivity of the antigen test was lower than that of RT-PCR, with a detection limit of 175 pfu/mL, compared to 0.007 pfu/mL by RT-PCR. Our findings are similar to a recent study by Corman et al. that observed an analytical sensitivity of 88 pfu/mL when using SARS-CoV-2-infected cell culture supernatants with the Abbott PanBio™ COVID-19 Ag test [20]. Further, our findings are in keeping with the manufacturer's stated analytical sensitivity of 2.5 × 10^1.8 TCID50/mL of SARS-CoV-2 [21]. Importantly, however, there is limited standardisation of protocols between studies; for example, cell culture methods may vary and the reference strain of virus may also differ between studies [22]. Accordingly, there is a clear need for agreed, standardised laboratory protocols to enable accurate comparison of sensitivity across different antigen test kits. Both the clinical sensitivity of the antigen test within 7 days of symptom onset and its specificity were high (100% and 99.96%, respectively), in keeping with other studies [23], [24], [25]. However, even with a highly specific test, in a low-prevalence setting, the majority of positive results are likely to represent false positives. As such, confirmatory testing of positive antigen test results by RT-PCR is critical. Similar to another recent study, we also found that the proprietary test buffer was not virucidal, even after ten minutes of exposure to the buffer [24]. Accordingly, appropriate biosafety measures should be in place when undertaking testing outside a laboratory setting. In our study, individuals collecting and performing the test wore personal protective equipment consisting of gown, gloves, an N95 respirator mask, and eye protection. It is likely that different settings implementing antigen testing will require specific risk assessments to minimise the risk of exposure to infectious virus.
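To illustrate why confirmatory RT-PCR of antigen-positive results is essential here, the short sketch below uses the overall 77.3% sensitivity and 99.96% specificity reported in this study, applied to a purely illustrative range of prevalences, to show how the positive predictive value collapses as prevalence falls.

```python
# Illustrative only: sensitivity and specificity are taken from this study's
# headline figures; the prevalence grid is hypothetical.
def ppv(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

for prevalence in (0.0001, 0.001, 0.01, 0.1):
    print(f"prevalence {prevalence:>7.2%}: PPV = {ppv(0.773, 0.9996, prevalence):.1%}")
# At 0.01% prevalence most positives are false; by 10% prevalence nearly all are true.
```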

In addition to assessing the performance characteristics of the test, we also worked through a range of practical challenges with test implementation. For example, to enable test tracking and ‘traceability’, a custom REDCap database was used to input patient data and upload photographs of each test as a record of each result. However, integrating this testing into public health responses at scale will require a more systematic and streamlined approach to data integration (e.g., the use of QR codes for test registration and tracking; cloud-based interfacing with laboratory information systems). Currently in Australia, there are regulatory restrictions on the use of POC COVID-19 testing such that antigen tests can only be supplied to accredited laboratories, medical practitioners, healthcare professionals working in residential and aged care facilities, or health departments [8]. Best practice guidelines for POC testing in Australia recommend training and competency assessments for staff performing this testing, in addition to an overarching quality framework to ensure appropriate quality control and quality assurance [26]. The supply of COVID-19 kits for self-testing at home is presently prohibited in Australia [8]. However, in countries that are currently experiencing a much higher disease prevalence than Australia, widespread deployment of lateral flow antigen kits is underway, including self-testing [27]. Importantly, outbreaks of COVID-19 can emerge and spread quickly, as evidenced by the rapid emergence of a ‘second wave’ in Melbourne between July and September [9]. As such, appropriate planning to overcome the logistical, governance, regulatory and implementation challenges we identified is valuable preparation for scaled-up antigen testing, especially in low-prevalence settings.

There were some limitations with our study. Most obviously, the study was conducted in a setting where disease prevalence was extremely low, although this experience is shared with other members of the Western Pacific region. Furthermore, in order to evaluate sensitivity in a low prevalence setting, all positive Abbott PanBio™ COVID-19 Ag tests were obtained from patients with known COVID-19, limiting a comprehensive appraisal of utility. Our study was conducted specifically in a hospital setting, predominantly amongst symptomatic individuals. Given the propensity for SARS-CoV-2 to spread in healthcare facilities [28,29], RT-PCR remains the gold standard high sensitivity clinical diagnostic test in this context. However, antigen tests may have utility as a rapid screening tool in the hospital setting that can help triage symptomatic patients and improve patient flow while awaiting confirmatory RT-PCR testing. Further work is required to establish the optimal framework for antigen testing as a surveillance tool in the community setting, particularly in areas with a low disease prevalence and in asymptomatic individuals.

In summary, we describe the validation and implementation of a widely available antigen test in a low prevalence setting. We identified several practical challenges to scaling up this testing, mostly related to pre- and post-analytical stages of testing. Our findings will help inform the responsible use of antigen tests in other low-prevalence countries.

Declaration of Competing Interest

SL reports grants from National Institutes of Health (NIH), grants from American Foundation for AIDS Research (amfAR), grants from Gilead Sciences, grants from Merck, grants from ViiV, grants from Leidos, grants from Wellcome Trust, grants from Australian Centre for HIV and Hepatitis Virology Research (ACH2), grants from Melbourne HIV Cure Consortium, grants from Victorian Department of Health and Human Services (DHHS), grants from Medical Research Future Fund (MRFF), outside the submitted work. KG reports grants from Royal Australasian Society of Physicians (RACP), grants from Murdoch Children's Research Institute (MCRI), other from Isabel & John Gilbertson Charitable Trust, grants from Department of Health and Human Services (DHHS) Victoria, outside the submitted work.

Acknowledgments

Author contributions

Stephen Muhi: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Resources, Software, Validation, Visualization, Writing - original draft, Writing - review & editing. Nick Tayler: Data curation, Investigation, Methodology, Resources, Software, Visualization, Writing - original draft, Writing - review & editing. Tuyet Hoang: Conceptualization, Data curation, Investigation, Methodology, Project administration, Resources, Software, Writing - original draft, Writing - review & editing. Susan A. Ballard: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Resources, Software, Validation, Writing - original draft, Writing - review & editing. Maryza Graham: Data curation, Investigation, Resources, Supervision, Writing - review & editing. Amanda Rojek: Data curation, Investigation, Resources, Software, Writing - review & editing. Jason C. Kwong: Data curation, Investigation, Resources, Supervision, Writing - review & editing. Jason A. Trubiano: Investigation, Resources, Supervision, Writing - review & editing. Olivia Smibert: Investigation, Resources, Supervision, Writing - review & editing. George Drewett: Data curation, Investigation, Writing - review & editing. Fiona James: Data curation, Investigation, Resources, Supervision, Writing - review & editing. Emma Gardiner: Data curation, Investigation, Resources, Writing - review & editing. Socheata Chea: Formal analysis, Validation, Writing - review & editing. Nicole Isles: Data curation, Formal analysis, Validation, Resources, Writing - review & editing. Michelle Sait: Formal analysis, Methodology, Validation, Writing - review & editing. Shivani Pasricha: Formal analysis, Methodology, Validation, Writing - review & editing. George Taiaroa: Formal analysis, Methodology, Validation, Writing - review & editing. Julie McAuley: Formal analysis, Methodology, Validation, Writing - review & editing. Eloise Williams: Investigation, Writing - review & editing. Katherine B. Gibney: Data curation, Investigation, Writing - review & editing. Timothy P. Stinear: Methodology, Validation, Supervision, Writing - review & editing. Katherine Bond: Investigation, Writing - review & editing. Sharon R. Lewin: Supervision, Writing - review & editing. Mark Putland: Supervision, Writing - review & editing. Benjamin P. Howden: Conceptualization, Supervision, Writing - original draft, Funding acquisition, Project administration, Writing - review & editing. Deborah A. Williamson: Conceptualization, Methodology, Project administration, Supervision, Writing - original draft, Writing - review & editing.

Acknowledgements

SM and KB are supported by Postgraduate Scholarships from the National Health and Medical Research Council (NHMRC) of Australia (GNT1191368 and GNT1191321). KG and JCK are supported by Early Career Fellowship grants from the National Health and Medical Research Council (NHMRC) of Australia (GNT1120816, GNT1142613). BPH is supported by a Practitioner Fellowship from the National Health and Medical Research Council (NHMRC) of Australia (APP1105905). DAW is supported by an Investigator Grant from the National Health and Medical Research Council (NHMRC) of Australia (APP1174555). We thank the Victorian Infectious Diseases Reference Laboratory for providing reference material. We also thank study staff at each hospital for collection of swabs from patients.

Data sharing statement

All authors confirm that they had full access to all the data in the study and accept responsibility for the decision to submit for publication. De-identified data are available from the time of publication and for five years following article publication. Requests should be directed to the corresponding author.

Footnotes

Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.lanwpc.2021.100115.

Appendix. Supplementary materials

mmc1.pdf (758.4KB, pdf)
mmc2.docx (193.2KB, docx)

References
