ABSTRACT
Home testing for infectious disease has come to the forefront during the COVID-19 pandemic. There is now considerable commercial interest in developing complete home tests for a variety of viral and bacterial pathogens. However, the regulatory science around home infectious disease test approval and procedures that test manufacturers and laboratory professionals will need to follow have not yet been formalized by the U.S. Food and Drug Administration (FDA), with the exception of Emergency Use Authorization (EUA) guidance for COVID-19 tests. We describe the state of home-based testing for influenza with a focus on sample-to-result home tests, discuss the various regulatory pathways by which these products can reach populations, and provide recommendations for study designs, patient samples, and other important features necessary to gain market access. These recommendations have potential application for home use tests being developed for other viral respiratory infections, such as COVID-19, as guidance moves from EUA designation into 510(k) requirements.
KEYWORDS: home test, influenza, regulation, study design, FDA
INTRODUCTION
Point-of-care (POC) tests have been used in a wide variety of clinical settings for some time, with multiple assays approved for use in Clinical Laboratory Improvement Amendments (CLIA)-waived settings such as ambulatory clinics and pharmacies (1). A growing number of tests are also used by individuals outside of health care settings, with sample-to-result testing carried out at home (i.e., not home collection with mail-in testing), which we term “at-home” or “home use” tests (2, 3). While several in vitro diagnostics (IVD), such as capillary glucose testing and urine pregnancy testing, have been approved for many years for home use, only one test for an infectious disease pathogen, namely, HIV, has been approved by the U.S. Food and Drug Administration (FDA) (4). Recently, the FDA has issued guidance for Emergency Use Authorization (EUA) of over-the-counter (OTC) tests (5, 6), and several tests (e.g., the Ellume test for COVID-19) have been granted EUA for home use, but guidance for 510(k) authorization of OTC tests has not been issued. The processes described here are generally in line with the new EUA guidance for COVID-19 OTC tests (6), but we discuss rational and competing factors for determining appropriate study designs.
Annually, 5% to 20% of the U.S. population get influenza (7, 8), leading to approximately 200,000 hospitalizations and 8,200 to 20,000 deaths (8). Testing remains the province of clinics, hospitals, physicians’ offices, and reference laboratories, creating access problems for many, particularly those who must take time off work, who are often among the poorest in our communities. As the COVID-19 pandemic has revealed, requiring potentially infectious individuals to travel to fixed locations, such as clinics, limits access to testing and increases the risk of spreading infections. Home testing potentially reduces the risk of transmission and may facilitate access to testing in vulnerable populations. Minimizing costs and increasing availability could also mitigate delays in testing and allow timely treatment or quarantine, but these assumptions need to be fully evaluated; as we have learned during the pandemic, problems of cost, insurance coverage, and test complexity still must be resolved to realize this potential.
Four recent advances have boosted commercial interest in developing home tests for numerous viral and bacterial pathogens. First, significant technological advances have enabled tests with the ease of use and performance needed for at-home tests, including molecular assays (9). Second, there is the potential for smartphones to act as adjuncts to test procedures, by use of built-in image recognition tools, signal processing, and guidance for users on test procedures (10). Third, there is strong evidence that individuals are able to obtain samples themselves from their own throat, nose, vagina, stool, and urine, as well as finger-stick blood samples, with pathogen (or other analyte) detection accuracy similar to that achieved with samples collected by health care workers (HCWs) (11). Fourth, there has been a surge in public awareness and interest in diagnostic testing, often combined with telemedicine care delivery, largely resulting from the COVID-19 pandemic.
The FDA, the in vitro diagnostics (IVD) industry, laboratory scientists, and the health care community all have substantial interest in advancing home testing. However, the regulatory science around home test approval continues to evolve, and the clinical microbiological community should be aware of and drive optimal science in test development and evaluation.
In this paper, we outline the regulatory processes that affect how and whether such tests reach the market, describe critical new issues that arise in home test study design, and provide detailed descriptions of various requirements and options for study designs needed to support regulatory clearance or approval for home testing. While we focus on influenza, most of this analysis applies to other respiratory diseases, and portions would be applicable to home tests for other infectious diseases. By having consistent scientific approaches that meet regulatory requirements, the IVD industry, including both laboratory scientists and regulatory professionals, could rapidly develop and market these important diagnostic devices via more efficient FDA pathways.
Regulatory pathways to market for home use tests. Home tests are defined as “medical devices labeled for use in any environment outside a professional health care facility,” which can include multiple types of settings, and are intended to be used by “untrained persons without the help of a health care professional” (12). The FDA identifies two main types of home use tests, namely, “test kits,” where individuals take their own samples, perform a test, and read the result, and “collection kits,” where individuals take their own sample and either mail it to a laboratory where an assay is run or drop it off at a laboratory or health care site (saving shipping time and costs), with results returned to the individual (12). This report is focused on complete tests performed at home (“home test kits”).
The route to market access via FDA 510(k). There are several pathways for tests to obtain market access, depending on the risk of the product and its fundamental technology. Every medical device falls into one of three classes that are generally associated with risk (class I being low risk to class III being high risk) (13, 14). The classification determines the regulatory controls that the FDA deems necessary to ensure an appropriate level of safety and effectiveness of medical devices. For context, the HIV at-home test systems (example, product code QLW) are class III, OTC pregnancy tests (example, product code LCX) are class II, and uric acid test systems for prescription home use (example, product code PTC) are class I (15).
General controls, such as registering and listing the device with the FDA, are required of all medical devices in the United States. Special controls are requirements deemed necessary by the FDA to ensure the safety and effectiveness of the device on the market and may include, for example, performance standards or special labeling to ensure proper use of the device (13, 14).
While an influenza test is considered a class II IVD and is regulated under 21 Code of Federal Regulations (CFR) 866.3328, its home use variant has yet to be defined. Tests may be provided as a prescription-based home (Rx home use) test or an over-the-counter (OTC) product purchased by consumers without prescription or guidance from a clinician.
It is likely that the first home use influenza test would go through a de novo request and receive a class II designation. The de novo request is a pathway designated for novel devices which are intended to be marketed in the United States. To be considered for a de novo request, a novel device must be moderate in risk and not have a directly comparable device on the market (predicate device) to show the same level of safety and effectiveness. Once the first home use influenza test receives market authorization via the de novo request, subsequent home use influenza tests could gain market clearance through a 510(k) application, which uses marketed predicate devices to show substantial equivalency (16, 17).
UNIQUE REGULATORY CHALLENGES RAISED BY HOME USE RESPIRATORY INFECTION TESTS
While product requirements and studies required for FDA clearance are well established for CLIA-waived tests intended for performance by HCWs, home testing raises many new issues (18). CLIA-waived tests exclude complete at-home testing, the focus of this article. Table 1 outlines key regulatory areas derived from existing guidance for CLIA-waived tests (19, 20), along with our assessment of critical differences that should be addressed for regulation of OTC tests. We describe important considerations related to at-home tests for influenza and suggest how manufacturers might design studies that demonstrate adequate product safety and effectiveness to the FDA, based either on existing evidence or on our opinion. Our recommendations are generally consistent with recent FDA guidance for COVID-19 OTC (“non-laboratory”) tests (6), but we provide rationale and discussion of trade-offs in study design options. The following sections follow the organization of Table 1.
TABLE 1.
Component | Tests for clinical sites | Tests for home use |
---|---|---|
Intended use operator and intended use site | HCW at clinical sites. Nonlaboratorians but may have experience with other tests and typically have training as part of site quality control, samples collected by HCW, potentially reduced operator diversity compared to general population, recruitment targeted to people naturally seeking care. | Diverse, untrained, first-time users at nonclinical sites, including home. Operators may not have any experience with diagnostic test procedures or underlying medical knowledge, training opportunity and effectiveness may be less than that of HCW, likely higher user diversity than that of HCW population, potential recruitment bias if health-seeking behavior is not natural. |
Study design | 3 sites, 3 representative operators per site testing in their typical workflow; diversity of operator skill is narrowed by medical training. | Intended sites are decentralized (e.g., home), but study design may allow for contrived study sites. Test operators should be inexperienced individuals with diversity representative of the intended population (e.g., age, education). |
Intended use population (criteria to test) | HCW determines who to test, typically patients with symptoms during flu season. Patients present at clinic later in disease and may represent more severe disease (spectrum bias). | For Rx home use test, HCW retains control over who to test. For an OTC test, testing is self-determined, potentially addressed by labeling but not enforceable. |
Study design | Recruitment at clinical site matches intended use; eligibility criteria ensured by in-person HCW. | Recruitment at clinical site does not match intended use but allows greater certainty of eligibility criteria. Community recruitment with monetary incentive may bias study sample, especially inclusion criteria based on self-reporting. Enrollment may be based on motivation of getting a test result. |
Sample type and collection | Typically use well-established best-known sampling methods performed by HCW; new methods require validation, allowing future tests to adopt new methods. | Non-HCW collection sampling methods may be limited. Sampling may need to consider self-collection, parent-child collection, adult-adult collection. More restricted options for self-sampling may drive less-proven sampling methods; new methods will require (“bridging”) validation studies. |
Reference standard/comparator | Best-known sample (e.g., NP swab) collected by HCW and tested by high-quality laboratory method. | Best-known sample collected by HCW may not be possible in some study designs. Self-collected reference sample may require validation. |
Usability/human factor study | None | Essential. FDA recommends human factor study before clinical study. Healthy population in a contrived setting is acceptable. Near-the-cutoff studies (see below) can be performed on the same population. |
Analytical studies | LOD (analytical sensitivity), inclusivity (analytical reactivity), cross-reactivity (analytical specificity), microbial interference, endogenous/exogenous interfering substances. | Same as tests for clinical sites. |
Flex studies | Established framework and specific guidance available (1, 42). Performed in laboratory by study staff. | Same as CLIA waiver but should address new and elevated risks for diverse, untrained, first-time users and variations in home use environment. These include ambient temp and humidity, lighting conditions, reading time, testing delays, sample and diluent vol, sample elution technique, disturbances, and orientation during analysis, if applicable, to the specific test under study (6). |
Stability/shelf life studies | Stored in clinical setting. | Stored at home with no oversight or enforcement. |
Reproducibility panel (near the cutoff, performance near LOD) | Contrived samples near the LOD (spiked human matrix); performed by operators similar to clinical samples. | Contrived samples near the LOD (spiked human matrix); usability study group is a candidate for this testing. |
Implementation: product labeling | HCW interprets labeling and ensures clinical oversight. | Rx home use: HCW interprets labeling and enforces clinical oversight. OTC: Users must be capable of understanding and following labeling, but this cannot be enforced. |
Implementation: product disposal | Sites have disposal systems in place (e.g., sharps, biohazardous materials). | Need to dispose in usual household waste. |
Implementation: connection to care and reporting | Inherently connected to care. Sites have reporting systems in place for notifiable diseases. | Rx home use: Connected to care and usual reporting systems. OTC: No connection to care, or care must be initiated by user, no reporting requirements. |
Implementation: product postmarket surveillance | Medical oversight and reporting increase chance of observing emerging problems with performance. | Potential medical or public health oversight if reporting of test results is enforced but lack of context to recognize problems. |
HCW, health care worker; OTC, over-the-counter test; Rx home use, prescription-based home use test; NP, nasopharyngeal; LOD, limit of detection.
Intended use operator.
A challenge in designing studies to validate home tests is how to ensure that the test is evaluated by individuals who would actually use the test. The FDA has published two guidance documents that assist IVD developers (sponsors) with some aspects of developing home influenza tests (2, 21). In the guidance on designing home use devices, the FDA stresses how home users differ from professional users and may vary in their abilities to perform a test; thus, a test should be designed appropriately “to prevent reasonably foreseeable misuse” (2).
(i) Requirement for novice users.
Tests designed for use at home cannot rely on prior experience or training. The FDA uses the term “lay users” (laypeople) for home use devices to exclude anyone with testing experience, including laboratory professionals and HCWs who use CLIA-waived tests. In certain instances, other exclusions for test evaluation, such as persons with diabetes who routinely test blood glucose, may apply.
(ii) Feasibility of training.
Offering product training to home users may be impractical, particularly for tests that are used infrequently or only once. Thus, studies of devices intended for home users should be conducted with first-time users who have received no training on test operation other than the training that will be provided with the commercial product (e.g., product inserts, phone app, demonstration video) (22). Such training needs to accommodate different levels of technology familiarity and language proficiency, particularly among underserved populations who may lack Internet access.
(iv) Multiple sample collection scenarios.
Conventional tests involve a HCW taking a sample from a patient, whereas home testing introduces new sampling relationships that may lead to variations in performance. Home users might include not only those who are using the test on themselves but also those using the test on others, such as the elderly or children. Separate studies may therefore be needed to provide adequate evidence for the appropriate labeling of home use tests to ensure that, for example, parents can confidently administer the test to their child at home.
(v) User diversity.
Factors that might affect device usability should be considered in designing studies and choosing users, especially cognitive abilities, physical abilities, literacy skills, and language(s) of the test instructions (21). Additionally, specific device characteristics may affect required user abilities: tests that require visual reading of results are influenced by a user’s eyesight, including whether they are colorblind; reliance on downloading a smartphone app introduces barriers for users without the compatible phone or technical skills; and physical dexterity is required to place a small amount of sample in a tube or to handle test strips. Because addressing these limitations through labeling is not feasible, user dependencies like these should be addressed to the extent possible through premarket study design. This means that studies must be done in broad populations representing an expected distribution of test users in terms of cognitive abilities, physical abilities, literacy, and language skills.
Intended use site.
As at-home tests are intended to be used unaided and unsupervised, the question is whether typical clinical research settings can mimic this type of testing environment for evaluations of home tests or whether testing should be carried out in home settings. The FDA defines a home setting as “… any environment outside a professional health care facility. This includes but is not limited to outdoors, offices, schools, vehicles, emergency shelters, and independent living retirement homes” (2). A study design that included all of these settings would be unfeasible; therefore, studies should be designed to assess variations in OTC test settings that might affect the practical performance of the test. For example, many point-of-care (POC) tests require the user to read the presence or absence of test lines appearing on a lateral flow test strip, and poor lighting can reduce the accuracy of a reading. Moreover, storage conditions in home settings may deviate from typical clinical storage temperature (15°C to 30°C) and/or humidity, potentially requiring wider stability ranges or additional detection of storage deviations. Similarly, operating temperatures and settings in which the test would be used (e.g., moving vehicles) may extend beyond those of professional settings. Furthermore, the number and complexity of steps needed to conduct the test and produce results are key considerations; these may include collecting the sample from the individual’s respiratory tract, preprocessing before insertion into the testing device, and any steps required to complete the assay. Home tests that require app downloads and/or connectivity between the assay and the user’s smartphone in order to operate (e.g., to read or interpret results) introduce further steps that need to be evaluated. These steps and settings should all coincide with those described in the intended use information (2).
If evaluating test performance in home settings in the community is not feasible due to risks (e.g., risk of infection to research staff), costs, or other barriers such as geographically disparate study subjects, then the study designs could potentially mimic home settings for performance and usability testing.
Intended use population (criteria to test).
For prescription-based tests, a health care provider determines whether the test is used based on their clinical assessment. For an OTC test, however, a consumer determines whether it is used based on their interpretation of product labeling. When recruiting a population for a test performance study, it is crucial to use a consistent, referenced definition of the disease or condition, as this will assist in determining the labeling of the device. This definition should reflect the labeling in the intended use and on the product container. Studies seeking to recruit individuals who self-identify as having an influenza-like illness (ILI) are complicated by differing ILI definitions between organizations. To date, the FDA has not specified an ILI definition that should be used consistently for gaining regulatory approval. For OTC use, recruitment should ideally be based on broad criteria (e.g., people who “think they have the flu”); participants should be allowed to interpret product labeling to determine whether to test themselves, as they would with a marketed product.
Labeling affects both performance and study design. If labeling (or interpretation) is too broad, a smaller proportion of people testing will have the disease (necessitating a larger sample size), and false-positive test results become a larger proportion of all positives (reducing the positive predictive value). While it may seem logical to recruit people who present at a physician’s office for respiratory illness/ILI, this could lead to spectrum bias (having sicker patients with potentially higher viral load). Spectrum bias may falsely increase sensitivity compared to that of the intended test population (e.g., less severe illness) (23).
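To make this prevalence/positive-predictive-value trade-off concrete, the sketch below is our own illustration (not drawn from FDA guidance); the sensitivity, specificity, and prevalence values are assumed purely for demonstration and are not performance claims about any real influenza test.

```python
# Illustrative only: assumed 85% sensitivity and 95% specificity, with
# assumed prevalence values spanning narrow (clinic-like) to broad (OTC-like)
# recruitment criteria.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value: share of positive results that are true positives."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

for prev in (0.30, 0.10, 0.02):  # narrower criteria -> broader criteria
    print(f"prevalence {prev:.0%}: PPV = {ppv(0.85, 0.95, prev):.2f}")
# prevalence 30%: PPV = 0.88
# prevalence 10%: PPV = 0.65
# prevalence 2%: PPV = 0.26
```

Under these assumptions, broadening recruitment from 30% to 2% prevalence drops the PPV from roughly 0.88 to 0.26 even though the test itself is unchanged, which is why labeling breadth directly shapes both sample size and interpretability of positives.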
Another related consideration is the timing of testing: influenza virus shedding usually decreases over 2 to 3 days after symptoms appear, so antigen tests may detect virus only in the first few days of infection, whereas molecular tests may detect virus for longer (24). Other factors also influence viral shedding, e.g., age (children shed more virus than adults for influenza), antiviral use, and illness severity. As a result, the WHO and others recommend that subjects of test accuracy studies participate within 4 days of symptom onset and that the duration of illness be documented (25, 26).
Sample type and collection considerations.
Sampling effectiveness depends on the characteristics of the sampling site, the specimen collection device, the method of transferring material to the test device, and the lay user's ability to perform these tasks (e.g., performing self-collection). A home use influenza test must demonstrate that its sample collection method can be understood and executed correctly by lay users; samples collected by lay users must be comparable to those collected by clinicians and provide similar, if not identical, levels of accuracy when tested.
(i) Sampling site characteristics. Sampling in clinical settings for influenza has typically involved nasopharyngeal swabs, which can only be collected by a trained HCW. More recently, midturbinate and lower nasal (anterior nares) swabs have become more accepted in clinical settings for influenza virus (and severe acute respiratory syndrome coronavirus 2 [SARS-CoV-2]) and are the only reasonable options for self-swabbing; saliva is also emerging as a potential sample type, given its ease of collection (27). However, sampling from the throat is discouraged, as this often induces gagging or coughing and thus risks spreading potentially infected respiratory secretions.
(ii) Specimen collection device characteristics. Brand, model, material (e.g., polyester, nylon), device size, and structure (spun, flocked, foam) used and their impact on sample collection and analysis must be specified (28, 29), and data must be presented to the FDA to validate the swab performance with the specific test (28).
(iii) Transference method. There are various potential methods for transferring viral material from the swab to the testing device. These can involve different ways to elute the material from the swab to the sample port of the test while minimizing specimen dilution. Methods should provide high recovery and be robust for variations in user technique; both factors can be affected by the selection of swab material (29).
(iv) Effectiveness of self-sampling. Sample collection must consider various testing scenarios, such as individuals sampling themselves or others (e.g., elderly, children) and the details of exactly how the sampling is performed (e.g., one or both nostrils, depth of swab, number of rotations of the swab, need for observation by HCW in person or remotely). There is now convincing evidence that self-swabbing is comparable to professional collection (11, 30–32). Midturbinate nasal flocked swabs specifically designed for infants and children can be used by parents without reducing influenza virus detection rates while significantly increasing the patient's involvement and acceptance of the procedure, simplifying collection (33, 34).
Reference standard/comparator method.
The reference standard to evaluate comparative accuracy of a home influenza test needs to be “the best available method for establishing a subject’s true status with respect to a target condition” (33), which for influenza is reverse transcription (RT)-PCR using a professionally collected specimen. Although obtaining the sample needed for these assays is particularly challenging for studies of home use influenza tests, there are options.
For comparative accuracy studies conducted unsupervised at home (without research staff present), the study participant could self-collect a second sample and return it to a research laboratory by shipment or mail (22, 35–37). Detecting a marker of human DNA in the reference sample, to confirm that the swab was in contact with the respiratory tract epithelium, helps to mitigate the risk of inadequate sampling. Repeat swabbing of the nose has not been shown to reduce the viral material available for testing, as illustrated by a 2017 decision on the Alere influenza test (38). Expedited return shipping to the laboratory, with clear instructions for safe packaging, mitigates the concern of in-transit sample deterioration, a significant issue given that influenza virus is unstable in transit. Another alternative is a collection device that releases preservatives once the sample is collected and the container is closed.
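As a minimal sketch of how such an adequacy marker might enter result interpretation, the following is our own illustration; the choice of marker (e.g., human RNase P) and the decision rules are assumptions for demonstration, not requirements from the article or from FDA guidance.

```python
# Minimal sketch (assumed logic): interpreting a reference RT-PCR result
# together with a human DNA adequacy marker on a self-collected swab.

def interpret(flu_detected: bool, human_marker_detected: bool) -> str:
    """Classify a self-collected reference sample using an adequacy control."""
    if not human_marker_detected:
        # Swab may not have contacted respiratory epithelium; do not report a negative.
        return "invalid - inadequate sample, recollect"
    return "influenza detected" if flu_detected else "influenza not detected"

print(interpret(flu_detected=False, human_marker_detected=False))  # invalid - inadequate sample, recollect
print(interpret(flu_detected=True, human_marker_detected=True))    # influenza detected
```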
The main barrier to using a self-collected sample as the source for reference testing is that the FDA requires the comparator to be an FDA-cleared or -approved test. Although evidence suggests that research participants can obtain samples that produce detection rates similar to those obtained by research staff (11), and self-collected samples are being used for COVID-19 laboratory tests under FDA EUA (17), these may not translate into a high-quality reference test. Test reliability can be improved by video instruction and observation during self-collection. If a study participant knows the result of the at-home test under investigation, there is also the potential for bias in self-sampling for the reference sample: an individual who does not see their result may be more curious about the reference test result and more attentive to the procedure, whereas an individual who sees their test result may be less motivated or rigorous in obtaining a second sample.
Alternatives to home-based self-sampling for reference testing include having the participant visit a health care site for a professionally collected sample within a short period. However, given the rapid decline in viral shedding for influenza virus, having the index test and reference test separated by time risks discordant results; although the acceptable time interval is not known, minimizing this interval lowers the risk of discordant results. Another option that could provide greater certainty of sampling is to have the researcher visit the study participant at home to collect a sample. However, this method increases the researcher's health risks, is inefficient for dispersed participants, and may cause participants to perform the index test in different ways than in situations where they are not observed.
Finally, conducting reference sampling in a simulated home setting could simplify sample collection but introduces health risks to those in the clinical area and adds delays in recruiting study participants after illness onset, potentially impacting test accuracy.
Usability/human factor study.
As an extension of the design and labeling of a home use influenza test, a clear procedure or guide for lay users on test execution is crucial to mitigate use errors. The test procedure should highlight proper test handling and usage by the intended user in the intended environment (2). While this article does not provide prescriptive solutions, we suggest several areas, based on existing evidence, where test manufacturers can design the product and studies to optimize the chance of success in product use while assuring the FDA of its safety and effectiveness. These include language proficiency: manufacturers can develop instructions and labeling using plain language at the 8th-grade reading level (39). Furthermore, to improve readability, instructions should use large type fonts and descriptive images.
A practical test procedure should cover the total testing process, which has three major phases: preanalytical, analytical, and postanalytical. Proper guidance for each of these phases reduces the risk of failure due to misuse. However, the official definitions of the preanalytical and postanalytical phases provided in ISO 15189:2012, “Medical laboratories—requirements for quality and competence,” are not suited to home use tests. In this context, definitions and regulatory considerations include the following:
1. Preanalytical phase. This phase includes, in chronological order, the intended user’s decision to obtain the home use test, transport and storage of the test, and reading of the training materials.
2. Analytical phase. This phase includes collecting and handling the sample and running the sample with the home use test.
3. Postanalytical phase. This phase includes processes such as obtaining and interpreting the test results and disposing of materials, if applicable.
We recommend that studies work to isolate aspects of the total testing process and evaluate the performance of each phase in the home or home-simulated setting. This facilitates identification of potentially vulnerable components in the testing process that may contribute to lower accuracy than in health care settings. These studies fall under the rubric of human factor testing, and the FDA has relevant guidance (21). For example, the areas of health literacy, dexterity with the test apparatus, and other features of the test process need examination in usability testing. Through this process, researchers can better isolate errors, redesign tests, and retest, if necessary. After optimal performance has been achieved in this environment, a home use study in uncontrolled environments can then provide the required data on safety and accuracy.
Analytical studies.
The FDA has prescribed a clear set of analytical studies for laboratory-based and CLIA-waived tests that include limit of detection (analytical sensitivity), inclusivity (analytical reactivity), cross-reactivity (analytical specificity), microbial interference, and endogenous/exogenous interfering substances (5, 6, 20, 40). These studies are carried out by sponsor personnel in a laboratory setting and are decoupled from the influence of operator and setting. Thus, analytical studies should closely follow existing guidance.
Flex studies.
All reasonable potential sources of error related to the test need to be identified and evaluated for their risk of leading to hazardous situations. To best determine sources of errors, manufacturers of home tests should consider using a systematic approach to risk detection. Tools such as hazard analysis and design failure mode and effect analysis should be utilized during the design, development, and testing of the home use flu test (41).
Flex studies are needed to address all sources of error by testing the limits and robustness of the home use test (4). This can be supplemented by the implementation of control materials to help mitigate errors or prevent the test from being used outside of its operational limits. Flex studies intentionally stress the testing system to determine whether the test produces robust results in these situations and to determine possible sources of errors (4). This contrasts with usability studies, where features or elements of the testing system are maintained as designed by the manufacturer. The FDA has provided guidance on flex studies for CLIA-waived tests (19), publishes decision letters describing cleared tests (42), and has provided guidance for flex studies for COVID-19 tests (5, 6).
Stability/shelf life studies.
The FDA requires real-time stability testing under worst-case storage conditions to establish the claimed shelf life, including temperature and humidity stability, in-use/open-kit stability, inverted storage testing, and shipping tests (41). Compared to products intended for use in clinics, home use tests may be subject to a wider range of storage conditions and reduced adherence to labeling. For instance, a home user may store a diagnostic test in a bathroom cabinet, where it could be subject to far higher humidity than expected in clinical storage. Transportation may also be less controlled, e.g., if a home user left a test in a hot or cold vehicle. Tests may need to be evaluated against a wider range of storage controls and require extra engineering controls to prevent or report damage.
Performance with analyte concentrations near the cutoff.
For a CLIA-waived test, the FDA requires testing of low-concentration reference samples (near the cutoff and performance around the limit of detection [LOD]) by intended operators as part of their normal workflow and recommends testing by more than one operator per site and testing multiple samples by each operator (43). For a home use test, the concept of operators and sites is quite different. Healthy subjects could be recruited to run reference samples in a contrived home setting, perhaps alongside the usability study that the FDA recommends before running the clinical study. Combining this testing with the usability study allows for recruitment of a more representative user population (e.g., age, education level, language) than may be feasible in the clinical study. As with clinical testing, users should be naive to the test and receive no special instruction beyond the material intended for commercialization. For visually read tests, additional testing using contrived test results may be useful as part of usability studies.
Considerations for test implementation.
Beyond studies for evaluating performance, there are several areas of test implementation that must address regulatory requirements (Table 1).
(i) Product labeling.
All IVD tests must comply with 21 CFR 809.10, “Labeling for in vitro diagnostic products.” Home use influenza tests are intended for laypeople, adding complexity that must be addressed not only in the test instructions but also in other places such as the outer container, i.e., how to present all the necessary information in terms lay users will fully understand. As such, comprehension and self-selection studies are recommended (44).
(ii) Product disposal.
Home users will not have access to biohazard disposal, but the concept of biohazard risk in the home setting differs from that in a medical setting. A used swab, for example, is no more of a threat than a used tissue, so no special handling should be required. However, disposal of chemical waste and biological reagents (e.g., DNA oligonucleotides, DNA intercalating dyes, DNA-manipulating enzymes like CRISPR) requires special consideration.
(iii) Connection to care.
Prescription-based tests, including Rx home use tests, by definition require some type of connection to health care and an opportunity for results transmission through the HCW. However, loss to follow-up, i.e., patients not receiving their results, is likely unless there is a means to enforce reconnection and transmission to the patient once the test result is complete. OTC tests are further removed from these clinical connections. Tests whose results are not directly readable by the user and that can transmit results automatically can enforce connection to care and reporting in both cases (e.g., the Ellume flu test).
(iv) Product postmarket surveillance.
Home use influenza tests require a plan for monitoring test performance after approval/clearance by the FDA. The FDA will determine whether these postmarket surveillance plans facilitate risk mitigation and provide reasonable assurance of safety and effectiveness of the marketed devices (45). The requirements for postmarket surveillance of all medical devices include reporting to the FDA via the Manufacturer and User Facility Device Experience (MAUDE) adverse event reporting system (46). Unfortunately, reporting of IVD adverse events does not adequately measure test performance in the community (47), and device performance in the community can be inferior to that in the clinical studies that supported regulatory applications (48). We recommend systematic prospective collection of data from the community by the manufacturer, in collaboration with laboratory professionals, that includes identifying viral mutations that could affect test performance, changes in the spectrum of illness presenting for testing, changes in disease prevalence and influenza virus strains, and variations in users from the original studies. Prospective data collection could be achieved by offering modest reimbursement to establish a cohort of individuals collecting reference swabs for analysis.
RECOMMENDATIONS FOR CLINICAL STUDY DESIGNS FOR HOME RESPIRATORY INFECTION TESTS
Table 2 summarizes our view of key considerations for clinical study design for home use tests. We define two broad settings for evaluation of home tests: (i) contrived environments, involving a clinic/research-type setting mimicking a home setting, and (ii) use of individuals’ “real-life” homes. Within these two broad types of settings, we define three main types of recruitment: clinic capture, where individuals are recruited from those attending a clinical setting (49); community recruitment, involving people not seeking care (22, 35, 50); and preenrollment, in which subjects are recruited prior to the onset of illness (e.g., prior to the usual “influenza season”) and await some signal to trigger testing (36, 37, 51). For each of these study modalities, we outline how it varies by representativeness of the intended population, disease prevalence and test cost efficiencies, logistical burden, reference sample validity, and index test sampling and training bias.
TABLE 2.
Parameter | Contrived environment: clinic capture | Contrived environment: community recruitment | Contrived environment: preenrollment | “Real-life” home setting: clinic capture | “Real-life” home setting: community recruitment | “Real-life” home setting: preenrollment |
---|---|---|---|---|---|---|
Description | Patients seeking clinical care are recruited, subject is directed to run the test, and HCW collects the reference sample. | Community members meeting inclusion criteria are called to contrived study site to run the test, and HCW collects the reference sample. | Participants are enrolled before illness develops; if they trigger criteria, subject is called to contrived study site to run the test, and HCW collects the reference sample. | Patients seeking clinical care are recruited, HCW collects the reference sample, and subject is given a test to take home to run. | Community members meeting inclusion criteria are delivered a test to run at home and also to collect a reference sample that is returned by mail/courier. | Enrolled participants receive a test to hold (prepositioned); if they trigger criteria, they are directed to run the test at home and collect a reference sample that is returned by mail/courier. |
Recruitment options | Recruit subjects presenting at clinical site, or recruit based on review of electronic medical records for those meeting certain criteria and ask them to attend clinic. | Recruit via social media, community organizations, advertising. | Recruit via social media, community organizations, advertising. | Recruit subjects presenting at clinical site. | Recruit via social media, community organizations, advertising. | Recruit via social media, community organizations, advertising. |
Index test | Participant at clinic | Participant at clinic | Participant at clinic | Participant at home | Participant at home | Participant at home |
Reference sample | Collection by HCW at clinic | Collection by HCW at clinic | Collection by HCW at clinic | Collection by HCW at clinic | Participant at home | Participant at home |
Representative of intended population | Poor to good: controlled recruitment provides more certainty about ILI/symptom presence but misses those not seeking care. Potentially higher severity of illness. | Variable: highly dependent on recruiting message; can be good if recruiting message mimics test-seeking behavior without incentive-driven bias. Potentially lower severity of illness. | Variable: highly dependent on recruiting message; can be excellent if recruitment is designed to capture representative sample. Potential lower severity of illness. | Poor to good: controlled recruitment avoids those without ILI or symptoms but misses those who are not seeking care. Potentially higher severity of illness. | Variable: highly dependent on recruiting message; can be good if recruiting message mimics test-seeking behavior without incentive-driven bias. Potential lower severity of illness. | Variable: highly dependent on recruiting message; can be excellent if recruitment is designed to capture representative sample. Potential lower severity of illness. |
Efficiency (prevalence, recruitment, test cost) | Likely higher prevalence. | Likely lower prevalence. Likely lower participation due to need to travel to clinic. | Likely lower prevalence. Likely lower participation due to need to travel to clinic. Many tests go unused. | Likely higher prevalence. Some tests will go unused due to lack of direct supervision. | Likely lower prevalence. Some tests will go unused due to lack of direct supervision. | Likely lower prevalence. Many tests will not be triggered and will go unused. |
Logistical burden | Good: centralized logistics but requires significant HCW involvement and associated costs. | Poor: significant recruitment effort and significant HCW involvement and associated costs. | Poor: upfront recruitment effort, extended study duration, and significant HCW involvement and associated costs. | Excellent: centralized logistics and minimal HCW involvement. | Poor: significant recruitment efforts and significant HCW involvement, test delivery must be rapid and can be expensive. | Good to excellent: upfront recruitment efforts, extended study duration, but low overhead during study. An app or web-based system is likely needed to query for triggers. |
Reference sample validity | Excellent: standard of care sample; also can be opportunity to validate new sampling methods. | Excellent: standard of care sample; also can be opportunity to validate new sampling methods. | Excellent: standard of care sample; also can be opportunity to validate new sampling methods. | Excellent: standard of care sample; also can be opportunity to validate new sampling methods. | Variable: dependent on FDA acceptance of sampling method and chain of custody. | Variable: dependent on FDA acceptance of sampling method and chain of custody. |
Index test sampling and training bias | Subject may be “trained” after experiencing HCW test (if performed first); may perform differently if observed; sample order can be randomized. | Subject may be “trained” after experiencing HCW test (if performed first); may perform differently if observed; sample order can be randomized. | Subject may be “trained” after experiencing HCW test (if performed first); may perform differently if observed; sample order can be randomized. | Subject may be “trained” after experiencing HCW test (if performed first). | Excellent: sampling order can be randomized to prevent favoring one test. | Excellent: sampling order can be randomized to prevent favoring one test. |
Option for OTC testing | No | No | Maybe | No | Yes | Yes |
Option for Rx home use testing | Yes | Maybe | Maybe | Yes | Maybe | Maybe |
Ideally, a study would be carried out in the participant’s own home, triggered by their own interest in seeking an influenza test for actual diagnosis, with a reference sample collected in the home by an HCW. However, such a study would have unrealistic costs for advertising, rapid test delivery, and home visits. Therefore, there is a need to balance what is realistic with the internal validity, time, and cost of a study while also meeting regulatory requirements.
Study design using “real-life” settings.
A home-based study can be carried out efficiently by recruiting people presenting at a clinic, taking a reference sample, and sending them home with a test kit (49). This provides a high-quality reference sample plus a realistic home test environment, low logistical burden, and high disease prevalence for cost efficiency, but significant biological differences (e.g., disease severity) could bias the performance evaluation. A more realistic study population can be reached through real-time community recruitment; however, this requires significant marketing costs and logistics, plus the costs of rapid test delivery and reference sample return, and the reference test relies on a self-collected sample (22, 35, 50). Running the study through an organization with existing communication channels (for marketing), a focused geography (for test distribution), and diversity of age, education, and language (e.g., a church, workplaces) could reduce costs and provide a strong study design, provided that self-collected samples were trusted. For Rx home use, recruitment through telemedicine could provide a good compromise and also mimics a likely use case for Rx home use tests. Preenrollment can also reach a realistic study population while reducing the logistics and costs of test delivery (36, 37, 51), with the potential additional benefit of recruiting diverse subjects. However, this still relies on a self-collected reference sample and would require a much larger study size due to low prevalence. If the subject test is low cost, though, preenrollment could still be run at reasonable cost, since reference tests are needed only for those who trigger a test.
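As a rough illustration of how prevalence drives required enrollment (and hence cost) across these designs, the sketch below uses a hypothetical target of 50 reference-positive subjects and assumed prevalence values; none of these numbers are regulatory requirements or estimates from the studies cited above.

```python
import math

def enrollment_needed(target_positives: int, prevalence: float) -> int:
    """Enrollment at which the expected number of positive subjects reaches the target."""
    return math.ceil(target_positives / prevalence)

target = 50  # assumed number of reference-positive subjects desired
for label, prev in [("clinic capture", 0.25),
                    ("community recruitment", 0.10),
                    ("preenrollment cohort", 0.05)]:
    print(f"{label}: ~{enrollment_needed(target, prev)} participants")
# clinic capture: ~200 participants
# community recruitment: ~500 participants
# preenrollment cohort: ~1000 participants
```

Under these assumptions, a preenrollment design needs roughly five times the enrollment of clinic capture to yield the same number of positives, which is why a low per-test cost matters most for that design.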
Study design using contrived settings.
While contrived settings represent a compromise compared to “real-life” settings, they may provide practical advantages in study design. A significant advantage of testing at a centralized contrived site is the opportunity to obtain a reference sample collected by a HCW or research staff, which increases the validity of the sample and reduces logistical issues for sample return. Recruiting patients from clinics to test in a contrived setting is efficient (passive recruiting, no marketing, high prevalence), and the study population may be appropriate for Rx home use studies (patients seeking care), but the population is not ideal for OTC studies due to potentially strong spectrum bias. Community recruitment and prerecruitment for testing in a contrived setting may reduce spectrum bias, but participation rates are likely to be low due to the inconvenience of traveling to the testing site, and it unnecessarily increases exposure compared to at-home testing. Thus, contrived environments may be suitable for Rx home studies but are less suitable for OTC studies.
Recruitment incentives.
With reduced clinician oversight (Rx home use) or no clinical oversight (OTC), it is especially important for studies to mimic the health-seeking behaviors of the intended use population. Although the FDA has historically not allowed test subjects to receive the results of the subject test, this restriction can be addressed within the Investigational Device Exemption (IDE) application. Keen interest in the test result would be expected to motivate users to be attentive to running the test properly. The ability to return test results during clinical studies would increase the realism of the study population and test results. The risk of returning erroneous results could be mitigated to some degree by acting on the reference test result (similar to current POC testing with laboratory confirmation). The FDA has allowed return of results from subject tests in the case of COVID-19 and may do so for influenza, as return of results presents relatively low risk to patients.
Recommended study design for Rx home use.
We suggest that clinic capture recruitment with a take-home test can provide a quality reference sample, high study efficiency, and a real-life test environment while providing a reasonably generalizable study population (49). Household members could be used to expand test populations and, possibly, include a greater number of individuals with a higher risk of infection. Recruitment through telemedicine with rapid test delivery would mimic the likely dominant use case for an Rx home use test, thus providing a very realistic population, high study efficiency, and a real-life test environment. It would rely on self-collected reference samples, but they could be collected under telemedicine observation to increase sample quality.
Recommended study design for OTC tests.
For a low-cost OTC test, preenrollment of a diverse population with prepositioned test kits would provide a realistic population and test environment with manageable logistics and cost (36, 37, 51). Community recruitment can provide a strong study design if marketing costs can be kept low, e.g., by recruiting through a centralized organization with a diverse population. In all cases, video-based observed collection of a reference sample would mitigate one of the primary design weaknesses.
CONCLUSION/SUMMARY AND OUTLOOK
Technologies once confined exclusively to the hospital or physician’s office have come into the home. Devices as complex as respirators and dialysis machines have received FDA approval for home use, while the diagnostics industry has remained more firmly tied to HCW-performed testing. Prior to the COVID-19 pandemic, only one home diagnostic test for an infectious disease (HIV) had FDA approval. The COVID-19 pandemic has led to widespread public, commercial, and clinical interest in different testing options for SARS-CoV-2, including several home tests using antigen or molecular assays that have now received EUA. However, the pandemic also highlighted multiple gaps in knowledge about testing for respiratory viruses that had previously been largely overlooked by the IVD and scientific communities, such as the relative yield of different swab locations, whether swab type matters, and the viral shedding expected at various time points prior to, during, and after symptoms appear. While the seasonality and/or endemicity of SARS-CoV-2, influenza virus, and other respiratory viruses is currently unclear, we speculate that demand for home tests for these viruses (and others, such as respiratory syncytial virus [RSV]) will continue to grow. The IVD community also needs to identify which viral pathogens to include in multiplex tests intended for home use and how evidence to support their use among home test users specifically can be generated.
We describe the regulatory requirements for market access and suggest the types of studies that must be completed to obtain the FDA’s agreement. The recent FDA EUAs for home COVID-19 tests both illustrate the FDA’s recognition of the importance of testing for infectious diseases in the home setting and provide a roadmap for getting such tests to the market. We have built on published FDA guidance and examples of recent test authorizations and approvals to provide a detailed roadmap for generating the evidence needed to support home tests in development for influenza and other respiratory viral infections.
ACKNOWLEDGMENTS
We gratefully acknowledge the comments of Gina Conenello and Kim Sapsford of the U.S. FDA Center for Devices and Radiological Health on prior versions of this manuscript.
This study was funded by Gates Ventures.
The funder was not involved in the design of the study and does not have any ownership over the management and conduct of the study, the data, or the rights to publish.
REFERENCES
- 1.Washington State Clinical Laboratory Advisory Council. 2017. Point-of-care testing guidelines. Washington State Department of Health, Olympia, WA. https://www.doh.wa.gov/portals/1/Documents/2700/POCT.pdf. [Google Scholar]
- 2.Food and Drug Administration Center for Devices and Radiological Health. 2014. Guidance for industry: design considerations for devices intended for home use. https://www.fda.gov/media/84830/download.
- 3.Food and Drug Administration. Home use tests. https://www.fda.gov/medical-devices/vitro-diagnostics/home-use-tests. Accessed January 2021.
- 4.Conenello G. 2016. Microbiology devices panel meeting: over-the-counter diagnostic tests for the detection of pathogens causing infectious diseases. https://www.fda.gov/media/99903/download.
- 5.Food and Drug Administration Center for Devices and Radiological Health. 2020. Guidance for industry: policy for Coronavirus disease-2019 tests during the public health emergency (revised). https://www.fda.gov/media/135659/download.
- 6.Food and Drug Administration. Template for developers of molecular and antigen diagnostic COVID-19 tests for home use. https://www.fda.gov/media/140615/download. Accessed November 2021.
- 7.Centers for Disease Control and Prevention. Key facts about influenza (flu). https://www.cdc.gov/flu/about/keyfacts.htm. Accessed January 2021.
- 8.WebMD. What are your odds of getting the flu? https://www.webmd.com/cold-and-flu/flu-statistics. Accessed January 2021.
- 9.Binnicker MJ, Espy MJ, Irish CL, Vetter EA. 2015. Direct detection of influenza A and B viruses in less than 20 minutes using a commercially available rapid PCR assay. J Clin Microbiol 53:2353–2354. doi: 10.1128/JCM.00791-15. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Hernández-Neuta I, Neumann F, Brightmeyer J, Ba Tis T, Madaboosi N, Wei Q, Ozcan A, Nilsson M. 2019. Smartphone-based clinical diagnostics: towards democratization of evidence-based health care. J Intern Med 285:19–39. doi: 10.1111/joim.12820. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Seaman CP, Tran LTT, Cowling BJ, Sullivan SG. 2019. Self-collected compared with professional-collected swabbing in the diagnosis of influenza in symptomatic individuals: a meta-analysis and assessment of validity. J Clin Virol 118:28–35. doi: 10.1016/j.jcv.2019.07.010. [DOI] [PubMed] [Google Scholar]
- 12.Food and Drug Administration. 2006. Blood Products Advisory Committee 86th meeting, session on proposed studies to support the approval of over-the-counter (OTC) home use HIV test kits. https://wayback.archive-it.org/7993/20170405061033/https://www.fda.gov/ohrms/dockets/ac/06/briefing/2006-4206B2_1.pdf.
- 13.Food and Drug Administration. Classify your medical device. https://www.fda.gov/medical-devices/overview-device-regulation/classify-your-medical-device. Accessed January 2021.
- 14.Food and Drug Administration. Overview of medical device classification and reclassification. https://www.fda.gov/about-fda/cdrh-transparency/overview-medical-device-classification-and-reclassification. Accessed January 2021.
- 15.Food and Drug Administration. Product classification database. https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfPCD/PCDSimpleSearch.cfm. Accessed October 2021.
- 16.Food and Drug Administration. De novo classification request. https://www.fda.gov/medical-devices/premarket-submissions/de-novo-classification-request. Accessed October 2021.
- 17.Food and Drug Administration. In vitro diagnostics EUAs. https://www.fda.gov/medical-devices/coronavirus-disease-2019-covid-19-emergency-use-authorizations-medical-devices/vitro-diagnostics-euas. Accessed January 2021.
- 18.Centers for Disease Control and Prevention. Waived tests. https://www.cdc.gov/labquality/waived-tests.html. Accessed January 2021.
- 19.Food and Drug Administration Center for Devices and Radiological Health. 2020. Guidance for industry: recommendations for Clinical Laboratory Improvement Amendments of 1988 (CLIA) waiver applications for manufacturers of in vitro diagnostic devices. https://www.fda.gov/media/109582/download.
- 20.Food and Drug Administration Center for Devices and Radiological Health. 2011. Guidance for industry: establishing the performance characteristics of in vitro diagnostic devices for the detection or detection and differentiation of influenza viruses. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/establishing-performance-characteristics-vitro-diagnostic-devices-detection-or-detection-and. Accessed January 2021.
- 21.Food and Drug Administration Center for Devices and Radiological Health. 2016. Guidance for industry: applying human factors and usability engineering to medical devices. https://www.fda.gov/media/80481/download.
- 22.Thompson M, Zigman Suchsland ML, Lyon V, Kline E, Huang S, Chu HY, Rieder M, Starita L, Bosua J, Lutz BR. 2019. A community-wide study to evaluate the accuracy of self-testing for influenza: works in progress. Open Forum Infect Dis 6:S654. doi: 10.1093/ofid/ofz360.1638.
- 23.Hall MK, Kea B, Wang R. 2019. Recognising bias in studies of diagnostic tests part 1: patient selection. Emerg Med J 36:431–434. doi: 10.1136/emermed-2019-208446.
- 24.Centers for Disease Control and Prevention. Guide for considering influenza testing when influenza viruses are circulating in the community. https://www.cdc.gov/flu/professionals/diagnosis/consider-influenza-testing.htm. Accessed January 2021.
- 25.Ngaosuwankul N, Noisumdaeng P, Komolsiri P, Pooruk P, Chokephaibulkit K, Chotpitayasunondh T, Sangsajja C, Chuchottaworn C, Farrar J, Puthavathana P. 2010. Influenza A viral loads in respiratory samples collected from patients infected with pandemic H1N1, seasonal H1N1 and H3N2 viruses. Virol J 7:75. doi: 10.1186/1743-422X-7-75.
- 26.World Health Organization. 2005. WHO recommendations on the use of rapid testing for influenza diagnosis. https://www.who.int/influenza/resources/documents/RapidTestInfluenza_WebVersion.pdf.
- 27.Sueki A, Matsuda K, Yamaguchi A, Uehara M, Sugano M, Uehara T, Honda T. 2016. Evaluation of saliva as diagnostic materials for influenza virus infection by PCR-based assays. Clin Chim Acta 453:71–74. doi: 10.1016/j.cca.2015.12.006.
- 28.Uyeki TM, Bernstein HH, Bradley JS, Englund JA, File TM, Fry AM, Gravenstein S, Hayden FG, Harper SA, Hirshon JM, Ison MG, Johnston BL, Knight SL, McGeer A, Riley LE, Wolfe CR, Alexander PE, Pavia AT. 2019. Clinical practice guidelines by the Infectious Diseases Society of America: 2018 update on diagnosis, treatment, chemoprophylaxis, and institutional outbreak management of seasonal influenza. Clin Infect Dis 68:e1–e47. doi: 10.1093/cid/ciy866.
- 29.Panpradist N, Toley BJ, Zhang X, Byrnes S, Buser JR, Englund JA, Lutz BR. 2014. Swab sample transfer for point-of-care diagnostics: characterization of swab types and manual agitation methods. PLoS One 9:e105786. doi: 10.1371/journal.pone.0105786.
- 30.Thompson MG, Ferber JR, Odouli R, David D, Shifflett P, Meece JK, Naleway AL, Bozeman S, Spencer SM, Fry AM, Li D-K. 2015. Results of a pilot study using self-collected mid-turbinate nasal swabs for detection of influenza virus infection among pregnant women. Influenza Other Respir Viruses 9:155–160. doi: 10.1111/irv.12309.
- 31.Dhiman N, Miller RM, Finley JL, Sztajnkrycer MD, Nestler DM, Boggust AJ, Jenkins SM, Smith TF, Wilson JW, Cockerill FR, Pritt BS. 2012. Effectiveness of patient-collected swabs for influenza testing. Mayo Clin Proc 87:548–554. doi: 10.1016/j.mayocp.2012.02.011.
- 32.Arnold MT, Temte JL, Barlow SK, Bell CJ, Goss MD, Temte EG, Checovich MM, Reisdorf E, Scott S, Guenther K, Wedig M, Shult P, Uzicanin A. 2020. Comparison of participant-collected nasal and staff-collected oropharyngeal specimens for human ribonuclease P detection with RT-PCR during a community-based study. PLoS One 15:e0239000. doi: 10.1371/journal.pone.0239000.
- 33.Zoch-Lesniak B, Ware RS, Grimwood K, Lambert SB. 2020. The respiratory specimen collection trial (ReSpeCT): a randomized controlled trial to compare quality and timeliness of respiratory sample collection in the home by parents and healthcare workers from children aged <2 years. J Pediatric Infect Dis Soc 9:134–141. doi: 10.1093/jpids/piy136.
- 34.Esposito S, Molteni CG, Daleno C, Valzano A, Tagliabue C, Galeone C, Milani G, Fossali E, Marchisio P, Principi N. 2010. Collection by trained pediatricians or parents of mid-turbinate nasal flocked swabs for the detection of influenza viruses in childhood. Virol J 7:85. doi: 10.1186/1743-422X-7-85.
- 35.Kim AE, Brandstetter E, Wilcox N, Heimonen J, Graham C, Han PD, Starita LM, McCulloch DJ, Casto AM, Nickerson DA, Van de Loo MM, Mooney J, Ilcisin M, Fay KA, Lee J, Sibley TR, Lyon V, Geyer RE, Thompson M, Lutz BR, Rieder MJ, Bedford T, Boeckh M, Englund JA, Chu HY. 2021. Evaluating specimen quality and results from a community-wide, home-based respiratory surveillance study. J Clin Microbiol 59:e02934-20. doi: 10.1128/JCM.02934-20.
- 36.ClinicalTrials.gov. Home testing of respiratory illness. https://clinicaltrials.gov/ct2/show/NCT04245800. Accessed January 2021.
- 37.Heimonen J, McCulloch DJ, O’Hanlon J, Kim AE, Emanuels A, Wilcox N, Brandstetter E, Stewart M, McCune D, Fry S, Parsons S, Hughes JP, Jackson JL, Uyeki TM, Boeckh M, Starita LM, Bedford T, Englund JA, Chu HY. 2021. A remote household-based approach to influenza self-testing and antiviral treatment. medRxiv. https://www.medrxiv.org/content/10.1101/2021.02.01.21250973v1.
- 38.Food and Drug Administration. 2014. 510(k) Substantial equivalence determination decision summary K141520. https://www.accessdata.fda.gov/cdrh_docs/reviews/K141520.pdf.
- 39.Food and Drug Administration. FDA strategic plan for risk communication and health literacy 2017–2019. https://www.fda.gov/media/108318/download. Accessed October 2021.
- 40.Food and Drug Administration Center for Devices and Radiological Health. 2009. Guidance for industry: testing for detection and differentiation of influenza A virus subtypes using multiplex assays—class II special controls. https://www.fda.gov/medical-devices/guidance-documents-medical-devices-and-radiation-emitting-products/testing-detection-and-differentiation-influenza-virus-subtypes-using-multiplex-assays-class-ii. Accessed January 2021.
- 41.International Organization for Standardization. 2019. ISO 14971:2019(E). Medical devices—application of risk management to medical devices. https://www.iso.org/standard/72704.html.
- 42.Food and Drug Administration. CLIA waiver by application decision summaries. https://www.fda.gov/about-fda/cdrh-transparency/clia-waiver-application-decision-summaries. Accessed November 2021.
- 43.Food and Drug Administration. Template for developers of molecular diagnostic tests. https://www.fda.gov/media/135900/download. Accessed October 2021.
- 44.Food and Drug Administration Center for Drug Evaluation and Research. 2013. Guidance for industry: self-selection studies for nonprescription drug products. https://www.fda.gov/media/81141/download.
- 45.Food and Drug Administration Center for Devices and Radiological Health. 2006. Guidance for industry: class II special controls guidance document: reagents for detection of specific novel influenza A viruses. https://www.fda.gov/media/71140/download.
- 46.Food and Drug Administration. Manufacturer and user facility device experience database—(MAUDE). https://www.fda.gov/medical-devices/mandatory-reporting-requirements-manufacturers-importers-and-device-user-facilities/manufacturer-and-user-facility-device-experience-database-maude. Accessed January 2021.
- 47.Barlas S. 2017. FDA flags inconsistent hospital reporting of medical device problems: hazy reporting rules beget confusion. P T 42:97–115.
- 48.Bressler NM, Hawkins BS, Bressler SB, Miskala PH, Marsh MJ, Submacular Surgery Trials Research Group. 2004. Clinical trial performance of community- vs university-based practices in the submacular surgery trials (SST): SST report no. 2. Arch Ophthalmol 122:857–863. doi: 10.1001/archopht.122.6.857.
- 49.Lyon V, Zigman Suchsland M, Chilver M, Stocks N, Lutz B, Su P, Cooper S, Park C, Lavitt LR, Mariakakis A, Patel S, Graham C, Rieder M, LeRouge C, Thompson M. 2020. Diagnostic accuracy of an app-guided, self-administered test for influenza among individuals presenting to general practice with influenza-like illness: study protocol. BMJ Open 10:e036298. doi: 10.1136/bmjopen-2019-036298.
- 50.Geyer RE, Kotnik JH, Lyon V, Brandstetter E, Zigman Suchsland M, Han PD, Graham C, Ilcisin M, Kim AE, Chu HY, Nickerson DA, Starita LM, Bedford T, Lutz B, Thompson MJ. 2022. Diagnostic accuracy of an at-home, rapid self-test for influenza: a prospective comparative accuracy study. JMIR Public Health Surveill 8:e28268. doi: 10.2196/28268.
- 51.Kotnik JH, Cooper S, Smedinghoff S, Gade P, Scherer K, Maier M, Juusola J, Ramirez E, Naraghi-Arani P, Lyon V, Lutz B, Thompson M. 2022. Flu@home: the comparative accuracy of an at-home influenza rapid diagnostic test using a prepositioned test kit, mobile app, mail-in reference sample, and symptom-based testing trigger. J Clin Microbiol 60:e02070-21. doi: 10.1128/jcm.02070-21.