Author manuscript; available in PMC 2015 Jun 13.
Published in final edited form as: N Engl J Med. 2015 May 27;372(23):2258–2264. doi: 10.1056/NEJMsr1501194

The FDA and Genomic Tests — Getting Regulation Right

Barbara J. Evans, Wylie Burke, and Gail P. Jarvik
PMCID: PMC4464691  NIHMSID: NIHMS695912  PMID: 26014592

The Food and Drug Administration (FDA) recently advanced two draft guidances1,2 proposing a regulatory framework for laboratory-developed tests, a category that includes many but not all genomic tests. The FDA convened a workshop in February 2015 to discuss the oversight of next-generation sequencing.3,4 President Barack Obama’s Precision Medicine Initiative calls for the FDA to modernize its approach to genomic testing5,6 as Senate and House committees also weigh options for genomic and other diagnostic tests.7,8

The recent initiatives by the FDA have kindled debate about the legal authority of the agency to regulate genomic testing, as well as about the potential effects that such regulation may have on discovery and innovation.9–13 Skeptics raise important concerns, but there is little doubt that the FDA has ample power to impose at least some new regulatory requirements on genomic testing — enough, in any event, to make laboratory directors squirm. The question is not whether the FDA can regulate genomic testing but whether the FDA can regulate it well. Does the FDA have the correct set of statutory powers to make genomic technologies safe and effective for consumers — persons undergoing testing, whether as patients, research participants, or direct purchasers — while still fostering innovation? We believe the answer is “no.” To press forward with the powers the FDA now has could subject genomic testing to counterproductive regulatory burdens that may — ironically — diminish consumer safety and chill innovation. Yet a relatively modest set of statutory reforms that builds on concepts the FDA already has developed for drugs and other medical devices could position the agency to play a crucial and constructive role.

THE ANTIQUATED DEVICE FRAMEWORK OF THE FDA

In its draft guidances on laboratory-developed tests, the FDA proposed to phase out the enforcement-discretion policy that shields many laboratory-developed tests from being regulated as medical devices.1,2 The agency believes its “policy of general enforcement discretion” for laboratory-developed tests “is no longer appropriate”1 in light of profound changes in technology and business practices. This raises a question: Are the FDA medical device regulations also out of date? These regulations rely heavily on statutory powers that Congress granted in the 1976 Medical Device Amendments14 and the 1990 Safe Medical Devices Act15 — before the advent of many of the diagnostic technologies the FDA now seeks to regulate.

In this commentary, we address genome-scale tests with the potential to generate large numbers of data points not defined in advance of the testing. Examples include next-generation sequencing assays that detect any variant present in a specific set of genes, whole-exome and whole-genome sequencing tests, and copy-number variant arrays. In its discussion paper on next-generation sequencing,3 the FDA admits that these tests strain its existing regulatory methods and contrasts them with other technologies for detecting genetic variants, such as polymerase-chain-reaction and single-nucleotide-polymorphism arrays, that generally are designed to capture predefined data points that are known in advance of testing and are better suited to traditional regulation. Accordingly, we do not consider tests — even those that may be run with the use of a next-generation sequencing platform — that specifically target a discrete, predefined set of pathogenic variants, such as well-established pathogenic variants in the gene CFTR that are associated with cystic fibrosis.

The FDA draft guidance states that under the Food, Drug, and Cosmetic Act, “the FDA assures both the analytic validity (e.g., analytic specificity and sensitivity, accuracy and precision) and clinical validity of diagnostic tests through its premarket clearance or approval process.”1 This statement is true for many diagnostic tests but is overly optimistic for genome-scale tests. The FDA acknowledges “certain challenges”3 that demand novel approaches to assure the analytic and clinical validity of next-generation sequencing tests. We discuss clinical validity first and then turn to important unmet needs in the regulatory oversight of analytic validity.

THE CHALLENGE OF CLINICAL VALIDITY

For each of the more than 22,000 human genes, many different deviations from the reference sequence are found in a tested population. Next-generation sequencing and other genomic tests offer an efficient means of identifying this range of genetic variants. After a variant is detected, clinical validity speaks to its effect on health — that is, whether there is a strong, well-validated association between having the variant and having a particular disease or predisposition.16 The FDA discussion paper on next-generation sequencing proposes to assess the clinical validity of a test by referring to “high quality curated genetic databases” such as those “curated by NIH’s ClinGen program and deposited in ClinVar” and other “FDA-recognized evidence-based assessments of the clinical significance of gene variants,” including databases created by disease advocacy organizations.3

The notion that the FDA can harness pre-existing data resources is unduly optimistic. Whole-genome sequencing detects more than 3.5 million variants in a typical person, 0.5 million of which are rare or novel.17 At least 90 to 125 variants merit further evaluation for clinical significance on the basis of current knowledge,18 but for most variants, the clinical implications, if any, remain unknown. Rigorous review has shown that many research reports of pathogenicity are incorrect or do not provide a level of proof acceptable for clinical practice.19 ClinVar has just 76,606 unique variants with clinical interpretations (including variants of unknown significance).20

That leaves millions of variants for which the FDA seemingly would require premarket studies to demonstrate clinical validity. The consequent costs and delays could deter many laboratories in the United States from providing anything beyond variant calls, which could then be uploaded electronically for stand-alone interpretation, possibly by offshore laboratories beyond the jurisdiction of the FDA. Requiring premarket review thus could accelerate a trend — already under way21 — to unbundle the process of gene sequencing, in which a patient’s variants are detected, from the process of interpreting the variants to assess their clinical significance. In our opinion, driving legitimate U.S. laboratories out of the global business of genomic interpretation would diminish the safety of American consumers.

Even in theory, premarket review cannot ensure clinical validity for every variant a genome-scale test may detect, because the full range of variants becomes clear only after the test is widely used — presumably after the FDA clears or approves it. That is the inherent contradiction of premarket review of clinical validity: much of the evidence needed to establish that a test is safe and effective can be generated only after regulators allow it onto the market.

The FDA has successfully addressed the limitations of premarket review processes in other contexts. Drug-safety problems a decade ago — including the well-publicized risks that forced rofecoxib (Vioxx) off the market — laid bare the fact that the FDA premarket drug-approval process cannot ensure consumer safety because the clinical trials are too small, brief, and unrepresentative of real patient contexts to detect latent risks that inevitably emerge when drugs are prescribed to large, diverse clinical populations. A 2006 Institute of Medicine report22 recommended a life-cycle regulatory approach, and Congress later enacted the Food and Drug Administration Amendments Act of 2007 (FDAAA).23

FDAAA left the FDA premarket drug-approval process intact — the public loves the idea of premarket review, despite its evidentiary limitations — but it authorized the FDA to develop a major new data infrastructure, the Sentinel System, which marshals data for 178 million Americans for use in active postmarketing drug-safety surveillance.24 FDAAA also strengthened the power of the FDA to respond to emerging evidence (e.g., by requiring labeling changes). The problem was too large for the FDA to address through guidance or amendments to its own regulations. It demanded a whole new statute — action by Congress — to create a new regulatory infrastructure.

Genomic tests raise similar issues. The existing FDA 510(k) and premarket approval regulations for medical devices include postmarketing regulatory controls, but a 2011 Institute of Medicine report25 concluded that these do not compare with the richer array of postmarketing tools that FDAAA provides for drugs. Device “post-market surveillance” generally can last just 36 months and is directed at serious safety problems.26 Yet genomic testing needs an ongoing, decades-long program of continuous learning to clarify both benefits and risks that are not yet known. Moreover, the medical device regulations require cumbersome, 1970s-era legal procedures that can slow the regulatory response to emerging evidence: decisions that the FDA can implement swiftly for a drug by means of an order may, for a device, require the agency to promulgate a whole new regulation.25 The current FDA device regulations simply do not add up to a comprehensive, modern framework to support continuous learning and nimble response.

INFRASTRUCTURE FOR CONTINUOUS LEARNING

The FDA device division, the Center for Devices and Radiological Health (CDRH), understands the limits of premarket review and is already working to rebalance its focus more toward postmarketing data collection27 and to design a National Medical Device Postmarket Surveillance System (MDS) to support this shift.28,29 Its current MDS proposal focuses mainly on nondiagnostic devices (e.g., hip implants and cardiac devices).29 A genomic MDS has not yet been proposed.

The FDA Office of In Vitro Diagnostics and Radiological Health (OIR), the CDRH subdivision spearheading the recent initiatives on laboratory-developed tests1,2 and next-generation sequencing,3 has clung to the concept of premarket review — albeit one that relies on external databases instead of making test sponsors conduct their own premarket studies to generate evidence. Sadly, the needed external data resources scarcely exist and need to be created. For that to happen, the role of the FDA cannot remain a passive one of consecrating “FDA-recognized”3 genomic databases created by others. The agency needs authority to actively foster infrastructure development.

Inferring the clinical significance of novel genetic variants demands extremely large-scale data resources — sequences for hundreds of thousands to millions of people30 — and deep phenotypes: longitudinal health data including all diagnoses (not just the indication for the test), as well as treatments and outcomes. It is estimated that by 2014, researchers had at least partly sequenced 228,000 human genomes around the globe.31 The Precision Medicine Initiative envisions developing a 1-million-person cohort — a resource that is immensely valuable yet far too small to establish the clinical validity of most of the variants detected by genomic tests.

Achieving the right scale of data resources for this task will require capturing sequencing data from both research and clinical settings, with the latter destined to provide an ever-growing share of the available data as clinical translation proceeds. The National Institutes of Health (NIH) data-deposit policies, which feed data into ClinGen and ClinVar, are binding only for data generated with NIH funds and generally include only variants of interest, not all variants identified in a research participant. Insurer-funded commercial clinical laboratories are not bound by the NIH data-deposit requirements, so their data may not be captured in ClinGen or ClinVar, although many do contribute. Neither research laboratories nor clinical laboratories currently have access to the deep phenotypes required. Linkage to existing longitudinal health data resources — possibly including the FDA Sentinel System — could help address this gap, although the Sentinel System relies heavily on insurance-claims data, which are an incomplete source of phenotypic truth.

This all sounds daunting, but the FDA has worked with Congress and forged public–private partnerships to overcome such challenges in the past, as exemplified by its Sentinel System. Legislatively authorized uses of data by public health agencies like the FDA fit within consent exceptions that facilitate access to data under the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule and the Federal Policy for the Protection of Human Subjects (“Common Rule”).32 Moreover, well-designed system architecture and privacy and security policies can afford strong privacy and ethics protections — all subject to regular congressional oversight to enhance public trust.

Sustainable financing for genomic databases was a crucial concern voiced at the recent FDA workshop on next-generation sequencing. The $10 million of FDA funding envisioned in the Precision Medicine Initiative is not the right order of magnitude. For comparison, the FDA Sentinel Initiative launched its 5-year pilot project in 2009 with about $120 million.29 An intriguing point, however, is that the FDAAA authorizes a range of uses for the Sentinel System infrastructure and supplies a legal mechanism33 that would allow the FDA, in the future, to enter into user agreements to mobilize sustainable, private-sector financing of the congressionally authorized uses.34 Legislation modernizing the regulation of genomic tests by the FDA could include similar mechanisms to foster public–private collaborations and ensure continuity and financial sustainability, which remain insecure as long as funding comes solely from governmental grants.

The Food, Drug, and Cosmetic Act currently treats genomic uncertainty as static and beyond regulators’ control: the FDA can block clinical claims that strike the agency as too uncertain at a given moment but has no power to hasten the pace at which uncertainty is resolved. Yet genomic uncertainty is dynamic, shrinking whenever anyone establishes a new, significant association between a genotype and a phenotype. Right regulation demands that Congress and regulators play active roles to facilitate that process.

FDA OVERSIGHT OF ANALYTIC VALIDITY

Analytic validity — making sure that a test correctly detects gene variants and understanding its accuracy, precision, and limits of detection — is fundamental. The Clinical Laboratory Improvement Amendments of 1988 (CLIA)35 already authorize the Centers for Medicare and Medicaid Services (CMS) to oversee analytic validity. The FDA draft guidance on laboratory-developed tests cites lags in CLIA oversight, which is provided biennially during routine surveys, as justifying FDA premarket review of analytic validity.1 In the FDA workshop on next-generation sequencing, however, it was noted that premarket review is challenging for genomic tests in which analytes cannot be predefined, may be rare or novel, and may serve indefinite clinical uses.36

The agency has outlined several options for the validation of sequencing platforms, such as using carefully selected subsets of genetic markers or comparing performance against orthogonal methods.36 These seem conceptually sound even if the devil is in the details. The FDA recognizes that analytic validity depends not just on devices — instruments, reagents, and software — but also on the laboratory processes and workflows for using the devices.36 Challenges include developing an accepted definition of technical metrics for data quality,3,37 defining appropriate validation strategies for the multiple technical options at each step in the genomic test workflow, and expanding the availability of proficiency testing for genomic-based tests.3,38 The FDA is considering a continuum of approaches: test-by-test premarket review of analytic performance lies at one extreme; at the other, performance standards would serve as evidence that laboratories are competent to assemble next-generation sequencing tests, develop software, or perform genomic interpretation, and laboratories would be certified as conforming with these standards.36
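
To make these performance concepts concrete, the following minimal sketch (written in Python and not drawn from the FDA documents; all names, coordinates, and data are hypothetical) shows how analytic sensitivity and positive predictive value can be computed by comparing a laboratory's variant calls against an independent, well-characterized reference call set, for example one produced by an orthogonal method.

# Minimal illustrative sketch: basic analytic-validity metrics from a
# comparison of laboratory variant calls against a reference ("truth") set.
# All identifiers and coordinates below are hypothetical examples.

def analytic_metrics(truth_calls, lab_calls):
    """Return sensitivity, positive predictive value, and error counts.

    truth_calls / lab_calls: sets of variants, each represented as a
    (chromosome, position, reference_allele, alternate_allele) tuple.
    """
    true_positives = truth_calls & lab_calls    # called by the lab and present in the reference set
    false_negatives = truth_calls - lab_calls   # reference variants the assay missed
    false_positives = lab_calls - truth_calls   # lab calls not supported by the reference set

    sensitivity = len(true_positives) / len(truth_calls) if truth_calls else float("nan")
    ppv = len(true_positives) / len(lab_calls) if lab_calls else float("nan")
    return {
        "sensitivity": sensitivity,
        "positive_predictive_value": ppv,
        "false_positive_count": len(false_positives),
        "false_negative_count": len(false_negatives),
    }

# Hypothetical toy data for illustration only.
truth = {("chr7", 117559593, "G", "A"), ("chr17", 43093464, "C", "T")}
lab = {("chr7", 117559593, "G", "A"), ("chr2", 47414420, "T", "C")}
print(analytic_metrics(truth, lab))

In practice, such comparisons are typically stratified by genomic region and variant type, because performance in well-behaved regions says little about performance in repetitive or otherwise difficult regions; this sketch omits that detail.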

The glaring, unanswered question is how the standards-based approach proposed by the FDA would improve on the oversight that CMS already provides under CLIA. The FDA draft guidance on laboratory-developed tests criticized CLIA for focusing on laboratory processes rather than on the devices themselves,1 so the proposal by the FDA to focus on next-generation sequencing laboratory processes strikes some onlookers as ironic. Is the FDA trying to fill the void left by the 2007 decision by CMS not to create a genetic-testing specialty under CLIA?39 Participants in the FDA workshop on next-generation sequencing dubbed genomic testing “über-high complexity,”4 requiring proficiencies that CLIA high-complexity laboratories may not possess. Or is the FDA attempting to compensate for lax CLIA enforcement — including the lack of proficiency testing — cited in a Government Accountability Office report?40 CMS has attributed its decisions to cost considerations and resource constraints.39,40 Can the cash-strapped25 device division of the FDA ride to the rescue?

The FDA has an important role to play in relation to analytic validity — one that if properly focused neither duplicates nor usurps the authority granted to CMS under CLIA. The consistency and comparability of sequencing data generated at different laboratories are major concerns that were voiced during the FDA workshop on next-generation sequencing. CLIA regulates laboratory by laboratory and supplies no common language for expressing gradations of genomic data quality or for characterizing interlaboratory quality differentials. Proficient, CLIA-certified laboratories often follow different processes and do so for good reasons, to optimize the analytic validity of the particular tests they offer. One laboratory may optimize its detection of apples, while another optimizes its detection of oranges. Their resulting streams of data relating to other regions of the genome (“grapefruit”) may not be comparable. CLIA does not require transparent, detailed public disclosure of the processes used in laboratories, so quality differences can be hard to infer.

The lack of consistency presents safety issues in genomic testing when, for example, the discrepancies affect variant calls for a potentially actionable incidental finding. Persons tested at HIPAA-covered laboratories have a new right of access to laboratory-held sequencing data — including information that goes beyond what is routinely included in the final laboratory report that becomes part of a patient’s medical record41 — so dubious variant calls that laboratories would not voluntarily report may nevertheless be accessible for inappropriate use.

Analytic validity is not just a safety issue. It is also an infrastructure issue, and this supplies perhaps the strongest rationale for FDA oversight of analytic validity. Genomic databases that inform decisions by the FDA about clinical validity demand a common framework for integrating data from different laboratories. Continuous learning requires apples-to-apples comparisons of data from multiple sources and the ability to flag when apples are compared with oranges. It remains the responsibility of CMS to oversee laboratory processes, and it would be inappropriate for the FDA to dictate or interfere with those processes. The FDA could, however, add substantial value by standardizing what needs to be disclosed and communicated about laboratory processes and their potential effect on data quality.
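
As a purely illustrative sketch of what standardized disclosure might capture, a laboratory could attach a structured record such as the following (again in Python) to the data it contributes, so that downstream databases can judge whether results from different laboratories are comparable. The field names are hypothetical and do not represent an FDA, CMS, or CLIA schema.

# Hypothetical example of a standardized laboratory-process disclosure that
# could accompany contributed variant data. Field names are illustrative only.
lab_process_disclosure = {
    "laboratory_id": "LAB-0001",                       # hypothetical identifier
    "platform_class": "short-read next-generation sequencing",
    "regions_interrogated": "exome",                   # the "apples" versus "oranges" question
    "reference_genome_build": "GRCh38",
    "mean_depth_of_coverage": 80,
    "minimum_depth_for_reporting": 20,
    "variant_calling_pipeline": "in-house pipeline v2.3",  # hypothetical version label
    "proficiency_testing_program": None,               # None where no program is available
}

A shared vocabulary of this kind would let a curated database flag when data generated under different processes are being combined, without requiring the FDA to dictate the processes themselves.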

CONCLUSIONS

Clinical translation of genomic testing is a long-term process that is just starting to unfold and is likely to continue for decades, and it offers the prospect of significant improvements in patient care, disease prevention, and possibly even the cost-effectiveness of health care. The stakes are high — high enough to warrant taking time to get regulation right. There is no pressing emergency that demands a rushed effort to cram genomic technology into the 40-year-old medical device regulations of the FDA. Federal agencies naturally find it expedient to work within existing statutes so as to avoid the delay and uncertainty of legislative reform. But expediency is illusory if the available statute cannot achieve the objective, which is to protect consumers while promoting innovation.

Establishing the clinical validity of genomic tests is largely a postmarketing pursuit. It requires the accrual and review of evidence throughout the entire commercial life of a test and, indeed, requires access to postmarketing data not just from that test but from all other tests that interrogate the same region or regions of the human genome. Premarket review is the wrong tool, and the traditional product-by-product regulatory focus of the FDA is myopic. Ensuring the safety and effectiveness of the “trees” (each test) will require the FDA to harness evidence from the whole “forest” (all competing tests with similar intended uses) continuously. Statutory reforms should focus on granting the FDA a correct package of legal powers, seed funding, and legal pathways to encourage public–private partnerships to develop and sustain data resources for the right regulation of genomic testing.

Footnotes

Disclosure forms provided by the authors are available with the full text of this article at NEJM.org.
