Author manuscript; available in PMC: 2015 Oct 8.
Published in final edited form as: Med Decis Making. 2010 Jan 4;30(3):328–340. doi: 10.1177/0272989X09347014

Personalized Medicine and Genomics: Challenges and Opportunities in Assessing Effectiveness, Cost-Effectiveness, and Future Research Priorities

Rena Conti 1, David L Veenstra 2, Katrina Armstrong 3, Lawrence J Lesko 4, Scott D Grosse 5
PMCID: PMC4598076  NIHMSID: NIHMS725927  PMID: 20086232

Abstract

Personalized medicine is health care that tailors interventions to individual variation in risk and treatment response. Although medicine has long strived to achieve this goal, advances in genomics promise to facilitate this process. Relevant to present-day practice is the use of genomic information to classify individuals according to disease susceptibility or expected responsiveness to a pharmacologic treatment and to provide targeted interventions. A symposium at the annual meeting of the Society for Medical Decision Making on 23 October 2007 highlighted the challenges and opportunities posed in translating advances in molecular medicine into clinical practice. A panel of US experts in medical practice, regulatory policy, technology assessment, and the financing and organization of medical innovation was asked to discuss the current state of practice and research on personalized medicine as it relates to their own field. This article reports on the issues raised, discusses potential approaches to meet these challenges, and proposes directions for future work. The case of genetic testing to inform dosing with warfarin, an anticoagulant, is used to illustrate differing perspectives on evidence and decision making for personalized medicine.

Keywords: cost-effectiveness analysis, pharmacoeconomics, resource allocation


Personalized medicine is health care that tailors interventions to individual variation in risk and treatment response. Although medicine has long striven to achieve this goal, advances in genomics promise to facilitate this process. Ultimately, genomic technologies could lead to therapies targeted to groups of patients with specific genetic variants.1 Specific applications include the use of genomic information to classify individuals according to disease susceptibility or expected responsiveness to a pharmacologic treatment and to provide targeted interventions. Consequently, our focus is on “stratified” medicine using genomics—segmenting a patient population into subgroups based on hereditary risk of disease occurrence, recurrence, or likelihood of treatment response, or on somatic changes in a tissue, most often a tumor.2 We discuss the use of evidence to inform regulatory policies and clinical guidelines for genetic testing in the United States, with a focus on the Evaluation of Genomic Applications in Practice and Prevention (EGAPP) initiative of the Centers for Disease Control and Prevention (CDC) and the Food and Drug Administration (FDA). Figuring prominently in our discussion is the application of 2 types of decision-analytic modeling, cost-effectiveness and value of information, to the use of pharmacogenomics (PGx) and tumor genomics in personalizing medical treatment.

A symposium at the annual meeting of the Society for Medical Decision Making on 23 October 2007, sponsored by the Agency for Healthcare Research and Quality (AHRQ), highlighted challenges and opportunities in translating molecular medicine into clinical practice. A panel of experts in medical practice, regulatory policy, technology assessment, and medical innovation was assembled: Katrina Armstrong, professor and chair, Department of Medicine, School of Medicine, University of Pennsylvania; Lawrence J. Lesko, director, Office of Clinical Pharmacology, Center for Drug Evaluation and Research, Food and Drug Administration; David L. Veenstra, associate professor, Department of Pharmacy, University of Washington; and Rena Conti, instructor, Department of Pediatrics, faculty affiliate of The Center for Health and the Social Sciences, and codirector of the Program on Pharmaceutical Policy, University of Chicago. Dr. Armstrong, a physician and health services researcher with expertise on cancer genetics, was an original member of the EGAPP Working Group. Dr. Veenstra, a pharmacoeconomist who specializes in pharmacogenomics, is currently a member of the EGAPP Working Group. Dr. Lesko is a frequently cited expert in personalized medicine. Dr. Conti is a health economist who specializes in the financing and organization of medical innovation, particularly genetic approaches to diagnose and treat illness. Scott D. Grosse, research economist and associate director for health services research and evaluation, Division of Blood Disorders, National Center on Birth Defects and Developmental Disabilities, Centers for Disease Control and Prevention, acted as moderator and provided introductory comments. Dr. Grosse is a member of the EGAPP Steering Committee. This article is coauthored by all 4 panelists and the moderator. It summarizes the presentations by Armstrong, Lesko, Veenstra, and Conti in sequence and reviews major themes and promising directions for future research. In addition, references have been added, including relevant publications appearing from October 2007 through February 2009.

PERSONALIZED MEDICINE AND EVIDENCE-BASED PRACTICE: A PHYSICIAN’S PERSPECTIVE ON EVIDENCE-BASED REVIEWS OF GENETIC TESTS AND RECENT PROGRESS IN THE UNITED STATES

The collection and integration of evidence of safety and effectiveness into clinical and public health guidelines provide rigorous means to inform clinical decision making.3 Domestic examples of evidence-based review processes include the US Preventive Services Task Force (USPSTF) and the Evidence-based Practice Centers sponsored by the AHRQ. Many countries have health technology assessment agencies responsible for preparing guidelines such as the UK National Institute for Health and Clinical Excellence (NICE).4 It should be noted that each of these groups focuses guidelines on clinical outcomes; the prospective value of information per se, which may be particularly important for certain types of genetic testing (see Discussion), is typically outside the groups’ respective mandates.

Evidence-based review of genetic tests may be more challenging than that of drug treatments and screening tests for several reasons. First, a challenge specific to predictive genetic testing, as opposed to pharmacogenomics, is that genetic alterations with high penetrance and clinical expression are relatively uncommon. For example, the chance of finding a BRCA1/2 mutation in the general population is approximately 1 in 1000.5 For low-prevalence risk factors, whether genetic or not, a highly specific screening test is needed to avoid undue inconvenience to large numbers of people with false-positive test results, and treatment should provide large benefits for those detected to justify the cost of the test and treatment.
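To make the prevalence point concrete, the sketch below applies Bayes' rule to a rare variant. The sensitivity and specificity values are hypothetical illustrations, not estimates for any actual BRCA1/2 assay.

```python
# Positive predictive value (PPV) under low prevalence, via Bayes' rule.
# Illustrative only: the sensitivity and specificity values are hypothetical,
# not estimates for any actual BRCA1/2 assay.

def ppv(prevalence: float, sensitivity: float, specificity: float) -> float:
    """P(mutation present | positive test)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

prevalence = 1 / 1000  # ~1 in 1000 chance of a BRCA1/2 mutation in the general population

for spec in (0.95, 0.99, 0.999):
    print(f"specificity={spec:.3f}  PPV={ppv(prevalence, 0.99, spec):.3f}")
# Even at 99% specificity, roughly 9 in 10 positives are false positives,
# which is why population screening for rare variants demands very high specificity.
```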

Second, interventions and clinical outcomes are often poorly defined when a diagnostic test, whether molecular or other, first becomes available. For example, when BRCA1 was identified in 1995, it was not clear how test results would be used to change management for women at increased risk.5 These research gaps can make it very difficult to develop coherent strategies for testing use.

Third, although most discussions have focused on relatively well-defined genetic tests, including all of those considered here, many other genomic applications are not well defined. One example consists of the “cardiogenomic profiles” that are currently being marketed to consumers by some companies. The genes included in a cardiogenomic profile vary by company, and there is little scientific consensus regarding the predictive value of most of the gene variants that are included.6 Consequently, intertest reliability is questionable, and translating results into patient care is challenging.

Fourth, no review or approval process is required in the United States before a laboratory-developed test (i.e., a test conducted at the developer’s lab and not marketed as a test kit) is introduced or marketed to the public through direct-to-consumer (DTC) sales.7 Thus, evidence-based reviews generally will be conducted only after the test has already been made available. Changing an already established behavior is more challenging, and the impact of evidence-based recommendations may be smaller.

The primary challenge to creating evidence-based reviews of genomic tests is the limited evidence base available. In particular, randomized controlled trial (RCT) and other high-quality evidence is generally lacking for these technologies, as is the case for many diagnostics and medical devices.8 The Secretary’s Advisory Committee on Genetics, Health, and Society (SACGHS) recently called for increased federal funding for research to help provide an adequate evidence base for the oversight of genetic testing.7 Estimates of sensitivity and specificity for detecting the genotype or, even better, for predicting the phenotype of interest are often missing or misleading.

Translational research is needed to apply the results of basic research on the human genome to clinical practices that improve individual and population health. Khoury and others9 have distinguished 4 phases of translational research in genomic or personalized medicine. Phase 1 (T1) and phase 2 (T2) translational research informs the development of clinical interventions and evidence-based guidelines, phase 3 (T3) research assesses the implementation of guidelines in health practice, and phase 4 (T4) research evaluates the health outcomes of changes in practice following the implementation of guidelines.9 The bulk of research funding is in T1 research, including RCTs. In the development of evidence-based guidelines (T2), it is essential to address the ethical, legal, and social issues the tests raise to minimize harms to individuals and populations.10

Formal processes for conducting reviews and using evidence to formulate recommendations may give stakeholders a better understanding of the potential benefits, harms, and costs of genomic tests. Such processes are most successful if they are transparent and credible, minimize bias, and identify gaps in knowledge to underscore where additional research is needed. Review of evidence assessments by experts, patient groups, and health care providers prior to publication can facilitate transparency.

One such process is the EGAPP initiative funded by the CDC to create publicly accountable reviews of new technologies in genomic medicine.11 The independent, nonfederal, multidisciplinary EGAPP Working Group, established in 2005, selects topics to review, oversees the systematic review of evidence, and makes recommendations based on that evidence.12 Disciplines represented on the working group include economics, ethics, epidemiology, genetics, medicine, and others. The EGAPP Working Group is committed to minimizing conflicts of interest while ensuring the involvement of stakeholders and maximizing the utility of its recommendations for clinical practice.

In selecting which tests to review, EGAPP incorporates information on the expected impact a test may have on clinical practice, the amount of evidence available, and the potential for a public health impact.12 EGAPP builds upon the ACCE framework, which addresses Analytic validity, Clinical validity, Clinical utility, and Ethical, legal, and social issues (ELSI).13 The first step is defining the clinical scenario—the specific setting in which the test might be used. The second is identifying and addressing issues of analytic validity (how well the assay measures what it claims to measure). The third is determining clinical validity (how well the test predicts clinical or surrogate markers) and clinical utility (whether the use of the test improves or harms outcomes of importance to patients). A broad definition of clinical utility is used, in which ethical, legal, and social outcomes are considered alongside health outcomes.14 As is the case with the USPSTF, the EGAPP Working Group does not use cost-effectiveness as a criterion in evaluating the appropriateness of applications to clinical practice, but the plan is to include information on costs, where available, to inform payers and other decision makers about the economic implications of recommended tests.

One of the key differences between EGAPP and other evidence review processes is that the working group allows the inclusion of observational data, as well as relevant information that has not yet been published in the peer-reviewed literature. Because RCT data are available for very few genetic tests, the ability to bring new data from other sources to the evaluation process can be critical. However, the limitations of those data are also important to recognize in developing the recommendations, particularly when the data are provided by companies that developed and are marketing these tests without scientific peer review. EGAPP is committed to evidence-based practice, and the working group includes both the current chair and former chair of the USPSTF.

Only a small number of genomic applications can be systematically reviewed by any single process. The working group issued its first recommendation statement in December 2007, regarding testing for cytochrome P450 polymorphisms in adults with nonpsychotic depression treated with selective serotonin reuptake inhibitors (SSRIs).15 Three other recommendation statements were published in January 2009, regarding UGT1A1 genotyping in patients with metastatic colorectal cancer treated with irinotecan,16 molecular testing for Lynch syndrome in newly diagnosed patients with colorectal cancer,17 and breast cancer tumor gene expression profiling.18 The SACGHS has called for an expansion of evidence-based reviews of genomic testing in the United States, building on the EGAPP model.7 Despite the challenges, we believe these and similar efforts are the foundation for medical decision making in personalized medicine in the future.

PERSONALIZED MEDICINE AND REGULATORY POLICY: A US FOOD AND DRUG ADMINISTRATION PERSPECTIVE ON REGULATING GENOMIC TESTS

The FDA regulates genetic tests in 2 different ways. First, genetic tests, including test kits and analyte-specific reagents, that are developed for sale to multiple laboratories in different states are subject to regulation as devices by the FDA’s Center for Devices and Radiological Health, although the only requirement is analytic validity of the test in terms of the reliability of measuring the targeted analyte. Second, the FDA’s Center for Drug Evaluation and Research (specifically the Office of Clinical Pharmacology) regulates new drug applications that include the use of genetic tests to indicate whether a drug should be prescribed or the appropriate dosing of a drug using PGx.

The personalized medicine revolution at the FDA began with the approval of trastuzumab (trade name Herceptin) in 1998. In 2007, the FDA approved the CCR5 inhibitor maraviroc, which requires a test for viral coreceptor tropism to determine whether a patient’s human immunodeficiency virus (HIV) is of the type that will respond to the drug. Over those 10 years, roughly two dozen applications of genomics to therapeutics emerged. The remarks in this section are largely confined to the regulatory approach that has evolved for products that incorporate PGx-based information, illustrated by a recent case: the 2006 adoption by the FDA of a label suggesting the incorporation of PGx-based testing into the dosing of warfarin for anticoagulation-naive patients.

In general, regulatory decision making at the FDA has tried to break genomic applications into categories and evaluate each using a risk-based approach, where risk is used in the sense of potential harms. For example, predicting whether somebody is going to develop a disease is quite serious. Testing somebody before his or her physician decides whether he or she should get a drug is critical but perhaps less serious in its implications than disease prediction. Finally, a genetic test may be used to guide the dosing of a drug once the physician has decided to administer it to the patient. Along each of these strata of importance, different types of evidence are required to inform regulatory decision making.

To approach these scenarios, the FDA has at least 2 different, overlapping processes for risk management and evaluation. The risk assessment approach is a standard, general, 4-step process: 1) identify the hazard; 2) look at the exposure relative to that hazard; 3) figure out the population characteristics for that exposure, including how many people take the drug and for how long; and 4) come up with a risk characterization. Once risk characterization is completed, the regulatory decision to act comes from a menu of choices that could include one or more of the following: a “Dear Doctor” letter, relabeling the product, or removing the drug from the market. These outcomes are issue and drug specific. Concerns that the FDA does not normally consider explicitly within its mandate to protect and promote public health include whether a genetic test will be covered and reimbursed by insurance companies and what the potential physician liability is in its clinical application. Although these issues are important to society, they should not affect regulatory actions based primarily on scientific and clinical evidence.

The FDA Guidance for Industry, titled Providing Clinical Evidence of Effectiveness for Human Drugs and Biological Products, considers both benefits and risks of drugs.19 For evidence of benefit, the FDA usually requires 1 or more adequate and well-controlled trials to prove that a drug or intervention works as claimed in the indication. However, benefits and risks can be demonstrated using observational data in the absence of RCTs in some cases, depending on prior knowledge and information. It would be preferable to have RCTs to address every question, but this is not feasible or practical.

The regulatory decisions made by the FDA about safety, aside from the benefit/risk ratio, are based on the overall risk characterization, including the magnitude of the risk, the clinical significance of the risk to individuals and subgroups, the population-level public health implications of the risk, and finally the certainty of the scientific evidence. The FDA now attempts to get safety information into the public domain earlier than in the past, once it believes the information is supported by credible evidence.

The first 3 of these considerations were consequential when the FDA considered the use of PGx tests for warfarin dosing in 2006. Warfarin ranks second among drugs associated with emergency department visits in the United States.20 The probability of major bleeding is approximately 2.5% per year or higher, and minor bleeding event rates have been as high as 36% per year.21 The majority of all patients who experience bleeding events during the first 28 days on warfarin are reported to have a CYP2C9 or VKORC1 gene variant.22

The key clinical issue related to the potential value of incorporating PGx-based information is getting the initial dose of warfarin right. Dosing strategies aim to achieve and maintain the patient’s international normalized ratio (INR) in a narrow range between 2 and 3. Once a physician initiates an anticoagulation regimen with a patient, it can take 1 to 3 months to get to a stable maintenance dose within the target INR range. Current standard of care suggests that physicians should try to get to a stable dose as quickly as possible because it is common for adverse events to occur in the first month of warfarin therapy. INR values of 4 or more are associated with an increased risk of bleeding, and INR values of less than 2 are associated with risk of thromboembolism.

Guidelines suggest that the initial warfarin dose may be lowered for elderly patients and those with certain health conditions or a high risk of bleeding,23 but these qualitative factors are not standardized. Some experts have suggested that genetics should also be considered.24–26 The reasoning is that the initial warfarin dose is often adjusted for risk factors such as age, body weight, and gender,23 yet a patient’s CYP2C9 and VKORC1 status accounts for more variability in dosing (~35%) than do age, gender, and weight taken together (~20%).27,28 When the FDA considered relabeling warfarin in light of the new PGx dosing algorithm, RCT data were unavailable. A regulatory working group identified 9 credible population-based observational studies, 8 of which found statistically significant associations between lower dose requirements and CYP2C9 or VKORC1 gene variants. Subsequent observational studies confirmed the relevance of genomic information for warfarin dosing.22,27 The FDA felt that it was plausible to link the genetic effects on INR to a likelihood of clinical effectiveness because of the causal mechanism linking this biomarker to the clinical endpoint of bleeding and the strong dose-INR relationship. However, the FDA also believed that it was important to acknowledge uncertainty because no RCTs at the time had evaluated genotype-guided dosing.
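As an illustration of how clinical and genotype information might be combined to set an initial dose, the sketch below scales a simple clinical estimate by genotype-specific factors. Every coefficient and adjustment factor is a hypothetical placeholder for exposition, not the published IWPC algorithm, the FDA label table, or any validated dosing rule.

```python
# Hypothetical sketch of a genotype-informed initial warfarin dose estimate.
# The structure (clinical covariates plus CYP2C9/VKORC1 adjustments) mirrors the
# idea discussed in the text; every number below is an illustrative placeholder.

HYPOTHETICAL_GENO_ADJUST = {
    # multiplicative adjustments to the clinically estimated daily dose (mg)
    ("CYP2C9*1/*1", "VKORC1-GG"): 1.00,   # reference: normal metabolizer, low sensitivity
    ("CYP2C9*1/*2", "VKORC1-GA"): 0.75,
    ("CYP2C9*1/*3", "VKORC1-AA"): 0.50,
    ("CYP2C9*3/*3", "VKORC1-AA"): 0.30,   # poor metabolizer, high sensitivity
}

def initial_dose_mg(age_yr: float, weight_kg: float, genotype: tuple) -> float:
    """Clinical base dose scaled by a hypothetical genotype factor."""
    base = 5.0 - 0.02 * (age_yr - 60) + 0.01 * (weight_kg - 80)  # illustrative clinical model
    return base * HYPOTHETICAL_GENO_ADJUST.get(genotype, 1.0)

ref = initial_dose_mg(70, 75, ("CYP2C9*1/*1", "VKORC1-GG"))
sens = initial_dose_mg(70, 75, ("CYP2C9*1/*3", "VKORC1-AA"))
print(f"reference genotype: {ref:.1f} mg/day; sensitive genotype: {sens:.1f} mg/day")
# Under these assumptions, the sensitive-genotype patient starts at roughly half
# the reference dose, which is the kind of shift a label might prompt a prescriber to consider.
```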

The approved label on warfarin basically implies to a physician and patient, “We suggest that you consider genetic factors in figuring out the right dose.” The new label does not require that genetic testing be conducted. By updating the label, the FDA did not imply either “do the test” before prescribing the drug or “forget INR” in monitoring the patient. The goal of relabeling warfarin with genetic information was to inform decision makers about how to better determine maintenance doses but stop short of telling them what to do with the information (i.e., genotype-specific doses).

The FDA decision to relabel warfarin has generated debate because many experts do not believe that genetic testing for the purpose of guiding warfarin dosing is yet ready for routine clinical use, and professional guidelines have not endorsed its use.23,29,30 At present, there is a lack of evidence that genomic information used to inform warfarin dosing improves outcomes such as bleeding events. Until such evidence is available, it would be premature to require or strongly recommend use of genomic testing to guide warfarin dosing.27,29–31 The major challenge is that because serious bleeding is uncommon, a very large RCT would be required to be adequately powered to evaluate clinically significant endpoints. Existing, small RCTs of genetic testing for warfarin dosing have focused on time to stable INR and the number of dose adjustments required. Two small RCTs yielded different findings.32,33 The first reported no significant difference in time to stable INR, the primary endpoint, but did report fewer required dose adjustments.32 The second reported that CYP2C9 genotype-guided dosing was associated with significantly improved outcomes, including reduced time to stable INR, less time outside the therapeutic range, and fewer minor bleeding events.33 A newly published international analysis of pooled data on more than 5000 patients enrolled in PGx studies of warfarin dosing concluded that a PGx-based dosing algorithm would significantly reduce both underdosing and overdosing.34

Real-time regulatory and clinical decisions must acknowledge the tradeoff between using the results of observational data and waiting for the results of a large RCT. The US National Heart, Lung, and Blood Institute (NHLBI) has funded a large RCT, the Clarification of Optimal Anticoagulation through Genetics (COAG) trial, which recently started enrolling an expected total of 1200 patients and will take until late 2011 to complete data collection.35 Because the primary outcome for the study is time in therapeutic range during the first month of therapy, it may not shed light on the crucial question of preventing serious adverse clinical outcomes. An RCT will provide greater certainty if designed correctly with a typical standard-of-care arm, but requiring an RCT before relabeling would expose patients to a bleeding risk that might have been identified by genetic testing. And delaying communication of genetic risk factors for warfarin dosing would prevent health care practitioners and patients from weighing the benefits and risks of testing and making informed decisions. However, if insurers do not reimburse for PGx testing, few patients are likely to choose to be tested even if they or their physicians consider the balance of testing’s benefits and risks to be favorable.

PERSONALIZED MEDICINE AND HEALTH TECHNOLOGY ASSESSMENT: A HEALTH ECONOMIST’S PERSPECTIVE ON COST-EFFECTIVENESS ASSESSMENTS OF PHARMACOGENOMICS

In the early part of this decade, third-party payers charged with coverage and reimbursement decision making in the United States did not show much interest in PGx-based diagnostic tests. Now this appears to be changing. For example, the Academy of Managed Care Pharmacy (AMCP) in 2005 explicitly included PGx tests within its format for submissions for drug formulary consideration and called for the presentation of information on the analytic validity, clinical validity, clinical utility, and cost-effectiveness of PGx tests.36 Medco, a major pharmacy benefit management firm, has worked with the FDA to research the use and dosing of warfarin and tamoxifen and in May 2008 began promoting PGx testing for initial warfarin dosing in response to interest from its clients, major health care payers.37 In August 2008, the US Centers for Medicare and Medicaid Services (CMS) initiated a National Coverage Analysis of PGx testing for warfarin dosing to determine whether it meets the “reasonable and necessary” criteria for coverage by the Medicare program.38

For the purpose of informing payer decisions, cost-effectiveness analyses often take the health care system perspective rather than that of society or the patient. It should be noted that, unlike NICE in the United Kingdom, US-based decision makers generally do not specify a fixed willingness to pay for improved health outcomes (e.g., $50,000 or £30,000 per quality-adjusted life year [QALY]) as a decision rule, or necessarily use QALYs in decision making.39,40 In this section, the major challenges to conducting cost-effectiveness analyses (CEAs) of new genetic-based tests in the United States are reviewed.
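For readers less familiar with these metrics, the sketch below shows how an incremental cost-effectiveness ratio and net monetary benefit are computed against a willingness-to-pay threshold. The incremental cost and QALY figures are hypothetical placeholders, not results from any published warfarin PGx analysis.

```python
# Incremental cost-effectiveness ratio (ICER) and net monetary benefit (NMB)
# for a hypothetical PGx testing strategy vs. standard care.
# All incremental costs and QALYs are illustrative placeholders.

def icer(d_cost: float, d_qaly: float) -> float:
    return d_cost / d_qaly

def net_monetary_benefit(d_cost: float, d_qaly: float, wtp: float) -> float:
    # The strategy is preferred when NMB > 0 at the chosen willingness to pay.
    return wtp * d_qaly - d_cost

delta_cost = 400.0   # hypothetical: test price plus downstream care, per patient
delta_qaly = 0.006   # hypothetical: small QALY gain from fewer bleeding events

print(f"ICER = ${icer(delta_cost, delta_qaly):,.0f} per QALY")
for wtp in (50_000, 100_000):
    print(f"NMB at ${wtp:,}/QALY: ${net_monetary_benefit(delta_cost, delta_qaly, wtp):,.0f}")
# With these assumed inputs, the strategy falls above a $50,000/QALY threshold
# (negative NMB) but below a $100,000/QALY threshold (positive NMB).
```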

One challenge is identifying the economic cost of a testing strategy. The current reimbursement system for diagnostic tests is cost based rather than value based. For example, the price of certain genetic tests is several thousand dollars, particularly those that examine multiple markers or somatic gene expression profiles. There are also tests that examine individual single-nucleotide polymorphisms (SNPs), for which charges are in the range of $100 to $500. Technological advances complicate this further—the price per SNP analyzed is dropping rapidly.41 Given these conditions, it can be difficult to estimate a true economic cost of the test that reflects its economic value at any given point in time and is comparable over time. Also, prices do not reflect the total opportunity cost of conducting tests because test results may induce costs for genetic counseling and medical care.40 Pragmatically, the “hassle” cost to providers of dealing with complicated or ambiguous test results also needs to be examined.

Second, and most important, evaluating the cost-effectiveness of PGx testing is challenging because of the lack of data on patient and clinician behavior in response to test results and on the outcomes of the resulting treatment decisions (i.e., effectiveness).19,42 For example, with warfarin PGx testing and TPMT testing for patients taking 6-mercaptopurine or allopurinol,43 dose adjustment is a relatively straightforward intervention, yet we lack information on how physicians alter their practice in response to the revised probabilities of poor outcomes given the results of these tests for individual patients. The issue is more complicated for drug selection, when the results of a test indicate that a person should not get a certain drug. Are there alternative therapies? If so, how well does sequential decision making incorporate the risk-benefit calculus of the alternatives? The COAG study and a number of small trials of PGx testing for warfarin dosing being conducted around the world may provide information salient to these concerns. For other questions, RCTs will not be available, and decision-analytic modeling techniques may be used to assess indirect evidence.

The potential clinical benefit of testing from a payer’s perspective shares logical similarities with the FDA’s approach to risk-benefit assessment.44 For example, assessing the economic value of warfarin PGx testing for payers is particularly challenging because of limited data availability. Inconclusive or contradictory results from small (N = ~200) RCTs are insufficient for robust estimates of effectiveness.29,31 Because of a paucity of direct evidence of the benefits and harms of PGx testing, formal decision modeling and cost-effectiveness analysis can be helpful in identifying data needs, but future work remains to be done quantifying uncertainty to inform recommendations for appropriate clinical decision making. The COAG trial will provide a larger sample size, but it may be most applicable to patients cared for in anticoagulation clinics, where the incremental benefit of PGx testing may be smaller than in a general care setting. Furthermore, the primary outcome of the trial will be the percentage of time that INR is in the therapeutic range over the first month of therapy, and translating this into meaningful clinical outcomes to inform coverage and reimbursement decision making will be difficult. A combination of modeling and the cumulative synthesis of evidence from ongoing RCTs of warfarin PGx may provide the most pragmatic approach to informing reimbursement decisions.

If payers are faced with a promising technology that may improve patient safety, likely poses a low risk of harm, but has significant uncertainty associated with its clinical value, the value of additional research and policy options such as coverage with evidence development (CED) should be considered. In CED, promising medical technologies for which RCT data are lacking receive provisional approval for coverage by payers on the condition that additional data on effectiveness are collected through RCTs or patient registries to improve the evidence base.45,46 CMS introduced CED in 2005,8 but it has not yet been applied to genomics or personalized medicine.46

On 6 May 2009, CMS proposed the use of CED for warfarin PGx testing given the clinical promise but lack of sufficient evidence of clinical utility.47 The decision, if enacted, would require that patients be enrolled in an RCT designed to compare PGx testing to standard of care for warfarin dosing as a condition for reimbursement. CMS would require that the trial measure the following endpoints: “major hemorrhage, minor hemorrhage, thromboembolism related to the primary indication for anticoagulation, other thromboembolic event, or mortality,” health outcomes beyond the primary ones captured in the recently initiated COAG trial.35 It is unclear whether any trials would meet the outcome requirements listed in the CMS CED proposal. Also, because testing for patients enrolled in RCTs is often covered by trial funding, the CED proposal may have limited practical effect, although it will provide some assistance for private-sector-funded RCTs.

PUBLIC AND PRIVATE INVESTMENTS IN PERSONALIZED ONCOLOGY: A HEALTH ECONOMIST’S PERSPECTIVE ON USING VALUE OF INFORMATION METHODS TO PRIORITIZE CLINICAL TRIALS

Oncology has benefited from advances in “stratified” therapeutics and diagnostic tests. Insights into tumor biology and PGx have led to clinical strategies such as molecularly targeted therapy (e.g., imatinib for chronic myelogenous leukemia), antiangiogenic agents used in combination with traditional chemotherapy (e.g., bevacizumab), PGx-based tests to tailor dosing of traditional chemotherapeutic agents (e.g., irinotecan for colorectal cancer), and genetic tests in combination with therapies (e.g., trastuzumab for Her2/neu-positive breast cancer). These developments have dramatically reduced mortality for some forms of cancer48 and appear to have potential to produce still more successes.

At least 3 defining features of the market for these therapies are important to our discussion of the role of economics in personalizing medicine. First, the public sector has been an active funder of research and development (R&D) in this area. The development of many new therapeutic approaches is the result of public investments in basic science and, more recently, translational R&D. Biotechnology and pharmaceutical firms have also increasingly invested in the development of novel oncology products, sometimes in public-private collaborations. Second, there is substantial financial risk in developing novel therapies for cancer, with phase III clinical trial failure rates of 74%, compared with 5% for traditional products.49 The average cost of developing a drug approved by the FDA is more than $1 billion, which appears to be driven in large part by monoclonal antibodies and other biologic-based methods.50 Third, the public is a major payer for the treatment of cancer, and costs have increased significantly due in part to the introduction and use of innovative compounds.51 As a consequence, the high list prices of targeted cancer treatments have fueled media attention and public concern.52

In the face of growing federal budgetary deficits, cancer agencies have called for increasing the evidence base used to identify and prioritize research investments in “personalized” medicine diagnostics and therapeutics.53 Any method employed should explicitly account for the market features noted above: scientific uncertainty, economic risk, and the perspective of public payers.

A handful of economic methods for prospective investment decision making under uncertainty may have the potential to provide a systematic foundation for public decision making on future R&D. The method most explicitly linked to the public’s perspective as the funder of novel therapeutic techniques and payer for the end products is value-of-information (VOI) analysis, which is grounded in Bayesian methods for decision making under uncertainty.54,55 VOI allows an analyst to identify the key sources of clinical uncertainty affecting expected cost-effectiveness. It employs the results of a cost-effectiveness analysis of alternative and standard treatments. The expected VOI is calculated as the expected value of the improved outcomes that would result if additional research changed the recommended strategy. The VOI approach can also incorporate effects on costs to measure the benefits of research net of the costs of treatment and/or research.
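A common VOI quantity is the expected value of perfect information (EVPI). The Monte Carlo sketch below computes a per-patient EVPI from a deliberately simple probabilistic model of a hypothetical PGx testing decision; the parameter distributions and willingness-to-pay value are illustrative assumptions, not estimates from any published analysis.

```python
# Per-patient expected value of perfect information (EVPI) from a probabilistic
# cost-effectiveness model, following the standard Bayesian VOI formulation:
#   EVPI = E_theta[ max_a NB(a, theta) ] - max_a E_theta[ NB(a, theta) ]
# All parameter distributions are hypothetical placeholders.

import numpy as np

rng = np.random.default_rng(0)
n_sim = 100_000
wtp = 50_000  # willingness to pay per QALY, $ (illustrative)

# Uncertain incremental inputs for "PGx-guided dosing" vs. "standard dosing" (hypothetical)
d_qaly = rng.normal(loc=0.005, scale=0.004, size=n_sim)   # incremental QALYs
d_cost = rng.normal(loc=400.0, scale=150.0, size=n_sim)   # incremental cost, $

nb_standard = np.zeros(n_sim)          # comparator net benefit used as the reference (= 0)
nb_pgx = wtp * d_qaly - d_cost         # incremental net monetary benefit of testing

nb = np.column_stack([nb_standard, nb_pgx])
evpi = nb.max(axis=1).mean() - nb.mean(axis=0).max()
print(f"Per-patient EVPI at ${wtp:,}/QALY: ${evpi:,.2f}")
# A positive EVPI means that, under these assumed distributions, eliminating
# uncertainty would be expected to improve net benefit by that amount per patient.
```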

The VOI method has been most widely discussed as useful to prioritize medical R&D in the UK context, where a quasi-governmental body—NICE—employs cost-effectiveness analysis to assess coverage decisions, and there is an active, publicly funded R&D enterprise closely linked to the National Health Service.54,56 Although unlikely to be adopted in the United States outright, some researchers in the United States have suggested that it may become part of a toolkit used by policy makers to value research on heterogeneous effects of57 or preferences over58 existing therapies in subpopulations.

In particular, VOI could be used to assess research on stratified treatment of cancer patients based on genomic or PGx factors using existing therapies. For example, it is increasingly common for candidate genomic markers for response to oncology drugs to be included in oncology phase III trials; such markers may be clinically useful for predicting absolute response in certain patient subpopulations or differential dosing regimens. The adoption of the public’s perspective in this application in the US context may be appropriate: the incorporation of tumor biomarkers and PGx into oncology trials appears to be a National Institutes of Health (NIH) funding priority.51 In addition, the pursuit of clinical research to subset a population that appears responsive based on average treatment effects may not be in the interest of for-profit firms if they are unable to appropriate the value of this new information—for example, through extended patent life, newly packaged products for specific populations, or higher prices.

There are some important analytic caveats. First, VOI calculations presume a probabilistic cost-effectiveness analysis. Second, this approach assumes the availability of estimates of future trial performance costs, treatment costs, improvements to patients’ quality of life, and patient population size. The true cost of conducting trials to generate subgroup analyses for PGx is unknown. Third, if cost-effectiveness is not used to determine treatment recommendations, as is likely in the US context, or research is unlikely to alter the recommended treatment, VOI estimates may not guide funding decisions.
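Continuing the sketch above, the caveats about trial costs and population size can be made concrete by scaling the per-patient EVPI to a population value and comparing it against an assumed trial cost; every figure below is a hypothetical placeholder.

```python
# Population EVPI: scale per-patient EVPI by the discounted number of patients
# expected to face the decision over the technology's useful lifetime, then
# compare against an assumed cost of the trial. All figures are hypothetical.

def population_evpi(evpi_per_patient: float, annual_patients: float,
                    years: int, discount_rate: float = 0.03) -> float:
    discounted_patients = sum(annual_patients / (1 + discount_rate) ** t
                              for t in range(years))
    return evpi_per_patient * discounted_patients

pop_evpi = population_evpi(evpi_per_patient=42.0,      # assumed output of the previous sketch
                           annual_patients=2_000_000,  # hypothetical number of warfarin initiators/yr
                           years=10)
trial_cost = 15_000_000.0                              # hypothetical RCT cost
print(f"Population EVPI: ${pop_evpi:,.0f}; research potentially worthwhile: {pop_evpi > trial_cost}")
# Research is a candidate for funding only when population EVPI exceeds the cost
# of the study, and only if the findings could actually change the adopted strategy.
```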

In sum, the application of VOI may help US-based public decision makers identify opportunities to prioritize oncology research investment using genomic information. The application of these methods to the development of personalized therapies is subject to significant analytical challenges but could potentially help improve the efficiency of public resource use in an area of significant scientific promise but limited budgets. Targeted investment in R&D based on scientific opportunity and clinical and cost implications for patients and payers would be complementary to evidence-based guideline development and health technology assessments of new products, placing emphasis on expected cost-effectiveness from society’s perspective at the earliest stages of innovative activity.

KEY THEMES OF THE SYMPOSIUM: PERSPECTIVES OF THE MODERATOR AND SYMPOSIUM PARTICIPANTS

Each panelist mentioned, directly or indirectly, 5 main challenges to translating the promise of genomics into personalized medicine. The first relates to the limited quantity and quality of available evidence of effectiveness. Without evidence of effectiveness, there can be no cost-effectiveness analysis, no VOI analysis, and only limited regulatory decision making. The limited availability of information from RCTs for PGx tests requires decision makers to determine what types of evidence are relevant and acceptable. When do we have “enough” scientific evidence on personalized medical strategies to use them in clinical practice or pursue them further in clinical trials? In practice, the quality and quantity of the evidence required depend on the type of decision and who is making the decision. In particular, the evidence bar for permitting a test to be introduced (e.g., FDA labeling) is lower than that set for evidence-based clinical guidelines, as revealed by the contrasting opinions expressed about the use of PGx testing in guiding initial dosing with warfarin. Health care payers may require good-quality evidence that morbidity or mortality is improved before they provide reimbursement. As mentioned above, innovative reimbursement agreements such as CED may offer the opportunity to provide patients access to innovative technologies such as warfarin PGx testing while improving the evidence base and decreasing decision uncertainty.8 However, the CMS CED proposal for warfarin-associated PGx testing could impose stringent requirements on study design and, because PGx testing is generally covered in RCTs, would not necessarily expand patient access to such testing.

Second, the complexity of the information provided by currently available tests is challenging. Newly available genetic tests, such as those used to predict response to or guide dosing of an available medicine, raise the same concerns about sensitivity and specificity as older diagnostics. Challenges unique to molecular testing include the fact that an outcome is likely to be influenced by multiple genes and that each gene can influence multiple outcomes. To complicate matters further, the influence of a genetic variant on a given outcome can vary across individuals, modified by interactions with other genes and with environmental exposures, including diet, drug therapies, and disease states. Thus, each piece of genomic information can provide information about multiple diseases or drugs, but the importance and accuracy of that information may depend on multiple factors, which are often unknown at the time the genetic test is used in clinical practice.59 For genomic medicine to truly transform medical care, it will require linking many genes to many diseases. With the large number of potential factorial combinations among genetic variants and the lack of evidence about the effects of most of them, establishing the clinical validity of newer genetic tests and prospectively planning clinical trials that apply PGx to identifying patient subtypes are challenging. The true costs of conducting such trials may be prohibitive, and their relative value to clinical practice is also unknown at this time.
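A back-of-envelope count illustrates why these factorial combinations quickly outrun any feasible evidence base; the sketch assumes biallelic SNPs with 3 possible genotypes each and ignores rarer variant classes.

```python
# Back-of-envelope count of genotype combinations: a biallelic SNP has 3
# genotypes (e.g., AA, Aa, aa), so k independent SNPs define 3**k strata.
# Illustrates why subgroup evidence cannot realistically cover all combinations.

for k in (2, 5, 10, 20):
    print(f"{k:>2} SNPs -> {3**k:,} possible genotype combinations")
# 2 SNPs -> 9; 5 -> 243; 10 -> 59,049; 20 -> 3,486,784,401
```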

Third, the rapid commercialization of new genetic tests with limited oversight or regulation of genetic testing in the United States is challenging for physicians, patients, and payers.4 Minimal regulation of laboratory testing practices, none of it specific to genetic tests, is provided by CMS through the Clinical Laboratory Improvement Amendments of 1988 (CLIA). The FDA regulates as devices those genetic tests that are sold to multiple laboratories in different states and also regulates the proposed use of genetic tests in guiding the prescribing of medications. No federal regulation of DTC laboratory-developed genetic tests exists, even though health care providers may be asked to interpret the results and to order follow-on diagnostic tests. Because physicians remain responsible for interpreting the results of tests and determining an appropriate course of medical care, the current state of oversight complicates the traditional role that physicians have in determining which test or combination of tests may be clinically most useful. DTC marketing of genomic tests consisting of sets of SNPs for which little is known about clinical validity and essentially nothing about clinical utility strains the ability of physicians to incorporate new information into practice. It also underscores the need for enhanced provider-patient communication on the place of genetics in medical treatment and prevention. The number of potential tests that could be developed and marketed directly to patients has created an increasing sense of urgency about creating order out of the complexity of genomic medicine.

Fourth, an important challenge to the translation of advances in molecular medicine to clinical practice is the apprehension with which many providers, patients, and payers view genomics. Most physicians have little understanding of genomics and the benefits and risks of specific genetic tests. Many have concerns that their lack of knowledge could lead to mistakes and even legal liability.59 This situation may be exacerbated by inadequate and poorly understood reimbursement for providing genetic testing and counseling services as well as by incentives to practice “population-based” medicine under current quality improvement systems.60 The potential for insurance discrimination had been an urgent concern for patients and medical providers prior to the recent passage of the Genetic Information Nondiscrimination Act (GINA).61 Commentators have raised concerns about how well GINA will protect against discriminatory practices moving forward.60 ELSI issues go beyond insurance discrimination to include other potential adverse effects of labeling patients, which are not considered in cost-effectiveness analyses.

Fifth, implicit in all discussions of the present and future of personalized medicine is the fundamental concern about the value of new information and products for patients, physicians, and the medical care delivery system. Discussions of genetic tests and newer personalized diagnostics and therapeutics suggest that clinical utility is a key measure of value. Participants agreed that the assembly of new data on outcomes of genomic testing strategies, including impacts on morbidity, mortality, quality of life, costs of clinical trials, and diagnosis and treatment costs, is needed. Potential downstream implications for clinical practice and insurance coverage and reimbursement need to be considered. In particular, pharmacogenomic profiles may aid in the treatment of comorbid disorders and future illnesses.

Meeting these challenges will require concerted efforts across multiple stakeholders and will involve regulatory policy, research investment, incentive structures, and education if the benefits of personalized medicine are to be realized. Tools that have been developed in health economics over the past decades, such as decision modeling, preference assessment, cost-effectiveness analysis, and VOI analysis, provide a framework for weighing the benefits, risks, and costs of genetic testing. Their use can improve understanding of the uncertainty and complexity inherent in PGx, the value of additional research that may be funded by multiple stakeholders, and the value to patients of genomic information.

An inherent limitation of conventional health economics approaches is the narrow focus on clinical endpoints and costs to the health care system. Such an approach can yield information on which strategy is most likely to be cost-effective on average. However, it ignores the promise of personalized medicine to provide tailored therapies that take into account individual differences in risk and values.58 The balance of expected risks and benefits for each option and hence the optimal strategy is likely to differ among individuals because of preference heterogeneity. A full economic evaluation of personalized medicine incorporating genotypic information requires the modeling of individual preferences, including preferences about different treatment options.62

From the societal perspective, the utility of genetic tests can include the value of information itself, or the autonomous right to “know” one’s own genetic makeup, predisposition to illness, and information’s potential utility in reproductive decision making.63 Such endpoints can also be negative, to the extent that they result in labeling and stigmatization. Likewise, the genetic and pharmacogenomic information available through personalized medicine—including that of tumor biomarkers and genomic profiles—may have value in and of itself. To assess the perceived value of genetic information, one must collect data on patient, public, and provider preferences. Existing methods for calculating quality-adjusted life years, which focus on health outcomes, are inadequate to the task, and other methods such as choice experiments have been proposed to elicit preferences on the value of genomic information and other attributes of testing as well as health outcomes.63,64 However, it is unclear whether insurers will reimburse for genomic tests and profiles based on personal utility apart from differences in clinical outcomes.65,66 Reimbursement decisions are likely to continue to drive current and future investments in personalized medicine.

Finally, current challenges and future research agendas highlighted in the symposium suggest a critical role for the assembly of new types of health services research teams—those that combine expertise in genetics and PGx with clinical practice, health technology assessment, and the organization and financing of medical care and medical innovation.

CONCLUSIONS

This symposium aimed to identify opportunities for translating scientific advances into personalized medical treatment and the challenges in the practical implementation of this promise, featuring experts in the delivery of clinical medicine, guideline development, regulatory policy, health technology assessment, and the organization and financing of medical innovation. Uniting all of the panelists’ presentations was the systematic application of methods for decision making under uncertainty. This empirical framework can help inform public decision making across the continuum of drug and diagnostic discovery, testing, FDA approval, and clinical use. Future research and ongoing discussions are required to examine what evidence is needed and how economic methods may be employed to aid in the translation of personalized medicine into clinical practice.

Acknowledgments

The symposium was supported by conference grant R13 HS017305 from the Agency for Healthcare Research and Quality. We acknowledge helpful comments on this manuscript from Cindy Bryce, William D. Dotson, and 4 anonymous reviewers.

Footnotes

Disclaimer: The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention or the Food and Drug Administration.

This material was presented in a symposium at the annual meeting of the Society for Medical Decision Making in Pittsburgh, Pennsylvania, on 23 October 2007.

AUTHOR CONTRIBUTIONS

Cindy Bryce, Conti, and Grosse are responsible for the conception of the original symposium panel discussion and its composition. Conti, Veenstra, Lesko, and Armstrong are responsible for the conception and scholarship underlying their original presentations for the symposium. Conti, Veenstra, and Grosse are responsible for editing the text of the presentations into manuscript form reflecting current published scholarship and public policy. Grosse is responsible for managing all correspondence with contributing authors and editors and coordinating revision.

Contributor Information

Rena Conti, Department of Pediatrics and Center for Health and the Social Sciences, University of Chicago, Chicago, Illinois.

David L. Veenstra, Department of Pharmacy, University of Washington, Seattle.

Katrina Armstrong, Department of Medicine, School of Medicine, University of Pennsylvania, Philadelphia.

Lawrence J. Lesko, Office of Clinical Pharmacology, Center for Drug Evaluation and Research, Food and Drug Administration, Silver Spring, Maryland.

Scott D. Grosse, National Center on Birth Defects and Developmental Disabilities, Centers for Disease Control and Prevention, Atlanta, Georgia.

References

  • 1.Hoffman EP. Skipping toward personalized molecular medicine. N Engl J Med. 2007;357:2719–22. doi: 10.1056/NEJMe0707795. [DOI] [PubMed] [Google Scholar]
  • 2.Trusheim MR, Berndt ER, Douglas FL. Stratified medicine: strategic and economic implications of combining drugs and clinical biomarkers. Nat Rev Drug Discov. 2007;6:287–93. doi: 10.1038/nrd2251. [DOI] [PubMed] [Google Scholar]
  • 3.Burke W, Khoury MJ, Stewart A, Zimmern RL Bellagio Group. The path from genome-based research to population health: development of an international public health genomics network. Genet Med. 2006;8:451–8. doi: 10.1097/01.gim.0000228213.72256.8c. [DOI] [PubMed] [Google Scholar]
  • 4.Williams I, Bryan S, McIver S. How should cost-effectiveness analysis be used in health technology coverage decisions? Evidence from the National Institute for Health and Clinical Excellence approach. J Health Serv Res Policy. 2007;12:73–9. doi: 10.1258/135581907780279521. [DOI] [PubMed] [Google Scholar]
  • 5.Nelson HD, Huffman LH, Fu R, Harris EL. Genetic risk assessment and BRCA mutation testing for breast and ovarian cancer susceptibility: systematic evidence review for the U.S. Preventive Services Task Force. Ann Intern Med. 2005;143:362–79. doi: 10.7326/0003-4819-143-5-200509060-00012. [DOI] [PubMed] [Google Scholar]
  • 6.Janssens AC, Gwinn M, Bradley LA, Oostra BA, van Duijn CM, Khoury MJ. A critical appraisal of the scientific basis of commercial genomic profiles used to assess health risks and personalize health interventions. Am J Hum Genet. 2008;82:593–9. doi: 10.1016/j.ajhg.2007.12.020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Secretary’s Advisory Committee on Genetics, Health, and Society (SACGHS) US System of Oversight of Genetic Testing: A Response to the Charge of the Secretary of Health and Human Services. 2008 Apr; doi: 10.2217/17410541.5.5.521. Available from: http://www4.od.nih.gov/oba/sacghs/reports/SACGHS_oversight_report.pdf. [DOI] [PMC free article] [PubMed]
  • 8.Tunis SR, Pearson SD. Coverage options for promising technologies: Medicare’s ‘coverage with evidence development’. Health Aff (Millwood) 2006;25:1218–30. doi: 10.1377/hlthaff.25.5.1218. [DOI] [PubMed] [Google Scholar]
  • 9.Khoury MJ, Gwinn M, Yoon PW, Dowling N, Moore CA, Bradley L. The continuum of translation research in genomic medicine: how can we accelerate the appropriate integration of human genome discoveries into health care and disease prevention? Genet Med. 2007;9:665–74. doi: 10.1097/GIM.0b013e31815699d0. [DOI] [PubMed] [Google Scholar]
  • 10.Burke W, Pinsky LE, Press NA. Categorizing genetic tests to identify their ethical, legal, and social implications. Am J Med Genet. 2001;106:233–40. doi: 10.1002/ajmg.10011. [DOI] [PubMed] [Google Scholar]
  • 11.Evaluation of Genomic Applications in Practice and Prevention (EGAPP) Implementation and Evaluation of a Model Approach. Available from: http://www.egappreviews.org/about.htm.
  • 12.Teutsch SM, Bradley LA, Palomaki GE, et al. The Evaluation of Genomic Applications in Practice and Prevention (EGAPP) initiative: methods of the EGAPP Working Group. Genet Med. 2009;11:3–14. doi: 10.1097/GIM.0b013e318184137c. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Haddow JE, Palomaki GE. ACCE: a model process for evaluating data on emerging genetic tests. In: Khoury MJ, Little J, Burke W, editors. Human Genome Epidemiology: A Scientific Foundation for Using Genetic Information to Improve Health and Prevent Disease. New York: Oxford University Press; 2004. pp. 217–33. [Google Scholar]
  • 14.Grosse SD, Khoury MJ. What is the clinical utility of genetic testing? Genet Med. 2006;8:448–50. doi: 10.1097/01.gim.0000227935.26763.c6. [DOI] [PubMed] [Google Scholar]
  • 15.Evaluation of Genomic Applications in Practice and Prevention (EGAPP) Working Group. Recommendations from the EGAPP Working Group: testing for cytochrome P450 polymorphisms in adults with nonpsychotic depression treated with selective serotonin reuptake inhibitors. Genet Med. 2007;9:819–25. doi: 10.1097/gim.0b013e31815bf9a3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Evaluation of Genomic Applications in Practice and Prevention (EGAPP) Working Group. Recommendations from the EGAPP Working Group: can UGT1A1 genotyping reduce morbidity and mortality in patients with metastatic colorectal cancer treated with irinotecan? Genet Med. 2009;11:15–20. doi: 10.1097/GIM.0b013e31818efd9d. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Evaluation of Genomic Applications in Practice and Prevention (EGAPP) Working Group. Recommendations from the EGAPP Working Group: genetic testing strategies in newly diagnosed individuals with colorectal cancer aimed at reducing morbidity and mortality from Lynch syndrome in relatives. Genet Med. 2009;11:35–41. doi: 10.1097/GIM.0b013e31818fa2ff. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Evaluation of Genomic Applications in Practice and Prevention (EGAPP) Working Group. Recommendations from the EGAPP Working Group: can tumor gene expression profiling improve outcomes in patients with breast cancer? Genet Med. 2009;11:66–73. doi: 10.1097/GIM.0b013e3181928f56. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Food and Drug Administration. Guidance for Industry: Providing Clinical Evidence of Effectiveness for Human Drugs and Biological Products. Rockville, MD: US Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research (CDER), Center for Biologics Evaluation and Research (CBER); 1998. Available from: www.fda.gov. [Google Scholar]
  • 20.Budnitz DS, Pollock DA, Weidenbach KN, Mendelsohn AB, Schroeder TJ, Annest JL. National surveillance of emergency department visits for outpatient adverse drug events. JAMA. 2006;296:1858–66. doi: 10.1001/jama.296.15.1858. [DOI] [PubMed] [Google Scholar]
21. Diener HC; Executive Steering Committee of the SPORTIF III and V Investigators. Stroke prevention using the oral direct thrombin inhibitor ximelagatran in patients with non-valvular atrial fibrillation: pooled analysis from the SPORTIF III and V studies. Cerebrovasc Dis. 2006;21:279–93. doi: 10.1159/000091265.
22. Schwarz UI, Ritchie MD, Bradford Y, et al. Genetic determinants of response to warfarin during initial anticoagulation. N Engl J Med. 2008;358:999–1008. doi: 10.1056/NEJMoa0708078.
23. Ansell J, Hirsh J, Hylek E, et al. Pharmacology and management of the vitamin K antagonists: American College of Chest Physicians Evidence-Based Clinical Practice Guidelines (8th edition). Chest. 2008;133:160S–98S. doi: 10.1378/chest.08-0670.
24. Lesko LJ. The critical path of warfarin dosing: finding an optimal dosing strategy using pharmacogenetics. Clin Pharmacol Ther. 2008;84:301–3. doi: 10.1038/clpt.2008.133.
25. Wadelius M, Pirmohamed M. Pharmacogenetics of warfarin: current status and future challenges. Pharmacogenomics J. 2007;7:99–111. doi: 10.1038/sj.tpj.6500417.
26. Gage BF, Eby C, Johnson JA, et al. Use of pharmacogenetic and clinical factors to predict the therapeutic dose of warfarin. Clin Pharmacol Ther. 2008;84:326–31. doi: 10.1038/clpt.2008.10.
27. Limdi NA, McGwin G, Goldstein JA, et al. Influence of CYP2C9 and VKORC1 1173C/T genotype on the risk of hemorrhagic complications in African-American and European-American patients on warfarin. Clin Pharmacol Ther. 2008;83:312–21. doi: 10.1038/sj.clpt.6100290.
28. Flockhart DA, O’Kane D, Williams MS, et al. Pharmacogenetic testing of CYP2C9 and VKORC1 alleles for warfarin. Genet Med. 2008;10:139–50. doi: 10.1097/GIM.0b013e318163c35f.
29. Garcia DA. Warfarin and pharmacogenomic testing: the case for restraint. Clin Pharmacol Ther. 2008;84:303–5. doi: 10.1038/clpt.2008.131.
30. Hynicka LM, Cahoon WD Jr, Bukaveckas BL. Genetic testing for warfarin therapy initiation. Ann Pharmacother. 2008;42:1298–303. doi: 10.1345/aph.1L127.
31. Limdi NA, Veenstra DL. Warfarin pharmacogenetics. Pharmacotherapy. 2008;28:1084–97. doi: 10.1592/phco.28.9.1084.
32. Anderson JL, Horne BD, Stevens SM, et al. Randomized trial of genotype-guided versus standard warfarin dosing in patients initiating oral anticoagulation. Circulation. 2007;116:2563–70. doi: 10.1161/CIRCULATIONAHA.107.737312.
33. Caraco Y, Blotnick S, Muszkat M. CYP2C9 genotype-guided warfarin prescribing enhances the efficacy and safety of anticoagulation: a prospective randomized controlled study. Clin Pharmacol Ther. 2008;83:460–70. doi: 10.1038/sj.clpt.6100316.
34. Klein TE, Altman RB, Eriksson N, et al. Estimation of the warfarin dose with clinical and pharmacogenetic data. N Engl J Med. 2009;360:753–64. doi: 10.1056/NEJMoa0809329.
35. ClinicalTrials.gov. Clarification of Optimal Anticoagulation Through Genetics (COAG). 2009. Report number: NCT00839657. Available from: http://clinicaltrials.gov/NCT00839657.
36. Academy of Managed Care Pharmacy. Section 1.3: Evidence for pharmacogenomic tests and drugs. The AMCP Format for Formulary Submissions, Version 2.1. 2005 Apr. Available from: http://www.fmcpnet.org/index.cfm?pl=A0EE1897.
37. Kelly C. Medco warfarin, tamoxifen genomic testing finding traction among payers. The Pink Sheet. 2008 Dec 2. Available from: http://www.biopharmatoday.com/2008/12/medco-warfarin-tamoxifen-genomic-testing-finding-traction-among-payers.html.
38. Pharmacogenomic Testing for Warfarin Response (CAG-00400N). Available from: https://www.cms.hhs.gov/mcd/indexes.asp.
39. Grosse SD. Assessing cost-effectiveness in healthcare: history of the USD 50,000 per QALY threshold. Expert Rev Pharmacoecon Outcomes Res. 2008;8:165–78. doi: 10.1586/14737167.8.2.165.
40. Steinbrook R. Saying no isn’t NICE: the travails of Britain’s National Institute for Health and Clinical Excellence. N Engl J Med. 2008;359:1977–81. doi: 10.1056/NEJMp0806862.
41. Garrison LP Jr, Austin MJ. Linking pharmacogenetics-based diagnostics and drugs for personalized medicine. Health Aff (Millwood). 2006;25:1281–90. doi: 10.1377/hlthaff.25.5.1281.
42. Veenstra DL. The cost-effectiveness of warfarin pharmacogenomics. J Thromb Haemost. 2007;5:1974–5. doi: 10.1111/j.1538-7836.2007.02699.x.
43. Fargher EA, Tricker K, Newman W, et al. Current use of pharmacogenetic testing: a national survey of thiopurine methyltransferase testing prior to azathioprine prescription. J Clin Pharm Ther. 2007;32:187–95. doi: 10.1111/j.1365-2710.2007.00805.x.
44. Garrison LP Jr, Towse A, Bresnahan BW. Assessing a structured, quantitative health outcomes approach to drug risk-benefit analysis. Health Aff (Millwood). 2007;26:684–95. doi: 10.1377/hlthaff.26.3.684.
45. Hutton J, Trueman P, Henshall C. Coverage with evidence development: an examination of conceptual and policy issues. Int J Technol Assess Health Care. 2007;23:425–32. doi: 10.1017/S0266462307070651.
46. Miller FG, Pearson SD. Coverage with evidence development: ethical issues and policy implications. Med Care. 2008;46:746–51. doi: 10.1097/MLR.0b013e3181789453.
47. Centers for Medicare and Medicaid Services. Proposed Decision Memo for Pharmacogenomic Testing for Warfarin. 2009. Report number: CAG-00400N. Available from: http://www.cms.hhs.gov/mcd.
48. American Cancer Society. Cancer Facts and Figures 2007, 2007 update. Available from: http://www.cancer.org/downloads/STT/CAFF2007PWSecured.pdf.
49. DiMasi JA, Grabowski HG. Economics of new oncology drug development. J Clin Oncol. 2007;25:209–16. doi: 10.1200/JCO.2006.09.0803.
50. Adams CP, Brantner VV. Estimating the cost of new drug development: is it really 802 million dollars? Health Aff (Millwood). 2006;25:420–8. doi: 10.1377/hlthaff.25.2.420.
51. Yabroff KR, Lamont EB, Mariotto A, et al. Cost of care for elderly cancer patients in the United States. J Natl Cancer Inst. 2008;100:630–41. doi: 10.1093/jnci/djn103.
52. Herper M. Cancer’s cost crisis. Forbes. 2004 Jun 8. Available from: www.forbes.com/technology/2004/06/08/cx_mh_0608costs.html.
53. National Cancer Institute. The Nation’s Investment in Cancer Research, Fiscal Year 2007, Fiscal Year 2008 and Fiscal Year 2009. Available from: http://plan.cancer.gov/.
54. Claxton KP, Posnett J. An economic approach to clinical trial design and research priority setting. Health Econ. 1996;5:513–34. doi: 10.1002/(SICI)1099-1050(199611)5:6<513::AID-HEC237>3.0.CO;2-9.
55. Willan AR. Clinical decision making and the expected value of information. Clin Trials. 2007;4:279–85. doi: 10.1177/1740774507079237.
56. Claxton KP, Sculpher MJ. Using value of information analysis to prioritize health research: some lessons from recent UK experience. Pharmacoeconomics. 2006;24:1055–68. doi: 10.2165/00019053-200624110-00003.
57. Walton SM, Schumock GT, Lee KV, Alexander GC, Meltzer D, Stafford RS. Prioritizing future research on off-label prescribing: results of a quantitative evaluation. Pharmacotherapy. 2008;28:1443–52. doi: 10.1592/phco.28.12.1443.
58. Basu A, Meltzer D. Value of information on preference heterogeneity and individualized care. Med Decis Making. 2007;27:112–27. doi: 10.1177/0272989X06297393.
59. Henrikson NB, Burke W, Veenstra DL. Ancillary risk information and pharmacogenetic tests: social and policy implications. Pharmacogenomics J. 2008;8:85–9. doi: 10.1038/sj.tpj.6500457.
60. Brandt R, Ali Z, Sabel A, McHugh T, Gilman P. Cancer genetics evaluation: barriers to and improvements for referral. Genet Test. 2008;12:9–12. doi: 10.1089/gte.2007.0036.
61. Erwin C. Legal update: living with the Genetic Information Nondiscrimination Act. Genet Med. 2008;10:869–73. doi: 10.1097/GIM.0b013e31818ca4e7.
62. Huang ES, Shook M, Jin L, Chin MH, Meltzer DO. The impact of patient preferences on the cost-effectiveness of intensive glucose control in older patients with new-onset diabetes. Diabetes Care. 2006;29:259–64. doi: 10.2337/diacare.29.02.06.dc05-1443.
63. Grosse SD, Wordsworth S, Payne K. Economic methods for valuing the outcomes of genetic testing: beyond cost-effectiveness analysis. Genet Med. 2008;10:648–54. doi: 10.1097/gim.0b013e3181837217.
64. Lee JT, Bridges JFP, Shockney L. Can pharmacoeconomics and outcomes research contribute to the empowerment of women affected by breast cancer? Expert Rev Pharmacoecon Outcomes Res. 2008;8:73–9. doi: 10.1586/14737167.8.1.73.
65. Grosse SD, McBride CM, Evans J, Khoury MJ. Personal utility and genomic information: look before you leap. Genet Med. 2009 Jul 18. doi: 10.1097/GIM.0b013e3181af0a80. Epub ahead of print.
66. Rogowski W, Grosse SD, Khoury MJ. Challenges of translating genetic tests into clinical and public health practice. Nat Rev Genet. 2009;10:489–95. doi: 10.1038/nrg2606.