Abstract
Background
Systematic approaches to stakeholder-informed research prioritization are a central focus of comparative effectiveness research. Genomic testing in cancer is an ideal area to refine such approaches given rapid innovation and potentially significant impacts on patient outcomes.
Objective
To develop and pilot-test a stakeholder-informed approach to prioritizing genomic tests for future study in collaboration with the cancer clinical trials consortium SWOG.
Methods
We conducted a landscape analysis to identify genomic tests in oncology using a systematic search of published and unpublished studies and expert consultation. Clinically valid tests suitable for evaluation in a comparative study were presented to an external stakeholder group. Domains to guide the prioritization process were identified with stakeholder input, and stakeholders ranked tests using multiple voting rounds.
Results
A stakeholder group was created including representatives from patient-advocacy groups, payers, test developers, regulators, policy-makers, and community-based oncologists. We identified nine domains for research prioritization with stakeholder feedback: population impact; current standard of care; strength of association; potential clinical benefits; potential clinical harms; economic impacts; evidence of need; trial feasibility; and market factors. The landscape analysis identified 635 studies; of 9 tests deemed to have sufficient clinical validity, 6 were presented to stakeholders. Two tests in lung cancer (ERCC1 and EGFR) and one test in breast cancer (CEA/CA15-3/CA27.29) were identified as top research priorities.
Conclusions
Use of a diverse stakeholder group to inform research prioritization is feasible in a pragmatic and timely manner. Additional research is needed to optimize search strategies, stakeholder group composition and integration with existing prioritization mechanisms.
Keywords: landscape analysis, comparative effectiveness research, research prioritization, genomics, cancer
Introduction
With rapid technological innovation, competing interests, and a complex healthcare system, prioritizing research for evidence generation in healthcare is not easy. To improve the likelihood that stakeholders’ needs are met in an environment of constrained research resources, a research prioritization process informed by stakeholder perspectives may offer significant value, and is a central theme of comparative effectiveness research (CER).[1-4]
Genomic testing in cancer is an ideal area to refine such approaches given the rapid pace of innovation and potentially significant impacts on patient outcomes. Genomic tests, defined herein as tests that detect genomic variation directly or in downstream molecular markers, are especially important in the realm of oncology due to the role of inherited and acquired genetic variation in tumor development, molecular classification of cancer subtypes and proliferation of a wide array of targeted therapies.
We describe the development and methods of a systematic and stakeholder-driven approach to prioritizing genomic tests in oncology for evaluation in a future comparative study. To our knowledge, this is one of the first undertakings to incorporate a systematic literature review, expert analysis, and external stakeholder input to inform research prioritization in collaboration with a clinical trials consortium.
Methods
CANCERGEN
The Center for Comparative Effectiveness Research in Cancer Genomics (CANCERGEN) is a consortium including the Fred Hutchinson Cancer Research Center (FHCRC); SWOG, the largest cancer clinical trials consortium in the US; the University of Washington (UW); and the Center for Medical Technology and Policy (CMTP), a non-profit organization involved in CER and stakeholder engagement activities. The CANCERGEN structure incorporates an external stakeholder group, and the overall goal of CANCERGEN is to facilitate prioritization and the rapid design and implementation of prospective CER studies of genomic tests.
We developed a process to identify and select candidate genomic tests for presentation to stakeholders, and a process for prioritization based on stakeholder input (Figure 1).
Landscape analysis
We refer to our process to identify genomic assays through evaluation of the literature, together with domain-specific expert consultation, as a ‘landscape analysis’, rather than as a ‘horizon scan’.[5-7] Horizon scanning efforts generally focus on interventions at the clinical horizon, and may not include expert review; our landscape analysis includes interventions still in the research phase, and therefore includes a larger range of technologies, coupled with expert consultation.
The unit of our search was studies of genomic and protein biomarkers in oncology that could be used for prediction (patient response to a particular therapy) and/or prognostication (patient disease prognosis). To ensure maximum population impact and manage the scope of the analysis, we limited our initial search to the five most prevalent cancers: lung, breast, colorectal, bladder and prostate. However, tests outside of these specific areas were included if identified by experts or collaborators.
The peer-reviewed literature search, conducted in Medline, was limited to the last five years (2004-2009); assays of high importance described in peer-reviewed articles before this window should already be in development, or have further evidence published within our 5-year time frame. We limited the search to “core” clinical journals (the Abridged Index Medicus subset, “jsubsetaim”).
We searched the grey literature,[8] including oncology conference proceedings from the prior 2 years, to include studies that may not yet have reached the peer-reviewed literature. We also reviewed evidence reports and topics of investigation from organizations such as the Early Detection Research Network (EDRN) and the Evaluation of Genomic Applications in Practice and Prevention (EGAPP) initiative.[7]
CANCERGEN investigators identified one independent expert at the FHCRC and UW in each of the five cancer areas, to whom a list of studies was sent after applying the 2nd-round exclusion criteria (described below). Experts were asked to rate the potential healthcare impact of the genomic technologies and to flag any potential tests our landscape process had failed to identify. We also elicited input from SWOG leadership, including disease-specific committee members, on tests of potential interest, as well as external stakeholder input for identifying tests.
Inclusion/Exclusion criteria
Exclusion criteria were applied in four rounds. First, studies were excluded if they were non-quantitative reviews, did not include predictive or prognostic endpoints, or were not associated with any genomic test. Second, studies were excluded if the study size was too small (n<50), the predictive/prognostic effect was uncertain, or the findings had not been validated (i.e., not replicated in an independent sample). Third-round exclusion criteria increased the stringency for acceptance: larger trial size (n>90), clinical impact on the affected population, predictive rather than solely prognostic tests, and exclusion of interventions with large RCTs already in progress. Fourth-round inclusion/exclusion criteria for final selection included strength of biomarker validity, potential for healthcare system impact, availability of patients for trials, SWOG investigator expertise and interest in the subject area, relevance to current clinical development programs, availability of trial funding, and evidentiary gaps. These criteria were developed iteratively based upon the studies identified, rather than being pre-specified.
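As a minimal sketch, the staged filtering above can be expressed as a pipeline of predicates. The thresholds (n<50, n>90) come from the text; the `Study` fields and round definitions are simplified, hypothetical stand-ins for the actual screening forms:

```python
from dataclasses import dataclass

@dataclass
class Study:
    """Hypothetical record for one study found in the landscape analysis."""
    is_quantitative: bool
    has_pred_or_prog_endpoint: bool
    has_genomic_test: bool
    n: int                          # study sample size
    effect_certain: bool
    independently_replicated: bool
    large_rct_in_progress: bool
    solely_prognostic: bool

def passes_round1(s: Study) -> bool:
    # Round 1: drop non-quantitative reviews, studies without
    # predictive/prognostic endpoints, and those with no genomic test.
    return s.is_quantitative and s.has_pred_or_prog_endpoint and s.has_genomic_test

def passes_round2(s: Study) -> bool:
    # Round 2: drop small (n < 50), uncertain, or unreplicated studies.
    return s.n >= 50 and s.effect_certain and s.independently_replicated

def passes_round3(s: Study) -> bool:
    # Round 3: stricter size cut (n > 90), predictive rather than solely
    # prognostic, and no large RCT already underway.
    return s.n > 90 and not s.solely_prognostic and not s.large_rct_in_progress

def funnel(studies):
    """Apply the rounds sequentially; return survivors and per-stage counts."""
    counts = [len(studies)]
    for rule in (passes_round1, passes_round2, passes_round3):
        studies = [s for s in studies if rule(s)]
        counts.append(len(studies))
    return studies, counts
```

The 4th-round criteria (investigator interest, funding, evidentiary gaps) are judgment calls and are deliberately left out of the sketch, since they were applied in joint consultation rather than mechanically.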
Stakeholder Selection Criteria
An external stakeholder advisory group (ESAG) was formed to engage stakeholders in the evidence-generation and feedback process. Potential categories for stakeholder participation (patient/consumer, payer, clinician, policy-maker/regulator, industry) were established, requiring that nominees be senior-level members within their affiliated organizations and possess a working knowledge of genetics, personalized medicine, and/or oncology. Practical experience in the evaluation of technologies, including genomic tests, was also desirable.
Stakeholder education and prioritization framework
We developed a prioritization framework modeled on the approach used by EGAPP, an evidence-based recommendation group for genomic tests, and refined it with the input of the external stakeholder group.[9-11] One-page Test-Target-Profiles (TTPs) were created summarizing the nine prioritization criteria, alongside more detailed Topic Briefs of 3-5 pages. The TTPs and Topic Briefs were sent to stakeholders approximately two weeks ahead of an in-person meeting.
Prioritization process
Before the in-person meeting, stakeholders were asked to rank the tests in order of importance. At the meeting, each test was presented by the investigator who led the preparation of the respective TTP and Topic Brief. Stakeholders could raise tests of particular interest to them that had not been identified through the landscape analysis and/or expert input. After all genomic applications were discussed, stakeholders were asked to rank the tests again in terms of research priority. A written meeting summary was shared with the entire ESAG, and a follow-up teleconference was held two weeks post-meeting with those members who were unable to attend in person. A third round of voting on prioritization was conducted online immediately following the teleconference. A modified-Delphi procedure[12] with an electronic audience polling system was used to obtain stakeholder responses throughout the prioritization process.
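The text does not specify how the individual rankings were tallied in each round. A mean-rank aggregation is one hypothetical possibility (not the actual polling algorithm) that illustrates how stakeholders' ballots could be combined into a single priority order:

```python
from statistics import mean

def aggregate_ranks(ballots):
    """Combine per-stakeholder rankings into one priority order.

    ballots: list of dicts mapping test name -> rank (1 = highest priority).
    Returns test names sorted by mean rank, lowest (most preferred) first.
    This is an illustrative stand-in for whatever tally the audience
    polling system actually used.
    """
    tests = ballots[0].keys()
    return sorted(tests, key=lambda t: mean(b[t] for b in ballots))
```

In a modified-Delphi process this aggregation would be recomputed after each round, with the intermediate results fed back to participants before the next vote.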
Results
Landscape analysis
The Medline search identified 199 studies; 436 studies were identified through the grey literature. These 635 studies were narrowed to 188 based on the 1st-round exclusion criteria, and to 49 by applying the 2nd-round criteria. The 3rd-round criteria narrowed the field to 9 unique tests. After joint consultation with SWOG leadership and application of the 4th-round inclusion/exclusion criteria, 6 tests remained for presentation to stakeholders (Figure 2).
Stakeholder composition and prioritization process
Thirty-one potential members were identified, of whom 13 were recruited: 2 of 4 from patients and consumers, 3 of 6 from health care providers, 2 of 6 from industry, 3 of 6 from purchasers and producers, and 3 of 9 from policy makers and regulators. The domains of the prioritization framework (Figure 3) included population impact; current standard of care; strength of association (analytical and clinical validity); potential clinical benefits; potential clinical harms; economic impacts; evidence of need; clinical trial feasibility; and market factors (reimbursement status). The 6 tests selected for presentation to stakeholders (for which TTPs and Topic Briefs were prepared) are shown in Table 1. Eight of 13 ESAG members attended the in-person meeting and voted; in the follow-up teleconference all 13 members of the external stakeholder group submitted their votes.
Table 1.
CA indicates cancer antigen; CEA, carcinoembryonic antigen; FISH, fluorescence in situ hybridization; GEP, gene expression profile; NSCLC, non-small-cell lung cancer.
Genomic tests
The three top-ranked tests were: 1) EGFR mutation testing for erlotinib maintenance therapy after 1st-line chemotherapy in NSCLC, 2) ERCC1 expression testing for platinum-based adjuvant therapy in resected NSCLC, and 3) CEA and CA15-3/CA27.29 tumor markers for detection of breast cancer recurrence after primary therapy (tests 1-3 in Table 1). Prospective studies and designs in these areas are now at different stages of preparation and submission.
Discussion
Implications
The processes developed here leveraged wide-ranging expertise coupled with stakeholder involvement in directing a research prioritization effort in CER. Our experience indicates the following aspects are valuable in stakeholder-informed CER prioritization: 1) systematic searches with explicit inclusion/exclusion criteria and expert input; 2) a diverse stakeholder group with varied backgrounds, opinions, and perspectives; 3) a formal prioritization framework with explicit domains; 4) concise stakeholder educational materials; 5) a ranking methodology involving multiple rounds of feedback and re-evaluation; and 6) early involvement of external stakeholders and leaders within the research organization.
Our findings also suggest that relying on expert consultation to identify technologies in the landscape analysis may be more efficient, as interaction with SWOG experts alone would have identified 4 of the 6 candidate tests presented to stakeholders. However, a systematic approach may carry greater validity with stakeholders. Additional research is needed to identify an optimal combination of systematic and expert-informed landscape analysis.
The implications of our study for cancer genomics are as follows: only a small number of tests (9 in the 3rd round and 6 in the 4th round) passed our inclusion/exclusion criteria, suggesting that only a handful of genomic tests not already under evaluation are candidates for a large-scale prospective clinical study. Notably, stakeholder involvement was influential in highlighting the priority of breast cancer tumor markers, which have been used in practice for many years but lack high-quality evidence supporting their benefit.
Following these efforts, the CANCERGEN team has designed and submitted a prospective cohort study for EGFR testing in lung cancer, and study design is underway for breast cancer tumor markers. Furthermore, we are conducting a feasibility analysis of ERCC1 testing in lung cancer, with preliminary studies on patient-reported outcomes and preferences underway.
Limitations
There are several limitations of our study worth noting. Our landscape analysis was built upon studies that were published and accessible to us; since many studies are never published, our results may incorporate a publication or reporting bias. CANCERGEN investigators, rather than stakeholders, were responsible for developing and applying the inclusion/exclusion criteria. However, given the large range of studies analyzed and the iterative nature of the criteria, it would not have been pragmatic or timely to engage stakeholders in a discussion of all studies.
We confined our literature search to core clinical journals and the five most prevalent cancers; a search of the entire literature might have revealed promising tests, but this was too resource and time-intensive to carry out. A search of Medline without restriction to core clinical journals yielded over 4000 studies. We sought to mitigate the possibility of missing promising tests by the use of expert input.
We adopted an approach of bringing together diverse stakeholders, consistent with CER principles; these stakeholders inevitably bring their own interests and perspectives. By supporting an open discussion in a neutral forum we sought to draw out these opinions, allowing a constructive exchange of ideas and concerns in which specific interests and perspectives could be examined. Although the same stakeholder group was polled consistently throughout the process, the group was too small to quantitatively assess patterns in the voting. Lastly, the process described above relied on qualitative evaluation of the potential value of research; we are currently working on formal value-of-information (value-of-research) calculations to derive quantitative estimates for the top three candidates.[4, 13-15]
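For reference, value-of-information calculations of the kind cited above typically rest on the expected value of perfect information (EVPI). As a standard formulation (not a result from this study):

```latex
\mathrm{EVPI} \;=\; \mathbb{E}_{\theta}\!\left[\max_{d}\,\mathrm{NB}(d,\theta)\right] \;-\; \max_{d}\,\mathbb{E}_{\theta}\!\left[\mathrm{NB}(d,\theta)\right]
```

where $d$ ranges over the decision options (e.g., adopt or forgo a genomic test), $\theta$ over the uncertain parameters, and $\mathrm{NB}(d,\theta)$ is the net benefit of decision $d$ given $\theta$. EVPI bounds the expected gain from further research, which is why it is useful for prioritization.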
Conclusion
We developed a CER-based approach that provides diverse stakeholders with a process for prioritizing research into genomic tests. SWOG, as one of the leading publicly funded cancer cooperative groups, is actively exploring the use of comparative effectiveness principles and expertise to better design and conduct clinical trials relevant to preventing, diagnosing, and treating cancer. Further research is needed to develop approaches for integration with existing prioritization mechanisms.
Acknowledgements
This work was funded by the Center for Comparative Effectiveness Research in Cancer Genomics (CANCERGEN) through the American Recovery and Reinvestment Act of 2009 by the National Cancer Institute, National Institutes of Health under Agency Award # RC2 CA148570 and by CA32102. The content of this manuscript is solely the responsibility of the authors and does not necessarily reflect the views or policies of the National Cancer Institute, nor does mention of trade names, commercial products, or organizations imply endorsement by the US Government.
References
- 1. Edwards RT, et al. Economic evaluation alongside pragmatic randomised trials: developing a standard operating procedure for clinical trials units. Trials. 2008;9:64. doi: 10.1186/1745-6215-9-64.
- 2. Brass EP. The gap between clinical trials and clinical practice: the use of pragmatic clinical trials to inform regulatory decision making. Clin Pharmacol Ther. 2010;87(3):351–5. doi: 10.1038/clpt.2009.218.
- 3. Macpherson H. Pragmatic clinical trials. Complement Ther Med. 2004;12(2-3):136–40. doi: 10.1016/j.ctim.2004.07.043.
- 4. Myers EW, Sanders GD, Ravi G, et al. Evaluating the Potential Use of Modeling and Value-of-Information Analysis for Future Research Prioritization Within the Evidence-based Practice Center Program. Methods Future Research Needs Report. AHRQ; 2011.
- 5. Brown IT, et al. Medical technology horizon scanning. Australas Phys Eng Sci Med. 2005;28(3):200–3. doi: 10.1007/BF03178717.
- 6. Douw K, Vondeling H, Oortwijn W. Priority setting for horizon scanning of new health technologies in Denmark: views of health care stakeholders and health economists. Health Policy. 2006;76(3):334–45. doi: 10.1016/j.healthpol.2005.06.016.
- 7. Gwinn M, et al. Horizon scanning for new genomic tests. Genet Med. 2011;13(2):161–5. doi: 10.1097/GIM.0b013e3182011661.
- 8. Auger CP. Information Sources in Grey Literature. London: Bowker-Saur; 1998.
- 9. Teutsch SM, et al. The Evaluation of Genomic Applications in Practice and Prevention (EGAPP) Initiative: methods of the EGAPP Working Group. Genet Med. 2009;11(1):3–14. doi: 10.1097/GIM.0b013e318184137c.
- 10. Hoffman A, et al. How best to engage patients, doctors, and other stakeholders in designing comparative effectiveness studies. Health Aff (Millwood). 2010;29(10):1834–41. doi: 10.1377/hlthaff.2010.0675.
- 11. Federal Coordinating Council for Comparative Effectiveness Research. Report to the President and Congress. Washington, DC: Department of Health and Human Services; 2009.
- 12. Dalkey N. An experimental study of group opinion: the Delphi method. Futures. 1969;1(5):408–426.
- 13. Phelps CE, Mushlin AI. Focusing technology assessment using medical decision theory. Med Decis Making. 1988;8(4):279–89. doi: 10.1177/0272989X8800800409.
- 14. Meltzer D. Addressing uncertainty in medical cost-effectiveness analysis: implications of expected utility maximization for methods to perform sensitivity analysis and the use of cost-effectiveness analysis to set priorities for medical research. J Health Econ. 2001;20(1):109–29. doi: 10.1016/s0167-6296(00)00071-0.
- 15. Claxton KP, Sculpher MJ. Using value of information analysis to prioritise health research: some lessons from recent UK experience. Pharmacoeconomics. 2006;24(11):1055–68. doi: 10.2165/00019053-200624110-00003.
- 16. Cappuzzo F, et al. Erlotinib as maintenance treatment in advanced non-small-cell lung cancer: a multicentre, randomised, placebo-controlled phase 3 study. Lancet Oncol. 2010;11(6):521–9. doi: 10.1016/S1470-2045(10)70112-1.
- 17. Cobo M, et al. Customizing cisplatin based on quantitative excision repair cross-complementing 1 mRNA expression: a phase III trial in non-small-cell lung cancer. J Clin Oncol. 2007;25(19):2747–54. doi: 10.1200/JCO.2006.09.7915.
- 18. Kokko R, Holli K, Hakama M. Ca 15-3 in the follow-up of localised breast cancer: a prospective study. Eur J Cancer. 2002;38(9):1189–93. doi: 10.1016/s0959-8049(01)00429-4.
- 19. Chan DW, et al. Use of Truquant BR radioimmunoassay for early detection of breast cancer recurrence in patients with stage II and stage III disease. J Clin Oncol. 1997;15(6):2322–8. doi: 10.1200/JCO.1997.15.6.2322.
- 20. Pirker R, et al. Cetuximab plus chemotherapy in patients with advanced non-small-cell lung cancer (FLEX): an open-label randomised phase III trial. Lancet. 2009;373(9674):1525–31. doi: 10.1016/S0140-6736(09)60569-9.
- 21. Khambata-Ford S, et al. Analysis of potential predictive markers of cetuximab benefit in BMS099, a phase III study of cetuximab and first-line taxane/carboplatin in advanced non-small-cell lung cancer. J Clin Oncol. 2010;28(6):918–27. doi: 10.1200/JCO.2009.25.2890.
- 22. Shaughnessy JD Jr, et al. A validated gene expression model of high-risk multiple myeloma is defined by deregulated expression of genes mapping to chromosome 1. Blood. 2007;109(6):2276–84. doi: 10.1182/blood-2006-07-038430.
- 23. Tol J, Nagtegaal ID, Punt CJ. BRAF mutation in metastatic colorectal cancer. N Engl J Med. 2009;361(1):98–9. doi: 10.1056/NEJMc0904160.
- 24. Roth AD, et al. Prognostic role of KRAS and BRAF in stage II and III resected colon cancer: results of the translational study on the PETACC-3, EORTC 40993, SAKK 60-00 trial. J Clin Oncol. 2010;28(3):466–74. doi: 10.1200/JCO.2009.23.3452.