Abstract
Results of randomized controlled trials (RCTs) provide valuable comparisons of 2 or more interventions to inform health care decision making; however, far more comparisons are needed than available time and resources allow. Moreover, RCTs have limited generalizability. Comparative effectiveness research (CER) using real-world evidence (RWE) can increase generalizability and is important for decision making, but the use of nonrandomized designs makes such studies challenging to evaluate. Several tools are available to assist.
In this study, we comparatively characterize 5 tools used to evaluate RWE studies in the context of health care adoption decision making: (1) Good Research for Comparative Effectiveness (GRACE) Checklist, (2) IMI GetReal RWE Navigator (Navigator), (3) Center for Medical Technology Policy (CMTP) RWE Decoder, (4) CER Collaborative tool, and (5) Real World Evidence Assessments and Needs Guidance (REAdi) tool. We describe each and then compare their features along 8 domains: (1) objective/user/context, (2) development/scope, (3) platform/presentation, (4) user design, (5) study-level internal/external validity of evidence, (6) summarizing body of evidence, (7) assisting in decision making, and (8) sharing results/making improvements.
Our summary suggests that the GRACE Checklist aids stakeholders in evaluating the quality and applicability of individual CER studies. Navigator is a collection of educational resources to guide demonstration of effectiveness, a guidance tool to support development of medicines, and a directory of authoritative resources for RWE. The CMTP RWE Decoder aids in the assessment of the relevance and rigor of RWE. The CER Collaborative tool aids in the assessment of credibility and relevance. The REAdi tool aids in refining the research question, retrieving studies, assessing their quality, and grading the body of evidence, and then prompts the user with questions to facilitate coverage decisions.
All tools specify a framework, were designed with stakeholder input, assess internal validity, are available online, and are easy to use. They vary in their complexity and comprehensiveness. The RWE Decoder, CER Collaborative tool, and REAdi tool synthesize evidence and were specifically designed to aid formulary decision making. This study adds clarity on what the tools provide so that the user can determine which best fits a given purpose.
The limitations of traditional randomized controlled trials (RCTs) in informing health care decisions are well known; hence the recent emphasis on comparative effectiveness research (CER).1-3 CER requires the use of real-world data (RWD), which is rapidly becoming more accessible, with a corresponding increase in demand for its synthesis into real-world evidence (RWE) to guide clinical practice and inform health technology adoption (HTA) decisions.4,5 Over time, managed care decision makers are likely to use more RWE,6 yet evaluating the quality and usefulness of RWE studies can be challenging. Tools are available to assist; however, they have not been compared.
Early efforts to develop a framework to assist health care decision makers in using RWD were led by the International Society for Health Economics and Outcomes Research (ISPOR).7 In 2017, the ISPOR-International Society for Pharmacoepidemiology (ISPE) Special Task Force on RWE in Health Care Decision Making established good procedural practices intended to enhance decision makers' confidence in RWE derived from RWD.8 In 2019, the RWE Transparency Initiative, a partnership among ISPOR, ISPE, the Duke-Margolis Center for Health Policy, and the National Pharmaceutical Council (NPC), released a draft white paper recommending widespread use of registries to improve the transparency of RWE studies.9 The AMCP Format for Formulary Submissions, Version 4.1, has evolved to include RWE as part of clinical evidence.6 The Institute for Clinical and Economic Review (ICER) has described the opportunities and challenges of using RWE for coverage decisions and developed a framework to guide optimal development and use of RWE for this purpose.10,11 Governmental and quasi-governmental agencies have been leading similar efforts.12-18
In the context of CER, HTA decision makers are seldom positioned to conduct RWE studies or to undertake comprehensive systematic reviews to inform timely decision making. Existing barriers have slowed the rate at which RWE is used in decision making.19-21 This situation has led to the development of a number of tools to guide HTA decision making. Yet, in our own HTA work, we find that colleagues are still seeking clarity about how to evaluate RWE. This uncertainty led to the creation of the Real World Evidence Assessments and Needs Guidance (REAdi) tool (by the University of Washington's Comparative Health Outcomes, Policy and Economics [CHOICE] Institute) and, eventually, to a comparison of the features of all 5 tools highlighted in this study.
Program Description
DESCRIPTION OF CURRENT TOOLS
In this report, we describe the 5 best practice tools identified by experts in CER and RWE: the Good Research for Comparative Effectiveness (GRACE) Checklist, the Innovative Medicines Initiative (IMI) GetReal RWE Navigator, the Center for Medical Technology Policy (CMTP) RWE Decoder, the CER Collaborative tool, and the REAdi tool (Table 1).22-35
TABLE 1.
| GRACE Checklist22-25 | IMI GetReal RWE Navigator26,27 | CMTP RWE Decoder28-30 | CER Collaborative31-34 | UW REAdi Tool35 |
---|---|---|---|---|---|
Domain 1: Objective, targeted user, and context of use | |||||
Objective | Review the quality of observational studies to support decision making, with a set of questions to guide the design, conduct, analysis, and reporting of observational CER studies | Increase awareness about the use of RWE and understanding of concepts related to RWE | Aid health care decision makers in using RWE when making coverage decisions and care choices | Aid decision makers to synthesize evidence from multiple studies in a consistent and transparent manner to guide coverage and formulary decisions | Aid decision makers to synthesize evidence from multiple studies in a consistent and transparent manner to guide coverage and formulary decisions |
Target user | Health care decision makers | Wide variety of users including pharmaceutical companies and patients | Health care decision makers | Health care decision makers | Health care decision makers |
Level of tool complexity (see text and URLs to each tool) | Basic | Basic | Novice | Intermediate | Advanced |
Context of use | No | No | Yes | Yes | Yes |
Domain 2: Development and scope | |||||
RWE only | Yes, not intended for RCTs | Yes, not intended for RCTs | No, allows the review of RCT and non-RCT studies | No, allows the review of RCT and non-RCT studies | Yes, not intended for RCTs |
Stakeholders involved in development | Developed through literature review and consultation with experts from professional societies, payer groups, the private sector, and academia; collaborators include ISPE and NPC | Developed through an EU public-private consortium composed of pharmaceutical companies, academia, HTA agencies and regulators, patient organizations, and small and medium-sized enterprises | Developed by the Green Park Collaborative, which works to improve clinical research by cultivating collaborations between drug and device developers, private and public payers, clinicians, researchers, regulators, and the patients that they all serve | Developed through a collaboration among AMCP, ISPOR, and NPC | Developed by 1 academic center with input from health sciences librarians and a local payer; beta-tested at the 2018 AMCP Nexus meeting, Orlando, FL |
Developed using a specified framework; contains a tool | Yes/yes | Yes/yes | Yes/yes | Yes/yes | Yes/yes |
Region of focus | United States and Europe | Europe | United States | United States and Europe | United States |
Domain 3: Platform and presentation | |||||
Platform and presentation | 11-item checklist; pdf available online | Online resources | Spreadsheet online | Series of questionnaires accessed through online portal; downloadable summary report or monograph | R-Shiny app; downloadable summary report or monograph |
Cost | 0 | 0 | 0 | 0 | 0 |
Publicly available | Yes | Yes | Yes | Yes | Yes |
Domain 4: User design comparison | |||||
Provides definitions of terms | Yes | Yes | Yes | Yes | Yes |
Types of interventions incorporated in the tool | Not limited to pharmaceuticals | Focus on pharmaceuticals | Not limited to pharmaceuticals | Not limited to pharmaceuticals | Not limited to pharmaceuticals |
Applied PICOTS to specify questions | Not named as such but includes diseases/conditions, comparators, treatment regimens, and patient characteristics | Includes PICOTS; allows user to select one at a time | Yes, PICOTS considered | Yes, PICOTS considered | Yes, PICOTS considered |
Allows specification of primary and secondary outcomes | Yes | No | Yes | Yes | Yes |
Allows users to design and tailor research questions of interest to guide literature search | No | No | No | No | Yes, PICOTS developed first, followed by a list of detailed areas of interest to help users tailor their questions |
Provides a comprehensive list of study designs | No | Yes | No | No (focused on prospective and retrospective observational studies, NMAs, CEAs) | Yes |
Guides users to appropriate study designs | No | No | No | No | Yes |
Provides customized PubMed search based on defined questions of interest | No | No | No | No | Yes |
Level of user interactivity (low, moderate, good) | Low | Low | Good | Good | Good |
Domain 5: Assess internal and external validity from evidence | |||||
Provides systematic method to assess internal validity (e.g., specific risk of bias/quality evaluation) | Yes, lists various types of biases including selection, misclassification, detection, performance bias, and attrition biases | Yes, listed different checklists for quality assessment | Assesses data integrity, potential for bias, precision; 1 bias tool for RCTs, 1 for non-RCTs | Yes, assesses credibility using checklists corresponding to design, data, analysis, reporting, and interpretation domains | Yes, uses a wide collection of publicly available tools39-46 |
Provides systematic method to assess external validity | No | Yes | Yes, assesses relevance | Yes, assesses relevance | Yes, assesses relevance |
Domain 6: Features to summarize the body of evidence | |||||
Methods for summarizing the body of RWE | No (used for individual studies but not the body of evidence) | Mentions GRADE47 criteria but does not build them in | Uses 3 dimensions (rigor, relevance, effect size); provides 3-dimensional graphic representation of summary | Uses 2 dimensions (magnitude of benefit and certainty of benefit); provides a graphic representation on an illustrative evidence rating matrix | Uses GRADE47 criteria to summarize the body of evidence |
Domain 7: Features to assist decision making | |||||
Provides a structured framework for decision making | Yes | Yes | Yes | Yes | Yes |
Provides recommendations for decision making | No | No | No | No | Yes |
Provides documentation of tool usage | Yes, pdf | No | Yes, Excel | Yes, Excel, Word, and pdf | Yes, print screen and save multiple projects |
Domain 8: Ability to share results with others and collect data to facilitate iterative improvements | |||||
Capability of tool to share results | Yes | Yes | Yes | Yes | Yes |
Capability of tool designer to log, collect, and analyze user inputs to facilitate tool improvements | No | No | No | No | Yes |
CEA = cost-effectiveness analysis; CER = comparative effectiveness research; CMTP = Center for Medical Technology Policy; GRACE = Good Research for Comparative Effectiveness; GRADE = Grading of Recommendations, Assessment, Development and Evaluations; HTA = health technology adoption; IMI = Innovative Medicines Initiative; ISPE = International Society for Pharmacoepidemiology; NMA = network meta-analysis; NPC = National Pharmaceutical Council; PICOTS = population, intervention, comparator, outcomes, timing, setting; RCT = randomized controlled trial; REAdi = Real-World Evidence Assessments and Needs Guidance; RWE = real-world evidence.
1. GRACE Checklist.22-25
The GRACE Checklist was derived from a set of principles that define the elements of good practice for the design, conduct, analysis, and reporting of observational CER studies. The original GRACE principles were developed by Outcome Sciences (now part of IQVIA) with funding from the NPC and have been endorsed by ISPE. The principles served as the foundation for the 11-item GRACE Checklist that aids stakeholders in the evaluation of the quality and applicability of CER studies. The validated checklist was developed from a review of published literature and was tested globally by volunteers.
2. IMI GetReal RWE Navigator.26,27
Launched in 2018, the GetReal Initiative is a public-private partnership of pharmaceutical companies, academia, HTA agencies, and regulators across the European Union. Its goal is to increase the quality of RWE generation in medicines development and regulatory/HTA processes, optimizing and ensuring the adoption and sustainability of tools developed earlier under the GetReal Consortium. The online RWE Navigator includes educational resources to guide demonstration of effectiveness, a guidance tool to support development of medicines, and a directory of and links to authoritative resources for evaluation of RWE, including quality and credibility. Navigator maps the landscape of RWE to help users "navigate" to what they need to prioritize and make decisions.
3. CMTP RWE Decoder.28-30
The online CMTP RWE Decoder was developed through a multistakeholder initiative for the purpose of making available an easy-to-use tool to help decision makers confidently and consistently assess RWE for their decision-making needs. The tool is an Excel spreadsheet that facilitates user assessment of the relevance and rigor of existing evidence from RCTs and RWE. Finalized in 2017, RWE Decoder is composed of 3 modules. In Module 1, the user articulates the question of interest, framing it in the PICOTS (population, intervention, comparators, outcomes, timing, setting) format. Module 2a provides the framework for assessing the relevance of each identified study. Module 2b prompts for assessment of the rigor of each individual study, including the quality of the evidence, potential for bias, precision, and data integrity. Module 2c calls for assessment of the magnitude and direction of effect. In Module 3, an integrated summary of these assessments is presented in graphical format. RWE Decoder is available in the public domain.
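To make the Module 3 concept concrete, the short R sketch below tabulates per-study relevance, rigor, and effect-size ratings and plots them in a single graphic, similar in spirit to the integrated summary the RWE Decoder is described as producing. This is an illustration only, not the Decoder's own implementation; the study names, rating scales, and values are hypothetical.

```r
# Illustrative sketch only (not the RWE Decoder itself): organize per-study
# assessments and display them as one integrated graphic.
library(ggplot2)

# Hypothetical studies with assumed 1-3 rating scales and effect sizes
studies <- data.frame(
  study       = c("Study A", "Study B", "Study C"),
  relevance   = c(3, 2, 1),      # assumed scale: 1 = low, 3 = high
  rigor       = c(2, 3, 1),      # assumed scale: 1 = low, 3 = high
  effect_size = c(0.8, 1.2, 0.5) # assumed relative effect vs. comparator
)

# Relevance vs. rigor, with point size conveying the third dimension (effect size)
ggplot(studies, aes(x = relevance, y = rigor, size = effect_size, label = study)) +
  geom_point(alpha = 0.6) +
  geom_text(vjust = -1.5, size = 3) +
  scale_x_continuous(limits = c(0.5, 3.5), breaks = 1:3) +
  scale_y_continuous(limits = c(0.5, 3.5), breaks = 1:3) +
  labs(x = "Relevance", y = "Rigor", size = "Effect size",
       title = "Illustrative summary of a body of RWE") +
  theme_minimal()
```

Plotting rigor against relevance, with point size for effect size, lets a reviewer see at a glance which studies are both applicable and trustworthy, which is the kind of integrated view Module 3 aims to provide.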
4. CER Collaborative Tool.31-34
Developed by the CER Collaborative, a multistakeholder initiative of NPC, AMCP, and ISPOR, the CER Collaborative tool helps users synthesize RWE and assess the credibility and relevance of the evidence. The goal is to provide greater uniformity and transparency in the evaluation of RWE to inform HTA decisions. The CER Collaborative tool facilitates the critical appraisal of 4 types of studies (prospective and retrospective observational studies, modeling studies, and indirect treatment comparisons) and is composed of 2 parts. Part 1 involves critical appraisal of individual studies along 2 dimensions: (1) relevance, using the PICOTS framework, and (2) credibility, by critiquing study design, data sources, analyses, reporting, interpretation, and conflicts of interest. In Part 2, evidence from multiple studies of varying designs is synthesized and assessed for reliability along 2 dimensions: (1) magnitude of comparative net health benefit and (2) evidence certainty. A joint rating is generated and presented graphically using the Evidence Rating Matrix from ICER, with an export function available for formulary monographs.36,37 The CER Collaborative tool is online and interactive. Training modules and video tutorials are available. A 19-hour CER certificate program accredited by the American Council on Pharmaceutical Education was offered for a fee.38
5. REAdi Tool.35
The REAdi tool was developed by investigators at the University of Washington's Comparative Health Outcomes, Policy and Economics (CHOICE) Institute. Intended to provide guidance on the use of RWE for HTA decision making for drug and diagnostic interventions, the REAdi framework is comprehensive in leading the user through the decision-making process in 5 phases. In Phase 1, the user defines the research question in the PICOTS format. Once defined, the tool automatically synthesizes terms to create a PubMed search strategy; citations of relevant studies are returned for review. In Phase 2, the user reviews and quality-rates the RWE on a per-study basis, having been guided to an embedded quality-rating tool specific to each included study design.39-46 Once completed, in Phase 3, the user is prompted to rate the strength of the body of evidence using GRADEPro (Grading of Recommendations, Assessment, Development and Evaluations).47 In Phase 4, the user assesses the applicability and sufficiency of the evidence for the intended purpose. In Phase 5, questions are posed to facilitate coverage decisions relevant to the immediate payer decision need (Table 2). Constructed using an R-Shiny app,48 the publicly available, online REAdi tool uses drop-down menus, branching logic, and piping, such that questions posed in subsequent tasks are based on previous answers. A graphical summary is presented. The tool also includes functionality to print screens and save literature reviews, allowing the user to work on multiple projects simultaneously.
TABLE 2.
RWE Considerations | Recommendations |
---|---|
REAdi = Real-World Evidence Assessments and Needs Guidance; RWE = real-world evidence.
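Phase 1 of the REAdi workflow described above translates PICOTS inputs into an executable PubMed search. The R Shiny sketch below illustrates one way such a translation could work, combining free-text PICOTS fields into a query string and linking to the NCBI E-utilities esearch endpoint. It is a minimal, hypothetical sketch under assumed field names and query logic; it does not represent the REAdi tool's actual code.

```r
# Illustrative sketch only: turning PICOTS fields into a PubMed search.
library(shiny)

# Combine nonempty PICOTS terms into a quoted, AND-joined query (assumed strategy)
build_pubmed_query <- function(population, intervention, comparator, outcome) {
  terms <- Filter(nzchar, c(population, intervention, comparator, outcome))
  paste(sprintf('"%s"', terms), collapse = " AND ")
}

ui <- fluidPage(
  textInput("pop",  "Population",   "type 2 diabetes"),
  textInput("int",  "Intervention", "insulin aspart"),
  textInput("comp", "Comparator",   "insulin lispro"),
  textInput("out",  "Outcome",      "glycemic control"),
  verbatimTextOutput("query"),
  uiOutput("link")
)

server <- function(input, output, session) {
  query <- reactive(build_pubmed_query(input$pop, input$int, input$comp, input$out))
  output$query <- renderText(query())
  output$link <- renderUI({
    # NCBI E-utilities esearch URL for the assembled query
    url <- paste0(
      "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=pubmed&term=",
      utils::URLencode(query(), reserved = TRUE)
    )
    tags$a(href = url, "Run search via PubMed E-utilities")
  })
}

shinyApp(ui, server)
```

A production tool would likely go further, for example mapping terms to MeSH headings and applying study design filters before returning citations for review.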
Observations
COMPARISON OF THE 5 TOOLS
In October 2018, CHOICE investigators were joined by a collaborator from NPC (Graff) at the AMCP Nexus meeting (Orlando, FL) in leading an invited workshop to compare, contrast, and offer an opportunity to use 3 of these tools, using a case study from the literature.49 In this article, we describe and compare those 3 tools and add comparisons of the GRACE Checklist and the IMI Navigator. In evaluating each tool, we have identified 27 features. For comparison, we informally organized these into 8 domains (Table 1). Domains 1 through 4 provide a description of the features of each tool. Domains 5 through 7 describe the flow of RWE decision making. Domain 8 applies after completion of use and describes the ability of each tool to collect data to facilitate iterative improvements.
Domain 1: Objective, Targeted User, and Context of Use.
The objectives of the GRACE Checklist and IMI Navigator are each unique: the GRACE Checklist provides a simple tool to review study quality and guide aspects of CER studies, while the Navigator aims to increase awareness and understanding of RWE and has multiple uses. The RWE Decoder, CER Collaborative tool, and REAdi tool are each intended to guide formulary decision making. While the target users of the Navigator are a broad group of stakeholders, the target users of the other 4 tools are health care decision makers. The GRACE Checklist and Navigator are designed at a basic level of complexity, whereas the RWE Decoder, CER Collaborative tool, and REAdi tool are increasingly complex, while still being relatively easy to use. Aligned with their objectives, the contexts of use of the RWE Decoder, CER Collaborative tool, and REAdi tool are specified; this is not the case for the GRACE Checklist and Navigator.
Domain 2: Development and Scope.
Three of the five tools are intended for use with RWE only, while the RWE Decoder and CER Collaborative tool allow for consideration of RCTs. Stakeholders were involved in development of all 5 tools, and a framework for decision making is specified for each. The RWE Decoder and REAdi tool have a U.S. focus, whereas the Navigator was developed for use in Europe, and the GRACE Checklist and CER Collaborative tool are intended for international use.
Domain 3: Platform and Presentation.
All tools are available online, in PDF format (GRACE Checklist), webpages (Navigator and the CER Collaborative tool), Microsoft Excel (RWE Decoder), or R Shiny (REAdi). All are publicly available at no cost.
Domain 4: User Design Comparison.
All tools provide definitions of terms. Navigator is focused solely on pharmaceutical interventions; the others are not limited by intervention category. All 5 tools have adopted at least some elements of the PICOTS framework. All but Navigator allow simultaneous specification of primary and secondary outcomes. Neither the GRACE Checklist nor the RWE Decoder provides a list of study designs to which each tool can be applied. The Navigator and REAdi tool accommodate many study designs, while the CER Collaborative tool focuses on cohort studies, cost-effectiveness analyses, and (network) meta-analyses. The REAdi tool explicitly allows users to design and tailor research questions, guides them to appropriate study designs, and assists them in constructing key words and search strategies that result in automatic execution of a PubMed search. As a checklist and a collection of online resources, respectively, the GRACE Checklist and Navigator are not digitally interactive, while the RWE Decoder, CER Collaborative tool, and REAdi tool are.
Domain 5: Assess Internal and External Validity of Evidence.
All tools provide a systematic method to assess internal validity (quality/bias); Navigator provides links to quality rating tools, while the REAdi tool embeds most of these. RWE Decoder provides 1 tool each for assessing the rigor of RCTs and non-RCTs, using the quality of the research questions, potential for bias, precision, and data integrity. The CER Collaborative tool assesses credibility using checklists corresponding to design, data, analysis, reporting, and interpretation. RWE Decoder, IMI Navigator, the CER Collaborative tool, and the REAdi tool assess relevance, that is, external validity, while the GRACE Checklist does not.
Domain 6: Features to Summarize the Body of Evidence.
The tools vary in their methods for summarizing the body of RWE. The GRACE Checklist is applicable only to individual studies. Navigator mentions the GRADE criteria, but GRADE is not a built-in feature. RWE Decoder uses a 3-dimensional graphic to summarize relevance, rigor, and effect size. The CER Collaborative tool has adopted ICER’s Evidence Rating Matrix that illustrates magnitude and certainty in 2 dimensions, which follows the GRADE methods. The REAdi tool has embedded the GRADEPro criteria.
Domain 7: Features to Assist Decision Making.
All 5 tools provide a structured framework for decision making, while the REAdi tool also provides recommendations for decision making. All except the Navigator provide documentation of tool usage, with the GRACE Checklist using PDF format and the RWE Decoder and CER Collaborative tool using Excel; the CER Collaborative tool also uses Word. The REAdi tool allows users to print screens and save multiple projects.
Domain 8: Ability to Share Results with Others and Collect Data to Facilitate Iterative Improvements.
Each tool allows users to save their ratings and share with other users. Only the REAdi tool is designed to log, collect, and analyze user inputs to facilitate iterative improvements in features.
Implications
We reviewed 5 online tools for evaluating and synthesizing RWE in CER; they vary in their objectives, complexity, and contexts for use. The simplest, the GRACE Checklist, provides a straightforward quality rating, while the Navigator provides education, guidance, and resources. The 3 remaining tools (RWE Decoder, the CER Collaborative tool, and the REAdi tool) are similar to each other in that they integrate quality ratings, education, and guidance resources. They are therefore more complex and are intended to evaluate a body of RWE to enhance formulary decision making. The RWE Decoder and CER Collaborative tool are useful for evaluating already identified evidence, while the REAdi tool spans a broader set of decision-making tasks, beginning upstream by explicitly assisting the user in specifying the research questions and finishing downstream by offering recommendations for formulary decision making. With their varying features, breadth of tasks, and levels of complexity, the RWE Decoder, CER Collaborative tool, and REAdi tool synthesize evidence and were specifically designed to aid formulary decision making.
Conclusions
This study characterizes 5 potentially useful tools for HTA decision making using RWE. Because use of RWE remains low, research that explores awareness of, the usefulness of, and barriers to using these tools may result in their improvement and greater uptake and, ultimately, increased use of RWE for decision making.19-21 Future research could also include a more in-depth comparison of these tools in the context of case studies to determine which features are of greatest value to decision makers. Best practices for tool use could then be developed and existing tools integrated. A discussion could then ensue about strategies to sustain the resulting tool. In the meantime, this study adds clarity on what the tools provide so that users can determine which best fits a given purpose.
ACKNOWLEDGMENTS
The authors thank the following individuals who participated in development of the REAdi tool: D. Louden and J. Rich for their assistance in refining REAdi’s PubMed search strategy; D. Barthold and several colleagues from a health plan in the Pacific Northwest for contributing to content development; T. Hopkins, A. Kim, and E. Neuberger for beta testing; and the attendees at the invited workshop delivered at AMCP Nexus 2018 in Orlando, FL, for providing feedback.
REFERENCES
1. American Recovery and Reinvestment Act of 2009, HR 1, 111th Cong, 1st Sess (2009). Accessed December 3, 2020. https://www.govinfo.gov/content/pkg/BILLS-111hr1enr/pdf/BILLS-111hr1enr.pdf
2. Conway PH, Clancy C. Comparative effectiveness research-implications of the Federal Coordinating Council's report. N Engl J Med. 2009;361(4):328-30.
3. Sox HC, Greenfield S. Comparative effectiveness research: a report from the Institute of Medicine. Ann Intern Med. 2009;151(3):203-W44.
4. Corrigan-Curay J, Sacks L, Woodcock J. Real-world evidence and real-world data for evaluating drug safety and effectiveness. JAMA. 2018;320(9):867-68.
5. U.S. Food and Drug Administration. Real-world evidence. Updated November 30, 2020. Accessed December 3, 2020. https://www.fda.gov/science-research/science-and-research-special-topics/real-world-evidence
6. AMCP. The AMCP Format for Formulary Submissions. Version 4.1. December 2019. Accessed December 3, 2020. https://www.amcp.org/sites/default/files/2019-12/AMCP_Format%204.1_1219_final.pdf
7. Garrison L, Neumann PJ, Erickson P, Marshall D, Mullins CD. Using real-world data for coverage and payment decisions: the ISPOR Real-World Data Task Force Report. Value Health. 2007;10(5):326-35.
8. Berger ML, Sox H, Willke RJ, et al. Good practices for real-world data studies of treatment and/or comparative effectiveness: recommendations from the Joint ISPOR-ISPE Special Task Force on Real-World Evidence in Health Care Decision Making. Pharmacoepidemiol Drug Saf. 2017;26(9):1033-39.
9. Steering Committee of the Real-World Evidence Transparency Initiative Partnership. Improving transparency in non-interventional research for hypothesis testing—why, what, and how: considerations from the Real-World Evidence Transparency Initiative Partnership. Draft white paper. September 18, 2019. Accessed December 3, 2020. https://www.ispor.org/docs/default-source/strategic-initiatives/improving-transparency-in-non-interventional-research-for-hypothesis-testing_final.pdf?sfvrsn=77fb4e97_0
10. Hampson G, Towse A, Dreitlein B, et al. Real world evidence for coverage decisions: opportunities and challenges. A report from the 2017 ICER Membership Policy Summit. March 2018. Accessed December 3, 2020. https://icer-review.org/wp-content/uploads/2018/03/ICER-Real-World-Evidence-White-Paper-03282018.pdf
11. Pearson S, Dreitlein B, Towse A, et al. Understanding the context, selecting the standards: a framework to guide the optimal development and use of real world evidence for coverage and formulary decisions. March 2018. Accessed December 3, 2020. https://icer-review.org/material/rwe-white-paper-companion/
12. U.S. Food and Drug Administration. Submitting documents using real-world data and real-world evidence to FDA for drugs and biologics guidance for industry. May 2019. Accessed December 3, 2020. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/submitting-documents-using-real-world-data-and-real-world-evidence-fda-drugs-and-biologics-guidance
13. The 21st Century Cures Act. Pub L No. 114-255, 130 Stat. 1033. December 13, 2016. Accessed December 14, 2020. https://www.congress.gov/114/plaws/publ255/PLAW-114publ255.pdf
14. U.S. Food and Drug Administration. PDUFA reauthorization performance goals and procedures fiscal years 2018 through 2022. 2017. Accessed December 3, 2020. https://www.fda.gov/downloads/ForIndustry/UserFees/PrescriptionDrugUserFee/UCM511438.pdf
15. The National Academies of Sciences, Engineering, and Medicine. Clinical Practice Guidelines We Can Trust. The National Academies Press; 2011. Accessed December 3, 2020. http://www.nationalacademies.org/hmd/Reports/2011/Clinical-Practice-Guidelines-We-Can-Trust.aspx
16. The National Academies of Sciences, Engineering, and Medicine. Real-World Evidence Generation and Evaluation of Therapeutics: Proceedings of a Workshop. The National Academies Press; 2017. Accessed December 4, 2020. https://www.nap.edu/catalog/24685/real-world-evidence-generation-and-evaluation-of-therapeutics-proceedings-of
17. National Institutes of Health. NIH expands program that conducts large-scale clinical trials in real-world settings. July 24, 2018. Accessed December 4, 2020. https://www.nih.gov/news-events/news-releases/nih-expands-program-conducts-large-scale-clinical-trials-real-world-settings
18. Patient-Centered Outcomes Research Institute. Pragmatic clinical studies. August 1, 2016. Accessed December 4, 2020. https://www.pcori.org/research-results/pragmatic-clinical-studies
19. Malone DC, Brown M, Hurwitz JT, Peters L, Graff JS. Real-world evidence: useful in the real world of US payer decision making? How? When? And what studies? Value Health. 2018;21(3):326-33.
20. Hurwitz JT, Brown M, Graff JS, Peters L, Malone DC. Is real-world evidence used in P&T monographs and therapeutic class reviews? J Manag Care Spec Pharm. 2017;23(6):613-20. doi: 10.18553/jmcp.2017.16368
21. Chambers JD, Panzer AD, Pope EF, Graff JS, Neumann PJ. Little consistency in evidence cited by commercial plans for specialty drug coverage. Health Aff (Millwood). 2019;38(11):1882-86.
22. Dreyer NA, Schneeweiss S, McNeil B, et al. GRACE principles: recognizing high-quality observational studies of comparative effectiveness. Am J Manag Care. 2010;16(6):467-71.
23. Dreyer NA. Using observational studies for comparative effectiveness: finding quality with GRACE. J Comp Eff Res. 2013;2(5):413-18.
24. Dreyer NA, Velentgas P, Westrich K, et al. The GRACE Checklist for rating the quality of observational studies of comparative effectiveness: a tale of hope and caution. J Manag Care Pharm. 2014;20(3):301-08. doi: 10.18553/jmcp.2014.20.3.301
25. Dreyer NA, Bryant A, Velentgas P. The GRACE Checklist: a validated assessment tool for high quality observational studies of comparative effectiveness. J Manag Care Pharm. 2016;22(10):1107-13. doi: 10.18553/jmcp.2016.22.10.1107
26. IMI-GetReal. RWE Navigator. Accessed December 4, 2020. https://www.imi-get-real.eu/Tools/RWE-Navigator
27. IMI-GetReal. RWE Navigator. Assuring quality and credibility of RWE. Accessed December 4, 2020. https://rwe-navigator.eu/use-real-world-evidence/assure-quality-and-credibility-of-rwd/
28. Center for Medical Technology Policy. RWE Decoder framework, a practical tool for assessing relevance and rigor of real world evidence. A white paper from the Green Park Collaborative. February 7, 2017. Accessed December 4, 2020. http://www.cmtpnet.org/docs/resources/RWE_Decoder_Framework.pdf
29. Center for Medical Technology Policy. RWE Decoder: a practical tool for assessing relevance and rigor of real world evidence. Accessed December 4, 2020. http://www.cmtpnet.org/resource-center/view/rwe-decoder/
30. Center for Medical Technology Policy. RWE Decoder framework, a practical tool for assessing relevance and rigor of real world evidence. User's guide. February 7, 2017. Accessed December 4, 2020. http://www.cmtpnet.org/docs/resources/RWE_Decoder_Users_Guide.pdf
31. CER Collaborative. Comparative Effectiveness Research Tool. Accessed December 4, 2020. https://www.cercollaborative.org/global/default.aspx?RedirectURL=%2fhome%2fdefault.aspx
32. Berger ML, Martin BC, Husereau D, et al. Questionnaire to assess the relevance and credibility of observational studies to inform health care decision making: an ISPOR-AMCP-NPC Good Practice Task Force report. Value Health. 2014;17(2):143-56.
33. Caro JJ, Eddy DM, Kan H, et al. A modeling study questionnaire to assess study relevance and credibility to inform health care decision making: an ISPOR-AMCP-NPC Good Practice Task Force report. Value Health. 2014;17(2):174-82.
34. Jansen JP, Trikalinos T, Cappelleri JC, et al. Indirect treatment comparison/network meta-analysis study questionnaire to assess relevance and credibility to inform health care decision making: an ISPOR-AMCP-NPC Good Practice Task Force report. Value Health. 2014;17(2):157-73.
35. The CHOICE Institute. Real-world evidence assessments and needs guidance (REAdi) tool. Accessed December 4, 2020. https://sop.washington.edu/choice/research/research-projects/readi/
36. Ollendorf DA, Pearson SD. An integrated evidence rating to frame comparative effectiveness assessments for decision makers. Med Care. 2010;48(6 Suppl):S145-S52.
37. Institute for Clinical and Economic Review. ICER Evidence Rating Matrix. A user guide. Accessed December 4, 2020. http://icer-review.org/wp-content/uploads/2013/04/Rating-Matrix-User-Guide-Exec-Summ-FINAL.pdf
38. Perfetto EM, Anyanwu C, Pickering MK, Zaghab RW, Graff JS, Eichelberger B. Got CER? Educating pharmacists for practice in the future: new tools for new challenges. J Manag Care Spec Pharm. 2016;22(6):609-16. doi: 10.18553/jmcp.2016.22.6.609
39. Sterne JAC, Hernan MA, Reeves BC, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ. 2016;355:i4919. doi: 10.1136/bmj.i4919
40. National Heart, Lung, and Blood Institute. Study quality assessment tools. Quality assessment tool for before-after (pre-post) studies with no control group. Accessed December 4, 2020. https://www.nhlbi.nih.gov/health-topics/study-quality-assessment-tools
41. Drummond MF, Jefferson TO. Guidelines for authors and peer reviewers of economic submissions to the BMJ. The BMJ Economic Evaluation Working Party. BMJ. 1996;313(7052):275-83.
42. Sullivan SD, Mauskopf JA, Augustovski F, et al. Budget impact analysis-principles of good practice: report of the ISPOR 2012 Budget Impact Analysis Good Practice II Task Force. Value Health. 2014;17(1):5-14.
43. Critical Appraisal Skills Programme. CASP (qualitative) checklist. 2018. Accessed December 4, 2020. https://casp-uk.net/wp-content/uploads/2018/01/CASP-Qualitative-Checklist-2018.pdf
44. Shea BJ, Reeves BC, Wells G, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomized or non-randomized studies of healthcare interventions, or both. BMJ. 2017;358:j4008.
45. Whiting PF, Rutjes AW, Westwood ME, et al. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med. 2011;155(8):529-36.
46. Zeng X, Zhang Y, Kwong JS, et al. The methodological quality assessment tools for preclinical and clinical studies, systematic review and meta-analysis, and clinical practice guideline: a systematic review. J Evid Based Med. 2015;8(1):2-10.
47. Guyatt GH, Oxman AD, Schünemann HJ, Tugwell P, Knottnerus A. GRADE guidelines: a new series of articles in the Journal of Clinical Epidemiology. J Clin Epidemiol. 2011;64(4):380-82.
48. RStudio. Shiny. Accessed December 4, 2020. https://shiny.rstudio.com/
49. Racsa PN, Meah Y, Ellis JJ, et al. Comparative effectiveness of rapid-acting insulins in adults with diabetes. J Manag Care Spec Pharm. 2017;23(3):291-98. doi: 10.18553/jmcp.2017.23.3.291