Journal of General Internal Medicine. 2020 Apr 1;35(6):1830–1835. doi: 10.1007/s11606-020-05783-5

A Narrative Review and Proposed Framework for Using Health System Data with Systematic Reviews to Support Decision-making

Jennifer S Lin 1,2, M Hassan Murad 3, Brian Leas 4, Jonathan R Treadwell 4, Roger Chou 5, Ilya Ivlev 1, Devan Kansagara 6
PMCID: PMC7280421  PMID: 32239462

Abstract

Systematic reviews are a necessary, but often insufficient, source of information to address the decision-making needs of health systems. In this paper, we address when and how the use of health system data might make systematic reviews more useful to decision-makers. We describe the different ways in which health system data can be used with systematic reviews, identify scenarios in which the addition of health system data may be most helpful (i.e., to improve the strength of evidence, to improve the applicability of evidence, and to inform the implementation of evidence), and discuss the importance of framing the limitations of, and considerations for, using unpublished health system data in reviews. We developed a framework to guide the use of health system data alongside systematic reviews based on a narrative review of the literature and empirical experience. We also offer recommendations to improve the transparency of reporting when using health system data alongside systematic reviews, including providing the rationale for employing additional data, details on the data source, critical appraisal to understand study design biases as well as limitations in data and information quality, and a description of how the unpublished data compare with the systematically reviewed data. Future methodological work is needed on how best to handle internal and external validity concerns of health system data in the context of systematically reviewed data, as well as work to develop the infrastructure required for this type of work.

KEY WORDS: systematic review(s), learning health system(s), health system data, unpublished data

INTRODUCTION

From the health system perspective, even well-conducted systematic reviews may be insufficient for informing decisions to improve the delivery of care (i.e., what to do and how to do it).1, 2 Often, findings of systematic reviews are not clinically actionable due to low certainty in the evidence from published research, leaving decision-makers without a clear path forward. Even when an evidence base provides high certainty regarding the effectiveness of an intervention, reviews generally lack key contextual details that inform successful implementation. Improving clinical operations (and thus patient outcomes) often entails questions beyond the effectiveness and harms/safety of a given clinical service, for example, understanding gaps in the uptake or use of a clinical service and determining how best to implement it (e.g., details of the service/intervention, cost and cost-effectiveness, ethical/legal considerations, organizational aspects).3 In addition, answers to questions about clinical operations (e.g., effectiveness, harms, implementation considerations) may be highly dependent on local practice. Therefore, the applicability of systematically reviewed data to any health system (e.g., how similar or different the populations studied are to the health system’s population, or the fidelity of the health system’s intervention to the interventions studied) is critical to decision-making.

Information specific to local health systems that is derived from electronic health records (EHRs), other clinical databases (e.g., clinical registries), or claims and administrative data is often unpublished but is frequently used in healthcare decision-making. Primary health system data may be used alongside traditional systematic reviews to answer questions addressed but left unanswered by reviews, or to provide context for decision-makers to interpret and apply review findings locally. Given that health system decision-making would benefit from both traditional systematic reviews and health system–specific data, this paper investigates when and how to use primary data from health systems with systematic reviews. We identified numerous examples of this integrated approach, but no guidance exists on when it is important or how to incorporate the data. Thus, this paper articulates a framework for reviewers and decision-makers within health systems on when and how unpublished health system data can be used with systematic reviews to support health system decision-making.

IDENTIFYING THE LITERATURE

We sought to identify relevant examples and existing guidance on how to integrate unpublished data into systematic reviews, and how health systems have used their own data with systematic reviews to inform their decision-making. We searched Ovid Medline (1946–February 2019) as well as systematic review (Cochrane Collaboration) and health technology assessment organizations (European Network for Health Technology Assessment, Health Technology Assessment International, International Network of Agencies for Health Technology Assessment) to identify relevant guidance and examples of evidence synthesis integrating unpublished primary data into systematic reviews. We also asked Evidence-Based Practice Centers (EPC) and persons within our own health systems for additional relevant examples or guidance. Each of our health systems (Kaiser Permanente, Mayo Clinic, Penn Medicine, Veterans Health Administration [VHA]) has experience integrating its own practice data with systematic reviews to support evidence-based decision-making.

We evaluated each example to determine whether it illustrated unpublished data used before, during, or after a systematic review (for scoping, evidence accumulation, or interpretation/implementation, respectively), or provided methods guidance on incorporating unpublished data or health system data into systematic reviews. All articles that provided examples of incorporating health system–relevant data into systematic reviews were evaluated to determine the rationale for using non-systematically obtained data, details regarding the data used, and the impact of the unpublished data on overall review findings. We found no formal guidance on the use of unpublished or health system data in systematic reviews. We used an informal consensus process, based on the examples and our collective experience conducting systematic reviews, to develop the framework and recommendations in this paper. Additional detail about the narrative review process is available in a full report at https://effectivehealthcare.ahrq.gov/products/unpublished-health-data/methods-report.

EXAMPLES OF USING HEALTH SYSTEM DATA BEFORE, DURING, OR AFTER CONDUCTING A SYSTEMATIC REVIEW

Before the onset of a review, health systems can interrogate their data to identify important areas of clinical need; this is commonly done as part of quality improvement activities rather than by systematic reviewers themselves. Nonetheless, health system data can, and should, generate and define the scope of important clinical or practice questions that then serve as the impetus for systematic reviews. For example, a retrospective review of the Mayo Clinic EHR identified only a single case of pouch volvulus (a rare but serious complication after proctocolectomy); this finding revealed the need for, and led to, a systematic review of 22 cases of this condition in the published literature, which provided details on diagnosis and treatment.4 Another example illustrates how a propensity-adjusted analysis of health system data generated an unexpected finding (a hyperlipidemia diagnosis was associated with lower mortality in hospitalized patients with acute myocardial infarction or decompensated heart failure) that prompted a systematic review of all available similar studies.5

During the conduct of the review itself, unpublished health system data can be formally incorporated into review findings, i.e., to answer systematically reviewed questions. This does not appear to be common practice, perhaps because it requires access to this type of data in real time (e.g., partnering with a healthcare system, or with a collaborative that maintains a registry to which health systems submit data).

However, we identified several examples in which unpublished data were used to address limitations in systematically identified data. In most instances, these examples were explicit about their rationale for incorporating unpublished data: primarily because the published data were sparse (i.e., to increase certainty of findings by addressing strength of evidence), and/or to determine whether the published data were applicable to health system populations (i.e., to increase the certainty of findings by addressing the applicability of evidence).6–19 We identified several examples from the Mayo Clinic that illustrate different reasons for combining unpublished local data with published data. In two such instances, published data for outcomes of uncommon procedures (e.g., total pancreatectomy, endovascular treatment of carotid artery bifurcation aneurysms) were sparse, and adding unpublished local data increased the sample size and improved precision.7, 8
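The mechanism by which additional local data improve precision can be made concrete with a simple fixed-effect inverse-variance pooling; this is an illustrative sketch only, and the specific pooling models used in the cited reviews may differ. Each study i contributes an effect estimate \(\hat{\theta}_i\) with variance \(v_i\) and weight \(w_i = 1/v_i\):

\[ \hat{\theta}_{\text{pooled}} = \frac{\sum_i w_i \hat{\theta}_i}{\sum_i w_i}, \qquad \operatorname{Var}\!\left(\hat{\theta}_{\text{pooled}}\right) = \frac{1}{\sum_i w_i} \]

Because an unpublished local cohort enters with a positive weight, the sum of weights increases, the pooled variance decreases, and the confidence interval around the pooled estimate narrows, regardless of whether the local data shift the point estimate.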

In other examples, the procedural expertise (e.g., endovascular procedures) at the Mayo Clinic was thought to be more advanced than that reflected in published community practice or smaller centers’ experience, so local data were used to determine the applicability of published data to the health system.8, 9 We also identified an example from the VHA Evidence Synthesis Program (ESP) regarding the evaluation of anticoagulation strategies after bioprosthetic aortic valve replacement.17 Published studies suggested that aspirin may be as effective as warfarin and less harmful, but the certainty of evidence was low. Thus, stakeholders advocated for understanding outcomes in VA settings, and propensity-adjusted analyses of VA data were carried out in parallel with the systematic review. Findings from this analysis of VA data were congruent with the published literature20 and thus helped strengthen the evidence and clinical applicability of the review findings for the population of interest. In addition, VA analyses identified practice-level variation by facility to help tailor dissemination and implementation of evidence by practice location. In other examples, unpublished data were used to provide contextual information (other than effectiveness or harms/safety information), for example, to detail uptake of services, patient characteristics, epidemiology, natural history, or cost or data for cost-effectiveness analyses.21–25

After completion of the review, unpublished health system data can be used as a “data appendage” to help filter, interpret, and/or apply the review findings to an individual health system’s practice. This may not involve the systematic reviewers themselves and could be performed by the health system using the systematic review. For example, when a quality review team at Penn Medicine recently launched a project to address the high frequency of patients failing to show for a scheduled colonoscopy or arriving at their appointments unprepared for the procedure, the Center for Evidence-based Practice (CEP) was asked to conduct a systematic review of strategies to reduce no-shows and improve patient education. After completing the report, which examined several types of interventions, the quality team reviewed detailed clinic-level data, including patient characteristics and reasons reported for missed appointments, to select optimal improvement strategies from those included in the systematic review and to identify which outpatient sites were best suited for specific interventions. The systematic review and the health system’s patient-level data together informed the design and development of new educational materials and outreach strategies.

In another example, Kaiser Permanente used its own internal data to help operationalize the implementation of guidelines on screening for abnormal glucose, which were derived from an EPC review conducted to support the USPSTF.26 Based on an analysis of Kaiser Permanente Northwest data showing a differential rate of progression from prediabetes to type 2 diabetes (using HbA1c) across different groups (e.g., baseline HbA1c, BMI, weight gain, use of glucocorticoids), Kaiser Permanente’s national guidelines recommend tailored screening/monitoring intervals based on one of three risk groups defined by these factors.

PROPOSED FRAMEWORK FOR HOW HEALTH SYSTEM DATA CAN BE USED WITH SYSTEMATIC REVIEWS TO SUPPORT HEALTHCARE DECISION-MAKING

Recognizing the limitations both of using only health system data to inform decision-making and of traditional systematic review methods that rely primarily on synthesizing published research, we articulate three scenarios that highlight the benefits of using unpublished data from health systems either during the conduct of the review or after its completion (Fig. 1).

Figure 1. Framework for how health system data can be used with systematic reviews to support healthcare decision-making.

First, it may be important to seek unpublished health system data while the review is being conducted to expand the evidence base and improve the strength of evidence, i.e., when data are sparse or limited. This may occur because data have important methodological limitations (e.g., publication bias or selective outcome reporting bias), are scant or imprecise (e.g., new intervention or technology), are limited to short-term follow-up (e.g., missing longer term data on safety), or do not address important outcomes of interest for decision-makers (e.g., resource use, cost, system outcomes).

Second, it may be important to address uncertainty regarding applicability by seeking unpublished health system data during the conduct of the review or as a data appendage after completing the review. This may occur when there are signals that the populations (and therefore outcomes) in published data are likely to differ from those within a given health system, i.e., concerns about the applicability of studied populations to real-world populations (e.g., highly selected populations), and/or when the data do not allow for evaluation of effects in important subgroups (e.g., large heterogeneity of treatment benefit or harms and limited data for important subgroups of interest). In these scenarios, unpublished data may help health systems determine whether and in whom to apply review findings, for example, by knowing the absolute risk reduction or risk increase in their own populations, as illustrated below.
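As a hypothetical illustration (the numbers are not drawn from any cited review), a health system can translate a published relative effect into an expected absolute effect in its own population. If the local baseline event risk is \(p_0\) and the published relative risk is RR, then

\[ \mathrm{ARR} \approx p_0\,(1 - \mathrm{RR}), \qquad \mathrm{NNT} \approx \frac{1}{\mathrm{ARR}} \]

so a local baseline risk of 10% combined with a published relative risk of 0.80 implies an absolute risk reduction of about 2 percentage points (number needed to treat of roughly 50), whereas a local baseline risk of 2% implies an absolute risk reduction of only 0.4 percentage points (number needed to treat of 250). The same published relative effect can therefore have very different practical value across health systems.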

Third, it may be important to seek unpublished health system data as a data appendage to inform the implementation of evidence from reviews. For example, published data may not provide information or data needed for replication (fidelity) or adaptation of an intervention into a given health setting/system (e.g., how to tailor an intervention within a given health system), important contextual information on patient values and preferences, feasibility or acceptability, or information on direct cost or inputs for health system–relevant cost-effectiveness analyses (e.g., prevalence, adherence, cost). Local health system data may also inform who and where to target in the implementation (e.g., which populations, which sites) depending on population characteristics and practice site performance.

UNDERSTANDING THE LIMITATIONS OF USING UNPUBLISHED PRIMARY DATA FROM HEALTH SYSTEMS IN SYSTEMATIC REVIEWS

Even though health system data can, in some instances, provide more applicable evidence,27 caution is needed when deriving conclusions from non-systematically collected and non-peer-reviewed data. Healthcare decisions that are informed by selective unpublished data need to be considered in the context of the systematically reviewed evidence (i.e., the totality of the evidence base) as well as the potential biases and limitations of the unpublished data analyses.

Most importantly, any analysis of health system data, published or not, must be critically appraised to understand how potential biases might affect the validity of findings. Biases and limitations of non-randomized studies (NRS) are well understood (e.g., confounding, selection, performance, attrition, detection, reporting) and are generally captured in commonly applied critical appraisal tools for these study designs; therefore, we do not discuss the critical appraisal of NRS further in this paper. Even though there are numerous critical appraisal tools for NRS of healthcare interventions, consensus is lacking about which tools are valid and should be preferentially used; additionally, none has been developed specifically with the use of health system data in mind.27–29 While most critical appraisal tools for NRS evaluate some components of data quality (e.g., missing data), they may not be robust enough to capture all the important limitations of data that were not collected for research purposes and thus may be more prone to problems such as measurement error and misclassification.30 Further, methods may not be reported in sufficient detail in unpublished studies to permit adequate critical appraisal.

Understanding the limitations of the data source, including its relevance and integrity, in addition to study design limitations (e.g., confounding, selection bias), is an important part of the critical appraisal process. Limitations and uncertainty regarding different types of health system data (e.g., clinical registries, administrative claims data, clinical data from EHRs) are well understood; for example, assessment tools for quality assurance of registry-type data31 and guidance on evaluating evidence derived from these types of data in regulatory decisions already exist.32 Health system data are rarely designed from the outset to support evidence-based decision-making at a population level; therefore, it is important to understand the extent to which the data source can answer the question being asked (sometimes referred to as information quality): how well does the data source capture the populations, interventions, comparators, and outcomes (PICO) of interest? Information quality also depends on the integrity of the data (commonly referred to as data quality).33 Data quality is complex because it touches on multiple dimensions (e.g., accuracy, completeness, interpretability, accessibility, relevance, timeliness, coherence, and the mode of data collection) and can fluctuate over time and across data sources.21 The issues centered on data quality are not unique to health system data but may be more problematic depending on the data source being used and the questions being asked.

RECOMMENDATIONS FOR SYSTEMATIC REVIEWERS USING PRIMARY DATA FROM HEALTH SYSTEMS IN SYSTEMATIC REVIEWS TO SUPPORT HEALTHCARE DECISION-MAKING

Based on our review of examples and methodologic guidance, as well as our experience conducting systematic reviews for various stakeholders, we recommend four basic principles for when and how to incorporate unpublished health system data (Box 1). First, it is important to explicitly state the rationale for using unpublished data. We suggest that the rationale can usually be articulated as one of the three main scenarios outlined in Figure 1 (i.e., to improve the strength of evidence, to improve its applicability, and/or to inform its implementation). Second, be explicit about the details of the data source being used and why it was chosen (e.g., how relevant the data are). Because there may be multiple data sources relevant to health system decision-making (e.g., a single health system versus a network of health systems, a clinical registry versus an electronic health record), it is important to be intentional and explicit about the data source, particularly given the quality concerns surrounding non-systematically collected, non-peer-reviewed data.

Box 1 Recommendations on incorporating unpublished health system data with systematic reviews

1. Explicitly state the rationale for using unpublished health system data. The rationale could be to improve the strength and/or the applicability of evidence, and/or to inform its implementation.
2. Include details on how the data source was chosen, the relevance of the data source, and the type of data source (e.g., single vs. multisystem, electronic health record vs. clinical registry).
3. Characterize the limitations of any included data using formal critical appraisal criteria, as well as by working with the health system’s staff/researchers to understand data and information quality limitations.
4. Specify how the findings from unpublished data support, refute, and/or otherwise add to the findings from published data.

Third, characterize the limitations and biases of any included data. We recommend formal critical appraisal of the data analyses using study design–specific criteria, and that reviewers work with a health system’s information systems staff and researchers, if possible, to understand data and information quality limitations. Fourth, specify how the findings from unpublished data support, refute, and/or otherwise add to findings from published data. This is analogous to describing how a new study adds to an existing body of evidence, or how newly identified evidence adds to our understanding of older evidence when updating a systematic review. If the unpublished evidence conflicts with the review’s conclusions, there should be a discussion of possible reasons for the discrepancy (e.g., internal validity, external validity). As selected examples demonstrate, showing concordance can increase certainty for decision-makers and result in practice change or coverage decisions.13, 18, 19, 24

LIMITATIONS AND FUTURE WORK

Given the focused nature of this paper and a limited time frame and resources, we did not address methodological guidance on the critical appraisal or synthesis of evidence from NRS, on conducting integrative reviews (i.e., reviews of mixed methods including qualitative data, survey data, and/or gray literature), on integrating local cost data into cost-effectiveness analyses, or on the identification of gray or unpublished literature. Although we advocate for transparency around the internal and external validity of using unpublished health system data, future methodological research on how to deal with such limitations is needed. We also do not address the resources, skills, partnerships, and processes required to have real-time access to, and the ability to utilize, health system data for this type of integrated work. In the VHA ESP experience, the need for health system data was identified at an early phase of the review, and the secondary analyses of VA data were initiated, funded, and conducted concurrently with the systematic review. This required a close partnership with stakeholders, primary researchers, and a network of experts to develop a proposal, secure supplementary funding (from an internal funder), and start and complete the VA data analyses in a short period of time. This model may not be widely reproducible, but, at a minimum, partnerships with health systems (and their researchers) and/or health system collaboratives, along with flexible funding mechanisms, are likely to be required for this type of work to happen. Other models could borrow from resources and processes already in place at exemplar learning health systems (e.g., Penn Medicine’s CEP) that have fully actualized processes for generating and analyzing their internal data, integrating it with external data and knowledge for decision-making, and evaluating practice changes in real time. Because individual health systems have varying capacities to interrogate and analyze their own data, a collaborative of health systems and their researchers may be a more successful model than (small) individual health systems developing their own processes and resources. However, collaboratives or other such models to facilitate broader use or sharing of unpublished data would require infrastructure (e.g., platforms to share data, data sharing agreements, resources) and funding for maintenance.

CONCLUSIONS

The use of health system data in concert with traditional systematic reviews may help overcome decisional uncertainty for healthcare decision-makers. Incorporation of health system data should be considered when there is uncertainty about using evidence from systematic reviews to improve the strength of evidence, to improve the applicability of evidence, or to support the implementation of the evidence. Reviewers incorporating health system data should be explicit about the rationale for using these data, their information and data quality, the limitations of the study design itself, and the concordance or discordance of health system data compared with systematically obtained data in the review. Ideally, this integrated approach should be conducted in close partnership with health systems. Future methodological work is needed on how best to handle internal and external validity concerns of health system data in the context of systematically reviewed data, as well as work to develop the infrastructure required for this type of work.

Acknowledgments

The authors gratefully acknowledge the following individuals for their contributions to this project: Amanda Borsky, Dr.P.H., M.P.P., for project oversight and guidance; Stephanie Chang, M.D., M.P.H., for her feedback on drafts of this report; Robin Paynter, M.L.I.S., for searches; Lucy Savitz, Ph.D., for her contribution and review of the initial content; Helen Wu, Ph.D., for her contribution of examples; Katie Essick for editing the report; and Debra Burch and the Scientific Resource Center for administrative support.

Funding Information

This manuscript is based on work conducted by the Kaiser Permanente Research Affiliates, Mayo Clinic, ECRI Institute-Penn Medicine, and Pacific Northwest Evidence-based Practice Centers under contract to the Agency for Healthcare Research and Quality (AHRQ), Rockville, MD (Contract Nos. HHSA 290-2015-00007-I, 290-3200-1T-05, 290-2015-00005-I, 290-2015-00009-I). Dr. Ivlev was additionally supported by grant number K12HS026370 from AHRQ. Dr. Kansagara was supported by grant number 05-225 from the Veterans Health Administration (VHA) Health Services Research Department Evidence Synthesis Program (ESP).

Compliance with Ethical Standards

Conflict of Interest

The authors declare that they do not have a conflict of interest.

Disclaimer

The findings and conclusions in this document are those of the authors, who are responsible for its contents; the findings and conclusions do not necessarily represent the views of AHRQ or the VHA. Therefore, no statement in this report should be construed as an official position of AHRQ, the VHA, or the US Department of Health and Human Services.

Footnotes

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. White CM, Sanders Schmidler GD, Butler M, et al. Understanding Health-Systems’ Use of and Need for Evidence To Inform Decisionmaking. Rockville (MD): 2017. 17(18)-EHC035-EF. Available from: https://www.ncbi.nlm.nih.gov/books/NBK488208/.
2. Schoelles K, Umscheid CA, Lin JS, et al. A Framework for Conceptualizing Evidence Needs of Health Systems. Rockville (MD): 2017. 18-EHC004-EF. Available from: https://www.ncbi.nlm.nih.gov/books/NBK493738/.
3. Sampietro-Colom L, Lach K, Cicchetti A, et al. The AdHopHTA handbook: A handbook of hospital-based Health Technology Assessment (HB-HTA); Public deliverable; The AdHopHTA Project (FP7/2007-13 grant agreement nr 305018). 2015. Available from: http://www.adhophta.eu/handbook.
4. Jawoosh M, Haffar S, Deepak P, et al. Volvulus of the ileal pouch-anal anastomosis: a meta-narrative systematic review of frequency, diagnosis, and treatment outcomes. Gastroenterol Rep (Oxf). 2019;7(6):403–10. doi: 10.1093/gastro/goz045.
5. Yousufuddin M, Takahashi PY, Major B, et al. Association between hyperlipidemia and mortality after incident acute myocardial infarction or acute decompensated heart failure: a propensity score matched cohort study and a meta-analysis. BMJ Open. 2019;9(12):e028638. doi: 10.1136/bmjopen-2018-028638.
6. Barlas RS, Honney K, Loke YK, et al. Impact of Hemoglobin Levels and Anemia on Mortality in Acute Stroke: Analysis of UK Regional Registry Data, Systematic Review, and Meta-Analysis. J Am Heart Assoc. 2016;5(8):17. doi: 10.1161/JAHA.115.003019.
7. Parsaik AK, Murad MH, Sathananthan A, et al. Metabolic and target organ outcomes after total pancreatectomy: Mayo Clinic experience and meta-analysis of the literature. Clin Endocrinol. 2010;73(6):723–31. doi: 10.1111/j.1365-2265.2010.03860.x.
8. Morales-Valero SF, Brinjikji W, Murad MH, et al. Endovascular treatment of internal carotid artery bifurcation aneurysms: a single-center experience and a systematic review and meta-analysis. AJNR Am J Neuroradiol. 2014;35(10):1948–53. doi: 10.3174/ajnr.A3992.
9. Sturiale CL, Brinjikji W, Murad MH, et al. Endovascular treatment of distal anterior cerebral artery aneurysms: single-center experience and a systematic review. AJNR Am J Neuroradiol. 2013;34(12):2317–20. doi: 10.3174/ajnr.A3629.
10. Ferro JM, Crassard I, Coutinho JM, et al. Decompressive surgery in cerebrovenous thrombosis: a multicenter registry and a systematic review of individual patient data. Stroke. 2011;42(10):2825–31. doi: 10.1161/STROKEAHA.111.615393.
11. Okoli GN, Kostopoulou O, Delaney BC. Is symptom-based diagnosis of lung cancer possible? A systematic review and meta-analysis of symptomatic lung cancer prior to diagnosis for comparison with real-time data from routine general practice. [Erratum appears in PLoS One. 2018 Dec 28;13(12):e0210108; PMID: 30592770] PLoS ONE. 2018;13(11):e0207686. doi: 10.1371/journal.pone.0207686.
12. Halfpenny NJ, Quigley JM, Thompson JC, et al. Value and usability of unpublished data sources for systematic reviews and network meta-analyses. Evid Based Med. 2016;21(6):208–13. doi: 10.1136/ebmed-2016-110494.
13. Patrick H, Gallaugher S, Czoski-Murray C, et al. Usefulness of a short-term register for health technology assessment where the evidence base is poor. Int J Technol Assess Health Care. 2010;26(1):95–101. doi: 10.1017/S0266462309990602.
14. Sadeh-Gonik U, Tau N, Friehmann T, et al. Thrombectomy outcomes for acute stroke patients with anterior circulation tandem lesions: a clinical registry and an update of a systematic review with meta-analysis. Eur J Neurol. 2018;25(4):693–700. doi: 10.1111/ene.13577.
15. Gutierrez Sanchez LH, Alsawas M, Stephens M, et al. Upper GI involvement in children with familial adenomatous polyposis syndrome: single-center experience and meta-analysis of the literature. Gastrointest Endosc. 2018;87(3):648–56.e3. doi: 10.1016/j.gie.2017.10.043.
16. Herrmann KH, Meier-Kriesche U, Neubauer AS. Real world data in health technology assessments in kidney transplants in Germany: use of routinely collected data to address epidemiologic questions in kidney transplants in the AMNOG process in Germany. Ger Med Sci. 2018;16:Doc01. doi: 10.3205/000263.
17. Yung DE, Koulaouzidis A, Fraser C, et al. Double-balloon colonoscopy for failed conventional colonoscopy: the Edinburgh experience and systematic review of existing data. Gastrointest Endosc. 2016;84(5):878–81. doi: 10.1016/j.gie.2016.06.024.
18. Bravata D, Coffing J, Kansagara D, et al. Antithrombotic Use in the Year After Bioprosthetic Aortic Valve Replacement in the Veterans Health Administration System. Washington, DC: Veterans Affairs Evidence-based Synthesis Program; 2017. VA ESP Project #05–225. Available from: https://www.hsrd.research.va.gov/publications/esp/bavr-prism.pdf.
19. Papak J, Chiovaro J, Noelck N, et al. Comparing Antithrombotic Strategies after Bioprosthetic Aortic Valve Replacement: A Systematic Review. Veterans Affairs Evidence-based Synthesis Program; 2017. VA ESP Project #05–225.
20. Bravata DM, Coffing JM, Kansagara D, et al. Association Between Antithrombotic Medication Use After Bioprosthetic Aortic Valve Replacement and Outcomes in the Veterans Health Administration System. JAMA Surg. 2019;154(2):e184679. doi: 10.1001/jamasurg.2018.4679.
21. Mandeville KL, Valentic M, Ivankovic D, et al. Quality Assurance of Registries for Health Technology Assessment. Int J Technol Assess Health Care. 2018;34(4):360–7. doi: 10.1017/S0266462318000478.
22. Robertson C, Ragupathy SA, Boachie C, et al. The clinical effectiveness and cost-effectiveness of different surveillance mammography regimens after the treatment for primary breast cancer: systematic reviews registry database analyses and economic evaluation. Health Technol Assess. 2011;15(34):v. doi: 10.3310/hta15340.
23. Adam GP, Di M, Cu-Uvin S, et al. Strategies for Improving the Lives of Women Aged 40 and Above Living With HIV/AIDS. Rockville (MD): November 2016. Technical Brief No. 29. Available from: https://www.ncbi.nlm.nih.gov/books/NBK401283/.
24. Scott AM. Health technology assessment in Australia: a role for clinical registries? Aust Health Rev. 2017;41(1):19–25. doi: 10.1071/AH15109.
25. Makady A, van Veelen A, Jonsson P, et al. Using Real-World Data in Health Technology Assessment (HTA) Practice: A Comparative Study of Five HTA Agencies. Pharmacoeconomics. 2018;36(3):359–68. doi: 10.1007/s40273-017-0596-z.
26. Selph S, Dana T, Bougatsos C, et al. Screening for Abnormal Glucose and Type 2 Diabetes Mellitus: A Systematic Review to Update the 2008 U.S. Preventive Services Task Force Recommendation. Rockville (MD): 2015. 13–05190-EF-1. Available from: https://www.uspreventiveservicestaskforce.org/Page/Document/evidence-summary25/screening-for-abnormal-blood-glucose-and-type-2-diabetes.
27. Briere JB, Bowrin K, Taieb V, et al. Meta-analyses using real-world data to generate clinical and epidemiological evidence: a systematic literature review of existing recommendations. Curr Med Res Opin. 2018;34(12):2125–30. doi: 10.1080/03007995.2018.1524751.
28. Viswanathan M, Patnode CD, Berkman ND, et al. Methods Guide for Effectiveness and Comparative Effectiveness Reviews. Rockville (MD): 2008. Available from: https://www.ncbi.nlm.nih.gov/books/NBK47095/.
29. Quigley JM, Thompson JC, Halfpenny NJ, et al. Critical appraisal of nonrandomized studies-A review of recommended and commonly used tools. J Eval Clin Pract. 2019;25(1):44–52. doi: 10.1111/jep.12889.
30. Ioannidis JP. Informed consent, big data, and the oxymoron of research that is not research. Am J Bioeth. 2013;13(4):40–2. doi: 10.1080/15265161.2013.768864.
31. Brkić M, Pleše B, Pajić V, et al. Methodological guidelines and recommendations for efficient and rational governance of patient registries. Ljubljana: National Institute of Public Health; 2015. 978-961-6911-75-7. Available from: http://hdl.handle.net/10147/583633.
32. U.S. Food and Drug Administration. Framework for FDA’s Real-World Evidence Program. Silver Spring, MD: 2018. Available from: https://www.fda.gov/media/120060/download.
33. Weiskopf NG, Bakken S, Hripcsak G, et al. A Data Quality Assessment Guideline for Electronic Health Record Data Reuse. EGEMS (Wash DC). 2017;5(1):14. doi: 10.5334/egems.218.

