Journal of General Internal Medicine. 2010 Aug 10;25(12):1352–1355. doi: 10.1007/s11606-010-1476-9

Hospital-Based Comparative Effectiveness Centers: Translating Research into Practice to Improve the Quality, Safety and Value of Patient Care

Craig A. Umscheid,1,2,3 Kendal Williams,1,3 Patrick J. Brennan3,4
PMCID: PMC2988155; PMID: 20697961

Abstract

Hospital-based comparative effectiveness (CE) centers provide a model that clinical leaders can use to improve evidence-based practice locally. The model is used by integrated health systems outside the US, but is less recognized in the US. Such centers can identify and adapt national evidence-based policies for the local setting, create local evidence-based policies in the absence of national policies, and implement evidence into practice through health information technology (HIT) and quality initiatives. Given the increasing availability of CE evidence and incentives to meaningfully use HIT, the relevance of this model to US practitioners is increasing. This is especially true in the context of healthcare reform, which will likely reduce reimbursements for care deemed unnecessary by published evidence or guidelines. There are challenges to operating hospital-based CE centers, but many of these challenges can be overcome using solutions developed by those currently leading such centers. In conclusion, these centers have the potential to improve the quality, safety and value of care locally, ultimately translating into higher quality and more cost-effective care nationally. To better understand this potential, the current activity and impact of hospital-based CE centers in the US should be rigorously examined.

KEY WORDS: comparative effectiveness, evidence-based medicine, health technology assessment, quality of health care, cost-effectiveness, health information technology, organizational decision making

Perspective

The recent debate on healthcare reform in the US has drawn attention to national policies to improve the quality, safety and cost-effectiveness of patient care. Yet, the delivery of healthcare is a local phenomenon, dictated by local standards and policies.1 Some industrialized nations have recognized this, and are using hospital-based "health technology assessment" or "comparative effectiveness" centers to improve care locally.2,3 Medical centers in the US, however, have invested less in such infrastructure.2,3 As the passage of healthcare reform prompts us to consider new national strategies to improve patient care, we explore in this Perspective the role that local hospital-based comparative effectiveness centers can play.4,5

The terms "comparative effectiveness" and "health technology assessment" similarly refer to an evaluation of the benefits, harms, and costs of drugs, devices, and clinical practices.6,7 Comparative effectiveness (CE) reviews use scientific evidence to compare one approach to another to estimate incremental benefit. In the US, these reviews are developed by the Evidence-based Practice Centers8 and other Effective Health Care Program partners9 of the Agency for Healthcare Research and Quality (AHRQ), as well as for-profit entities, medical professional societies, and payers. Reviews are subsequently used by payers and purchasers to inform coverage or reimbursement decisions. The national non-profit Patient-Centered Outcomes Research Institute established by healthcare reform may also play a role in the development and dissemination of future CE reviews, as well as funding of CE research to inform clinical practice.10,11

In contrast to reviews created by national bodies or local payers, reviews created by hospital-based CE centers are funded by their home institutions to help inform decision making on the ground, from device purchasing and drug formulary choices to decisions involving clinical practice. These centers can adapt reviews from outside agencies to their local settings and develop new reviews to address their local needs. In addition, they can use local utilization, outcomes, and cost data to fill gaps in the evidence and enhance the relevance of reviews.12 Some hospital-based CE centers have even funded local researchers to address the evidence gaps identified in their own local reviews.13 Most importantly, these centers can play a critical role in implementing report findings, including integrating them into computerized clinical decision support (CDS) or quality improvement (QI) initiatives, and measuring their impact using administrative or clinical data. Such centers thus help to create and foster a culture of evidence-based practice at their local institutions (Table 1).

Table 1.

Comparing and Contrasting National and Hospital-Based Comparative Effectiveness Centers in the United States

| Characteristic | National center | Hospital-based center |
|---|---|---|
| Example | ECRI Institute; AHRQ EPC | Penn Medicine Center for Evidence-based Practice |
| Priorities | National priorities set by policymakers and academics (e.g. IOM, PCORI BOG) | Local priorities set by clinical and administrative leaders of hospitals |
| Funding | Federal (i.e. DHHS, DOD, AHRQ, NIH, CDC) | Hospitals, universities, foundations, federal |
| Scope of CER | Broad (e.g. effectiveness of telemedicine in the ICU) | Narrower (e.g. effectiveness of telemedicine in the ICUs of academic medical centers)12 |
| Common topics | Drugs, devices, diagnostic tests | Drugs, devices, diagnostic tests, and processes of care (e.g. transitions in care) |
| Authors of CER | Federal employees, university faculty and staff, clinical experts | University or hospital faculty and staff, key stakeholders responsible for implementing reviews |
| Turnaround time of CER | 12–24 months | 2–12 weeks |
| Evaluate costs in CER | Less common, and usually from the societal perspective | More common, and usually from the hospital perspective; can also include local utilization and cost data |
| Has capacity to adapt CER for local use | Not applicable | Commonly done |
| Methods of CER dissemination and implementation | Peer-reviewed publications, postings on national websites, integration into national CPGs (e.g. USPSTF) and coverage policy (e.g. MEDCAC) | Presentations to clinical and administrative committees, postings to internal and external websites, peer-reviewed publications, integration into local CPGs, QI initiatives and computerized CDS |
| Measuring the impact of CER | Requires large-scale funding (e.g. NIH CTSA and federal dissemination grants)21,22 | Internal funding can support simple evaluations, while external funding can support more robust evaluations |

Abbreviations: AHRQ, Agency for Healthcare Research and Quality; BOG, Board of Governors; CDC, Centers for Disease Control and Prevention; CDS, Clinical Decision Support; CER, Comparative Effectiveness Reviews; CPGs, Clinical Practice Guidelines; CTSA, Clinical and Translational Sciences Award; DHHS, Department of Health and Human Services; DOD, Department of Defense; EPC, Evidence-based Practice Center; IOM, Institute of Medicine; MEDCAC, Medicare Evidence Development and Coverage Advisory Committee; NIH, National Institutes of Health; PCORI, Patient-Centered Outcomes Research Institute; QI, Quality Improvement; USPSTF, United States Preventive Services Task Force

Few studies have examined the impact of hospital-based CE centers on healthcare practices and costs. The Technology Assessment Unit at the McGill University Health Centre is one that has been evaluated. Of the 27 reports generated in its first five years, 25 were fully implemented, with 6 (24%) recommending investment in new technologies and 19 (76%) recommending rejection, for a reported net hospital savings of $10 million.14

In the US, integrated health systems and managed care organizations like Kaiser Permanente have clear incentives to establish CE centers, and a few have formally done so. Incentives are also aligned for other hospitals to establish CE centers, since most hospital payers reimburse inpatient care through prospective payment systems. Hospital reimbursements based on diagnosis-related groups (DRGs), which provide fixed payments per patient hospitalization,15 coupled with the recent push by payers towards quality improvement via value-based purchasing, encourage hospitals to provide the best care at the lowest possible cost. Hospital administrators can use CE centers to maximize the value generated from each dollar the hospital spends, which is especially important as the costs of providing care rise in the face of decreasing reimbursements, such as those resulting from healthcare reform.10

Individual providers practicing in fee-for-service models, however, may have fewer incentives to improve cost-effectiveness. In these models, individual providers are paid for what they do, not for the value of the services they provide. This is the case in many ambulatory settings and even in many inpatient units of hospitals, where providers are reimbursed based on the evaluations and procedures they perform. Hence, in fee-for-service models, hospital administrators, rather than the individual clinicians practicing in the hospitals, have the greatest incentive to provide cost-effective care, and thus to support a hospital-based CE center.

Yet, even in fee-for-service models, the return on investment (ROI) for a CE center can be significant. This is because CE centers can support evidence-based practice at an organizational level, which can improve pay-for-performance and publicly reported metrics, potentially resulting in higher reimbursements and market share. Hospital-based CE centers that disseminate evidence through CDS can also help their organizations meet certification criteria for "meaningful use" of their electronic health records, resulting in further reimbursement increases.16

Despite the potential economic benefits, most US hospitals do not have a formal CE center. Instead, many rely on outsourced or less formal evaluations to inform a relatively narrow set of decisions regarding formularies, marketing, and large capital purchases. In many cases, CE is the work of individuals or committees who may not have the expertise to appraise or synthesize scientific evidence adequately, and may be at risk for conflicts of interest. This is especially the case when CE is performed at the level of a clinical department, rather than a hospital, where an evaluation may be too narrow in scope and biased towards interventions performed by that department. Such individuals and committees often rely on financial analyses as well as political clout to help them make decisions.17,18

By contrast, more formal hospital-based CE centers are staffed by individuals trained in evidence-based medicine, who use systematic and objective methods to identify and synthesize scientific evidence from the hospital perspective. For example, the Center for Evidence-based Practice at the University of Pennsylvania Health System is staffed by two hospitalist co-directors trained in epidemiology, two research analysts, a librarian, a health economist, and primary care and infection control liaisons, totaling 4.5 full-time equivalents (FTE). More than 100 reports have been completed for hospital committees and leaders since the Center was established in 2006. These guidelines and systematic reviews have examined topics ranging from lower-cost practices affecting the quality and safety of care, such as the comparison of heparin versus saline for catheter flushing,19 to higher-cost and emerging technologies, like the use of telemedicine in critical care.12 Descriptions of other hospital-based CE centers in the US are limited, but a survey of hospital-based centers located internationally suggests common characteristics: 1) they are often located in public or academic hospitals; 2) they usually consist of more than one member, most often including clinicians, administrators, economists and epidemiologists; 3) they focus on clinical practice as well as administrative decision making; 4) they assess devices, drugs and procedures; 5) they examine effectiveness, safety and cost data; 6) they use workshops and websites for dissemination; and 7) they target internal users, as well as those in collaborative networks.2

Despite their benefits, there are a number of challenges to operating a hospital-based CE center. First, centers need to balance academic rigor with operational efficiency to complete reviews in a timely way so that they can impact decisions. Working with leaders to prioritize projects, limiting the scope of reports to those issues most critical to a decision, and using existing reviews when available can help achieve this balance. Second, it can be challenging to consider costs when published cost analyses are not available, or are not from the hospital perspective. However, when critical to a decision, such analyses can often be performed locally and populated with hospital-specific cost data. Third, CE can be viewed as a threat to innovation, particularly innovations perceived to help medical centers retain or enhance market share. Similarly, providers not educated in evidence evaluation may be resistant to processes informed by CE. By involving key stakeholders up front, and making decisions in a fair and consultative manner, these negative impressions can usually be overcome. Moreover, it is important to explicitly acknowledge that evidence-based decisions are not only informed by evidence, but also by resource and value considerations, such as how important a particular technology might be to a given market. A final challenge is the fear of liability on the part of providers, particularly when policies informed by CE are not followed. Yet, we believe that an organizational approach to addressing clinical questions is the best defense against malpractice, fostering a culture of evidence-based practice.

Implementation of CE reviews can also present challenges. Reviews with the most impact usually begin with clearly defined questions and next steps, are developed in a timely way alongside key stakeholders, and are valid and actionable. When reviews focus on drugs or devices, integrating their results into practice is often straightforward. The reviews can be presented to the relevant formulary or device purchasing committees to inform their decisions. However, when reviews involve changing clinical practice, implementation requires collaboration between key stakeholders and clinical and administrative leaders, and often the use of IT tools like CDS. The Veterans Affairs' Quality Enhancement Research Initiative (VA QuERI) is a prime example of how such implementation can occur.20

Measuring the impact of reviews is straightforward for those focusing on drugs or devices. One can simply determine whether reviews have been presented to the appropriate committees, and whether committee decisions were consistent with review recommendations. In contrast, extramural funding and more robust data are often required to adequately measure the effect of reviews informing practice changes. Potential sources of such funding include the National Institutes of Health Clinical and Translational Science Awards21 and evidence dissemination grants such as those funded by the American Recovery and Reinvestment Act.22

So how does one set up a hospital-based CE center? Based on our experience, a center could start virtually, at the desk of a single clinician trained in systematic reviews and critical appraisal of the literature, who has an interest in quality and patient safety. The hospital would have to support the clinician's time, and depending on the size and complexity of the hospital, a 0.5 FTE commitment may be a reasonable start. Having access to a biomedical library, and a librarian's assistance with developing search strategies, choosing information resources, and retrieving articles, would be crucial. But the critical component of success is the support of high-level clinical leadership. In our case, the chief medical officer of our health system was the driving force behind our CE center, and initially encouraged stakeholders to access the center as a resource. This allowed us to demonstrate the strength and utility of our center early in its development. A hospital-based CE center also needs to identify and build relationships with the multiple leaders making clinical decisions and policy across a hospital. As stakeholders begin to realize an ROI from the center's reviews, the unit may be able to attract further resources to hire analysts to perform systematic reviews, and integrate more closely with QI and IT staff to implement reviews of clinical practice, and measure their impact.

One potential limitation of the hospital-based model is the redundancies or inefficiencies that may result from the independent activities of multiple local centers. Yet, the local nature of this model is also its greatest strength, for it allows centers to address local priorities, take local considerations into account, and use local evidence when gaps in the scientific literature exist. In addition, local centers can complement and strengthen the activities of a national center by providing expertise to adapt, implement and measure the impact of national evidence reports locally.5 Moreover, when evidence from a national center is not available, hospital-based centers can create CE evidence to address their own local questions, and post these reports on a nationally coordinated site for others to adapt and implement.

As a concrete next step, we recommend an in-depth evaluation of the activity and impact of hospital-based CE centers already established across the US. If these centers prove effective, then start-up and maintenance of these local activities could be supported by incentives from national payers like Medicare and accreditors like the Joint Commission, as well as the local value they create at their own institutions.

Acknowledgements

We thank our colleagues Jalpa A. Doshi, PhD, Matthew D. Mitchell, PhD, David Goldmann, MD, and Brian Leas, MA, MS at the University of Pennsylvania for reviewing multiple versions of this manuscript and for their many thoughtful suggestions. No specific sources of funding were used.

Conflict of Interest Drs. Umscheid and Williams co-direct a hospital-based comparative effectiveness center at the University of Pennsylvania Health System. Dr. Umscheid is also a member of the Medicare Evidence Development and Coverage Advisory Committee (MEDCAC), which uses evidence reports developed by the Evidence-based Practice Centers of the Agency for Healthcare Research and Quality. Dr. Brennan established a hospital-based comparative effectiveness center in his role as the Chief Medical Officer of the University of Pennsylvania Health System.

References

  1. Gawande A. The cost conundrum. The New Yorker. June 1, 2009.
  2. Cicchetti A, Marchetti M, Dibidino R, Corio M. Hospital based HTA worldwide: the results of an international survey. Health Technology Assessment International Web site. Available at: http://www.htai.org/fileadmin/HTAi_Files/Conferences/2008/Files/Sessions/Presentation_Marco_Marchetti.pdf. Accessed July 21, 2010.
  3. Davis K. Slowing the growth of health care costs–learning from international experience. N Engl J Med. 2008;359(17):1751–5. doi:10.1056/NEJMp0805261.
  4. Fuchs VR. Three "inconvenient truths" about health care. N Engl J Med. 2008;359(17):1749–51. doi:10.1056/NEJMp0807432.
  5. Clancy CM, Cronin K. Evidence-based decision making: global evidence, local decisions. Health Aff (Millwood). 2005;24(1):151–62. doi:10.1377/hlthaff.24.1.151.
  6. Health technology assessment. Int J Technol Assess Health Care. 2009;25(Suppl 1):10.
  7. Federal Coordinating Council for Comparative Effectiveness Research. Report to the President and Congress on Comparative Effectiveness Research. Washington, DC; June 2009.
  8. Atkins D, Fink K, Slutsky J. Better information for better health care: the Evidence-based Practice Center program and the Agency for Healthcare Research and Quality. Ann Intern Med. 2005;142(12 Pt 2):1035–41. doi:10.7326/0003-4819-142-12_part_2-200506211-00002.
  9. Slutsky JR, Clancy CM. AHRQ's effective health care program: why comparative effectiveness matters. Am J Med Qual. 2009;24(1):67–70. doi:10.1177/1062860608328567.
  10. Health Reform. Kaiser Family Foundation Web site. Available at: http://www.kff.org/healthreform/sidebyside.cfm. Accessed July 21, 2010.
  11. VanLare JM, Conway PH, Sox HC. Five next steps for a new national program for comparative-effectiveness research. N Engl J Med. 2010;362(11):970–3. doi:10.1056/NEJMp1000096.
  12. Mitchell MD, Williams K, Brennan PJ, Umscheid CA. Integrating local data into hospital-based healthcare technology assessment: two case studies. Int J Technol Assess Health Care. 2010;26(3):294–300. doi:10.1017/S0266462310000334.
  13. Bodeau-Livinec F, Simon E, Montagnier-Petrissans C, Joel ME, Fery-Lemonnier E. Impact of CEDIT recommendations: an example of health technology assessment in a hospital network. Int J Technol Assess Health Care. 2006;22(2):161–8. doi:10.1017/S0266462306050975.
  14. McGregor M. Impact of TAU Reports. Technology Assessment Unit of the McGill University Health Centre Web site. Available at: http://www.mcgill.ca/tau/publications/2008/. Accessed July 21, 2010.
  15. Mayes R. The origins, development, and passage of Medicare's revolutionary prospective payment system. J Hist Med Allied Sci. 2007;62(1):21–55. doi:10.1093/jhmas/jrj038.
  16. Blumenthal D. Launching HITECH. N Engl J Med. 2010;362(5):382–5. doi:10.1056/NEJMp0912825.
  17. Luce BR, Brown RE. The use of technology assessment by hospitals, health maintenance organizations, and third-party payers in the United States. Int J Technol Assess Health Care. 1995;11(1):79–92. doi:10.1017/S0266462300005274.
  18. Weingart SN. Acquiring advanced technology: decision-making strategies at twelve medical centers. Int J Technol Assess Health Care. 1993;9(4):530–8. doi:10.1017/S0266462300005456.
  19. Mitchell MD, Anderson B, Williams K, Umscheid CA. Systematic review of heparin flushing and other interventions to maintain patency of central venous access devices. J Adv Nurs. 2009;65(10):2007–21. doi:10.1111/j.1365-2648.2009.05103.x.
  20. Stetler C, Mittman B, Francis J. Overview of the VA Quality Enhancement Research Initiative (QUERI) and QUERI theme articles: QUERI Series. Implement Sci. 2008;3:8. doi:10.1186/1748-5908-3-8.
  21. Dougherty D, Conway PH. The "3T's" road map to transform US health care: the "how" of high-quality care. JAMA. 2008;299(19):2319–21. doi:10.1001/jama.299.19.2319.
  22. Conway PH, Clancy C. Charting a path from comparative effectiveness funding to improved patient-centered health care. JAMA. 2010;303(10):985–6. doi:10.1001/jama.2010.259.
