Author manuscript; available in PMC 2014 Apr 1.
Published in final edited form as: Curr Opin Organ Transplant. 2013 Apr;18(2). doi: 10.1097/MOT.0b013e32835f07f8

Program-specific Reports: Implications and Impact on Program Behavior

Lisa B VanWagner 1, Anton I Skaro 1

Abstract

Purpose of review

Measuring and monitoring transplant center performance is vital to ongoing quality assessment and performance improvement initiatives geared toward ensuring optimal care for patients with end-stage organ failure. The impact of regulatory oversight on transplant center behavior and programmatic decision-making is complex.

Recent findings

Program-specific reports (PSRs) are published by the Scientific Registry of Transplant Recipients (SRTR) and are publicly available for use by a variety of stakeholders, including patients, regulators, insurers and care providers. PSRs have been both groundbreaking and controversial. The principal areas of concern relate to (a) potential unintended consequences of PSRs, (b) limitations both in the data collected by the registry and in the statistical methodology currently employed by the SRTR for risk adjustment, and (c) the subsequent impact on transplant program behavior.

Summary

PSRs serve to fuel ongoing performance improvement initiatives and to inform consumers and payers by fostering transparency in the communication of risk, but they also involve trade-offs due to their unintended use for regulatory oversight and the subsequent impact on transplant center behavior. Future research is necessary to improve data integrity and risk-adjustment methodologies, which will enhance regulation and preserve access to transplantation among vulnerable patient populations.

Keywords: organ transplantation, quality improvement, risk assessment, OPTN, SRTR

Introduction

Monitoring transplant center performance is an important component of ensuring optimal care for patients with end-stage organ failure. The Scientific Registry of Transplant Recipients (SRTR) is charged with data analysis to support the ongoing evaluation of the clinical status of organ transplantation in the United States. Data in the registry are collected from transplant centers and organ procurement organizations (OPOs) across the country by the Organ Procurement and Transplantation Network (OPTN). Every six months the SRTR publishes transplant program-specific reports that are publicly available and used by regulatory agencies, payers, patients and their families. These program-specific reports (PSRs) are intended to evaluate the efficiency of the US organ transplant system in light of the limited availability of donor organs.

The reports themselves have been both groundbreaking and controversial (1). While the motivation behind the PSRs is laudable, there are areas of concern that need to be addressed if harm is to be avoided. The principal areas of concern relate to: (a) potential unintended consequences of PSRs; (b) limitations in the available data and the statistical methods employed by the SRTR for risk adjustment; and (c) the subsequent impact on transplant program behavior. To address these concerns, the OPTN and SRTR recently co-sponsored a consensus conference to examine the methods used for surveillance of transplant programs (1). The existence of such a conference demonstrates the need for ongoing discussion and review of the current conduct and application of PSRs.

The aim of this review is therefore to evaluate the currently available literature regarding the clinical and policy implications of PSRs and their current impact on program behavior.

What are Program-Specific Reports?

Under the National Organ Transplant Act of 1984, the Department of Health and Human Services (HHS) awards separate contracts for the administration of the OPTN by the United Network for Organ Sharing (UNOS) and for the SRTR by the Chronic Disease Research Group of the Minneapolis Medical Research Foundation (2). Transplant centers are required to report updated candidate and recipient information to the OPTN. These data are in turn analyzed by the SRTR on an ongoing basis and published twice a year on a public website (http://www.srtr.org/). The Final Rule describes the obligations of the SRTR to “make available to the public timely and accurate program-specific information on the performance of transplant programs” (3).

How are Program-Specific Reports currently used?

The publication of PSRs is a contractual requirement of the SRTR. Many different audiences, including regulatory oversight committees, public and private sector payers, transplant professionals, the media, and patients and their families, use the data contained in PSRs toward distinct objectives (4) (Table 1). Moreover, each of these key stakeholders has a different understanding of the statistical concepts contained within the PSRs.

Table 1.

Audiences Intended for Program-Specific Reports

Audience and purpose (with specific measures, if applicable)
Monitoring and process improvement
 HRSA/Division of Transplantation
  • Identify problems with the organ transplantation system

 OPTN Membership and Professional Standards Committee (MPSC)
  • Identify individual transplant centers that may be under-performing or not following allocation policy

  • Upon further investigation of identified centers, review membership in the OPTN

  • Measures: Posttransplant outcomes; others being explored (organ acceptance rates, transplant rates, etc.)


Regulators and other payers
 Centers for Medicare & Medicaid Services (CMS)
  • Review qualification for Medicare certification for both OPOs and transplant programs

  • Measures: organs per donor by category, adjusted donation rates; posttransplant outcomes

 Private insurers
  • Qualify transplant programs for preferred-provider plans

  • Identify individual transplant centers that may be under-performing, such that their customers (insured patients) are not well served

  • Measures: Posttransplant outcomes; others as they appear on Standardized Request for Information (RFI)


Others
 Media
  • Identify and publicize problems either with the current system or individual centers, and help to explain the implications to the public

 Transplant centers and clinicians
  • Monitor performance in comparison to other centers

  • Be alerted to potential problems before they come to the attention of monitoring or regulatory audiences

  • Provide information to their patients about the performance of the center

 Patients and families
  • Learn about the performance of the transplant center caring for them

  • If choices exist, learn about the other centers

  • Find out about the general ‘prognosis’ for their disease

Source: SRTR

Taken from: Dickinson DM, Arrington CJ, Fant G, Levine GN, Schaubel DE, Pruett TL, Roberts MS, et al. SRTR program-specific reports on outcomes: a guide for the new reader. Am J Transplant 2008;8:1012–1026

Current Use: Monitoring and Process Improvement

Two agencies in the Department of HHS oversee organ transplant programs: (1) the Centers for Medicare & Medicaid Services (CMS) exerts oversight of transplant programs through accreditation earned by complying with the Conditions of Participation (CoP), which also determine eligibility to receive Medicare reimbursement for transplant services; and (2) the Health Resources and Services Administration (HRSA) oversees the OPTN, which manages the nation’s organ allocation system. The Division of Organ Transplantation (DOT) at HRSA is ultimately accountable for monitoring of the transplantation system.

Transplant center performance is monitored by the OPTN on a quarterly basis. In the PSRs, observed center-specific graft and patient survival are compared to expected risk-adjusted outcomes, using three well-defined criteria adopted by the OPTN (5) (TABLE 2). If a program’s outcomes breach these criteria, the Membership and Professional Standards Committee (MPSC) commissions a group of reviewers to investigate the potential causes of inferior outcomes and outlines a corrective action plan. If serious deficiencies persist, the MPSC may take various actions, including issuing a letter of warning or reprimand, or placing the transplant center on probation. Serious infractions may lead to the loss of the transplant program’s ‘designated status’, denying their access to deceased donor organs for transplant (6, 7).
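Table 2 is not reproduced here, but as a rough illustration of how such criteria operate together, the short Python sketch below flags a center only when the excess of observed over expected events, the observed-to-expected ratio, and a one-sided significance test all breach thresholds. The specific thresholds and the Poisson approximation are assumptions made for illustration, not the OPTN policy text.

from scipy.stats import poisson

def flag_center(observed, expected, excess_threshold=3.0,
                ratio_threshold=1.5, alpha=0.05):
    """Illustrative MPSC-style flag: a center is flagged only when the excess
    of events, the O/E ratio, and a one-sided test all breach thresholds.
    Thresholds and the Poisson model are assumptions, not OPTN policy text."""
    excess = observed - expected
    ratio = observed / expected if expected > 0 else float("inf")
    # One-sided p-value: chance of seeing >= observed events if the center
    # truly performed at the expected (risk-adjusted national) rate.
    p_one_sided = poisson.sf(observed - 1, expected)
    return excess > excess_threshold and ratio > ratio_threshold and p_one_sided < alpha

# Example: 18 observed graft failures against 10.2 expected.
print(flag_center(observed=18, expected=10.2))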

Current Use: Regulation and Payer Decisions

CMS is the single largest payer in the US for health care services overall and, specifically, for transplantation (6). CMS acts not only as a payer but also as a regulator, and most commercial payers follow its lead regarding transplant center regulation.

Media reports in 2005 and 2006 highlighted serious problems at organ transplant programs, calling attention to possible deficits in federal oversight (8-10). In response to these reports, CMS issued new CoP for transplant centers, which came into effect on June 28, 2007 (11). In these CoP, CMS established minimum standards to protect patient health and safety and implemented oversight mechanisms. Failure to meet the CoP can result in the loss of CMS certification. CMS evaluates transplant center performance based upon the SRTR risk adjustment contained within the PSRs. Many private insurers also use these performance measurements to certify transplant programs as preferred providers or centers of excellence. It is also important to note that while the PSRs published by the SRTR use a two-tailed t-test to detect differences between observed and expected outcomes, both the OPTN and CMS use a more rigorous one-tailed t-test. While both the MPSC and CMS foster process improvement, CMS uses these outcomes standards for the more punitive purpose of determining whether a program should be certified for or terminated from participation in the Medicare program (12).
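As a concrete illustration of the difference between the two tests, the minimal Python sketch below computes both p-values for the same excess of observed over expected events under a simple normal approximation; the SRTR's actual test statistic and variance estimate are more involved, so the numbers are illustrative only.

from scipy.stats import norm

def psr_p_values(observed, expected, variance):
    """Two-sided p-value (as in the published PSR tables) versus a one-sided
    p-value for underperformance (as used in regulatory review).
    Normal approximation only; illustrative, not the SRTR's actual statistic."""
    z = (observed - expected) / variance ** 0.5
    p_two_sided = 2 * norm.sf(abs(z))   # departure in either direction
    p_one_sided = norm.sf(z)            # tests only "worse than expected"
    return z, p_two_sided, p_one_sided

z, p2, p1 = psr_p_values(observed=18, expected=10.2, variance=10.2)
print(f"z={z:.2f}, two-sided p={p2:.3f}, one-sided p={p1:.3f}")
# For any excess of events, the one-sided p-value is half the two-sided value,
# so the same result crosses a given significance threshold sooner.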

Current Use: Quality Improvement

Many transplant programs use the SRTR PSRs as a tool for self-assessment and continuous quality improvement (1). However, lags in data collection and reporting limit their effectiveness for this purpose and have fueled efforts to apply real-time techniques such as cumulative summation (CUSUM) charts. The SRTR also provides transplant programs with an Excel spreadsheet tool containing the beta coefficients of the risk-adjustment models, which facilitates subgroup analyses so that risk factors contributing to poor outcomes can be identified and mitigated (https://securesrtr.transplant.hrsa.gov/).
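As a rough sketch of how published coefficients support this kind of subgroup analysis, the Python fragment below applies invented beta coefficients to a hypothetical patient list to compute an expected number of one-year graft failures. The covariate names, coefficient values, and baseline survival are assumptions for illustration only; the actual SRTR models (Cox-based, organ- and cohort-specific) are considerably richer.

import math

# Hypothetical risk-adjustment coefficients on the log hazard-ratio scale;
# the real SRTR spreadsheet supplies fitted values per organ and cohort.
BETAS = {"recipient_age_decades": 0.15, "diabetes": 0.30, "donor_age_decades": 0.12}
BASELINE_1YR_SURVIVAL = 0.90   # assumed survival for the reference patient

def expected_events(patients):
    """Expected 1-year graft failures for a list of patient covariate dicts,
    using a proportional-hazards style adjustment: S(x) = S0 ** exp(beta'x)."""
    total = 0.0
    for x in patients:
        lp = sum(BETAS[k] * v for k, v in x.items())
        surv = BASELINE_1YR_SURVIVAL ** math.exp(lp)
        total += 1.0 - surv   # each patient's expected probability of failure
    return total

cohort = [
    {"recipient_age_decades": 5.5, "diabetes": 1, "donor_age_decades": 4.0},
    {"recipient_age_decades": 3.0, "diabetes": 0, "donor_age_decades": 2.5},
]
print(f"Expected events in subgroup: {expected_events(cohort):.2f}")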

What are the key limitations of PSRs?

To account for differing donor, recipient, and other characteristics, PSRs use risk-adjustment techniques to compare transplant center performance. Risk adjustment is a statistical technique that uses patient variables to ensure valid comparisons. The aim is to level the playing field, but there are pitfalls to risk adjustment. Data collection may be incomplete, models may omit important variables, or the statistical techniques used may be flawed.

Limitations: Data Quality

While high-quality data are essential, resources for data collection are limited and the current OPTN reporting process represents a growing unfunded mandate. The potential errors in data collection are numerous, including errors in primary data entry, abstraction, and transfer to OPTN forms. Recently, Gillespie et al., comparing data from the Adult-to-Adult Living Donor Liver Transplantation (A2ALL) cohort study with data submitted to the OPTN/SRTR, found substantial amounts of missing or discrepant data in the registry (13).

Limitations: Covariate Risk Adjustment

The risk-adjustment process only considers differences among donors and recipients that are measured across all transplant programs. Some characteristics, such as the number of pre-transplant comorbid illnesses and their severity, which would likely impact post-transplant outcomes, are not collected. This is due to the need to balance maximal adjustment with the burden of data collection. For example, interstitial fibrosis in the biopsy of a donated kidney or the severity of a recipient's coronary artery disease, though they likely impact survival, are not included in risk-adjustment models.

Limitations: Techniques Used

The current SRTR method of risk adjustment is based on calculating observed-to-expected (O/E) ratios for every center. However, it has several limitations. For one, it captures only random variation across patient outcomes, not variation across transplant centers. It may also overestimate the number of centers considered to be outliers and therefore exaggerate differences between the centers with the best and worst outcomes (14). As a result, hierarchical and mixed-effects methods are increasingly being considered.

Mixed-effects models consider center effects and improve accuracy at the center of the distribution rather than at the extremes (14). Arguably, they are more useful for public reporting than identifying program underperformance and have been adopted by the Society of Thoracic Surgeons (15).
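The intuition behind the hierarchical approach can be sketched with a simple empirical-Bayes style calculation: each center's observed event rate is shrunk toward the national rate in proportion to its volume, which stabilizes estimates for small programs. The shrinkage formula and numbers below are assumptions for illustration and are not the SRTR's or the Society of Thoracic Surgeons' actual specification.

def shrunken_rate(center_events, center_n, national_rate, prior_strength=25):
    """Empirical-Bayes style shrinkage: a weighted average of the center's
    observed event rate and the national rate, weighted by center volume.
    prior_strength is an assumed 'effective sample size' for the national prior."""
    weight = center_n / (center_n + prior_strength)
    observed_rate = center_events / center_n
    return weight * observed_rate + (1 - weight) * national_rate

# A 10-case center with 3 failures is pulled strongly toward the national 10% rate;
# a 400-case center with the same 30% raw rate is barely moved.
print(shrunken_rate(3, 10, 0.10))     # ~0.157
print(shrunken_rate(120, 400, 0.10))  # ~0.288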

The SRTR uses repeated-measures analyses to detect changes in performance. Performing these multiple comparisons increases the risk of committing a type I error; applying a Bonferroni correction controls that risk but reduces the likelihood of detecting true underperformance. In addition, there is a substantial delay in detecting rapid changes in performance. Currently the SRTR makes minor changes in the PSR models every 6 months and, until recently, did so with minimal input from the transplant community (1).
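The trade-off can be made concrete with a small calculation: repeated semiannual evaluations inflate the chance of at least one false flag, while a Bonferroni correction restores control of that chance at the price of reduced power to detect a truly underperforming program. All numbers below are assumed purely for illustration.

from scipy.stats import norm

alpha, n_looks = 0.05, 10   # e.g., 10 semiannual evaluations over 5 years

# Family-wise false-flag probability without any correction:
fwer_uncorrected = 1 - (1 - alpha) ** n_looks   # ~0.40

# Bonferroni: test each look at alpha / n_looks...
alpha_bonf = alpha / n_looks

# ...which raises the critical z-value and lowers power to detect a genuinely
# underperforming center (an assumed true effect of z = 2.5 is used here).
true_effect_z = 2.5
power_uncorrected = norm.sf(norm.isf(alpha) - true_effect_z)
power_bonferroni = norm.sf(norm.isf(alpha_bonf) - true_effect_z)
print(f"{fwer_uncorrected:.2f} {power_uncorrected:.2f} {power_bonferroni:.2f}")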

What are the potential unintended consequences of PSRs?

The PSRs were never intended to serve as a bright-line test for regulatory oversight, and their use in this way could have serious unintended consequences. While fear of punitive action might exert a positive influence on transplant outcomes, it might also reduce access to transplantation among vulnerable high-risk patient populations.

Unintended consequences: Risk Aversion

Risk adjustment is inadequate for several "high-risk" scenarios that negatively impact outcomes, including patients transplanted following desensitization for ABO and cross-match incompatibility, liver transplantation for malignancies (hepatocellular carcinoma and cholangiocarcinoma), and recipients with advanced coronary artery disease (CAD) (12). For example, CAD is associated with inferior outcomes after kidney and liver transplantation (16, 17). Currently, a patient with non-obstructive CAD (e.g. 30% stenosis in one coronary artery) receives identical risk adjustment to a patient with obstructive CAD causing coronary events that require intervention such as coronary bypass or angioplasty and stenting. We have demonstrated that transplant centers have responded to implementation of the CoP by avoiding transplantation in patients with CAD risk factors who previously would have been transplanted (18). If so, clinical and programmatic decisions are being made to preserve the centers' CMS certification rather than to provide optimal patient care.

Unintended consequences: Threat to innovation

There is legitimate concern that failure to adequately risk-adjust outcomes may discourage centers from developing innovative treatments, leading to stagnation of the field. Indeed, there is no provision to waive inferior outcomes in the evaluation of new technologies. As a result, the American Society of Transplant Surgeons (ASTS) Executive Committee emphasized the need to establish a national body to approve unique clinical circumstances and innovative treatment protocols that warrant special consideration (12). The process should be transparent and should allow patients who meet pre-defined criteria to be excluded, irrespective of the transplant center. Outcomes for patients transplanted under this exclusion should be reported separately to enable program comparison. Thus, although the PSRs and regulation by CMS were not intended to stifle innovation, there is concern that this is occurring.

Unintended consequences: Ranking

It is important to note that the PSRs are not intended to demonstrate whether performance at one program is superior to another, but rather to determine if a program’s performance differs from expectations based on risk-adjustment using national data (4). For this reason, the SRTR does not present a ranking of centers and discourages other users from doing so (4).

How have the PSRs impacted program behavior?

Although PSRs have been publicly available for more than a decade, their more recent use by CMS for regulatory oversight under the CoP has garnered a great deal of attention from the transplant community (19). Private payers also utilize the PSRs for contracting decisions with individual transplant centers. Finally, the application of more stringent donor and recipient selection criteria has been proposed as an explanation for a contemporaneous decline in the steady growth of solid organ transplants after implementation of the CoP (4, 20, 21).

Impact on program behavior: Change in case mix

The use of report cards and other performance assessments is not unique to organ transplantation (22, 23). However, it remains unclear whether provider report cards alter consumer behavior or improve care (24). Similarly, while publicly reported outcomes in kidney transplantation influenced the choices of younger, educated patients, there was no impact on kidney transplantation demand overall (25).

In a survey of the 2009 Transplant Management Forum, 55% of respondents indicated that their centers had received low or near-low performance ratings (19). Interestingly, respondents from low-performing centers indicated that they had become more restrictive in selecting transplant candidates (81% vs. 38%, P = 0.001) and donors (77% vs. 31%, P < 0.001). In addition, 70% to 80% of respondents said their programs tolerated less risk as a result of the PSRs. Based on these data, performance evaluations are associated with significant changes in clinical practice at transplant centers, but do not affect patient choice of transplant center.

Impact on program behavior: Role of Quality Improvement and Safety

Performance metrics are intended to help identify centers in which practices could be improved and to identify high-performing centers whose practices can be emulated. Indeed, it appears that transplant programs are increasingly allocating resources toward internal quality monitoring (19). To this end, the transplant community has attempted to address the issue of delays in reporting by moving toward methods of real-time process monitoring. The CUSUM method graphically depicts currently collected, risk-adjusted data and alerts users when an outcome reaches a pre-determined threshold, allowing for earlier intervention. CUSUMs are designed to provide continuous, real-time assessment of clinical outcomes. When retrospectively compared to currently available data reporting, the CUSUM method detected clinically significant changes in center performance more rapidly (26). The United Kingdom has already adopted this methodology for use in kidney transplantation (27). Evaluation of its usefulness in a US population is ongoing.
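For readers unfamiliar with the technique, the sketch below implements a minimal risk-adjusted CUSUM in Python: each patient's observed outcome minus their risk-adjusted expected probability of failure is accumulated, and an alert is raised when the running sum crosses a pre-determined threshold. The threshold, probabilities, and the simple observed-minus-expected form are assumptions for illustration; the published transplant CUSUMs (26, 27) use a log-likelihood-ratio formulation.

def risk_adjusted_cusum(outcomes, expected_probs, threshold=4.0):
    """outcomes: 1 if graft failure, 0 otherwise, in chronological order.
    expected_probs: each patient's risk-adjusted probability of failure.
    Returns the running CUSUM path and the index (if any) at which it signals."""
    cusum, path, signal_at = 0.0, [], None
    for i, (y, p) in enumerate(zip(outcomes, expected_probs)):
        cusum = max(0.0, cusum + (y - p))   # accumulate observed minus expected
        path.append(cusum)
        if signal_at is None and cusum >= threshold:
            signal_at = i                   # pre-determined alert threshold crossed
    return path, signal_at

# Toy run: a cluster of failures among moderate-risk patients trips the alert.
outcomes = [0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0]
expected = [0.10, 0.15, 0.10, 0.20, 0.15, 0.15, 0.10, 0.20, 0.15, 0.15, 0.20, 0.10]
path, signal_at = risk_adjusted_cusum(outcomes, expected, threshold=4.0)
print(signal_at, [round(v, 2) for v in path])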

Issues to be addressed which will influence future impact of PSRs

The recent SRTR consensus conference brought to light several key issues that will influence the future impact of PSRs (1). The overarching theme is that PSRs need to be better designed to address the needs of all users, particularly patients. Additional comorbidity variables should be collected and used in risk-adjustment models, but innovation should also be protected by excluding patients enrolled in approved research protocols from the statistical models that identify underperforming centers. Transplant centers should also be provided with tools to facilitate quality assessment and performance improvement via innovative statistical methods and process improvement techniques.

Conclusions

PSRs are here to stay and are used by a wide variety of audiences, with both intended and unintended consequences on transplant center behavior. These reports represent a complex organ transplantation system that often involves trade-offs. Future studies designed to emphasize patient-centered outcomes analysis and data collection via innovative and cost-effective techniques are needed.


Key points.

  • Public program-specific reports of organ transplant program outcomes by the Scientific Registry of Transplant Recipients (SRTR) have been both groundbreaking and controversial.

  • The reports are used by regulatory agencies, private insurance providers, transplant centers and patients.

  • Failure to adequately adjust outcomes for risk may cause programs to avoid performing transplants involving suitable but high-risk candidates and donors and may hamper scientific innovation in solid organ transplantation.

Footnotes

Conflicts of interest:

No conflicts of interest to declare

References

  • 1.• Kasiske BL, McBride MA, Cornell DL, Gaston RS, Henry ML, Irwin FD, Israni AK, et al. Report of a consensus conference on transplant program quality and surveillance. Am J Transplant. 2012;12:1988–1996. doi: 10.1111/j.1600-6143.2012.04130.x. Publication of the efforts of the SRTR to gather key stakeholders in February of 2012 to address the limitations in data collection, statistical analysis and reporting practices. Several key issues are addressed with important recommendations for future policy.
  • 2. US Congress. National Organ Transplant Act. 1984.
  • 3. Department of Health and Human Services. The SRTR Final Rule. 1999. p. 21.
  • 4. Dickinson DM, Arrington CJ, Fant G, Levine GN, Schaubel DE, Pruett TL, Roberts MS, et al. SRTR program-specific reports on outcomes: a guide for the new reader. Am J Transplant. 2008;8:1012–1026. doi: 10.1111/j.1600-6143.2008.02178.x.
  • 5. McDiarmid SV, Pruett TL, Graham WK. The oversight of solid organ transplantation in the United States. Am J Transplant. 2008;8:739–744. doi: 10.1111/j.1600-6143.2007.02147.x.
  • 6. Abecassis MM, Burke R, Cosimi AB, Matas AJ, Merion RM, Millman D, Roberts JP, et al. Transplant center regulations--a mixed blessing? An ASTS Council viewpoint. Am J Transplant. 2008;8:2496–2502. doi: 10.1111/j.1600-6143.2008.02434.x.
  • 7. Dickinson DM, Shearon TH, O'Keefe J, Wong HH, Berg CL, Rosendale JD, Delmonico FL, et al. SRTR center-specific reporting tools: Posttransplant outcomes. Am J Transplant. 2006;6:1198–1211. doi: 10.1111/j.1600-6143.2006.01275.x.
  • 8. Ornstein C, Berthelsen C. UCI Medical Center on Transplant Probation; Regulators impose the lesser penalty. In the hospital’s liver scandal, 32 patients died waiting. Los Angeles Times. 2006 Mar 24.
  • 9. Ornstein C. Transplant Program Faulted. Los Angeles Times. 2005 Dec 28.
  • 10. Weber T, Ornstein C. Kaiser Halts Kidney Venture. Los Angeles Times. 2006 May 13.
  • 11. Department of Health and Human Services, Centers for Medicare and Medicaid Services. Hospital Conditions of Participation: Requirements for Approval and Re-Approval of Transplant Centers To Perform Organ Transplants; Proposed Rule. Federal Register. 2005. pp. 6140–6182.
  • 12. Abecassis MM, Burke R, Klintmalm GB, Matas AJ, Merion RM, Millman D, Olthoff K, et al. American Society of Transplant Surgeons transplant center outcomes requirements--a threat to innovation. Am J Transplant. 2009;9:1279–1286. doi: 10.1111/j.1600-6143.2009.02606.x.
  • 13. Gillespie BW, Merion RM, Ortiz-Rios E, Tong L, Shaked A, Brown RS, Ojo AO, et al. Database comparison of the adult-to-adult living donor liver transplantation cohort study (A2ALL) and the SRTR U.S. Transplant Registry. Am J Transplant. 2010;10:1621–1633. doi: 10.1111/j.1600-6143.2010.03039.x.
  • 14.• Zenios S, Atias G, McCulloch C, Petrou C. Outcome differences across transplant centers: comparison of two methods for public reporting. Clin J Am Soc Nephrol. 2011;6:2838–2845. doi: 10.2215/CJN.00300111. An important advancement in recognizing the need for more sophisticated methodology in analyzing transplant outcomes to achieve improved center comparisons.
  • 15. Shahian DM, Edwards FH, Jacobs JP, Prager RL, Normand SL, Shewan CM, O'Brien SM, et al. Public reporting of cardiac surgery performance: Part 2--implementation. Ann Thorac Surg. 2011;92:S12–23. doi: 10.1016/j.athoracsur.2011.06.101.
  • 16. Johnston SD, Morris JK, Cramb R, Gunson BK, Neuberger J. Cardiovascular morbidity and mortality after orthotopic liver transplantation. Transplantation. 2002;73:901–906. doi: 10.1097/00007890-200203270-00012.
  • 17. Jeloka TK, Ross H, Smith R, Huang M, Fenton S, Cattran D, Schiff J, et al. Renal transplant outcome in high-cardiovascular risk recipients. Clin Transplant. 2007;21:609–614. doi: 10.1111/j.1399-0012.2007.00695.x.
  • 18. Wang E, Lyuksemburg V, Skaro AI, Abecassis M. Donor and Recipient Risk Aversion in Liver Transplantation. Hepatology. 2011;54:363A.
  • 19. Schold JD, Arrington CJ, Levine G. Significant alterations in reported clinical practice associated with increased oversight of organ transplant center performance. Prog Transplant. 2010;20:279–287. doi: 10.1177/152692481002000313.
  • 20. Schold JD, Srinivas TR, Howard RJ, Jamieson IR, Meier-Kriesche HU. The association of candidate mortality rates with kidney transplant outcomes and center performance evaluations. Transplantation. 2008;85:1–6. doi: 10.1097/01.tp.0000297372.51408.c2.
  • 21. Howard RJ, Cornell DL, Schold JD. CMS oversight, OPOs and transplant centers and the law of unintended consequences. Clin Transplant. 2009;23:778–783. doi: 10.1111/j.1399-0012.2009.01157.x.
  • 22. Epstein AM. The role of quality measurement in a competitive marketplace. Baxter Health Policy Rev. 1996;2:207–234.
  • 23. Ranji SR, Shetty K, Posley KA, Lewis R, Sundaram V, Galvin CM, Winston LG. Closing the Quality Gap: A Critical Analysis of Quality Improvement Strategies. Vol. 6: Prevention of Healthcare-Associated Infections. Rockville, MD; 2007.
  • 24.• Ketelaar NA, Faber MJ, Flottorp S, Rygh LH, Deane KH, Eccles MP. Public release of performance data in changing the behaviour of healthcare consumers, professionals or organisations. Cochrane Database Syst Rev. 2011:CD004538. doi: 10.1002/14651858.CD004538.pub2. Provides key insights into the impact of public reporting on healthcare consumers with rigorous review criteria for study inclusion. The authors highlight the lack of evidence available to support public reporting as a way of influencing behaviors.
  • 25. Howard DH, Kaplan B. Do report cards influence hospital choice? The case of kidney transplantation. Inquiry. 2006;43:150–159. doi: 10.5034/inquiryjrnl_43.2.150.
  • 26. Axelrod DA, Guidinger MK, Metzger RA, Wiesner RH, Webb RL, Merion RM. Transplant center quality assessment using a continuously updatable, risk-adjusted technique (CUSUM). Am J Transplant. 2006;6:313–323. doi: 10.1111/j.1600-6143.2005.01191.x.
  • 27. Collett D, Sibanda N, Pioli S, Bradley JA, Rudge C. The UK scheme for mandatory continuous monitoring of early transplant outcome in all kidney transplant centers. Transplantation. 2009;88:970–975. doi: 10.1097/TP.0b013e3181b997de.
