Author manuscript; available in PMC: 2015 Nov 1.
Published in final edited form as: Hepatology. 2014 Oct 2;60(5):1717–1726. doi: 10.1002/hep.27307

Superior Survival Using Living Donors and Donor-Recipient Matching Using a Novel Living Donor Risk Index

David S Goldberg 1,2,3, Benjamin French 2,3, Peter L Abt 4, Kim Olthoff 4, Abraham Shaked 4
PMCID: PMC4211952  NIHMSID: NIHMS615109  PMID: 25042283

Abstract

The deceased-donor organ supply in the U.S. has not kept pace with the increasing demand for liver transplantation. We examined national OPTN/UNOS data from 2002–2012 to assess whether LDLT has surpassed deceased donor liver transplantation (DDLT) as a superior method of transplantation, and used donor and recipient characteristics to develop a risk score to optimize donor and recipient selection for LDLT. From 2002–2012, there were 2,103 LDLTs and 46,674 DDLTs that met the inclusion criteria. The unadjusted 3-year graft survival for DDLTs was 75.5% (95% CI: 75.1–76.0%), compared with 78.9% (95% CI: 76.9–80.8%; p<0.001) for LDLTs performed at experienced centers (>15 LDLTs), with substantial improvement in LDLT graft survival over time. In multivariable models, LDLT recipients transplanted at experienced centers with either autoimmune hepatitis or cholestatic liver disease had significantly lower risks of graft failure (HR: 0.56, 95% CI: 0.37–0.84 and HR: 0.76, 95% CI: 0.63–0.92, respectively). An LDLT risk score that included both donor and recipient variables facilitated stratification of LDLT recipients into high-, intermediate-, and low-risk groups, with predicted 3-year graft survival ranging from >87% in the lowest-risk group to <74% in the highest-risk group. Current post-transplant outcomes for LDLT are equivalent, if not superior, to those of DDLT when LDLT is performed at experienced centers. An LDLT risk score can be used to optimize LDLT outcomes and provides objective criteria for donor selection in LDLT.

Keywords: deceased donor transplantation, patient survival, outcomes

Introduction

Living donor liver transplantation (LDLT) is a potential alternative to bridge the current organ supply-demand mismatch, but it accounts for only 3–4% of all adult liver transplants in the U.S.1 Early publications on the entire U.S. experience of LDLT suggested inferior graft outcomes in LDLT recipients, with 1-year graft failure rates of 21.5%. However, these studies identified factors associated with inferior outcomes, which informed strategies for donor and recipient selection, as well as technical modifications.2,3 Since these early publications, there have been several reports of excellent LDLT outcomes among adults receiving an LDLT at an experienced U.S. center, based on data from the Adult-to-Adult Living Donor Liver Transplant (A2ALL) Consortium, with 1-year graft survival approaching 85%.4–6 This consortium, however, includes only 10 of the 36 U.S. liver transplant centers that have performed LDLTs since 2002, a group of centers acutely focused on LDLT.

While there is evidence to suggest comparable post-LDLT outcomes between A2ALL and non-A2ALL centers,7 the last published national analysis that specifically compared LDLT to deceased donor liver transplantation (DDLT) outcomes was performed over 10 years ago. Since that time, there has been an enhanced appreciation of the donor and recipient characteristics, as well as the technical aspects, of LDLT. Despite these advances, it is unknown whether national LDLT outcomes have improved relative to DDLT, and recent American Association for the Study of Liver Diseases (AASLD) guidelines refer to LDLT as “controversial.”8 Discussion of the benefit of LDLT has largely focused on the minimization of waitlist mortality among LDLT recipients who avoid prolonged waiting times9; it is unknown whether there is also a long-term benefit to receiving a living versus deceased donor graft, as seen in kidney transplantation.10

The goals of this study were to evaluate national data on all liver transplants performed over the last ten years to: 1) determine if LDLT confers a long-term graft and/or patient survival benefit relative to DDLT; 2) evaluate temporal changes in LDLT outcomes, and potential mechanisms for improved outcomes in LDLTs; and 3) develop a risk score to predict post-LDLT graft outcomes. Such a score could help identify optimal donor and recipient matches (e.g. when a recipient has >1 potential living donor), and be used in counseling waitlisted patients considering LDLT.

Methods

Study Population

All analyses were based on Organ Procurement and Transplantation Network (OPTN)/United Network for Organ Sharing (UNOS) data from February 27, 2002 through December 2, 2012. We restricted the analyses to transplant recipients within the Model for End-Stage Liver Disease (MELD) era because: 1) decision-making and referral for LDLT vs. DDLT changed as a result of MELD-based allocation1; 2) surgical techniques and selection of recipient and donor candidates for LDLT have evolved; and 3) LDLT in adults did not become commonplace in the U.S. until the last decade.7 All adult (≥18 years of age) transplant recipients were included; re-transplant recipients were excluded because the selection process and post-transplant outcomes for such recipients are inherently different,2 as were combined-organ transplant recipients.11

Outcome

The primary outcomes were post-transplant patient and graft survival. Post-transplant deaths included transplant recipients with the post-transplant status code of “died,” or those without this code but with a confirmed Social Security Death Master File (SSDMF) death date in the OPTN/UNOS dataset. Graft failure was defined as post-transplant death or the need for re-transplantation.
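The composite endpoint definitions above reduce to two small predicates. A minimal sketch, assuming illustrative argument names rather than actual OPTN/UNOS field names:

```python
def is_death(status_code, ssdmf_death_date):
    """Post-transplant death: status coded 'died', or a confirmed
    SSDMF death date when the status code is absent/other."""
    return status_code == "died" or ssdmf_death_date is not None

def is_graft_failure(status_code, ssdmf_death_date, retransplanted):
    """Graft failure: post-transplant death or need for re-transplantation."""
    return is_death(status_code, ssdmf_death_date) or retransplanted

print(is_graft_failure("alive", None, True))    # retransplant counts as graft failure
print(is_graft_failure("died", None, False))    # death counts as graft failure
print(is_graft_failure("alive", None, False))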

Statistical Analysis

Demographic and clinical characteristics of LDLT and DDLT recipients at transplantation were compared using standard descriptive statistics.

DDLT versus LDLT outcomes

Cox regression models were used to compare post-transplant patient and graft survival for recipients of a DDLT vs. LDLT. Models were stratified by transplant center to account for clustering of patients within centers, variable access to LDLT as a function of listing center,1 and heterogeneity in post-transplant outcomes across centers. Models were fit using three separate methods for analyzing clustered survival data, accounting for center differently in each model: stratified Cox model, fixed-effects Cox model, and random-effects (shared frailty) model.12 The data for the model with the best fit, as determined by model convergence and the lowest Akaike information criterion (AIC), are presented.12
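The model-selection step described above, choosing among the three approaches to center-level clustering by lowest AIC, can be sketched generically. The log-likelihoods and parameter counts below are hypothetical placeholders, not values from the paper; they simply illustrate the AIC = 2k − 2·ln(L) comparison:

```python
def aic(log_likelihood: float, n_params: int) -> float:
    """Akaike information criterion: 2k - 2*ln(L); lower is better."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fitted log-likelihoods and parameter counts for the
# three candidate ways of accounting for center in a Cox model.
candidates = {
    "stratified Cox": (-1515.0, 9),      # center as stratifying variable
    "fixed-effects Cox": (-1511.8, 44),  # one indicator per center
    "shared frailty": (-1516.0, 10),     # center as random effect
}

scores = {name: aic(ll, k) for name, (ll, k) in candidates.items()}
best = min(scores, key=scores.get)
print(best, round(scores[best], 1))
```

With these illustrative numbers the stratified Cox model wins, consistent with the model the paper ultimately reports; the fixed-effects model is penalized for its many per-center parameters.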

We adjusted for recipient characteristics: age at transplant, gender, race/ethnicity (defined by OPTN/UNOS coding), blood type, final laboratory MELD score, primary diagnosis (Table 1), and status prior to transplant (home, hospital, or intensive care unit [ICU]). We did not adjust for donor characteristics except for age, because favorable donor characteristics (e.g. decreased steatosis or shorter cold ischemia time) may be in the causal pathway of a survival benefit of LDLT, rather than being potential confounders that should be adjusted for in multivariable models.13 Given the improved outcomes among LDLT recipients as a function of center volume, LDLT recipients were dichotomized into those among the first 15 LDLTs at a center versus those performed after a center had performed 15.7 An interaction term of donor type (LDLT vs. DDLT) and recipient diagnosis was included to determine if there was a differential benefit of LDLT by recipient diagnosis. Primary analyses excluded recipients of donation after circulatory determination of death (DCDD) grafts, known to have inferior outcomes2,14,15; secondary analyses included these recipients.

Table 1.

Demographic and clinical characteristics of DDLT vs. LDLT recipients at transplantation*

DDLT, N=44,512 LDLT, N=2,103 P-value
Age at transplant, median (IQR) 55 (49–60) 53 (46–60) <0.001
Male gender, N (%) 31,595 (67.7) 1,194 (56.8) <0.001
Race/ethnicity, N (%) <0.001
 White 32,053 (72.0) 1,750 (83.2)
 Black 4,101 (9.2) 76 (3.6)
 Hispanic 5,709 (12.8) 205 (9.8)
 Asian 2,134 (4.8) 56 (2.7)
 Other 515 (1.2) 16 (0.8)
Final laboratory MELD score, median (IQR) 19 (13–27) 14 (11–18) <0.001
Serum bilirubin at transplant 3.6 (1.8–8.9) 2.5 (1.4–4.5) <0.001
INR at transplant 1.6 (1.3–2.1) 1.3 (1.2–1.6) <0.001
Serum creatinine at transplant 1.0 (0.8–1.5) 0.9 (0.7–1.1) <0.001
Diagnosis <0.001
 Hepatitis C 18,216 (40.9) 653 (31.1)
 Alcohol 6,804 (15.3) 213 (10.1)
 Hepatitis B 1,626 (3.7) 45 (2.1)
 NASH/cryptogenic 6,018 (13.5) 291 (13.8)
 Cholestatic 3,477 (7.8) 532 (25.3)
 Autoimmune 2,215 (5.0) 119 (5.7)
 Other 4,582 (10.3) 238 (11.3)
 Fulminant 1,574 (3.5) 12 (0.6)
Blood type <0.001
 O 19,581 (44.0) 984 (46.8)
 A 16,416 (36.9) 875 (41.6)
 B 6,139 (13.8) 214 (10.2)
 AB 2,362 (5.3) 30 (1.4)
Ascites at transplant <0.001
 None 11,091 (26.0) 606 (28.8)
 Mild 22,199 (52.1) 1,129 (53.7)
 Moderate 9,364 (22.0) 368 (17.5)
Location at time of transplant <0.001
 Intensive care unit (ICU) 4,942 (11.1) 49 (2.3)
 Hospitalized, non-ICU 7,347 (16.5) 196 (9.3)
 Home 32,197 (72.4) 1,858 (88.4)
*

Excluding donation after circulatory determination of death (DCDD) recipients

Data missing on 26 DDLT recipients

Potential mechanisms of improved LDLT outcomes

Given the observed improvement in LDLT graft survival from 2002–2012, we evaluated potential mechanisms for improved outcomes. We evaluated variables that have previously been shown to be associated with post-LDLT outcomes: center experience, recipient hospitalization status prior to LDLT, graft lobe (left vs. right), recipient and donor age, cold ischemia time, recipient diagnosis, and recipient non-white race.2,7 LDLT recipients were first categorized by year of transplantation, and subsequent years were grouped together based on similar results for specific years (Results section). Unadjusted Cox models with year category as the exposure were fit, and multivariable shared frailty Cox regression models, with center as a random effect, were fit to evaluate factors that accounted for differences in outcomes over time. Specific donor, recipient, or center (volume) factors were deemed significantly associated with improved graft outcomes over time (and thus a confounder of the relationship between time and graft survival) if inclusion of the factor changed the hazard ratio of year category by at least 10%. Unadjusted Kaplan-Meier curves were compared to adjusted post-estimation survival curves using Stata’s stcurve function. These analyses were performed using Stata 13.0 (Stata Corp, College Station, TX).
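The 10% change-in-estimate criterion above can be expressed as a small helper. The hazard ratios in the example are illustrative, not values from the paper's models:

```python
def is_confounder(hr_unadjusted: float, hr_adjusted: float,
                  threshold: float = 0.10) -> bool:
    """Flag a covariate as a confounder of the year-category effect if
    adding it to the model changes the hazard ratio by >= threshold (10%)."""
    return abs(hr_adjusted - hr_unadjusted) / hr_unadjusted >= threshold

# Illustrative: adjusting for a factor moves the HR for a later-era
# category from 0.70 to 0.82 (a ~17% change), which meets the criterion;
# a move from 0.70 to 0.73 (~4%) does not.
print(is_confounder(0.70, 0.82))
print(is_confounder(0.70, 0.73))
```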

Development of LDLT risk score

In an effort to optimize patient-centered transplant care, we developed an LDLT risk score to predict post-LDLT graft survival using donor and recipient variables at transplantation previously identified to be associated with LDLT and DDLT outcomes.2,7,16 The risk-score development and internal validation included LDLT recipients in the MELD era who were not among the first 15 LDLTs at a given center, given the recognized learning curve and change in post-LDLT outcomes with increasing center experience.2,7

Potential predictors were: a) donor: age, gender, race/ethnicity, weight, relatedness to recipients (unrelated, first-, second-, or third-degree), and graft type (right vs. left; graft weight not available in OPTN/UNOS data); and b) recipient: age, gender, race/ethnicity, weight, serum albumin at transplant, history of ascites (yes/no), previous abdominal surgery, hospitalization status at the time of LDLT, and primary diagnosis. Primary diagnosis was categorized as hepatitis C, hepatitis B, alcoholic liver disease, non-alcoholic steatohepatitis/cryptogenic, cholestatic, autoimmune, hepatocellular carcinoma, and other.

For the primary derivation and validation of the LDRI risk score, HCC was categorized as a separate primary diagnostic category for two reasons. First, in UNOS data, HCC can be selected as a primary diagnosis, and providing a secondary diagnosis is optional. As a result, approximately one-third of the LDLT recipients with HCC in this analysis had only HCC listed as a diagnosis, without a secondary diagnosis. Second, in the setting of LDLT, it has been shown that the diagnosis of HCC, independent of primary disease, is a stronger predictor of outcomes,6 and it should therefore be categorized as a separate diagnosis. Patients were coded as HCC based on UNOS coding for diagnosis and/or receipt of HCC MELD exception points. In a sensitivity analysis, HCC was instead coded as a separate dichotomous variable, and not as a primary diagnosis; those patients with only HCC as a diagnosis were coded as “other.” Laboratory MELD score at transplant was not included because there was no association between laboratory MELD and graft outcomes in LDLT recipients (in multivariable Cox models evaluating graft survival among LDLT recipients, the hazard ratio for laboratory MELD score at transplant was 1.01, 95% CI: 0.99–1.03, p=0.49).

First, graphical summaries were used to explore the functional form for continuous variables. Second, all candidate variables and their appropriate transformations were included in a Cox regression model for time to graft failure or censoring (i.e., the ‘full’ model), and a ‘parsimonious’ model was selected by applying a bi-directional variable-selection algorithm to the full model, retaining a variable only if it decreased the AIC. Risk scores were estimated using leave-one-out cross-validation (i.e., the jackknife), such that the value of the risk score for each patient was calculated as a weighted combination of his/her covariate values, with weights determined by regression coefficients estimated from a Cox regression model fit to the data for all other patients.17 The leave-one-out approach ameliorates the potential for bias when evaluating a risk score in the same dataset from which it was derived, and avoids arbitrarily splitting the data into derivation and validation cohorts.
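The leave-one-out scheme above can be sketched with a stand-in estimator: a univariate ordinary-least-squares slope replaces the Cox fit, purely to show the mechanics of scoring each patient with a coefficient estimated from everyone else. The data and the linear stand-in are illustrative, not the paper's model:

```python
def fit_slope(xs, ys):
    """OLS slope (stand-in for estimating a regression coefficient)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def loo_risk_scores(xs, ys):
    """For each patient i, estimate the coefficient on all other patients,
    then score patient i as beta_(-i) * x_i (leave-one-out / jackknife)."""
    scores = []
    for i in range(len(xs)):
        xs_rest = xs[:i] + xs[i + 1:]
        ys_rest = ys[:i] + ys[i + 1:]
        beta = fit_slope(xs_rest, ys_rest)
        scores.append(beta * xs[i])
    return scores

# Toy covariate (e.g. donor age) and outcome proxy.
xs = [30.0, 40.0, 50.0, 60.0]
ys = [1.0, 1.5, 2.5, 3.0]
print([round(s, 3) for s in loo_risk_scores(xs, ys)])
```

Each patient's score is thus computed without that patient contributing to the coefficient estimate, which is what protects against the optimism of evaluating a score on the data that produced it.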

Time-dependent receiver operating characteristic (ROC) curves were used to compare the ability of the risk scores to classify patients with respect to graft failure at 1, 3, and 5 years after transplantation.18 The area under the time-dependent ROC curve (AUC) was used to quantify prediction accuracy. Confidence intervals for the AUC were obtained from 1,000 bootstrap resamples, in which rows of the data were sampled with replacement, and the risk scores and corresponding AUCs were estimated at each iteration. Analyses were completed using R 3.0.2 (R Development Core Team, Vienna, Austria), including the survivalROC extension package.
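The bootstrap procedure for the AUC confidence interval can be sketched as follows. For simplicity this uses a plain (not time-dependent) rank-based AUC on toy data; the point is the mechanics of resampling rows with replacement and taking percentile limits:

```python
import random

def auc(scores, labels):
    """Probability that a randomly chosen event outranks a non-event
    (Mann-Whitney form of the AUC); ties count half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_ci(scores, labels, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample patients with replacement and
    recompute the AUC at each iteration."""
    rng = random.Random(seed)
    n = len(scores)
    aucs = []
    for _ in range(n_boot):
        sample = [rng.randrange(n) for _ in range(n)]
        ys = [labels[i] for i in sample]
        if len(set(ys)) < 2:  # need both outcomes to define an AUC
            continue
        aucs.append(auc([scores[i] for i in sample], ys))
    aucs.sort()
    lo = aucs[int(alpha / 2 * len(aucs))]
    hi = aucs[int((1 - alpha / 2) * len(aucs)) - 1]
    return lo, hi

# Toy risk scores and graft-failure indicators.
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
labels = [1, 1, 1, 0, 1, 0, 0, 0]
lo, hi = bootstrap_auc_ci(scores, labels)
print(round(auc(scores, labels), 3), round(lo, 3), round(hi, 3))
```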

The study was approved by the Institutional Review Board at the University of Pennsylvania.

Results

From February 27, 2002 through December 2, 2012, there were 48,777 liver transplants meeting the inclusion criteria—2,103 (4.3%) LDLTs and 46,674 (95.7%) DDLTs, of which 44,512 (95.4%) were from donation after neurologic death (DND) donors. There were significant differences in clinical and demographic characteristics of LDLT vs. DDLT transplant recipients (Table 1). The LDLT cohort was significantly more likely to be white and have cholestatic liver disease, while DDLT recipients were more likely to have hepatitis C, have higher laboratory MELD scores at transplantation, and have moderate ascites at transplantation. There was marked regional variability in the number of DDLTs vs. LDLTs (Supplementary Table 1).

Unadjusted DDLT vs. LDLT post-transplant graft and patient survival

Unadjusted post-transplant graft and patient survival was significantly higher in LDLT recipients compared to DDLT recipients (p<0.001 for both patient and graft survival, DDLT vs. LDLT; Figures 1a and 1b; Table 2). However, when LDLT recipients were stratified based on whether they were among the first 15 LDLTs performed at a center, unadjusted graft survival was not different between DDLT recipients and LDLT recipients among the first 15 at a center (log-rank test p=0.28), while graft survival remained significantly higher in LDLT recipients who were not among the first 15 LDLTs (log-rank test p<0.001). In unadjusted analyses comparing LDLTs stratified by center experience, post-transplant graft survival was significantly higher when the LDLT was not among the first 15 at a center (log-rank p=0.006), while patient survival was not significantly different (p=0.07; Table 2).

Figure 1.

Figure 1a: MELD Era Post-Transplant Graft Survival

Figure 1b: MELD Era Post-Transplant Patient Survival

Table 2.

Unadjusted 1-, 3-, and 5-year post-transplant graft and patient survival of DDLTs vs. LDLTs stratified on center LDLT experience

Outcome Transplant type Survival

1-year 3-year 5-year
Graft survival DDLT 85.3 (85.0–85.7) 75.5 (75.1–76.0) 68.5 (68.5–69.0)
LDLT number≤15* 80.1 (75.1–84.2) 73.1 (67.3–78.0) 64.9 (58.3–70.6)
LDLT number>15* 85.8 (84.0–87.3) 78.9 (76.9–80.8) 73.8 (71.5–76.0)
Patient survival DDLT 87.3 (86.9–87.6) 77.9 (77.5–78.3) 71.0 (70.5–71.5)
LDLT number≤15* 85.2 (80.4–88.9) 79.1 (73.5–83.7) 70.7 (64.1–76.4)
LDLT number>15* 89.6 (88.0–90.9) 82.7 (80.8–84.5) 77.8 (75.5–79.9)
*

LDLT number signifies whether the LDLT was among the first 15 LDLTs performed at a given transplant center.

DDLT vs. LDLT post-transplant survival in multivariable models

The best-fitting model for both patient and graft survival was the stratified Cox model, with center as the stratifying variable. In multivariable Cox models, without consideration of center experience or diagnosis, LDLT recipients had a numerically, but not statistically significantly, lower risk of graft failure (Table 3). However, when accounting for center experience (LDLT among the first 15 at a center vs. beyond the 15th LDLT at a center), primary diagnosis, and the interaction of these two factors, the survival benefit of LDLT was restricted to patients with certain diagnoses transplanted at centers with increased LDLT experience (Table 3; Supplementary Table 2). Specifically, the greatest graft survival benefit when comparing LDLT vs. DDLT within specific diagnostic categories was seen in patients with cholestatic liver disease or autoimmune hepatitis transplanted at centers with increased LDLT experience: HR for graft failure 0.76 (0.63–0.92), p=0.004 for cholestatic liver disease; HR for graft failure 0.56 (0.37–0.84), p=0.004 for autoimmune liver disease. This benefit was not evident when the recipient was among the first 15 LDLTs performed at a center. Similar results were seen when evaluating patient survival (data not shown).

Table 3.

Within-diagnosis results of multivariable hazard model evaluating graft survival with interaction term of donor category and recipient diagnosis*

Variable Hazard ratio P-value
Hepatitis C
 DDLT 1
 LDLT number ≤15 0.94 (0.66–1.34) 0.73
 LDLT number >15 0.89 (0.74–1.07) 0.22
Alcoholic liver disease
 DDLT 1
 LDLT number ≤15 1.67 (0.88–3.16) 0.12
 LDLT number >15 0.94 (0.69–1.27) 0.68
Hepatitis B
 DDLT 1
 LDLT number ≤15 1.45 (0.32–6.50) 0.63
 LDLT number >15 1.07 (0.65–1.76) 0.79
NASH/Cryptogenic
 DDLT 1
 LDLT number ≤15 1.13 (0.72–1.76) 0.60
 LDLT number >15 1.21 (0.97–1.51) 0.10
Cholestatic liver disease
 DDLT 1
 LDLT number ≤15 1.36 (0.83–2.22) 0.22
 LDLT number >15 0.75 (0.63–0.92) 0.004
Autoimmune liver disease
 DDLT 1
 LDLT number ≤15 0.87 (0.35–2.15) 0.76
 LDLT number >15 0.56 (0.37–0.83) 0.004
Other
 DDLT 1
 LDLT number ≤15 1.15 (0.69–1.92) 0.58
 LDLT number >15 1.00 (0.69–1.45) 0.98
*

Displayed results are based on output of multivariable Cox model for graft survival, specifically comparing within-diagnosis hazard of graft failure based on output of Cox model which included interaction term of donor category (DDLT, LDLT≤15, and LDLT>15) and primary diagnosis. The displayed within-diagnosis hazard ratios are for the hazard of graft failure, based on donor category, with all other covariates being fixed. The other covariates included in the model are shown in Supplementary Table 2. Hazard ratio for fulminant hepatic failure not displayed due to extremely wide, clinically implausible hazard ratios due to small sample sizes.

Temporal changes in LDLT outcomes

Unadjusted post-transplant LDLT graft survival increased significantly over time (Figure 2a), with 3-year unadjusted graft survival increasing from 64% for LDLTs performed in 1999, to 75% for those performed between 2002 and 2004, to 82% for LDLTs performed in 2008. Several donor, recipient, and graft factors changed over time, with these changes accounting for the improved graft survival (Table 4). Beyond increased center experience, fewer grafts had >4.5 hours of cold ischemia time, and there were fewer recipients with hepatitis C and fewer who were in the ICU and/or hospitalized at the time of transplantation. Although the proportion of LDLT grafts that were right vs. left lobe varied significantly over time (Table 4), this factor likely did not explain improvements in outcomes; the proportion of right lobe grafts was 85% in 1999, peaked at 97% between 2002–2004, and decreased to 84% by the 2009–2012 era.

Figure 2.

Figure 2a: Unadjusted Post-LDLT Graft Survival Rates Among All U.S. LDLT Recipients Stratified by Year of LDLT

Figure 2b: Adjusted Post-LDLT Graft Survival Rates Among All U.S. LDLT Recipients Stratified by Year of LDLT

Table 4.

Donor, recipient, and graft factors that have changed from 1999–2012*

Risk factors Year of LDLT category P-value

1999 2000–2001 2002–2004 2005–2007 2008 2009–2012
3-year unadjusted graft survival 63.4 (54.1–71.3) 71.4 (67.5–74.9) 75.4 (72.0–78.4) 77.9 (74.3–81.0) 82.2 (75.3–87.4) N/A <0.001
Proportion of LDLTs that were among the first 15 for a center, N (%) 119 (96.8) 309 (54.2) 172 (24.4) 53 (8.7) 11 (7.0) 80 (12.3) <0.001
Cold ischemia time ≤4.5 hours, N (%)** 69 (82.1) 319 (90.9) 397 (91.7) 428 (92.2) 122 (93.9) 567 (95.5) <0.001
Recipients with hepatitis C, N (%) 49 (39.8) 219 (38.4) 241 (34.2) 171 (28.1) 55 (34.8) 180 (27.6) 0.002
White race, N (%) 88 (71.5) 432 (75.8) 565 (80.3) 519 (85.2) 129 (81.7) 552 (84.7) 0.001
Recipient in ICU prior to LDLT 9 (7.3) 24 (4.2) 16 (2.3) 18 (3.0) 1 (0.6) 14 (2.2) 0.004
Recipient hospitalized, not in ICU, prior to LDLT, N (%) 13 (10.6) 91 (16.0) 74 (10.5) 61 (10.0) 10 (6.3) 54 (8.3) <0.001
Right lobe graft, N (%) 105 (85.4) 538 (94.4) 684 (97.2) 576 (94.6) 142 (89.9) 547 (83.9) <0.001
*

No significant change in donor or recipient age over time. Years of transplant for which post-transplant graft outcomes were statistically and numerically similar were grouped together.

**

Data only available in 73.0% (N=2,056) of recipients

In unadjusted Cox models, year of LDLT was significantly associated with post-LDLT graft survival (Table 4). In adjusted multivariable Cox models that accounted for changes in donor, recipient, and graft factors over time, these differences were no longer significant, with the exception of 2009–2012, with all of the hazard ratios attenuated toward the null (1.0), and near overlap of the survival curves (Figure 2b; Supplementary Table 3).

LDLT risk score

The LDLT risk score (hereafter, LDRI) was developed and validated for its ability to predict 1-, 3-, and 5-year graft survival. A full model including all potential donor and recipient covariates, as well as a parsimonious model restricted to the variables significantly associated with graft survival (recipient age, weight, albumin, and diagnosis; and donor age, weight, and graft type) yielded different beta coefficients for individual variables (Table 5), but similar results for the area under the curve (AUC). Specifically, the AUC for 1-, 3-, and 5-year graft survival was 0.59 (95% CI: 0.55–0.63), 0.61 (0.57–0.64), and 0.60 (0.57–0.63), respectively for the full model, and 0.60 (0.56–0.63), 0.61 (0.58–0.64), and 0.62 (0.58–0.63), respectively for the parsimonious model. Similar results were obtained in models that treated HCC as a separate variable, and not a primary diagnosis.

Table 5.

LDLT Risk Score Model

Full model Parsimonious model*
Risk score coefficients
 Recipient age, years
  Linear −0.01390 −0.02157
  Quadratic 0.00025 0.00032
 Recipient gender
  Female 0
  Male 0.08699
 Recipient race
  White 0
  Black 0.20554
  Hispanic −0.07186
  Asian −0.15048
  Other 0.78156
 Recipient weight, kg
  Linear −0.04158 −0.04093
  Quadratic 0.00025 0.00025
 Primary diagnosis
  Hepatitis C 0 0
  Hepatitis B −0.03166 −0.04381
  Alcohol −0.02849 −0.02212
  NASH/Cryptogenic 0.21941 0.20984
  Cholestatic −0.53052 −0.53217
  Autoimmune −0.82212 −0.85707
  HCC 0.25748 0.28949
  Other −0.03211 −0.02658
 Recipient serum albumin, mg/dL −0.19865 −0.19665
 Ascites
  None 0
  Slight −0.02337
  Moderate −0.05336
 Previous abdominal surgery
  No 0
  Yes 0.06019
 Hospitalization status
  No 0
  Yes 0.15267
 Donor age, years
  Linear −0.01590 −0.01914
  Quadratic 0.00040 0.00042
 Donor gender
  Female 0
  Male −0.00160
 Donor race
  White 0
  Black 0.08382
  Hispanic 0.11456
  Asian −0.08967
  Other −0.39978
 Donor weight, kg
  Linear −0.03157 −0.02822
  Quadratic 0.00018 0.00017
 Graft type
  Left 0 0
  Right −0.77881 −0.76107
 Relatedness
  Unrelated 0
  Identical twins or first degree 0.09938
  Second degree 0.06086
  Third degree 0.37820
*

Parsimonious model developed from applying a bi-directional variable-selection algorithm to the full model, in which a variable that decreased the Akaike information criterion was retained.

Risk score coefficients estimated from a Cox regression model for time to graft failure.

LDLT recipients were stratified into three risk groups, with clustering of scores that clearly delineated 1-, 3-, and 5-year graft survival of LDLT recipients. The combined donor and recipient factors that yielded the “best” LDRI scores had predicted 3-year graft survival rates of >87%, and 5-year graft survival of >80%, compared with the “worst” LDRI scores that were associated with 3- and 5-year predicted graft survival rates of <74% and 68%, respectively (Table 6). Supplementary Table 4 demonstrates an example of how the LDRI risk score could be used to evaluate expected graft survival for a given recipient with five potential donors, while Supplementary Figure 1 demonstrates the broad range of predicted probabilities of graft survival based on the parsimonious LDRI score.

Table 6.

Predicted probability of graft survival, calculated based on parsimonious LDRI score*

Outcomes category Score range 1-year 3-year 5-year
Highest graft survival rates LDRI<−0.704 91.8 (86.5–95.0) 87.8 (81.8–92.0) 85.1 (78.3–89.9)
−0.704≤LDRI<−0.414 91.8 (86.5–95.0) 89.2 (83.3–93.0) 88.2 (82.1–92.4)
−0.414≤LDRI<−0.212 91.7 (86.4–95.0) 87.8 (81.7–92.0) 80.3 (72.9–85.8)

Intermediate graft survival rates −0.212≤LDRI<−0.088 90.5 (85.0–94.1) 79.4 (72.1–85.0) 74.7 (66.6–81.2)
−0.088≤LDRI<0.0180 85.9 (79.7–90.3) 80.8 (73.7–86.1) 76.9 (69.0–83.0)
0.0180≤LDRI<0.125 88.8 (83.0–92.7) 80.8 (73.8–86.2) 77.2 (69.5–83.2)

Lowest graft survival rates 0.125≤LDRI<0.253 86.3 (80.1–90.7) 71.2 (63.2–77.7) 65.7 (57.2–72.9)
0.253≤LDRI<0.401 83.4 (76.8–88.2) 73.1 (65.5–79.4) 67.5 (59.0–74.5)
0.401≤LDRI<0.589 85.9 (79.6–90.3) 72.1 (64.3–78.5) 63.5 (54.6–71.0)
LDRI≥0.589 82.1 (75.5–87.2) 68.0 (59.7–74.9) 60.3 (50.2–68.9)
*

Formula to calculate LDRI risk score: (recipient age*−0.02157) + (recipient age2 *0.00032) + (recipient weight in kg * −0.04093) + (recipient weight in kg2*0.00025) + (−0.04381 if hepatitis B; −0.02212 if alcoholic liver disease; 0.20984 if NASH; −0.53217 if cholestatic liver disease; −0.85707 if autoimmune; 0.28949 if HCC; −0.02658 if “other” diagnosis) + (recipient serum albumin * −0.19665) + (donor age*−0.01914) + (donor age2 *0.00042) + (donor weight in kg * −0.02822) + (donor weight in kg2*0.00017) −0.76107 (if right lobe graft)

Score ranges based on grouping scores into deciles, such that each group has approximately 170 recipients.
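The footnote formula above translates directly to code. The recipient and donors below are hypothetical, used only to illustrate ranking candidate donors for a fixed recipient; per Table 6, a lower LDRI corresponds to higher predicted graft survival. A minimal sketch:

```python
# Diagnosis coefficients from the parsimonious model (hepatitis C is the reference).
DIAGNOSIS_COEF = {
    "hepatitis C": 0.0,
    "hepatitis B": -0.04381,
    "alcohol": -0.02212,
    "NASH/cryptogenic": 0.20984,
    "cholestatic": -0.53217,
    "autoimmune": -0.85707,
    "HCC": 0.28949,
    "other": -0.02658,
}

def ldri(recipient_age, recipient_weight_kg, diagnosis, serum_albumin,
         donor_age, donor_weight_kg, right_lobe):
    """Parsimonious LDRI score, per the Table 6 footnote formula."""
    score = (recipient_age * -0.02157 + recipient_age**2 * 0.00032
             + recipient_weight_kg * -0.04093 + recipient_weight_kg**2 * 0.00025
             + DIAGNOSIS_COEF[diagnosis]
             + serum_albumin * -0.19665
             + donor_age * -0.01914 + donor_age**2 * 0.00042
             + donor_weight_kg * -0.02822 + donor_weight_kg**2 * 0.00017)
    if right_lobe:
        score += -0.76107
    return score

# Hypothetical 52-year-old, 70 kg recipient with cholestatic disease, albumin 3.0 g/dL.
recipient = dict(recipient_age=52, recipient_weight_kg=70,
                 diagnosis="cholestatic", serum_albumin=3.0)
# Two hypothetical donors, each offering a right lobe graft.
donors = {"donor A": dict(donor_age=25, donor_weight_kg=75, right_lobe=True),
          "donor B": dict(donor_age=48, donor_weight_kg=90, right_lobe=True)}
for name, d in donors.items():
    print(name, round(ldri(**recipient, **d), 3))
```

For a fixed recipient, only the donor terms vary, so ranking candidate donors by LDRI (lowest first) mirrors the donor-triage use described in the Discussion and Supplementary Table 4.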

Discussion

In this analysis of all living donor liver transplants performed in the United States, we demonstrate that in the current MELD era, LDLT is associated with significantly superior unadjusted patient and graft survival relative to DDLT. Although in adjusted models these benefits were restricted to recipients with autoimmune or cholestatic liver disease transplanted at a center with greater LDLT experience, outcomes of LDLTs in more recent years demonstrate continued improvement in post-transplant outcomes. These data clearly demonstrate that the benefits of LDLT relative to DDLT extend to post-transplant outcomes and are not restricted to the benefits of earlier transplantation and its impact on decreasing waitlist mortality. Furthermore, we have developed an LDRI risk score which, despite modest prediction accuracy, has the potential to impact clinical practice by optimizing donor selection and donor-recipient matching for LDLT. These data suggest that increased utilization of LDLT at experienced centers may help to bridge the organ supply-demand mismatch and decrease the risk of waitlist mortality without compromising post-transplant outcomes.

While previous studies have reported excellent post-LDLT outcomes in the MELD era,7 the current study is the first to specifically compare national LDLT vs. DDLT outcomes since the introduction of MELD-based allocation. By drawing on previous reports of post-LDLT outcomes in the U.S., we were able to identify several factors that likely explain the improved graft survival among LDLT recipients over time. Although the data do not allow us to delineate the contribution of each of these factors, it is likely that the major factor leading to the superior outcomes among LDLT recipients is improved surgical technique and experience. This experience likely extends both to the surgery itself and to process-of-care measures at experienced centers that serve to optimize and streamline the entire LDLT process. These data also suggest that the use of right lobe grafts may have accounted for some of the improved outcomes in LDLT recipients over time. Because UNOS does not include data on graft weight, we cannot delineate whether this association is related to recipient graft size versus the technical aspects of transplanting right vs. left lobe grafts. Continued evaluation of the role of right versus left lobe grafts is important, especially in weighing considerations of optimizing recipient outcomes against minimizing donor risk. Yet these accomplishments have not translated into an increased volume of LDLTs nationally, a necessary step toward minimizing the organ supply-demand mismatch. These data clearly demonstrate that superior post-transplant outcomes of LDLT relative to DDLT can be achieved when LDLTs are performed at experienced centers, and suggest that a renewed focus on LDLT should be considered.

These data, coupled with previous reports of excellent LDLT outcomes from the A2ALL consortium, validate LDLT as an excellent alternative to DDLT, not an experimental procedure, and in certain recipient populations as superior to DDLT. The data also do not suggest that the results are attributable to declining donor quality among DDLT recipients. In the DDLT cohort included in this analysis, average donor quality has been relatively static, with a median DRI of DDLT grafts of 1.35 (IQR: 1.06–1.63) in 2002, compared with 1.36 (IQR: 1.10–1.63) in 2012. Given the persistent gap between the demand for transplantable livers and the currently available organ supply, methods to mitigate this mismatch have been proposed. However, even with optimal use of donation after circulatory determination of death organs and increased deceased donor consent rates, the organ supply will fall markedly short of current demand.19–21 Thus we believe that increasing the number of LDLTs is the most viable way to minimize waitlist mortality while improving post-transplant outcomes.

Our results support the notion of initiating efforts to increase utilization of LDLTs. Such efforts, though, must prioritize donor safety while also focusing on appropriate recipient selection. Furthermore, given the established association between center experience and outcomes, increasing LDLT volume at experienced centers may be the preferred method of increasing national LDLT volume without compromising outcomes. These efforts should be coupled with further attempts to improve outcomes while minimizing donor complications. Analyses of detailed data available from the A2ALL consortium demonstrate that donor complication rates have remained static over time. Although approximately 40% of living liver donors experience at least one complication within the first year of donation, 45.6% of such complications are considered Clavien grade 1 and categorized as "minor," with an additional 53.6% being Clavien grade 2, which are "potentially life threatening, but does not result in residual disability or persistent disease."22,23 A recent analysis of national data on all living donor hepatectomies reports comparable rates of immediate post-operative complications during the donation hospitalization.24 It must be noted that there have been four confirmed donor deaths following adult-to-adult living donor liver transplants in the U.S. since 1999; however, given the limitations of the OPTN/UNOS data used for this analysis, we can only confirm that two of those deaths were included in the transplants we evaluated.25

The LDRI score developed for this study may be useful in clinical practice. For a given recipient with several potential donors, predicted graft outcomes are fixed with respect to recipient characteristics, yet vary based on donor variables. Evident in Table 6, however, are clear distinctions in predicted 3- and 5-year graft survival rates across the three LDRI risk score categories. Given these differences in absolute survival, the LDRI score could help identify donor-recipient combinations that would achieve the best LDLT outcomes. The range of predicted probabilities of graft survival based on the parsimonious LDRI score at 1, 3, and 5 years after LDLT across all LDLT recipients is broad (Supplementary Figure 1), with an IQR for predicted graft survival at 5 years of 67.8–81.2%. Thus the LDRI score's ability to quantify predicted probabilities of graft survival translates into clinically significant differences in predicted outcomes, and therefore has the potential to change clinical practice. First, for example, if a recipient had several equally suitable potential donors with equivalent donor risk profiles, the LDRI score could be used to ascertain which donor would maximize predicted graft survival, as shown in Supplementary Table 4. In practice, we would not necessarily advocate waiting for multiple donors to come forward in order to employ the LDRI score rather than evaluating the first acceptable donor. However, in the common setting where multiple donors do come forward, the LDRI score can help triage those donors and prioritize which one to work up first using objective rather than subjective criteria. Second, although challenging from a logistical and technical perspective, selecting donors who yield the lowest LDRI score (and thus the lowest risk and highest predicted graft survival) may allow for consideration of paired liver exchanges when there is potential to markedly improve the predicted outcomes of two recipients.
Lastly, a patient with high waitlist priority based on exception points may be counseled about the potential for improved outcomes with LDLT vs. DDLT depending on the combination of donor and recipient variables. Prior to use in clinical practice, the LDRI should be prospectively validated in an independent set of patients.
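To make the proposed triage concrete, the sketch below shows how an LDRI-style linear risk score could rank potential donors for a single recipient. The variable names and coefficients here are entirely hypothetical, invented for illustration only; they are not the published LDRI coefficients, and the actual score should be computed from the fitted model reported in this study.

```python
# Hypothetical illustration of donor triage with an LDRI-style score.
# All coefficients and covariates below are invented for illustration;
# they are NOT the published LDRI model.

def ldri_linear_predictor(donor, recipient):
    """Cox-model-style linear predictor: sum of coefficient * covariate terms.
    Higher values imply higher predicted risk of graft failure."""
    # Hypothetical donor terms: older donor age and left lobe grafts add risk
    score = 0.015 * max(donor["age"] - 40, 0)
    score += 0.20 if donor["graft_type"] == "left" else 0.0
    # Hypothetical recipient terms (fixed across donors for one recipient)
    score += 0.010 * recipient["age"]
    score += -0.30 if recipient["diagnosis"] in ("AIH", "cholestatic") else 0.0
    return score

def best_donor(donors, recipient):
    """Pick the donor minimizing predicted risk for this recipient."""
    return min(donors, key=lambda d: ldri_linear_predictor(d, recipient))

recipient = {"age": 55, "diagnosis": "cholestatic"}
donors = [
    {"name": "donor_A", "age": 52, "graft_type": "left"},
    {"name": "donor_B", "age": 30, "graft_type": "right"},
]
print(best_donor(donors, recipient)["name"])  # donor_B (younger, right lobe)
```

Because the recipient terms are identical for every candidate donor, ranking donors by the full score is equivalent to ranking them by the donor terms alone, which is what makes objective donor triage straightforward once a validated score is available.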

This study has several limitations. First, certain variables that may be associated with graft outcomes are not available in OPTN/UNOS data (e.g. graft weight, number of reconstructed veins, or number of bile ducts), and could not be included in the assessment of temporal changes in outcomes or in the LDRI. While inclusion of such variables may have improved the performance of the LDRI, they cannot be definitively known prior to surgery given the inherent limitations of imaging in evaluating graft weight and planning the exact number of required anastomoses. Second, LDLT center experience was measured at the center level, not the surgeon level. However, the importance of center experience rests not only on the role of increased experience in surgical technique, but also on the team effect, which includes the experience of hepatologists, living donor coordinators, and surgical and floor nurses and support staff. Third, we did not observe an association between MELD score at transplantation and post-transplant outcomes in the LDLT cohort, which is likely related to the current practice of performing LDLTs in patients across a narrow MELD score range. Lastly, the LDRI had an AUC of 0.6, consistent with modest predictive ability. Thus we would advocate using it only as a guide, and not as a definitive tool to dictate donor and recipient selection. Greater predictive ability may arise when more granular donor data are available, but at present limited donor detail is available from the OPTN.

In conclusion, there has been substantial improvement in graft outcomes among LDLT recipients over the last 10 years. Graft outcomes are superior when LDLT is performed for specific indications at experienced centers. Use of objective criteria for donor and recipient selection may further improve LDLT outcomes. Continued efforts to increase utilization of LDLT grafts, while maintaining donor and recipient safety, will help to increase transplant rates in the U.S. while decreasing waitlist mortality.

Supplementary Material

Supp TableS1-S4

Acknowledgments

Grant Support Information

  1. David Goldberg: NIH K08 DK098272-01A1

  2. David Goldberg, Kim Olthoff, Abraham Shaked: NIH 5U01DK062494-12

  3. This work was supported in part by Health Resources and Services Administration contract 234-2005-37011C.

List of Abbreviations

LDLT: Living Donor Liver Transplantation
A2ALL: Adult-to-Adult Living Donor Liver Transplant
DDLT: Deceased Donor Liver Transplantation
AASLD: American Association for the Study of Liver Diseases
OPTN: Organ Procurement and Transplantation Network
UNOS: United Network for Organ Sharing
MELD: Model for End-Stage Liver Disease
SSDMF: Social Security Death Master File
AIC: Akaike Information Criterion
ICU: Intensive Care Unit

Footnotes

The content is the responsibility of the authors alone and does not necessarily reflect the views or policies of the Department of Health and Human Services, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government.

Contributor Information

David S. Goldberg, Email: david.goldberg@uphs.upenn.edu.

Benjamin French, Email: bcfrench@mail.med.upenn.edu.

Peter L Abt, Email: peter.l.abt@uphs.upenn.edu.

Kim Olthoff, Email: kim.olthoff@uphs.upenn.edu.

Abraham Shaked, Email: abraham.shaked@uphs.upenn.edu.

References

  • 1.Goldberg DS, French B, Thomasson A, Reddy KR, Halpern SD. Current trends in living donor liver transplantation for primary sclerosing cholangitis. Transplantation. 2011 May 27;91(10):1148–1152. doi: 10.1097/TP.0b013e31821694b3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Abt PL, Mange KC, Olthoff KM, Markmann JF, Reddy KR, Shaked A. Allograft survival following adult-to-adult living donor liver transplantation. Am J Transplant. 2004 Aug;4(8):1302–1307. doi: 10.1111/j.1600-6143.2004.00522.x. [DOI] [PubMed] [Google Scholar]
  • 3.Patt CH, Thuluvath PJ. Adult living donor liver transplantation. Med Gen Med. 2003 Aug 19;5(3):26. [PubMed] [Google Scholar]
  • 4.Berg CL, Gillespie BW, Merion RM, et al. Improvement in survival associated with adult-to-adult living donor liver transplantation. Gastroenterology. 2007 Dec;133(6):1806–1813. doi: 10.1053/j.gastro.2007.09.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Berg CL, Merion RM, Shearon TH, et al. Liver transplant recipient survival benefit with living donation in the model for end stage liver disease allocation era. Hepatology. 2011 Oct;54(4):1313–1321. doi: 10.1002/hep.24494. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Kulik LM, Fisher RA, Rodrigo DR, et al. Outcomes of living and deceased donor liver transplant recipients with hepatocellular carcinoma: results of the A2ALL cohort. Am J Transplant. 2012 Nov;12(11):2997–3007. doi: 10.1111/j.1600-6143.2012.04272.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Olthoff KM, Abecassis MM, Emond JC, et al. Outcomes of adult living donor liver transplantation: comparison of the Adult-to-adult Living Donor Liver Transplantation Cohort Study and the national experience. Liver Transpl. 2011 Jul;17(7):789–797. doi: 10.1002/lt.22288. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Martin PDA, Feng S, Brown R, Fallon M. Evaluation for Liver Transplantation in Adults: 2013 Practice Guideline by the American Association for the Study of Liver Diseases and the American Society of Transplantation. Hepatology. 2014;59(3):1144–1165. doi: 10.1002/hep.26972. [DOI] [PubMed] [Google Scholar]
  • 9.Shah SA, Levy GA, Greig PD, et al. Reduced mortality with right-lobe living donor compared to deceased-donor liver transplantation when analyzed from the time of listing. Am J Transplant. 2007 Apr;7(4):998–1002. doi: 10.1111/j.1600-6143.2006.01692.x. [DOI] [PubMed] [Google Scholar]
  • 10.Axelrod DA, McCullough KP, Brewer ED, Becker BN, Segev DL, Rao PS. Kidney and pancreas transplantation in the United States, 1999–2008: the changing face of living donation. Am J Transplant. 2010 Apr;10(4 Pt 2):987–1002. doi: 10.1111/j.1600-6143.2010.03022.x. [DOI] [PubMed] [Google Scholar]
  • 11.Haberal M, Abbasoglu O, Buyukpamukcu N, et al. Combined liver-kidney transplantation from a living-related donor. Transplant Proc. 1993 Jun;25(3):2211–2213. [PubMed] [Google Scholar]
  • 12.Glidden DV, Vittinghoff E. Modelling clustered survival data from multicentre clinical trials. Stat Med. 2004 Feb 15;23(3):369–388. doi: 10.1002/sim.1599. [DOI] [PubMed] [Google Scholar]
  • 13.Asrani SK, Kim WR, Edwards EB, et al. Impact of the center on graft failure after liver transplantation. Liver Transpl. 2013 Sep;19(9):957–964. doi: 10.1002/lt.23685. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Jay C, Ladner D, Wang E, et al. A comprehensive risk assessment of mortality following donation after cardiac death liver transplant - an analysis of the national registry. J Hepatol. 2011 Oct;55(4):808–813. doi: 10.1016/j.jhep.2011.01.040. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Skaro AI, Jay CL, Baker TB, et al. The impact of ischemic cholangiopathy in liver transplantation using donors after cardiac death: the untold story. Surgery. 2009 Oct;146(4):543–552. doi: 10.1016/j.surg.2009.06.052. discussion 552-543. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Feng S, Goodrich NP, Bragg-Gresham JL, et al. Characteristics associated with liver graft failure: the concept of a donor risk index. Am J Transplant. 2006 Apr;6(4):783–790. doi: 10.1111/j.1600-6143.2006.01242.x. [DOI] [PubMed] [Google Scholar]
  • 17.French B, Saha-Chaudhuri P, Ky B, Cappola TP, Heagerty PJ. Development and evaluation of multi-marker risk scores for clinical prognosis. Stat Methods Med Res. 2012 Jul 5; doi: 10.1177/0962280212451881. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Heagerty PJ, Lumley T, Pepe MS. Time-dependent ROC curves for censored survival data and a diagnostic marker. Biometrics. 2000 Jun;56(2):337–344. doi: 10.1111/j.0006-341x.2000.00337.x. [DOI] [PubMed] [Google Scholar]
  • 19.Halpern SD. Donation After Circulatory Determination of Death: Time for Transparency. Ann Emerg Med. 2013 Nov 7; doi: 10.1016/j.annemergmed.2013.09.020. [DOI] [PubMed] [Google Scholar]
  • 20.Halpern SD, Barnes B, Hasz RD, Abt PL. Estimated supply of organ donors after circulatory determination of death: a population-based cohort study. JAMA. 2010 Dec 15;304(23):2592–2594. doi: 10.1001/jama.2010.1824. [DOI] [PubMed] [Google Scholar]
  • 21.Goldberg DS, Halpern SD, Reese PP. Deceased organ donation consent rates among racial and ethnic minorities and older potential donors. Crit Care Med. 2013 Feb;41(2):496–505. doi: 10.1097/CCM.0b013e318271198c. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Abecassis MM, Fisher RA, Olthoff KM, et al. Complications of living donor hepatic lobectomy--a comprehensive report. Am J Transplant. 2012 May;12(5):1208–1217. doi: 10.1111/j.1600-6143.2011.03972.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Ghobrial RM, Freise CE, Trotter JF, et al. Donor morbidity after living donation for liver transplantation. Gastroenterology. 2008 Aug;135(2):468–476. doi: 10.1053/j.gastro.2008.04.018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Hall EC, Boyarsky BJ, Deshpande NA, et al. Perioperative complications after live-donor hepatectomy. JAMA Surg. 2014 Mar;149(3):288–291. doi: 10.1001/jamasurg.2013.3835. [DOI] [PubMed] [Google Scholar]
  • 25.Bramstedt KA. Living liver donor mortality: where do we stand? Am J Gastroenterol. 2006 Apr;101(4):755–759. doi: 10.1111/j.1572-0241.2006.00421.x. [DOI] [PubMed] [Google Scholar]
