
Transplant Trial Watch

The Transplant Trial Watch provides a monthly overview of the 10 most important new clinical trials in organ transplantation, selected and reviewed by the Peter Morris Centre for Evidence in Transplantation (University of Oxford).

July 2021

Heart


  1. Everolimus for the Prevention of Calcineurin-Inhibitor-Induced Left Ventricular Hypertrophy After Heart Transplantation (RADTAC Study)
    JACC Heart Failure. 2021;9(4):301-313

    Study Details

    Aims: This study aimed to compare the efficacy and safety of a combination of low-dose tacrolimus and low-dose everolimus with standard-dose tacrolimus in the attenuation of left ventricular hypertrophy (LVH) following orthotopic heart transplantation (OHT).
    Interventions: Patients were randomly assigned to receive either combined low-dose everolimus and low-dose tacrolimus or standard-dose tacrolimus.
    Participants: 40 orthotopic heart transplant recipients.
    Outcomes: The primary outcome was the assessment of change in left ventricular mass. The secondary outcomes were T1 fibrosis mapping, myocardial performance, blood pressure and serum creatinine. Safety outcomes were allograft rejection episodes and infection.

    CET Conclusion

    Reviewer: Mr John O'Callaghan, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences, University of Oxford
    This paper reports a small but good-quality RCT in heart transplant recipients. Patients more than 12 weeks after heart transplant were randomised to standard tacrolimus dosing or low-dose tacrolimus plus everolimus. Both groups received mycophenolate and prednisolone. The primary endpoint was left ventricular mass assessed by cardiac MRI, and an a priori power calculation was performed. Serum monitoring showed good adherence to the target serum ranges of everolimus and tacrolimus. The study showed a significant reduction in left ventricular mass in the low-dose tacrolimus arm, and an increase in the standard-dose arm. There were no significant differences in serum creatinine or acute rejection episodes.

    Abstract

    OBJECTIVES This study aimed to determine the safety and efficacy of combined low-dose everolimus and low-dose tacrolimus compared with standard-dose tacrolimus in attenuating left ventricular hypertrophy (LVH) after orthotopic heart transplantation (OHT). BACKGROUND Calcineurin inhibitors (CNIs) such as tacrolimus are important in preventing cardiac allograft rejection and reducing mortality after OHT. However, CNIs are causatively linked to the development of LVH, and are associated with nephrotoxicity and vasculopathy. CNI-sparing agents such as everolimus have been hypothesized to inhibit adverse effects of CNIs. METHODS In this prospective, randomized, open-label study, OHT recipients were randomized at 12 weeks after OHT to a combination of low-dose everolimus and tacrolimus (the RADTAC group) or standard-dose tacrolimus (the TAC group), with both groups coadministered mycophenolate and prednisolone. The primary endpoint was LVH indexed as the change in left ventricular mass (ΔLVM) by cardiovascular magnetic resonance (CMR) imaging from 12 to 52 weeks. Secondary endpoints included CMR-based myocardial performance, T1 fibrosis mapping, blood pressure, and renal function. Safety endpoints included episodes of allograft rejection and infection. RESULTS Forty stable OHT recipients were randomized. Recipients in the RADTAC group had significantly lower tacrolimus levels compared with the TAC group (6.5 ± 3.5 µg/l vs. 8.6 ± 2.8 µg/l; p = 0.02). The mean everolimus level in the RADTAC group was 4.2 ± 1.7 µg/l. A significant reduction in LVM was observed in the RADTAC group compared with an increase in LVM in the TAC group (ΔLVM = -13.0 ± 16.8 g vs. 2.1 ± 8.4 g; p < 0.001). Significant differences were also noted in secondary endpoints measuring function and fibrosis (Δcircumferential strain = -2.9 ± 2.8 vs. 2.1 ± 2.3; p < 0.001; ΔT1 mapping values = -32.7 ± 51.3 ms vs. 26.3 ± 90.4 ms; p = 0.003). No significant differences were observed in blood pressure (Δmean arterial pressure = 4.2 ± 18.8 mm Hg vs. 2.8 ± 13.8 mm Hg; p = 0.77), renal function (Δcreatinine = 3.1 ± 19.9 µmol/l vs. 9 ± 21.8 µmol/l; p = 0.31), frequency of rejection episodes (p = 0.69), or frequency of infections (p = 0.67) between groups. CONCLUSIONS The combination of low-dose everolimus and tacrolimus compared with standard-dose tacrolimus safely attenuates LVH in the first year after cardiac transplantation with an observed reduction in CMR-measured fibrosis and an improvement in myocardial strain.

Kidney


  1. Preformed T-cell alloimmunity and HLA eplet mismatch to guide immunosuppression minimization with Tacrolimus monotherapy in Kidney Transplantation. Results of the CELLIMIN trial
    American Journal of Transplantation. 2021;[record in progress]

    Study Details

    Aims: This study reports the findings of the CELLIMIN trial, which examined whether minimisation of posttransplant immunosuppression with tacrolimus (TAC) monotherapy could maintain efficacy while reducing drug-related toxicity in low immunological-risk renal transplant recipients without pretransplant donor-specific alloantibodies (DSA) or donor-specific T cells.
    Interventions: Participants were first allocated into two groups based on the results of their pretransplant donor-specific IFN-γ ELISPOT assessment. ELISPOT negative (E−) patients were then randomised to either the low immunosuppression (LI) group or the standard of care immunosuppression (SOC) group.
    Participants: 167 kidney transplant patients were recruited, of whom 101 ELISPOT negative (E−) patients were randomised.
    Outcomes: The primary outcome was the incidence of biopsy-proven acute rejection (BPAR) at 6 months posttransplant. The secondary outcomes were the incidence of clinical and subclinical BPAR, estimated glomerular filtration rate (eGFR), de novo DSA, graft survival, patient survival and the impact of donor/recipient human leukocyte antigen (HLA) molecular mismatches on BPAR and dnDSA between the groups at 12 months posttransplant.

    CET Conclusion

    Reviewer: Mr Simon Knight, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences, University of Oxford
    This manuscript reports the findings of the CELLIMIN trial from the BIO-DrIM consortium. The study aimed to stratify renal transplant recipients by immunological risk assessed by pre-transplant donor-specific ELISPOT assay, randomising low-risk recipients to standard immunosuppression or tacrolimus monotherapy. The study was terminated early due to slow recruitment and is therefore underpowered to draw firm conclusions, but rejection rates were numerically higher in the minimization arm leading to concerns that ELISPOT alone may not select patients suitable for monotherapy. Despite the early termination, there are some interesting findings here. Firstly, T-cell ELISPOT was able to differentiate those patients at highest risk of post-transplant rejection, suggesting that it may have a role to play in guiding pre-transplant risk stratification. Secondly, retrospective analysis showed that patients with a negative ELISPOT and good class-II eplet matching demonstrated the lowest post-transplant risk, suggesting that a combination of reactivity and eplet matching may improve patient selection for minimization in future studies.

    Abstract

    Personalizing immunosuppression is a major objective in transplantation. Transplant recipients are heterogeneous regarding their immunological memory and primary alloimmune susceptibility. This biomarker-guided trial investigated whether, in low immunological-risk kidney transplants without pretransplant DSA and donor-specific T cells assessed by a standardized IFN-γ ELISPOT, low immunosuppression (LI) with tacrolimus monotherapy would be non-inferior regarding 6-month BPAR to tacrolimus-based standard-of-care (SOC). Due to low recruitment rates, the trial was terminated when 167 patients were enrolled. ELISPOT negatives (E−) were randomized to LI (n=53) or SOC (n=48); E+ received the same SOC. Six- and 12-month BPAR was higher among LI than SOC/E− (4/35 [13%] vs 1/43 [2%], p=0.15 and 12/48 [25%] vs 6/53 [11.3%], p=0.073, respectively). E+ patients showed similarly high BPAR rates to LI at 6 and 12 months (12/66 [18%] and 13/66 [20%], respectively). These differences were stronger in per-protocol analyses. Post-hoc analysis revealed that poor class-II eplet matching, especially DQ, discriminated E− patients, notably E−/LI, developing BPAR (4/28 [14%] low-risk vs 8/20 [40%] high-risk, p=0.043). Eplet mismatch also predicted anti-class-I (p=0.05) and anti-DQ (p=0.001) de novo DSA. Adverse events were similar, but E−/LI developed fewer viral infections, particularly polyomavirus-associated nephropathy (p=0.021). Preformed T-cell alloreactivity and HLA eplet mismatch assessment may refine current baseline immune-risk stratification and guide immunosuppression decision-making in kidney transplantation.
  2. Economic Consequences of Adult Living Kidney Donation: A Systematic Review
    Value in Health. 2021;24(4):592-601

    Study Details

    Aims: This study aimed to evaluate the magnitude and type of cost that adult living kidney donors have to incur.
    Interventions: A literature search was conducted using the UK National Institute for Health Research Economic Evaluation Database, MEDLINE, PubMed, Scopus, Research Papers in Economics, and EconLit. Study selection and data extraction were performed by two reviewers. The methodological quality of the included studies was assessed by one reviewer, using the Joanna Briggs Institute appraisal checklists.
    Participants: 16 studies were included in the review.
    Outcomes: The primary outcome was total donor-borne costs. The secondary outcomes were types of costs and donation-related financial correlates.

    CET Conclusion

    Reviewer: Dr Liset Pengel, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences, University of Oxford
    This narrative systematic review aimed to assess the donor costs for adult living kidney donors. A comprehensive search identified 16 studies, mostly published since 2005 and from North America. Two reviewers independently selected studies for inclusion and extracted data. One reviewer assessed the risk of bias using the appraisal checklists from the Joanna Briggs Institute and found that most (13/16) studies were at low risk of bias. The review included pre-, peri- and post-donation costs incurred by the donor and their caregivers. The average donor would incur a cost between $900 and $19,900 from pre-donation evaluation to the first posttransplant year. Eleven of the 16 studies presented direct, out-of-pocket expenses related to travel, accommodation and health services. All studies reported indirect costs, such as lost income and lost home productivity, and new costs including lost income for the donor caregiver and insurance difficulties. Most costs were uncompensated. Donor-perceived financial burden was higher among donors from lower-income households and those who travelled greater distances.

    Abstract

    OBJECTIVES Current guidelines mandate organ donation to be financially neutral such that it neither rewards nor exploits donors. This systematic review was conducted to assess the magnitude and type of costs incurred by adult living kidney donors and to identify those at risk of financial hardship. METHODS We searched English-language journal articles and working papers assessing direct and indirect costs incurred by donors on PubMed, MEDLINE, Scopus, the National Institute for Health Research Economic Evaluation Database, Research Papers in Economics, and EconLit in 2005 and thereafter. Estimates of total costs, types of costs, and characteristics of donors who incurred the financial burden were extracted. RESULTS Sixteen studies were identified involving 6158 donors. Average donor-borne costs ranged from US$900 to US$19,900 (2019 values) over the period from predonation evaluation to the end of the first postoperative year. Less than half of donors sought financial assistance and 80% had financial loss. Out-of-pocket payments for travel and health services were the most reported items, while lost income accounted for the largest proportion (23.2%-83.7%) of total costs. New indirect cost items were identified to be insurance difficulty, exercise impairment, and caregiver income loss. Donors from lower-income households and those who traveled long distances reported the greatest financial hardship. CONCLUSIONS Most kidney donors are undercompensated. Our findings highlight gaps in donor compensation for predonation evaluation, long-distance donations, and lifetime insurance protection. Additional studies outside of North America are needed to gain a global perspective on how to provide for financial neutrality for kidney donors.
  3. Diagnostic accuracy of myocardial perfusion imaging in patients evaluated for kidney transplantation: A systematic review and meta-analysis
    Journal of Nuclear Cardiology. 2021;[record in progress]

    Study Details

    Aims: The aim of this study was to compare the diagnostic accuracy of single photon emission computed tomography (SPECT) against the reference standards, invasive coronary angiography (ICA) and coronary computed tomography angiography (CCTA), for coronary artery disease (CAD) assessment in kidney transplant candidates.
    Interventions: Electronic databases, including the Cochrane Library, PubMed, Web of Science, EMBASE, OvidSP (Medline) and Google Scholar were searched. Study selection was performed by two independent reviewers. Two reviewers independently assessed the risk of bias using the QUADAS-2 tool.
    Participants: 13 studies were included in the review.
    Outcomes: The outcomes of interest were sensitivity, specificity, area under the curve (AUC), likelihood ratio (positive and negative), diagnostic odds ratio (DOR) and pooled CAD prevalence.

    CET Conclusion

    Reviewer: Mr Simon Knight, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences, University of Oxford
    This systematic review investigates the diagnostic accuracy of myocardial perfusion imaging in the evaluation of kidney transplant candidates. In 13 studies, pooled sensitivity was 0.66, specificity 0.75 and AUC 0.76. Heterogeneity was high. The authors conclude that diagnostic accuracy is moderate, with a high rate of false-negative findings. Methodology and reporting of the review are very good, with comprehensive search strategies and low risk of bias in included studies. Conclusions are limited by the degree of heterogeneity (possibly due to differing thresholds for CAD diagnosis and changes in study protocols) and lack of use of CT coronary angiography for comparison (all studies used invasive coronary angiography).

    Abstract

    BACKGROUND Cardiovascular disease is the most common cause of death after kidney transplantation. Coronary artery disease (CAD) assessment is therefore mandatory in patients evaluated for transplantation. We aimed to assess the diagnostic accuracy for CAD of single-photon emission computed tomography (SPECT) compared to the standards invasive coronary angiography (ICA) and coronary computed tomography angiography (CCTA) in patients evaluated for kidney transplantation. METHODS We performed a systematic literature search in PubMed, EMBASE, Web of Science, OvidSP (Medline), The Cochrane Library and Google Scholar. Studies investigating the diagnostic accuracy of myocardial perfusion imaging (MPI) SPECT in patients evaluated for kidney transplantation were retrieved. After a risk of bias assessment using QUADAS-2, a meta-analysis was conducted. RESULTS Out of 1459 records, 13 MPI SPECT studies were included in the meta-analysis with a total of 1245 MPI SPECT scans. There were no studies available with CCTA as reference. Pooled sensitivity of MPI SPECT for CAD was 0.66 (95% CI 0.53 to 0.77), pooled specificity was 0.75 (95% CI 0.63 to 0.84) and the area under the curve (AUC) was 0.76. Positive likelihood ratio was 2.50 (95% CI 1.78 to 3.51) and negative likelihood ratio was 0.41 (95% CI 0.28 to 0.61). Pooled positive predictive value was 64.9% and pooled negative predictive value was 74.1%. Significant heterogeneity existed across the included studies. CONCLUSIONS MPI SPECT had a moderate diagnostic accuracy in patients evaluated for kidney transplantation, with a high rate of false-negative findings. The use of an anatomical gold standard against a functional imaging test in the included studies is however suboptimal.
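The pooled accuracy measures in this abstract are linked by standard formulas. The sketch below shows those relationships in Python; the 41% pre-test CAD prevalence is an illustrative assumption, not a figure from the study, and because the review pooled each measure across studies rather than deriving it from the pooled summary point, the published likelihood ratios and predictive values differ slightly from what these formulas yield.

```python
def likelihood_ratios(sens, spec):
    """Positive and negative likelihood ratios from sensitivity/specificity."""
    return sens / (1 - spec), (1 - sens) / spec

def predictive_values(sens, spec, prevalence):
    """PPV and NPV at a given pre-test probability (Bayes' theorem)."""
    tp = sens * prevalence              # true positives per unit population
    fp = (1 - spec) * (1 - prevalence)  # false positives
    tn = spec * (1 - prevalence)        # true negatives
    fn = (1 - sens) * prevalence        # false negatives
    return tp / (tp + fp), tn / (tn + fn)

sens, spec = 0.66, 0.75                 # pooled values reported in the review
lr_pos, lr_neg = likelihood_ratios(sens, spec)
# prevalence=0.41 is an assumed pre-test probability for illustration only
ppv, npv = predictive_values(sens, spec, prevalence=0.41)
```

At this assumed prevalence the formulas give a PPV of about 65% and an NPV of about 76%, in the same range as the pooled predictive values reported above.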
  4. Correcting Anemia and Native Vitamin D Supplementation in Kidney Transplant Recipients (CANDLE-KIT): a Multicenter, 2x2 Factorial, Open-label, Randomized Clinical Trial
    Transplant International. 2021;[record in progress]

    Study Details

    Aims: This study aimed to investigate the effect of anemia correction and cholecalciferol supplementation in the preservation of kidney function among kidney transplant recipients.
    Interventions: Patients were randomised to either a high haemoglobin target or a low haemoglobin target, and then to either the cholecalciferol group or the control group.
    Participants: 153 kidney transplant recipients.
    Outcomes: The primary endpoint was change in creatinine-based eGFR (eGFRcr). The secondary endpoints were methoxy polyethylene glycol-epoetin beta (MPG-EPO) dose, all-cause patient death, cancer development or recurrence, blood pressure, biopsy-proven acute cellular rejection, and a renal composite outcome consisting of a 50% increase in serum creatinine, re-initiation of dialysis and subsequent transplantation.

    CET Conclusion

    Reviewer: Mr John O'Callaghan, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences, University of Oxford
    This is a well-written report of an interesting and well-conducted study. It is an open-label multicentre RCT in renal transplantation. Patients at least one year after transplant were randomised to a high or low haemoglobin target (>12.5 g/dL versus <10.5 g/dL), and also to receive cholecalciferol or control. Randomisation was done by a computerised system, stratified by time since transplant and urine protein:creatinine ratio. The primary outcome was change in estimated GFR over the 2-year follow-up. The study was terminated early on the basis of the interim analysis, which showed a significantly smaller drop in GFR in the high haemoglobin group. The change in estimated GFR was not related to the use of cholecalciferol in this analysis.

    Abstract

    Anemia and vitamin D deficiency are associated with allograft failure, and hence, are potential therapeutic targets among kidney transplant recipients (KTRs). We conducted a multicenter, two-by-two factorial, open-label, randomized clinical trial to examine the effects of anemia correction and vitamin D supplementation on 2-year change in eGFR among KTRs (CANDLE-KIT). We enrolled 153 patients with anemia and >1-year history of transplantation across 23 facilities in Japan, and randomly assigned them to either a high or low hemoglobin target (>12.5 vs. <10.5 g/dL) and to either cholecalciferol 1000 IU/day or control. This trial was terminated early based on the planned interim intention-to-treat analyses (α = 0.034). Among 125 patients who completed the study, 2-year decline in eGFR was smaller in the high vs. low hemoglobin group (i.e., -1.6 ± 4.5 vs. -4.0 ± 6.9 mL/min/1.73 m²; P = 0.021), but did not differ between the cholecalciferol and control groups. These findings were supported by the fully-adjusted mixed effects model evaluating the rate of eGFR decline among all 153 participants. There were no significant between-group differences in all-cause death or the renal composite outcome in either arm. In conclusion, aggressive anemia correction showed a potential to preserve allograft kidney function.
  5. Impact of a pharmacist-led, mHealth-based intervention on tacrolimus trough variability in kidney transplant recipients: A report from the TRANSAFE Rx randomized controlled trial
    American Journal of Health System Pharmacy. 2021;78(14):1287-1293

    Study Details

    Aims: This was a planned secondary analysis of the TRANSAFE Rx study, which was a randomised controlled trial comparing the efficacy of a mobile health (mHealth)-based intervention versus usual care in improving health outcomes and medication safety among renal transplant patients. The aim of this analysis was to determine the impact of the mHealth intervention on tacrolimus intrapatient variability (IPV).
    Interventions: Participants in the TRANSAFE Rx study were randomised to either the mHealth intervention group or the usual care group.
    Participants: 136 adult kidney transplant recipients.
    Outcomes: Measurement of tacrolimus IPV for each treatment arm.

    CET Conclusion

    Reviewer: Mr Simon Knight, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences, University of Oxford
    Variability in tacrolimus levels (IPV) is associated with inferior outcomes following renal transplantation and has been suggested to be a surrogate for non-adherent behaviour. This study randomised patients to a pharmacist-led mHealth intervention (including medication reminders, blood pressure and glucose monitoring and pharmacist monitoring) or standard care. Patients receiving the mHealth intervention demonstrated significantly lower tacrolimus IPV at the end of the study. Whilst very interesting, this is a secondary analysis and no correlation with clinical outcome (e.g. rejection, graft loss) is possible. Future studies should target high-risk populations (e.g. high IPV at baseline, known non-adherence), determine whether the effect is sustained outside of a trial environment and investigate the impact of improved IPV on graft outcomes.

    Abstract

    PURPOSE Nonadherence is a leading cause of death-censored allograft loss in kidney transplant recipients. Strong associations have tied tacrolimus intrapatient variability (IPV) to degree of nonadherence, and high tacrolimus IPV to clinical endpoints such as rejection and allograft loss. Nonadherence is a dynamic, complex problem best targeted by multidimensional interventions, including mobile health (mHealth) technologies. METHODS This was a planned secondary analysis of a 12-month, parallel, 2-arm, semiblind, 1:1 randomized controlled trial involving 136 adult kidney transplant recipients. The primary aims of the TRANSAFE Rx study were to assess the efficacy of a pharmacist-led, mHealth-based intervention in improving medication safety and health outcomes for kidney transplant recipients as compared to usual care. RESULTS Patients were randomized equally to 68 patients per arm. The intervention arm demonstrated a statistically significant decrease in tacrolimus IPV over time as compared to the control arm (P = 0.0133). When analyzing a clinical goal of tacrolimus IPV of less than 30%, the 2 groups were comparable at baseline (P = 0.765), but significantly more patients in the intervention group met this criterion at month 12 (P = 0.033). In multivariable modeling, variables that independently impacted tacrolimus IPV included time, treatment effect, age, and warm ischemic time. CONCLUSION This planned secondary analysis of an mHealth-based, pharmacist-led intervention demonstrated an association between the active intervention in the trial and improved tacrolimus IPV. Further prospective studies are required to confirm the mutability of tacrolimus IPV and the impact of reducing tacrolimus IPV on long-term clinical outcomes.
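Tacrolimus IPV is commonly quantified as the coefficient of variation (CV%) of consecutive trough levels, which is consistent with the "less than 30%" clinical goal discussed above; the exact formula used in TRANSAFE Rx is not given in this summary, so the sketch below is a generic CV-based illustration with hypothetical trough values.

```python
from statistics import mean, stdev

def tacrolimus_ipv_cv(troughs):
    """Intrapatient variability as coefficient of variation (%) of trough levels.

    CV% = (sample SD / mean) * 100. This is one widely used definition of IPV;
    the trial's own formula is not stated in this summary.
    """
    return stdev(troughs) / mean(troughs) * 100

# Hypothetical trough levels (ng/mL) for one patient over five clinic visits
troughs = [8.0, 6.0, 7.0, 9.0, 10.0]
ipv = tacrolimus_ipv_cv(troughs)
```

For these made-up values the IPV is roughly 20%, i.e. below the 30% threshold used in the trial's secondary analysis.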
  6. Oxygenated End-Hypothermic Machine Perfusion in Expanded Criteria Donor Kidney Transplant: A Randomized Clinical Trial
    JAMA Surgery. 2021;156(6):517-525

    Study Details

    Aims: This study aimed to determine whether short-term oxygenated hypothermic machine perfusion preservation (end-HMPO2) following static cold storage (SCS) improves transplant outcomes for expanded criteria donor kidneys retrieved from brain-dead donors, compared with SCS alone.
    Interventions: Patients were randomly assigned to either end-HMPO2 after SCS or SCS alone.
    Participants: 305 expanded criteria donor kidneys retrieved from brain dead donors.
    Outcomes: The primary outcome was graft survival at 1 year posttransplant. The secondary outcomes were patient survival, primary nonfunction, delayed graft function, acute rejection and estimated glomerular filtration rate.

    CET Conclusion

    Reviewer: Dr Liset Pengel, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences, University of Oxford
    This good-quality randomised, partially-blinded controlled trial was conducted as part of the Consortium for Organ Preservation in Europe (COPE). The study compared kidneys from expanded criteria donors that underwent static cold storage (SCS) alone or SCS plus oxygenated hypothermic machine perfusion (end-HMPO2) after arrival in the recipient transplant centre. Kidneys were randomised on arrival at the recipient centre according to a computer-generated randomisation scheme using an online randomisation tool. A sample size analysis was based on data from a previous trial and showed that 262 kidneys were needed to detect an improvement in 1-year graft survival from 80% to 92%. The intention-to-treat analysis excluded kidneys that were randomised but not transplanted and consisted of 262 kidneys. Fourteen kidneys in the end-HMPO2 group were cold stored because machine perfusion was not possible, and six kidneys received machine perfusion for <2 hours for logistical reasons. One-year graft survival was similar between groups and there were no statistically significant differences for any of the secondary outcomes, i.e. delayed graft function, primary nonfunction, estimated glomerular filtration rate, and acute rejection. The authors comment that as the 1-year graft survival rate in the control group exceeded the baseline assumption, the study is statistically underpowered.

    Abstract

    Importance: Continuous hypothermic machine perfusion during organ preservation has a beneficial effect on graft function and survival in kidney transplant when compared with static cold storage (SCS). Objective: To compare the effect of short-term oxygenated hypothermic machine perfusion preservation (end-HMPo2) after SCS vs SCS alone on 1-year graft survival in expanded criteria donor kidneys from donors who are brain dead. Design, Setting, and Participants: In a prospective, randomized, multicenter trial, kidneys from expanded criteria donors were randomized to either SCS alone or SCS followed by end-HMPo2 prior to implantation with a minimum machine perfusion time of 120 minutes. Kidneys were randomized between January 2015 and May 2018, and analysis began May 2019. Analysis was intention to treat. Interventions: On randomization and before implantation, deceased donor kidneys were either kept on SCS or placed on HMPo2. Main Outcome and Measures: Primary end point was 1-year graft survival, with delayed graft function, primary nonfunction, acute rejection, estimated glomerular filtration rate, and patient survival as secondary end points. Results: Centers in 5 European countries randomized 305 kidneys (median [range] donor age, 64 [50-84] years), of which 262 kidneys (127 [48.5%] in the end-HMPo2 group vs 135 [51.5%] in the SCS group) were successfully transplanted. Median (range) cold ischemia time was 13.2 (5.1-28.7) hours in the end-HMPo2 group and 12.9 (4-29.2) hours in the SCS group; median (range) duration in the end-HMPo2 group was 4.7 (0.8-17.1) hours. One-year graft survival was 92.1% (n = 117) in the end-HMPo2 group vs 93.3% (n = 126) in the SCS group (95% CI, -7.5 to 5.1; P = .71). The secondary end point analysis showed no significant between-group differences for delayed graft function, primary nonfunction, estimated glomerular filtration rate, and acute rejection. 
Conclusions and Relevance: Reconditioning of expanded criteria donor kidneys from donors who are brain dead using end-HMPo2 after SCS does not improve graft survival or function compared with SCS alone. This study is underpowered owing to the high overall graft survival rate, limiting interpretation. Trial Registration: isrctn.org Identifier: ISRCTN63852508.

Liver


  1. Bisphosphonate Therapy after Liver Transplant Improves Bone Mineral Density and Reduces Fracture Rates: An Updated Systematic-Review and Meta-Analysis
    Transplant International. 2021;[record in progress]

    Study Details

    Aims: This study aimed to determine the efficacy of bisphosphonates in reducing the incidence of fracture, and to compare the effects of oral versus intravenous (IV) administration of bisphosphonate therapy on bone mineral density (BMD) and the incidence of fracture in post-orthotopic liver transplant (OLT) recipients.
    Interventions: Electronic databases including Medline and Embase were searched. Two independent reviewers selected studies for inclusion. The methodological quality of the included studies was assessed using the Newcastle-Ottawa Scale and the Jadad Scale for cohort studies and randomised controlled trials, respectively.
    Participants: 9 studies were included in the review.
    Outcomes: The main outcomes were changes in post-OLT BMD, the incidence of fracture and adverse reactions.

    CET Conclusion

    Reviewer: Dr Liset Pengel, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences, University of Oxford
    This systematic review and meta-analysis investigates the efficacy of bisphosphonates regarding the incidence of fractures, bone mineral density (BMD) changes and adverse reactions in liver transplant recipients. Medline and Embase were searched, and six randomised controlled trials and three cohort studies met the inclusion criteria (n=645). Most studies were considered to be of good methodological quality. Study selection was done by independent reviewers, but it was unclear whether data extraction and methodological quality assessment were done in duplicate. The majority of patients started on bisphosphonates irrespective of BMD changes. The total and vertebral fracture incidence was significantly lower in patients taking bisphosphonates compared with patients on vitamin D and calcium. BMD of the lumbar spine and neck of femur was also significantly improved in patients taking bisphosphonates. Subgroup analyses of patients taking oral versus intravenous bisphosphonates showed improvements in BMD and the rate of fractures for patients taking oral bisphosphonates, but the same effect was not seen in patients who received intravenous bisphosphonates. None of the studies reported serious adverse events.

    Abstract

    AIM: To investigate the efficacy of bisphosphonates and compare oral and IV formulations on bone mineral density (BMD) and fracture incidence in post-orthotopic liver transplant (OLT) patients. METHODS Electronic databases were searched, and six RCTs and three cohort studies were included out of 711 articles. Main outcomes included post-OLT BMD changes, fracture incidence, and treatment adverse reactions. Pairwise meta-analysis was conducted for binary and continuous outcomes, while pooled fracture incidence utilized single-arm meta-analysis. RESULTS Post-OLT fracture incidence was reported in nine studies (n=591). Total fracture incidence was 6.6% (CI: 3.4% to 12.4%) in the bisphosphonate group and 19.1% (CI: 14.3% to 25.1%) in the calcium and vitamin D group. Total fractures were significantly lower in patients on bisphosphonates, compared to calcium and vitamin D (n = 591; OR = 0.37; CI: 0.18 to 0.77; p=0.008). Overall fractures were significantly lower in the oral group (n = 263; OR = 0.26; CI: 0.08 to 0.85, p=0.02) but not in the IV group (n = 328; OR = 0.45; CI: 0.16 to 1.26, p=0.129). CONCLUSION Both oral and IV bisphosphonates are effective in reducing fracture incidence post-OLT compared to calcium and vitamin D. Oral formulations may also have an advantage over IV in reducing bone loss and fracture incidence post-OLT.
  2. Cost analysis of a long-term randomized controlled study in biliary duct-to-duct anastomotic stricture after liver transplantation
    Transplant International. 2021;34(5):825-834

    Study Details

    Aims: This study aimed to compare the effectiveness, treatment-related costs and adverse events of treating anastomotic stricture (AS) with fully covered self-expandable metal stents (FCSEMS) versus multiple plastic stents (MPS) following liver transplantation.
    Interventions: Patients were randomised to either the FCSEMS group or the MPS group.
    Participants: 30 liver transplant patients with duct-to-duct AS.
    Outcomes: The outcomes of interest were treatment success, adverse events and a cost analysis. The cost analysis included the total cost of endoscopic therapy and hospitalisation for procedures and dealing with adverse events.

    CET Conclusion

    Reviewer: Mr John O'Callaghan, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences University of Oxford
    This paper reports a cost analysis from a randomised controlled trial of biliary stenting. Much of the crucial information about the study is reported in a previous publication (Tal et al 2017) and not reproduced here. This paper presents the results of a subgroup of patients operated on at one centre (30/58 patients). The study was previously assessed as being of good quality by our expert reviewer, although there was no blinding, which is a potential source of bias. Anastomotic strictures recurred in more metal-stent patients, and stent migration was more frequent than in plastic-stent patients. This meant that, overall, the cost to achieve resolution of the condition was similar in both arms, but the study was small and possibly underpowered.

    Abstract

    INTRODUCTION Multiple plastic stent (MPS) use for biliary anastomotic stricture (AS) after liver transplantation requires multiple procedures with consequent costs. AIMS To compare the success, adverse events and treatment-related costs of fully covered self-expandable metal stents (FCSEMS) versus MPS. METHODS Thirty liver transplant (LT) patients with clinically relevant naive AS were prospectively randomized to FCSEMS or MPS, with stent numbers increased at 3-month intervals. Treatment costs per patient were calculated for endoscopic retrograde cholangiopancreatography (including all devices and stents) and overall hospital stay. RESULTS Radiological success was achieved in 73% of FCSEMS patients (median indwelling period of 6 months) and 93% of MPS patients (median period of 11 months) (P = NS). AS recurrence occurred in 36% of FCSEMS and 7% of MPS patients (P = NS), and AS re-treatment was needed in 53% and 13% (P < 0.01), respectively, during follow-up of 60 (34-80) months. Stents migrated after 29% and 2.6% of FCSEMS and MPS procedures, respectively (P < 0.01). Including re-treatments, long-term clinical success was achieved in 28/30 (93%) patients. Overall treatment-related costs were similar between groups. In the subgroup of LT patients in clinical remission after first-line treatment, treatment costs were 41% lower per FCSEMS patient compared with MPS patients. CONCLUSIONS FCSEMS did not perform better than MPS. FCSEMS migration increased the rate of re-treatment and costs.
  3. Machine Learning for the Prediction of Red Blood Cell Transfusion in Patients During or After Liver Transplantation Surgery
    Frontiers in Medicine. 2021;8:632210

    Study Details

    Aims: This study aimed to establish critical preoperative risk factors linked to red blood cell (RBC) transfusion, and to develop and validate machine learning algorithms for predicting RBC transfusion during or after liver transplantation.
    Interventions: The study cohort was randomly divided into two sets: the training set and the validation set.
    Participants: 1193 liver transplant patients.
    Outcomes: The study identified key risk factors linked to RBC transfusion during or after liver transplantation, and developed an RBC transfusion prediction model using machine learning algorithms.

    CET Conclusion

    Reviewer: Mr Simon Knight, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences University of Oxford
    This interesting study developed and validated machine learning models for the prediction of transfusion requirements during liver transplantation. The authors used a dataset from three Chinese transplant centres to develop and validate their models, and subsequently validated the resulting prediction tool in a small prospective cohort. The best model demonstrated good predictive performance and could help with pre-operative planning and risk stratification. The authors should be commended for making their tool publicly available on the web for other centres to validate, and for their attempts to create explainable models showing the weighting of the various factors in the decision-making process. It will be interesting to see how well the tool validates in other populations with different disease mixes and surgical techniques, and whether benefits can be measured in prospective use.

    Abstract

    Aim: This study aimed to use machine learning algorithms to identify critical preoperative variables and predict red blood cell (RBC) transfusion during or after liver transplantation surgery. Study Design and Methods: A total of 1,193 patients undergoing liver transplantation in three large tertiary hospitals in China were examined. Twenty-four preoperative variables were collected, including essential population characteristics, diagnosis, symptoms, and laboratory parameters. The cohort was randomly split into a training set (70%) and a validation set (30%). Recursive Feature Elimination and the eXtreme Gradient Boosting algorithm (XGBoost) were used to select variables and build the machine learning prediction model, respectively. In addition, seven other machine learning models and logistic regression were developed. The area under the receiver operating characteristic curve (AUROC) was used to compare the prediction performance of the different models. The SHapley Additive exPlanations package was applied to interpret the XGBoost model. Data from 31 patients at one of the hospitals were prospectively collected for model validation.
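    For readers unfamiliar with the modelling approach described in the abstract above, the general pipeline (Recursive Feature Elimination for variable selection, a gradient-boosted classifier, a 70/30 train/validation split, and AUROC for evaluation) can be sketched as follows. This is an illustrative sketch only: it uses synthetic data in place of the study's 24 preoperative variables, and scikit-learn's GradientBoostingClassifier as a stand-in for XGBoost. It is not the authors' code.

```python
# Illustrative sketch of an RFE + gradient-boosting prediction pipeline,
# loosely following the study design (1,193 patients, 24 variables,
# 70/30 split, AUROC evaluation). Synthetic data; not the authors' code.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import RFE
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the cohort: 1,193 "patients", 24 "variables".
X, y = make_classification(n_samples=1193, n_features=24,
                           n_informative=8, random_state=0)

# Random 70/30 split into training and validation sets.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Recursive Feature Elimination: iteratively drop the least important
# variables, keeping (here, arbitrarily) the 10 most predictive ones.
selector = RFE(GradientBoostingClassifier(random_state=0),
               n_features_to_select=10).fit(X_train, y_train)

# Fit the boosted model on the selected variables only.
model = GradientBoostingClassifier(random_state=0)
model.fit(selector.transform(X_train), y_train)

# Evaluate discrimination with AUROC on the held-out 30%.
probs = model.predict_proba(selector.transform(X_val))[:, 1]
auroc = roc_auc_score(y_val, probs)
print(f"validation AUROC: {auroc:.3f}")
```

    In practice the study compared several such models (plus logistic regression) by AUROC and then interpreted the winning XGBoost model with SHAP values, which attribute each prediction to the contributing variables.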