Transplant Evidence Alert

The Transplant Evidence Alert provides a monthly overview of the 10 most important new clinical trials in organ transplantation, selected and reviewed by the Peter Morris Centre for Evidence in Transplantation (Oxford University).

Register to receive the monthly Transplant Evidence Alert by email.

April 2024

Kidney
  • Natale P
  • Palmer SC
  • Jaure A
  • Saglimbene V
  • Iannone A
  • et al.
J Hypertens. 2024 May 1;42(5):848-855 doi: 10.1097/HJH.0000000000003663.
CET Conclusion
Reviewer: Reshma Rana Magar, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences University of Oxford
Conclusion: The aim of this systematic review was to examine the role of blood pressure-lowering agents in transplant patients with a functioning kidney allograft. A large number of studies were included (94 studies, totalling 7547 adults), all of which were randomised controlled trials. The authors found that none of the blood pressure-lowering agents reduced the risk of graft loss, nor did they show significant differences in all-cause death, cardiovascular death or withdrawal because of adverse events, in comparison to placebo or another drug class. Although only RCTs were included, some were of poor quality and/or were published over 20 years ago, and these factors may have influenced the certainty of the findings. This study also highlights the insufficient reporting of data on important variables such as donor type (living versus deceased), time after transplantation and quality of life, which may have restricted the authors from performing more granular analyses of the outcomes. Hence, the authors concluded that the evidence base for this topic is too poor to inform clinical decision-making.
Aims: This study aimed to assess the benefits and harms associated with blood pressure lowering agents in renal transplant recipients with a functioning graft.
Interventions: Three electronic databases including MEDLINE, Embase, and the Cochrane Central Register of Controlled Trials (CENTRAL) were searched. Two reviewers independently selected studies for inclusion and extracted data. The Cochrane Risk of Bias Tool was used to assess the risk of bias.
Participants: 94 studies were included in the review.
Outcomes: The primary effectiveness outcome was graft loss, and safety outcome was withdrawal due to adverse events. The secondary outcomes were death (all-cause and cardiovascular), cardiovascular disease, acute rejection, acute kidney injury, acute dialysis, estimated glomerular filtration rate (eGFR), creatinine clearance, systolic blood pressure (SBP) and diastolic blood pressure (DBP), mean arterial pressure (MAP), adverse events and quality of life.
Follow Up: N/A
OBJECTIVE:

Hypertension affects 50-90% of kidney transplant recipients and is associated with cardiovascular disease and graft loss. We aimed to evaluate the comparative benefits and harms of blood pressure lowering agents in people with a functioning kidney transplant.

METHODS:

We conducted a systematic review with network meta-analysis of randomized controlled trials (RCTs). We searched MEDLINE, Embase, and CENTRAL through to October 2023. RCTs evaluating blood pressure lowering agents administered for at least 2 weeks in people with a functioning kidney transplant with and without preexisting hypertension were eligible. Two reviewers independently extracted data. The primary outcome was graft loss. Treatment effects were estimated using random effects network meta-analysis, with treatment effects expressed as an odds ratio (OR) for binary outcomes and mean difference (MD) for continuous outcomes together with their 95% confidence interval (CI). Confidence in the evidence was assessed using GRADE for network meta-analysis.
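As a general statistical note (not a description of the authors' specific network meta-analysis model), the 95% confidence interval reported alongside each odds ratio is conventionally constructed on the log scale, using the standard error of the log odds ratio:

\[ \text{95\% CI} = \exp\!\big( \ln(\mathrm{OR}) \pm 1.96 \cdot \mathrm{SE}_{\ln(\mathrm{OR})} \big) \]

An interval whose lower bound exceeds 1 (as for the hyperkalemia comparisons in the results) indicates higher odds in the first-named treatment group, subject to the stated certainty of the evidence.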

RESULTS:

Ninety-four studies (7547 adults) were included. Two studies were conducted in children. No blood pressure-lowering agent reduced the risk of graft loss, withdrawal because of adverse events, death, cardiovascular or kidney outcomes compared with placebo/other drug class. Angiotensin-converting enzyme inhibitor and angiotensin receptor blocker therapy may incur greater odds of hyperkalemia compared with calcium channel blockers [odds ratio (OR) 5.48, 95% confidence interval (CI) 2.47-12.16; and OR 8.67, 95% CI 2.65-28.36; low certainty evidence, respectively].

CONCLUSION:

The evidentiary basis for the comparative benefits and safety of blood pressure-lowering agents in people with a functioning kidney transplant is too limited to guide treatment decision-making.

  • Vincenti F
  • Bromberg J
  • Kim J
  • Faravardeh A
  • Leca N
  • et al.
Am J Transplant. 2024 Feb 20; doi: 10.1016/j.ajt.2024.02.014.
CET Conclusion
Reviewer: Mr Simon Knight, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences University of Oxford
Conclusion: Hepatocyte growth factor has cytoprotective and anti-apoptotic effects in renal epithelial cells and has been shown to reduce renal dysfunction in animal models of renal injury. This phase 3, multicentre, double-blind RCT investigated the use of ANG-3777 in deceased donor transplant recipients with delayed graft function. Use of the hepatocyte growth factor mimetic did not result in any differences in clinical outcomes, including dialysis requirement, acute rejection or graft function. The study is well designed with adequate blinding and randomisation. It should be noted that the study population was largely (>80%) DBD donors with a mean age of 44, and included around 35% of kidneys undergoing hypothermic machine preservation; this may have impacted baseline DGF rates compared to a population with more DCD or SCS kidneys, and the included kidneys potentially represent less injured organs. Whether ANG-3777 has a role to play in more injured (e.g. ECD, DCD) kidneys remains unclear.
Aims: The aim of this study was to investigate the effect of ANG-3777 in deceased donor renal transplant recipients with delayed graft function (DGF).
Interventions: Participants were randomised to either the ANG-3777 group or the placebo group.
Participants: 253 deceased-donor kidney transplant recipients.
Outcomes: The original primary outcome was changed from duration of dialysis through day 30 to estimated glomerular filtration rate (eGFR) at day 360 posttransplant. The main secondary outcome was the severity of DGF.
Follow Up: 360 days posttransplantation

In kidney transplant recipients, delayed graft function increases the risk of graft failure and mortality. In a phase 3, randomized, double-blind, placebo-controlled trial, we investigated the hepatocyte growth factor mimetic, ANG-3777 (once daily for 3 consecutive days, starting ≤30 hours posttransplant), in 248 patients receiving a first kidney transplant from a deceased donor. At day 360, estimated glomerular filtration rate (primary endpoint) was not significantly different between the ANG-3777 and placebo groups. There were no significant between-group differences in the duration of dialysis through day 30 or in the percentage of patients with an estimated glomerular filtration rate of >30 mL/min/1.73 m2 at day 360. The incidence of both delayed graft function and acute rejection was similar between ANG-3777 and placebo groups (68.5% vs 69.4% and 8.1% vs 6.5%, respectively). ANG-3777 was well tolerated, and there was a numerically lower incidence of graft failure versus placebo (3.2% vs 8.1%). Although there is insufficient evidence to support an indication of ANG-3777 for patients at risk of renal dysfunction after deceased-donor kidney transplantation, these findings indicate potential biological activity that may warrant further investigation.

  • Raynaud M
  • Al-Awadhi S
  • Louis K
  • Zhang H
  • Su X
  • et al.
J Am Soc Nephrol. 2024 Feb 1;35(2):177-188 doi: 10.1681/ASN.0000000000000260.
CET Conclusion
Reviewer: Mr Simon Knight, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences University of Oxford
Conclusion: This systematic review represents a comprehensive analysis of all studies reporting an association between a biomarker and renal allograft failure published between 2005 and 2022. The authors include 804 individual studies, reporting 1143 biomarkers. The key finding was a lack of methodological quality in the studies identified – very few studies reported external validation or adjusted for key prognostic factors, and many demonstrated misinterpretation of the findings. This important paper highlights the limitations of existing biomarker research in transplantation, and goes a long way to explaining why, despite an explosion in biomarker publications, very few have made it into routine clinical practice. The authors highlight a number of areas in which biomarker studies could be improved to increase the chance of clinical translation. Of note, whilst the reported methodology is comprehensive, the published PROSPERO protocol registration number does not return a record when searched.
Aims: This systematic review aimed to critically appraise all studies on kidney transplant biomarkers published between 2005 and 2022.
Interventions: A literature search was conducted using PubMed, Embase, Scopus, Web of Science and the Cochrane Library. Titles and abstracts were screened by 11 reviewers. For each reviewer, two independent reviewers randomly re-assessed 30% of their references for inconsistencies; if inconsistencies exceeded 5%, all of that reviewer's references were re-evaluated.
Participants: 804 studies were included in the review.
Outcomes: The main outcomes included the assessment the following: types of biomarkers and biologic categorisation; publications over time and the geographical origin of the biomarker studies; sample size and population characteristics; biomarker study design and methodology; adjustment of biomarkers on the standard-of-care patient monitoring factors; reproducibility, transparency, and citation indicators of the biomarker studies; and interpretation of the biomarker study findings.
Follow Up: N/A
SIGNIFICANCE STATEMENT:

Why are there so few biomarkers accepted by health authorities and implemented in clinical practice, despite the high and growing number of biomarker studies in medical research? In this meta-epidemiological study, including 804 studies that were critically appraised by expert reviewers, the authors identified all prognostic kidney transplant biomarkers and showed overall suboptimal study designs, methods, results, interpretation, reproducible-research standards, and transparency. The authors also demonstrated for the first time that only a limited number of studies challenged the added value of their candidate biomarkers against standard-of-care routine patient monitoring parameters. Most biomarker studies tended to be single-center, retrospective studies with a small number of patients and clinical events. Less than 5% of the studies performed an external validation. The authors also showed poor transparency in reporting and identified a data beautification phenomenon. These findings suggest that there is much wasted research effort in transplant biomarker medical research and highlight the need to produce more rigorous studies so that more biomarkers may be validated and successfully implemented in clinical practice.

BACKGROUND:

Despite the increasing number of biomarker studies published in the transplant literature over the past 20 years, demonstrations of their clinical benefit and their implementation in routine clinical practice are lacking. We hypothesized that suboptimal design, data, methodology, and reporting might contribute to this phenomenon.

METHODS:

We formed a consortium of experts in systematic reviews, nephrologists, methodologists, and epidemiologists. A systematic literature search was performed in PubMed, Embase, Scopus, Web of Science, and Cochrane Library between January 1, 2005, and November 12, 2022 (PROSPERO ID: CRD42020154747). All English language, original studies investigating the association between a biomarker and kidney allograft outcome were included. The final set of publications was assessed by expert reviewers. After data collection, two independent reviewers randomly evaluated the inconsistencies for 30% of the references for each reviewer. If more than 5% of inconsistencies were observed for one given reviewer, a re-evaluation was conducted for all the references of the reviewer. The biomarkers were categorized according to their type and the biological milieu from which they were measured. The study characteristics related to the design, methods, results, and their interpretation were assessed, as well as reproducible research practices and transparency indicators.

RESULTS:

A total of 7372 publications were screened and 804 studies met the inclusion criteria. A total of 1143 biomarkers were assessed among the included studies, from blood (n = 821, 71.8%), intragraft (n = 169, 14.8%), or urine (n = 81, 7.1%) compartments. The number of studies significantly increased, with a median yearly number of 31.5 studies (interquartile range [IQR], 23.8-35.5) between 2005 and 2012 and 57.5 (IQR, 53.3-59.8) between 2013 and 2022 (P < 0.001). A total of 655 studies (81.5%) were retrospective, while 595 (74.0%) used data from a single center. The median number of patients included was 232 (IQR, 96-629) with a median follow-up post-transplant of 4.8 years (IQR, 3.0-6.2). Only 4.7% of studies were externally validated. A total of 346 studies (43.0%) did not adjust their biomarker for key prognostic factors, while only 3.1% of studies adjusted the biomarker for standard-of-care patient monitoring factors. Data sharing, code sharing, and registration occurred in 8.8%, 1.1%, and 4.6% of studies, respectively. A total of 158 studies (20.0%) emphasized the clinical relevance of the biomarker, despite the reported nonsignificant association of the biomarker with the outcome measure. A total of 288 studies assessed rejection as an outcome. We showed that these rejection studies shared the same characteristics as other studies.

CONCLUSIONS:

Biomarker studies in kidney transplantation lack validation, rigorous design and methodology, accurate interpretation, and transparency. Higher standards are needed in biomarker research to prove the clinical utility and support clinical use.

  • Hsiung CY
  • Chen HY
  • Wang SH
  • Huang CY
Transpl Int. 2024 Jan 23;37:12168 doi: 10.3389/ti.2024.12168.
CET Conclusion
Reviewer: Mr John O'Callaghan, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences University of Oxford
Conclusion: This is a well-conducted and well-written systematic review. A large number of reports were included (75 studies) although most were case series and a minority were single-arm studies or cohort studies. Overall data from a large number of transplants was included (14,410 transplants). The overall incidence of de novo TMA was 3.2% (95%CI: 1.93-4.77) but the length of follow up is not presented in the primary report. The authors do indicate a significant level of statistical heterogeneity for this outcome. When limited to 5-years follow up, the incidence of de novo TMA was 1.04% (95%CI: 0.16-2.68) but still with a very high level of statistical heterogeneity between studies. Meta-analysis of 8 studies for kidney allograft survival showed approximately 33% risk of graft loss associated with TMA, but this is not presented with a duration of follow up after diagnosis. The paper demonstrates the rare nature of de novo TMA, and gives an idea of the incidence, with some degree of uncertainty. The relationship between de novo TMA and graft loss is suggested but this paper is not able to give a reliable predictive relative risk or odds ratio for example.
Aims: The aim of this study was to determine the incidence of de novo thrombotic microangiopathy (TMA) in renal transplant patients and its impact on graft survival.
Interventions: A literature search was conducted on four electronic databases, including PubMed, EMBASE, and the Cochrane Library. Study selection and data extraction were performed by three independent reviewers. The risk of bias was assessed using the Newcastle-Ottawa Scale.
Participants: 28 cohorts/single-arm studies and 46 case series/reports were included in the review.
Outcomes: The main outcomes of interest were the incidence of de novo TMA and graft survival.
Follow Up: N/A

De novo thrombotic microangiopathy (TMA) is a rare and challenging condition in kidney transplant recipients, with limited research on its incidence and impact on graft survival. This study conducted a systematic review and meta-analysis of 28 cohorts/single-arm studies and 46 case series/reports from database inception to June 2022. In meta-analysis, among 14,410 kidney allograft recipients, de novo TMA occurred in 3.20% [95% confidence interval (CI): 1.93-4.77], with systemic and renal-limited TMA rates of 1.38% (95% CI: 0.65-2.39) and 2.80% (95% CI: 1.27-4.91), respectively. The overall graft loss rate of de novo TMA was 33.79% (95% CI: 26.14-41.88) in meta-analysis. This study provides valuable insights into the incidence and graft outcomes of de novo TMA in kidney transplant recipients.

Liver
  • Weinberg EM
  • Wong F
  • Vargas HE
  • Curry MP
  • Jamil K
  • et al.
Liver Transpl. 2024 Apr 1;30(4):347-355 doi: 10.1097/LVT.0000000000000277.
CET Conclusion
Reviewer: Mr Keno Mentor, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences University of Oxford
Conclusion: Hepatorenal syndrome (HRS) causes renal dysfunction and results in poorer outcomes following liver transplantation (LT). The efficacy of terlipressin in reversing HRS in liver failure patients was investigated in the CONFIRM trial, which showed significantly improved rates of HRS reversal but no difference in mortality at 90 days. This post-hoc analysis of the CONFIRM trial aimed to determine the difference in renal outcomes (pre- and post-LT) and 1-year survival in patients who received terlipressin versus those who did not. The analysis found significant improvements in renal outcomes and 1-year survival in the terlipressin group. However, subgroup analysis showed that patients with more severe liver and renal disease had poorer outcomes with terlipressin use, indicating a need for careful patient selection. Further trials will be required to better define the patient subgroup that will derive the most benefit from terlipressin therapy.
Aims: This post hoc analysis of the CONFIRM trial aimed to examine whether terlipressin was effective in reducing the need for renal replacement therapy (RRT) and improving posttransplant outcomes in liver transplant recipients.
Interventions: Participants in the CONFIRM trial were randomised to receive either terlipressin plus albumin or placebo.
Participants: 300 patients with HRS-1 from the CONFIRM trial, 75 of whom underwent transplantation.
Outcomes: The main outcomes of interest were the incidence of hepatorenal syndrome-type 1 (HRS-1) reversal, need for RRT (pretransplant and posttransplant), and overall survival.
Follow Up: 12 months

Hepatorenal syndrome-acute kidney injury (HRS-AKI), a serious complication of decompensated cirrhosis, has limited therapeutic options and significant morbidity and mortality. Terlipressin improves renal function in some patients with HRS-1, while liver transplantation (LT) is a curative treatment for advanced chronic liver disease. Renal failure post-LT requiring renal replacement therapy (RRT) is a major risk factor for graft and patient survival. A post hoc analysis with a 12-month follow-up of LT recipients from a placebo-controlled trial of terlipressin (CONFIRM; NCT02770716) was conducted to evaluate the need for RRT and overall survival. Patients with HRS-1 were treated with terlipressin plus albumin or placebo plus albumin for up to 14 days. RRT was defined as any type of procedure that replaced kidney function. Outcomes compared between groups included the incidence of HRS-1 reversal, the need for RRT (pretransplant and posttransplant), and overall survival. Of the 300 patients in CONFIRM (terlipressin, n = 199; placebo, n = 101), 70 (23%) underwent LT alone (terlipressin, n = 43; placebo, n = 27) and 5 had simultaneous liver-kidney transplant (terlipressin, n = 3; placebo, n = 2). The rate of HRS reversal was significantly higher in the terlipressin group compared with the placebo group (37%, n = 16 vs. 15%, n = 4; p = 0.033). The pretransplant need for RRT was significantly lower among those who received terlipressin (p = 0.007). The posttransplant need for RRT, at 12 months, was significantly lower among those patients who received terlipressin and were alive at Day 365, compared to placebo (p = 0.009). Pretransplant treatment with terlipressin plus albumin in patients with HRS-1 decreased the need for RRT pretransplant and posttransplant.

  • Czigany Z
  • Uluk D
  • Pavicevic S
  • Lurje I
  • Froněk J
  • et al.
Hepatol Commun. 2024 Feb 3;8(2) doi: 10.1097/HC9.0000000000000376.
CET Conclusion
Reviewer: Mr Simon Knight, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences University of Oxford
Conclusion: This manuscript reports long-term (48-month) outcomes from the HOPE-ECD-DBD trial, which compared end-ischaemic HOPE with static cold storage in extended criteria DBD livers. The authors report a reduction in late-onset complications in the HOPE group, with superior graft survival mainly due to a reduction in deaths with a functioning graft. Whilst numbers in the original study were small (23 in each arm), follow-up was complete for all participants still alive. Whilst the overall complication rate was higher in the SCS arm, it is not entirely clear what the main cause of complications was – no individual complication had significantly higher rates, and notably there was no difference in the rate of biliary complications. Ultimately the small sample size and secondary nature of the analysis mean that conclusions are limited due to lack of power, but the paper certainly shows the importance of long-term follow-up when assessing preservation strategies.
Aims: This study aimed to report the long-term outcomes of the HOPE-ECD-DBD randomised controlled trial, which investigated the effect of hypothermic oxygenated machine perfusion (HOPE) versus static cold storage (SCS) in patients who underwent liver transplantation using extended criteria donor-donation after brain death (ECD-DBD) allografts.
Interventions: Participants in the original trial were randomised to either the HOPE group or the SCS group.
Participants: 46 liver transplant recipients who received extended criteria donor-donation after brain death (ECD-DBD) allografts.
Outcomes: The main outcomes of interest were incidence of late-onset morbidity, readmissions, long-term graft survival, and patient survival.
Follow Up: 48 months (median)
BACKGROUND:

While 4 randomized controlled clinical trials confirmed the early benefits of hypothermic oxygenated machine perfusion (HOPE), high-level evidence regarding long-term clinical outcomes is lacking. The aim of this follow-up study from the HOPE-ECD-DBD trial was to compare long-term outcomes in patients who underwent liver transplantation using extended criteria donor allografts from donation after brain death (ECD-DBD), randomized to either HOPE or static cold storage (SCS).

METHODS:

Between September 2017 and September 2020, recipients of liver transplantation from 4 European centers receiving extended criteria donor-donation after brain death allografts were randomly assigned to HOPE or SCS (1:1). Follow-up data were available for all patients. Analyzed endpoints included the incidence of late-onset complications (occurring later than 6 months and graded according to the Clavien-Dindo Classification and the Comprehensive Complication Index) and long-term graft survival and patient survival.

RESULTS:

A total of 46 patients were randomized, 23 in each arm. The median follow-up was 48 months (95% CI: 41-55). After excluding early perioperative morbidity, a significant reduction in late-onset morbidity was observed in the HOPE group (median reduction of 23 Comprehensive Complication Index-points [p=0.003] and lower incidence of major complications [Clavien-Dindo ≥3, 43% vs. 85%, p=0.009]). Primary graft loss occurred in 13 patients (HOPE n=3 vs. SCS n=10), resulting in a significantly lower overall graft survival (p=0.029) and adverse 1-, 3-, and 5-year survival probabilities in the SCS group, which did not reach the level of significance (HOPE 0.913, 0.869, 0.869 vs. SCS 0.783, 0.606, 0.519, respectively).

CONCLUSIONS:

Our exploratory findings indicate that HOPE reduces late-onset morbidity and improves long-term graft survival, providing clinical evidence to further support the broad implementation of HOPE in human liver transplantation.

  • Jing H
  • Xu R
  • Qian L
  • Yi Z
  • Shi X
  • et al.
Abdom Radiol (NY). 2024 Feb;49(2):604-610 doi: 10.1007/s00261-023-04082-x.
CET Conclusion
Reviewer: Mr John Fallon, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences University of Oxford
Conclusion: This prospective randomised study assessed a large number of transplant liver biopsies in paediatric patients. The design and outcomes were simple, and the authors aim to add to existing literature demonstrating that using a larger 16G needle is not associated with increased bleeding risk but potentially offers a higher-quality biopsy. They found no difference in bleeding complications between 16G and 18G needles (11 vs 10, p=0.82) and no difference in the volume of hemoperitoneum measured on protocol US performed 24 hours after biopsy (31 mL vs 35 mL, p=0.70). Both needle gauges provided adequate specimens for diagnostic purposes, with two inadequate biopsies using 16G and three using 18G (p=1.000). There was a higher median number of complete portal tracts (CPTs) with 16G compared with 18G (20 vs 18, p=0.029). When assessing the severity of certain types of liver disease, a greater number of CPTs reduces the risk of underestimating severity; however, given there was no difference in specimen adequacy, this did not prove relevant in the present study. In this cohort both gauges demonstrated a low and equivalent risk of bleeding complications, and it would appear reasonable to use either gauge of needle.
Aims: To assess the impact of needle gauge on specimen quality and haemorrhagic complications in a paediatric population undergoing liver biopsy.
Interventions: 16G or 18G Bard biopsy needle used for US-guided liver biopsy.
Participants: 282 Paediatric patients undergoing liver biopsy.
Outcomes: The outcome measures were haemorrhagic complications and specimen adequacy along with number of complete portal tracts within the specimen.
Follow Up: 24-hours post-procedure
PURPOSE:

The objective of this study was to analyze the impact of needle gauge (G) on the adequacy of specimens and hemorrhagic complications in pediatric patients undergoing ultrasound (US)-guided transplanted liver biopsies.

METHODS:

The study included 300 consecutive biopsies performed in 282 pediatric patients (mean age 6.75 ± 3.82 years, range 0.84-17.90) between December 2020 and April 2022. All pediatric patients referred to our institution for US-guided core-needle liver biopsy (CNLB) were randomized to undergo 16-G or 18-G CNLB. Hemorrhagic complications were qualitatively evaluated. The number of complete portal tracts (CPTs) per specimen was counted and specimen adequacy was assessed based on the American Association for the Study of Liver Diseases guidelines.

RESULTS:

The incidence of bleeding was 7.00% (n = 21) and adequate specimens for accurate pathological diagnosis were obtained from 98.33% (n = 295) of patients. There was no significant difference in the incidence or amount of bleeding between the 16-G and 18-G groups (11 vs 10, p = 0.821; 35.0 mL vs 31.3 mL, p = 0.705). Although biopsies obtained using a 16-G needle contained more complete portal tracts than those obtained using an 18-G needle (20.0 vs 18.0, p = 0.029), there was no significant difference in specimen inadequacy according to needle gauge (2 vs 3, p = 1.000).

CONCLUSIONS:

Biopsy with a 16-G needle was associated with a greater number of CPTs but did not increase the adequate specimen rate. There was no significant difference in the complication rate between 16-G biopsy and 18-G biopsy.

  • Skladaný Ľ
  • Líška D
  • Gurín D
  • Molčan P
  • Bednár R
  • et al.
Eur J Phys Rehabil Med. 2024 Feb;60(1):122-129 doi: 10.23736/S1973-9087.23.08130-3.
CET Conclusion
Reviewer: Mr John Fallon, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences University of Oxford
Conclusion: This small, open-registry randomised trial was unfortunately severely limited by sample size and, to some extent, by follow-up and design. Of the 80 patients recruited and randomised, the majority were either lost to follow-up (n=32), discontinued the intervention (n=17) or were hospitalised and excluded from analysis (n=2), so only 29 patients were included in the analysis. Despite the smaller sample size, some differences were seen. The prehabilitation group experienced an improvement in LFI (4.18±0.92 at baseline, down to 3.72±0.76 at 30 days, p=0.05), a frailty index derived from performance-based tests of grip, standing and balance, while the standard-of-care cohort had no improvement. There were no other significant differences in outcome measures within the groups during the 30-day period. One difference between the groups was found after 30 days: CPS was significantly better in the prehabilitation group (6.70±1.83 vs 8.44±1.83, p=0.02), though the starting CPS was numerically lower in the prehabilitation group. Conceptually the trial makes good sense, and there is good evidence across many surgical fields that improved physical fitness prior to surgery improves outcomes, and that supervised activity with a trainer or physiotherapist is an effective way of delivering this. However, due to low numbers, poor compliance and inadequate follow-up, the present study has not added good-quality evidence to the field. It found some potentially positive tendencies in the form of improved frailty and lowering of CPS, but these patients were only assessed over 30 days and none received a liver transplant, so no conclusions can be drawn about its potential benefit for the intended surgical intervention of liver transplantation.
Aims: To assess effects of prehabilitation in a cohort of patients with liver cirrhosis awaiting liver transplantation.
Interventions: Patients received active prehabilitation or standard care. The standard care was oral and written instruction on nutrition and physical exercise, which contained a list of safe activities and a recommendation of at least 20 minutes of activity 3 times per week. The active prehabilitation group had the standard of care plus supervised physical exercise with a physiotherapist for a minimum of 20 minutes 3 times per week.
Participants: 80 cirrhotic patients awaiting liver transplant.
Outcomes: The chosen outcome measures were changes in patients' MELD score (model for end-stage liver disease), Child-Pugh score (CPS), liver frailty index (LFI) and quality of life measured by the EQ-5D questionnaire.
Follow Up: 30-days
BACKGROUND:

The high prevalence of liver cirrhosis in Slovakia leads to a great need for transplant treatment. The outcome of liver transplantation is influenced by several factors.

AIM:

The main objective of this study is to test the effectiveness of prehabilitation compared to standard of care.

DESIGN:

Prospective, double-arm, randomized, open-registry study.

SETTING:

Patients at F. D. Roosevelt Teaching Hospital, Banská Bystrica, Slovakia.

POPULATION:

The participants consisted of patients with liver cirrhosis (55 men, 25 women).

METHODS:

The patients were randomized to the active prehabilitation group (N.=39) or the standard of care group (SOC) (N.=41). SOC represents the standard of care for patients prior to liver transplantation, consisting of a formal oral interview lasting 30 minutes. In addition to SOC, each patient in the prehabilitation group with decompensated liver cirrhosis also underwent a prehabilitation intervention that included rehabilitation and nutrition support. Patients completed the exercises under the supervision of a physician during hospitalisation.

RESULTS:

After one month, the liver frailty index improved in the prehabilitation group (P=0.05). No improvement in MELD (Model for End-Stage Liver Disease) was found in the group that underwent the prehabilitation program (P=0.28), and no improvement was found in the Child-Pugh score after one month (P=0.13). Comparing the prehabilitation group with the SOC group, no difference was found in the MELD score (P=0.11). A better clinical outcome according to the Child-Pugh score was found for the prehabilitation group compared with the SOC group (P=0.02). According to LFI, there was no difference between the groups (P=0.26). Very low adherence was found after three months: only three patients in the SOC group and six patients in the prehabilitation group came to the check-up. Due to this low adherence after 3 months, it is not possible to make an adequate comparison between the groups at three months.

CONCLUSIONS:

Despite the great effort to maintain adherence, it was not possible to draw a conclusion about the effectiveness of prehabilitation in patients before liver transplantation compared to standard of care because the main problem in Slovak patients with liver cirrhosis is low adherence. More studies are needed to identify the barriers that lead to low adherence in patients with liver cirrhosis.

CLINICAL REHABILITATION IMPACT:

A promising result was found due to improvement of the Liver Frailty Index and the Child-Pugh Score after one month in the prehabilitation group.

  • Oh SY
  • Woo HY
  • Lim L
  • Im H
  • Lee H
  • et al.
Clin Transplant. 2024 Jan;38(1):e15231 doi: 10.1111/ctr.15231.
CET Conclusion
Reviewer: Mr John O'Callaghan, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences University of Oxford
Conclusion: This is a randomised controlled trial in living donor liver transplantation comparing two strategies for ascites replacement. The primary outcome was, unusually, time to first flatus as reported by the patient. Unfortunately, this is a very subjective outcome, and the authors do not explain the potential clinical significance of reducing time to first flatus by 24 hours. Other concerns with the design of the trial include the exclusion of anyone with a history of other abdominal surgery (about 50% of potentially eligible patients). The study was also not blinded, a particular concern given the subjective nature of the primary outcome. The authors found no significant difference in time to first flatus between the two ascites replacement regimens. There was also no difference in postoperative GFR or urine output, although there is a suggestion of an increased risk of AKI when using lactated Ringer's rather than albumin.
Aims: This study aimed to investigate how replacing postoperative ascites with albumin affected liver transplant recipients following transplantation.
Interventions: Participants were randomised to either the albumin group or the lactated Ringer’s group.
Participants: 72 adult patients who underwent elective living donor liver transplantation.
Outcomes: The primary endpoint was time to first flatus from the end of the liver transplant procedure. The secondary endpoints included postoperative cumulative drain amount, duration of drain in place, incidence of acute kidney injury (AKI) for up to 7 days postoperation and hospital length of stay.
Follow Up: 7 days postoperation
INTRODUCTION:

There is insufficient evidence regarding the optimal regimen for ascites replacement after living donor liver transplantation (LT) and its effectiveness. The aim of this study is to evaluate the impact of replacing postoperative ascites after LT with albumin on time to first flatus during recovery with early ambulation and incidence of acute kidney injury (AKI).

METHODS:

Adult patients who underwent elective living donor LT at Seoul National University Hospital from 2019 to 2021 were randomly assigned to either the albumin group or the lactated Ringer's group, based on the ascites replacement regimen. Replacement of postoperative ascites was performed for all patients every 4 h after LT until the patient was transferred to the general ward. Seventy percent of the ascites drained during the previous 4 h was replaced over the next 4 h by continuous infusion of fluids, with the regimen prescribed according to the assigned group. In the albumin group, 30% of the drained ascites was replaced with 5% albumin solution and the remaining 40% with lactated Ringer's solution. In the lactated Ringer's group, 70% of the drained ascites was replaced with lactated Ringer's solution alone. The primary outcome was the time to first flatus from the end of the LT, and the secondary outcome was the incidence of AKI up to postoperative day 7.
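The replacement arithmetic described above can be sketched as a simple calculation. This is an illustrative helper, not code from the trial; the function name and interface are hypothetical:

```python
def replacement_volumes(drained_ml: float, group: str) -> dict:
    """Fluid volumes to infuse over the next 4 h, per the trial's regimen.

    70% of the ascites drained in the previous 4 h is replaced:
    - albumin group: 30% of the drained volume as 5% albumin,
      40% as lactated Ringer's solution
    - lactated Ringer's group: the full 70% as lactated Ringer's solution
    """
    if group == "albumin":
        return {"albumin_5pct_ml": 0.30 * drained_ml,
                "lactated_ringers_ml": 0.40 * drained_ml}
    if group == "ringers":
        return {"albumin_5pct_ml": 0.0,
                "lactated_ringers_ml": 0.70 * drained_ml}
    raise ValueError("group must be 'albumin' or 'ringers'")

# Example: 1000 mL drained over 4 h in the albumin group
# -> 300 mL of 5% albumin plus 400 mL of lactated Ringer's
```

Under this scheme both groups receive the same total replacement volume (70% of drained ascites); only the composition differs.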

RESULTS:

Among the 157 patients who were screened for eligibility, 72 patients were enrolled. The mean age was 63 ± 8.2 years, and 73.0% (46/63) were male. Time to first flatus was similar between the two groups (66.7 ± 24.1 h vs. 68.5 ± 25.6 h, p = .778). The albumin group showed a higher glomerular filtration rate and lower incidence of AKI until postoperative day 7, compared to the lactated Ringer's group.

CONCLUSIONS:

Using lactated Ringer's solution alone for replacement of ascites after living donor LT did not reduce the time to first flatus and was associated with an increased risk of AKI. Further research on the optimal ascites replacement regimen and the target serum albumin level which should be corrected after LT is required.

Various
  • Schultz BG
  • Bullano M
  • Paratane D
  • Rajagopalan K
Transpl Infect Dis. 2024 Apr;26(2):e14216 doi: 10.1111/tid.14216.
CET Conclusion
Reviewer: Mr Keno Mentor, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences University of Oxford
Conclusion: CMV infection that is refractory to standard treatment is a challenging clinical problem, resulting in patient morbidity and increased healthcare costs, mainly due to prolonged and repeat admissions. In the SOLSTICE trial, maribavir was shown to be more effective than standard treatment protocols for refractory CMV infection in post-transplant patients. This post-hoc analysis of the SOLSTICE trial used trial data to calculate the reduction in healthcare costs that could be achieved by using maribavir in this patient population. The analysis demonstrated a one-third to two-thirds reduction in costs over an 8-week period with maribavir. Healthcare cost analyses are complex and subject to many assumptions, which the authors acknowledge introduces significant bias. However, the most striking omission from the analysis is the cost of the maribavir treatment itself, which is significantly higher than standard therapy. With the additional limitation of a short study duration, the reliability and applicability of the reported cost savings cannot be readily determined.
Aims: The aim of this study was to use the data from the randomised controlled trial, SOLSTICE, to estimate the cytomegalovirus (CMV) related health care resource utilization (HCRU) costs of maribavir (MBV) versus investigator-assigned therapy (IAT), among hematopoietic stem cell transplant (HSCT) and solid organ transplant (SOT) recipients.
Interventions: Participants in the SOLSTICE trial were randomised to either receive IAT or MBV therapy.
Participants: 352 patients that had either HSCT (40%) or SOT (60%).
Outcomes: The key outcomes were the cost of hospitalisation with IAT versus MBV therapy, and cost difference (i.e. cost savings) with MBV.
Follow Up: N/A
BACKGROUND:

Cytomegalovirus (CMV) infections among hematopoietic stem cell transplant (HSCT) and solid organ transplant (SOT) recipients impose a significant health care resource utilization (HCRU)-related economic burden. Maribavir (MBV), a novel anti-viral therapy (AVT) approved by the United States Food and Drug Administration for post-transplant CMV infections refractory (with/without resistance) to conventional AVTs, demonstrated a lower hospital length of stay (LOS) versus investigator-assigned therapy (IAT; valganciclovir, ganciclovir, foscarnet, or cidofovir) in a phase 3 trial (SOLSTICE). This study estimated the HCRU costs of MBV versus IAT.

METHODS:

An economic model was developed to estimate HCRU costs for patients treated with MBV or IAT. Mean per-patient-per-year (PPPY) HCRU costs were calculated using (i) annualized mean hospital LOS in SOLSTICE, and (ii) CMV-related direct costs from published literature. Probabilistic sensitivity analysis with Monte-Carlo simulations assessed model robustness.
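The model's core calculation, annualizing the trial's mean hospital LOS and costing it per day, with a Monte Carlo sensitivity analysis around the inputs, can be sketched as follows. The function names, input distributions, and all numeric inputs here are illustrative assumptions, not the published model or its figures:

```python
import random

def pppy_hcru_cost(mean_los_days: float, treatment_weeks: float,
                   daily_cost: float) -> float:
    """Annualize the mean hospital LOS observed over the treatment
    period, then apply a per-day cost to get a per-patient-per-year
    (PPPY) figure."""
    annualized_los = mean_los_days * (52 / treatment_weeks)
    return annualized_los * daily_cost

def monte_carlo_interval(mean_los: float, weeks: float, daily_cost: float,
                         n: int = 10_000, seed: int = 42):
    """Probabilistic sensitivity analysis: resample the LOS and cost
    inputs from illustrative normal distributions and report an
    empirical 95% interval for the PPPY cost."""
    rng = random.Random(seed)
    draws = sorted(
        pppy_hcru_cost(rng.gauss(mean_los, mean_los * 0.2), weeks,
                       rng.gauss(daily_cost, daily_cost * 0.1))
        for _ in range(n)
    )
    return draws[int(0.025 * n)], draws[int(0.975 * n)]
```

With the 8-week treatment window from SOLSTICE, a mean LOS is scaled by 52/8 = 6.5 before costing, which is how a per-treatment-course LOS becomes a per-patient-per-year cost.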

RESULTS:

Of 352 randomized patients receiving MBV (n = 235) or IAT (n = 117) for 8 weeks in SOLSTICE, 40% had HSCT and 60% had SOT. Mean PPPY HCRU costs for overall hospital LOS were $67,205 (95% confidence interval [CI]: $33,767, $231,275) versus $145,501 (95% CI: $62,064, $589,505) for the MBV and IAT groups, respectively. Mean PPPY ICU-stay costs were $32,231 (95% CI: $5,248, $184,524) versus $45,307 (95% CI: $3,957, $481,740), and mean PPPY non-ICU-stay costs were $82,237 (95% CI: $40,397, $156,945) versus $228,329 (95% CI: $94,442, $517,476), for the MBV and IAT groups, respectively. MBV demonstrated cost savings in over 99.99% of simulations.

CONCLUSIONS:

This analysis suggests that mean PPPY HCRU costs were 29%-64% lower with MBV versus other AVTs.