  • Mac Curtain BM
  • Qian W
  • Temperley HC
  • O'Mahony A
  • Ng ZQ
  • et al.
Hernia. 2024 Apr;28(2):301-319 doi: 10.1007/s10029-023-02879-9.
CET Conclusion
Reviewer: Mr Keno Mentor, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences University of Oxford
Conclusion: This systematic review synthesised the available data on the rate of, and risk factors associated with, incisional hernia (IH) after renal transplantation. The authors report pooled results that are broadly similar to those of other surgical groups, with obesity, smoking and immunosuppression reported as the main risk factors for the development of IH. There are a number of fundamental errors in the statistical analysis: for example, forest plots are used inappropriately to depict cross-sectional data, and the risk of bias tool was modified by the authors, resulting in all included studies being rated as ‘good’ or ‘very good’.
Aims: This study aimed to summarise the current literature on rates, risk factors and outcomes of incisional hernias following renal transplantation.
Interventions: Electronic databases including PubMed, EMBASE and the Cochrane CENTRAL were searched. Studies were selected independently by two reviewers and data were extracted independently by three reviewers. Risk of bias was assessed using a modified Newcastle–Ottawa scale.
Participants: 20 studies were included in the review.
Outcomes: The primary outcome was rates of IH. Secondary outcomes included risk factors for IH, and management and outcomes of IH.
Follow Up: N/A
PURPOSE:

Incisional hernia (IH) post renal transplant (RT) is relatively uncommon and can be challenging to manage clinically due to the presence of the kidney graft and patient immunosuppression. This systematic review and meta-analysis synthesises the current literature in relation to IH rates, risk factors and outcomes post RT.

METHODS:

PubMed, EMBASE, and Cochrane Central Registry of Controlled Trials (CENTRAL) were searched up to July 2023. The most up to date Preferred Reporting Items for Systematic Reviews and Meta Analyses guidelines were followed. Pertinent clinical information was synthesised. A meta-analysis of the pooled proportions of IH rates, the rates of patients requiring surgical repair and the rates of recurrence post RT are reported.

RESULTS:

Twenty studies comprising 16,018 patients were included in this analysis. The pooled rate of IH occurrence post RT was 4% (CI 3-5%). The pooled rate of IH repair post RT was 61% (CI 14-100%). The pooled rate of IH recurrence after repair was 16% (CI 9-23%). Risk factors identified for IH development post RT are BMI, immunosuppression, age, smoking, incision type, reoperation, concurrent abdominal wall hernia, lymphocele formation and pulmonary disease.
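
For readers unfamiliar with how such pooled proportions are derived, the sketch below shows a generic random-effects (DerSimonian-Laird) pooling of study-level proportions on the logit scale. It is purely illustrative and is not the authors' code; the event counts and sample sizes are hypothetical placeholders, not data from the included studies.

```python
# Illustrative random-effects pooling of proportions (hypothetical data).
import numpy as np
from scipy import stats

events = np.array([12, 30, 8, 45])        # hypothetical IH events per study
totals = np.array([300, 900, 150, 1200])  # hypothetical sample sizes

p = events / totals
logit = np.log(p / (1 - p))
var = 1 / events + 1 / (totals - events)  # variance of logit(p)

# DerSimonian-Laird estimate of between-study variance (tau^2)
w = 1 / var
theta_fixed = np.sum(w * logit) / np.sum(w)
q = np.sum(w * (logit - theta_fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(p) - 1)) / c)

# Random-effects pooled estimate and 95% CI, back-transformed to a proportion
w_re = 1 / (var + tau2)
pooled_logit = np.sum(w_re * logit) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
z = stats.norm.ppf(0.975)

def expit(x):
    return 1 / (1 + np.exp(-x))

print(f"Pooled proportion: {expit(pooled_logit):.3f} "
      f"(95% CI {expit(pooled_logit - z * se):.3f}-{expit(pooled_logit + z * se):.3f})")
```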

CONCLUSIONS:

IH post RT is uncommon and the majority of IH post RT are repaired surgically on an elective basis.

  • Huang HJ
  • Schechtman K
  • Askar M
  • Bernadt C
  • Mitter B
  • et al.
Transplantation. 2024 Mar 1;108(3):777-786 doi: 10.1097/TP.0000000000004841.
CET Conclusion
Reviewer: Mr Simon Knight, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences University of Oxford
Conclusion: This pilot study recruited lung transplant recipients at 2 sites, and randomised them to standard immunosuppression (Tac, MMF, Pred) or a belatacept-based regimen (Tac, belatacept and Pred). The hypothesis was that belatacept-based immunosuppression might reduce the incidence of donor-specific antibodies (DSA), leading to a reduction in the risk of chronic lung allograft dysfunction (CLAD). The study was stopped after recruitment of 27 patients due to 3 deaths in the belatacept arm. Causes of death varied – 2 patients died from COVID-19 infection, one from CLAD related to infection, one from PTLD, one from pulmonary embolus and one from haemothorax. The authors ascribe 4 of these deaths to viral infections. No differences were seen in the incidence of CLAD or the development of DSA. It is very difficult to interpret these results given the small numbers, but clearly the authors were correct in stopping the study and switching patients to standard immunosuppression. The relationship of four of the deaths to viral infection would suggest that the immunosuppressive regimen may have contributed, and in the absence of any detectable clinical benefit, the conclusion that this regimen is unsafe in lung transplant recipients seems justified.
Aims: This study aimed to evaluate the feasibility and inform the design of an RCT investigating the efficacy and safety of belatacept following lung transplantation.
Interventions: Participants were randomly assigned to either continue standard-of-care immunosuppression or switch to belatacept.
Participants: 27 lung transplant recipients.
Outcomes: The primary outcome was to assess the feasibility of randomising 80% of eligible patients within 4 hours posttransplantation. The primary outcome was later changed to survival following the cessation of treatment with belatacept.
Follow Up: 1 year posttransplantation.
BACKGROUND:

Chronic lung allograft dysfunction (CLAD) is the leading cause of death beyond the first year after lung transplantation. The development of donor-specific antibodies (DSA) is a recognized risk factor for CLAD. Based on experience in kidney transplantation, we hypothesized that belatacept, a selective T-cell costimulatory blocker, would reduce the incidence of DSA after lung transplantation, which may ameliorate the risk of CLAD.

METHODS:

We conducted a pilot randomized controlled trial (RCT) at 2 sites to assess the feasibility and inform the design of a large-scale RCT. All participants were treated with rabbit antithymocyte globulin for induction immunosuppression. Participants in the control arm were treated with tacrolimus, mycophenolate mofetil, and prednisone, and participants in the belatacept arm were treated with tacrolimus, belatacept, and prednisone through day 89 after transplant then converted to belatacept, mycophenolate mofetil, and prednisone for the remainder of year 1.

RESULTS:

After randomizing 27 participants, 3 in the belatacept arm died compared with none in the control arm. As a result, we stopped enrollment and treatment with belatacept, and all participants were treated with standard-of-care immunosuppression. Overall, 6 participants in the belatacept arm died compared with none in the control arm (log rank P = 0.008). We did not observe any differences in the incidence of DSA, acute cellular rejection, antibody-mediated rejection, CLAD, or infections between the 2 groups.

CONCLUSIONS:

We conclude that the investigational regimen used in this pilot RCT is associated with increased mortality after lung transplantation.

  • Klein A
  • Toll A
  • Stewart D
  • Fitzsimmons WE
Am J Transplant. 2024 Feb;24(2):250-259 doi: 10.1016/j.ajt.2023.09.019.
CET Conclusion
Reviewer: Mr Simon Knight, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences University of Oxford
Conclusion: This interesting study evaluates the use of real-world registry data as a control cohort, combined with propensity matching, for evaluation of long-term outcomes in a clinical trial. The authors use the BENEFIT study as an example, comparing outcomes from the belatacept and original control arms to two registry-based arms – cyclosporine-treated controls, and a tacrolimus-based control group. They demonstrate that the registry-based controls have very similar 5-year graft survival to the original control cohort, resulting in similar conclusions to the original trial. Registry-based approaches to clinical trials may help to make trials more efficient, particularly when the patient cohort is rare or long-term follow-up is required. There are some obvious limitations – outcomes are limited to those included in the registry data, and in order to apply study inclusion/exclusion criteria to the registry cohort, the criteria must be reliably recorded in the registry. A natural extension to this approach may be use of synthetic data, which would circumvent some of the data privacy concerns of using real patient data.
Aims: The aim of this study was to test the feasibility and value of using real-world registry data as a control cohort to compare drug treatment effects to those observed in the BENEFIT study.
Interventions: Participants in the BENEFIT study were randomised to receive either a more intensive or a less intensive regimen of belatacept (BELA)-based immunosuppressive therapy, or cyclosporine.
Participants: 1443 kidney transplant recipients.
Outcomes: The main outcomes of interest were covariate-adjusted overall and death-censored graft survival and patient survival at 3 and 5 years posttransplant.
Follow Up: 5 years posttransplantation

To address the challenges of assessing the impact of a reasonably likely surrogate endpoint on long-term graft survival in prospective kidney transplant clinical trials, the Transplant Therapeutics Consortium established a real-world evidence workgroup evaluating the scientific value of using transplant registry data as an external control to supplement the internal control group. The United Network for Organ Sharing retrospectively simulated the use of several distinct contemporaneous external control groups, applied multiple causal inference methods, and compared treatment effects to those observed in the BENEFIT study. Applying BENEFIT study enrollment criteria produced a smaller historical cyclosporine control arm (n = 153) and a larger, alternative (tacrolimus) historical control arm (n = 1069). Following covariate-balanced propensity scoring, Kaplan-Meier 5-year all-cause graft survivals were 81.3% and 81.7% in the Organ Procurement and Transplantation Network (OPTN) tacrolimus and cyclosporine external control arms, similar to 80.3% observed in the BENEFIT cyclosporine treatment arm. Five-year graft survival in the belatacept less-intensive arm was significantly higher than in the OPTN controls using propensity scoring for comparing cyclosporine and tacrolimus. Propensity weighting using OPTN controls closely mirrored the BENEFIT study's long-term control (cyclosporine) arm's survival rate and the less intensive arm's treatment effect (significantly higher survival vs control). This study supports the feasibility and validity of using supplemental external registry controls for long-term survival in kidney transplant clinical trials.
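
As a rough illustration of the kind of covariate-balanced propensity scoring described above (not the workgroup's actual pipeline, which used OPTN registry data), the sketch below fits a propensity model for trial-versus-registry membership on simulated covariates, derives inverse-probability weights, and checks balance with weighted standardised mean differences. All variables and data are hypothetical.

```python
# Illustrative propensity-score weighting of an external registry control arm.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))              # hypothetical covariates (e.g. ages, dialysis years)
in_trial = rng.integers(0, 2, size=n)    # 1 = trial arm, 0 = registry control

ps_model = LogisticRegression(max_iter=1000).fit(X, in_trial)
ps = ps_model.predict_proba(X)[:, 1]     # propensity of being in the trial

# Reweight registry controls towards the trial population (ATT-style weights)
weights = np.where(in_trial == 1, 1.0, ps / (1.0 - ps))

def smd(x, group, w):
    # Weighted means, pooled unweighted SD (a common simplification)
    m1 = np.average(x[group == 1], weights=w[group == 1])
    m0 = np.average(x[group == 0], weights=w[group == 0])
    sd = np.sqrt((x[group == 1].var() + x[group == 0].var()) / 2)
    return (m1 - m0) / sd

print([round(smd(X[:, j], in_trial, weights), 3) for j in range(X.shape[1])])
```

In the real analysis, these weights would then feed into weighted Kaplan-Meier estimates of 5-year graft survival, as the abstract describes.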

  • van den Born JC
  • Meziyerh S
  • Vart P
  • Bakker SJL
  • Berger SP
  • et al.
Transplantation. 2024 Feb 1;108(2):556-566 doi: 10.1097/TP.0000000000004776.
CET Conclusion
Reviewer: Mr Simon Knight, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences University of Oxford
Conclusion: This multicentre trial from the Netherlands randomised de novo renal transplant recipients to one of three immunosuppression strategies – standard care, early steroid withdrawal or tacrolimus minimisation at 6 months. The study was designed to demonstrate non-inferiority in renal function at 24 months, and met the primary endpoint, with no difference seen between the three groups. There was a higher incidence of acute rejection in the early steroid withdrawal group, but no increase in DSA formation. In general, the study design is good, although unblinded, with centralised randomisation and intention-to-treat analysis. The withdrawal rate was around 25% in each arm at 24 months. The inclusion criteria are fairly broad for an immunosuppression minimisation study, allowing recipients up to 80 years of age, PRA up to 75% and first or second transplants. One notable exclusion criterion was type 1 diabetes in the recipient; the authors do not provide a rationale for this.
Aims: The aim of this trial was to compare standard immunosuppression with two immunosuppression minimisation strategies in de novo kidney transplant recipients.
Interventions: Participants were randomised to one of three groups: the early steroid withdrawal arm, the standard-dose tacrolimus arm, and the tacrolimus minimisation arm.
Participants: 295 de novo kidney transplant recipients.
Outcomes: The primary outcome was kidney function at 24 months posttransplantation. The secondary outcomes were patient survival, treated rejection, kidney failure, discontinuation of study medication for more than 6 weeks, and treatment failure.
Follow Up: 24 months
BACKGROUND:

Evidence on the optimal maintenance immunosuppressive regimen in kidney transplant recipients is limited.

METHODS:

The Amsterdam, LEiden, GROningen trial is a randomized, multicenter, investigator-driven, noninferiority, open-label trial in de novo kidney transplant recipients, in which 2 immunosuppression minimization strategies were compared with standard immunosuppression with basiliximab, corticosteroids, tacrolimus, and mycophenolic acid. In the minimization groups, either steroids were withdrawn from day 3, or tacrolimus exposure was reduced from 6 mo after transplantation. The primary endpoint was kidney transplant function at 24 mo.

RESULTS:

A total of 295 participants were included in the intention-to-treat analysis. Noninferiority was shown for the primary endpoint; estimated glomerular filtration rate at 24 mo was 45.3 mL/min/1.73 m2 in the early steroid withdrawal group, 49.0 mL/min/1.73 m2 in the standard immunosuppression group, and 44.7 mL/min/1.73 m2 in the tacrolimus minimization group. Participants in the early steroid withdrawal group were significantly more often treated for rejection (P = 0.04). However, in this group, the number of participants with diabetes mellitus during follow-up and total cholesterol at 24 mo were significantly lower.
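
To make the noninferiority logic concrete, the sketch below shows how a lower confidence bound for the between-arm eGFR difference would be compared against a prespecified margin. The point estimates are taken from the abstract, but the margin, standard deviations and per-arm sample sizes are hypothetical, as the abstract does not report them.

```python
# Noninferiority check on eGFR: conclude noninferiority if the lower bound of
# the CI for (minimisation - standard) lies above -margin. Margin, SDs and
# per-arm n are hypothetical placeholders.
import math
from scipy import stats

margin = 5.0                                 # hypothetical margin, mL/min/1.73 m2
mean_min, sd_min, n_min = 45.3, 15.0, 100    # minimisation arm (SD, n hypothetical)
mean_std, sd_std, n_std = 49.0, 15.0, 100    # standard arm (SD, n hypothetical)

diff = mean_min - mean_std
se = math.sqrt(sd_min**2 / n_min + sd_std**2 / n_std)
lower = diff - stats.norm.ppf(0.975) * se    # lower bound of the two-sided 95% CI

print(f"Difference {diff:.1f} mL/min/1.73 m2 (lower 95% bound {lower:.1f}); "
      f"noninferior: {lower > -margin}")
```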

CONCLUSIONS:

Tacrolimus minimization can be considered in kidney transplant recipients who do not have an increased immunological risk. Before withdrawing steroids the risk of rejection should be weighed against the potential metabolic advantages.

  • Helmick RA
  • Eymard CM
  • Naik S
  • Eason JD
  • Nezakatgoo N
  • et al.
Clin Transplant. 2024 Jan;38(1):e15172 doi: 10.1111/ctr.15172.
CET Conclusion
Reviewer: Mr Simon Knight, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences University of Oxford
Conclusion: This single-centre study randomised 110 liver transplant recipients to receive standard-release or extended-release tacrolimus in conjunction with rabbit ATG following liver transplantation. Renal function did not differ between groups at any point post-transplant. There were also no statistically significant differences in quality of life, rejection rates or adverse event rates between groups. Overall, the study appears well-conducted and reported. It demonstrates that extended-release tacrolimus provides an alternative to standard-release tacrolimus in this patient population, although it does not provide any evidence of measurable clinical benefit within the first year.
Aims: This study aimed to investigate the safety and effectiveness of extended-release tacrolimus (XRT) in comparison to immediate release tacrolimus (IRT) following liver transplantation in a steroid-free protocol with rabbit anti-thymocyte globulin (RATG).
Interventions: Participants were randomised to either the XRT group or the IRT group.
Participants: 110 liver transplant recipients.
Outcomes: The main outcomes were the assessment of creatinine and eGFR, adverse events, rejection and quality of life.
Follow Up: 12 months
PURPOSE:

Our study hypothesis was that once daily dosing of extended-release tacrolimus (XRT) would be a safe and effective immunosuppression (IS) with the potential to decrease adverse events (AEs) associated with immediate release tacrolimus (IRT) after liver transplantation (LT).

METHODS:

All patients receiving LT at our center received rabbit anti-thymocyte globulin (RATG) induction therapy. Eligible patients were randomized in a 1:1 fashion to receive either XRT or IRT. Antimicrobial prophylaxis was the same between arms, and both groups received an antimetabolite for the first 6 months following LT. Patients were then followed at pre-determined study intervals for the following year after LT. We administered the RAND-36SF survey to assess patients' health-related quality of life at pre-determined intervals. All analyses were performed on an intention-to-treat basis.

RESULTS:

We screened 194 consecutive patients and enrolled 110. Our control and study arms were well matched. Transplant characteristics were similar between groups. At all timepoints, both arms had similar serum creatinine and estimated glomerular filtration rate (eGFR), calculated by the MDRD6 equation, with post-transplant GFRs between 60 and 70 mL/min/1.73 m2. Tacrolimus trough levels were similar between arms. The XRT arm had fewer AEs (166) and fewer serious AEs (70) compared to IRT (201 and 99, respectively). AEs most commonly were renal, infectious, or gastrointestinal in nature. While not statistically significant, XRT was held temporarily (25 vs. 35 cases) or discontinued (3 vs. 11 cases) less frequently than IRT and had fewer instances of rejection (7 vs. 12 cases).

CONCLUSION:

This analysis showed that XRT is safe and effective as de novo maintenance IS in a steroid-free protocol with RATG.

  • Dellgren G
  • Lund TK
  • Raivio P
  • Leuckfeld I
  • Svahn J
  • et al.
Lancet Respir Med. 2024 Jan;12(1):34-44 doi: 10.1016/S2213-2600(23)00293-X.
CET Conclusion
Reviewer: Mr Keno Mentor, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences University of Oxford
Conclusion: Tacrolimus has largely replaced ciclosporin as the calcineurin inhibitor of choice for immunosuppression therapy in lung transplant patients in most countries. This is based on evidence, which the Cochrane group graded as low quality, suggesting reduced rates of chronic rejection associated with tacrolimus-based protocols. This randomised controlled trial was based in Scandinavia, where ciclosporin use is still preferred. 249 patients were randomised to either tacrolimus- or ciclosporin-based immunosuppression regimens. The primary outcome was the rate of chronic lung allograft dysfunction (CLAD), an umbrella term encompassing a range of syndromes that cause lung graft failure, predominantly as a result of chronic rejection. The authors argue that this is a more clinically relevant measure than the pathology-based assessments used in other studies. Importantly, the clinical assessment of CLAD was based on the consensus of a committee blinded to the treatment groups. The rate of CLAD in the tacrolimus group was significantly lower than in the ciclosporin group, with lower rates of acute rejection and better allograft survival at 3 years in the per-protocol analysis. As is true in other organ groups, this well-designed study supports the existing evidence that tacrolimus should be the calcineurin inhibitor of choice in immunosuppression therapy after lung transplantation.
Aims: This study aimed to examine the effect of using tacrolimus once per day versus ciclosporin twice per day on the cumulative incidence of chronic lung allograft dysfunction (CLAD) at 36 months following transplantation in de novo lung transplant recipients.
Interventions: Participants were randomised to either the tacrolimus group or the ciclosporin group.
Participants: 264 adult patients (18–70 years) planning to undergo de novo double lung transplantation.
Outcomes: The primary endpoint was the cumulative incidence of CLAD at 36 months post transplantation. The secondary endpoints were the composite measure of freedom from treated acute rejection and CLAD, and patient and graft survival following transplantation.
Follow Up: 36 months
BACKGROUND:

Evidence is low regarding the choice of calcineurin inhibitor for immunosuppression after lung transplantation. We aimed to compare the use of tacrolimus once per day with ciclosporin twice per day according to the current definition of chronic lung allograft dysfunction (CLAD) after lung transplantation.

METHODS:

ScanCLAD is an investigator-initiated, open-label, multicentre, randomised, controlled trial in Scandinavia evaluating whether an immunosuppressive protocol based on anti-thymocyte globulin induction followed by tacrolimus (once per day), mycophenolate mofetil, and corticosteroids reduces the incidence of CLAD after de novo lung transplantation compared with a protocol using ciclosporin (twice per day), mycophenolate mofetil, and corticosteroids. Patients aged 18-70 years who were scheduled to undergo double lung transplantation were randomly allocated (1:1) to receive either oral ciclosporin (2-3 mg/kg before transplantation and 3 mg/kg [twice per day] from postoperative day 1) or oral tacrolimus (0·05-0·1 mg/kg before transplantation and 0·1-0·2 mg/kg from postoperative day 1). The primary endpoint was CLAD at 36 months post transplantation, determined by repeated lung function tests and adjudicated by an independent committee, and was assessed with a competing-risks analysis with death and re-transplantation as competing events. The primary outcome was assessed in the modified intention-to-treat (mITT) population, defined as those who underwent transplantation and received at least one dose of study drug. This study is registered at ClinicalTrials.gov (NCT02936505) and EudraCT (2015-004137-27).
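
The competing-risks analysis referred to here estimates the cumulative incidence of CLAD while treating death and re-transplantation as competing events. The sketch below implements the standard Aalen-Johansen cumulative incidence estimator on a tiny hypothetical dataset; it is illustrative only and not the trial's analysis code.

```python
# Aalen-Johansen cumulative incidence with competing risks (hypothetical data).
# Event codes: 0 = censored, 1 = CLAD (event of interest), 2 = competing event
# (death or re-transplantation).
import numpy as np

time = np.array([6, 12, 12, 18, 24, 30, 36, 36], dtype=float)  # months, hypothetical
event = np.array([1, 0, 2, 1, 0, 1, 2, 0])

order = np.argsort(time)
time, event = time[order], event[order]

surv = 1.0   # overall event-free (Kaplan-Meier) survival just before each time
cif = 0.0    # cumulative incidence of CLAD
for t in np.unique(time):
    at_t = time == t
    d_clad = np.sum(at_t & (event == 1))   # CLAD events at t
    d_any = np.sum(at_t & (event != 0))    # any event (CLAD or competing) at t
    n_risk = np.sum(time >= t)             # number still at risk just before t
    cif += surv * d_clad / n_risk
    surv *= 1 - d_any / n_risk

print(f"Estimated cumulative incidence of CLAD by the last event time: {cif:.2f}")
```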

FINDINGS:

Between Oct 21, 2016, and July 10, 2019, 383 patients were screened for eligibility. 249 patients underwent double lung transplantation and received at least one dose of study drug, and were thus included in the mITT population: 125 (50%) in the ciclosporin group and 124 (50%) in the tacrolimus group. The mITT population consisted of 138 (55%) men and 111 (45%) women, with a mean age of 55·2 years (SD 10·2), and no patients were lost to follow-up. In the mITT population, CLAD occurred in 48 patients (cumulative incidence 39% [95% CI 31-48]) in the ciclosporin group and 16 patients (13% [8-21]) in the tacrolimus group at 36 months post transplantation (hazard ratio [HR] 0·28 [95% CI 0·15-0·52], log-rank p<0·0001). Overall survival did not differ between groups at 3 years in the mITT population (74% [65-81] for ciclosporin vs 79% [70-85] for tacrolimus; HR 0·72 [95% CI 0·41-1·27], log-rank p=0·25). However, in the per protocol CLAD population (those in the mITT population who also had at least one post-baseline lung function test allowing assessment of CLAD), allograft survival was significantly better in the tacrolimus group (HR 0·49 [95% CI 0·26-0·91], log-rank p=0·021). Adverse events totalled 1516 in the ciclosporin group and 1459 in the tacrolimus group. The most frequent adverse events were infection (453 events), acute rejection (165 events), and anaemia (129 events) in the ciclosporin group, and infection (568 events), anaemia (108 events), and acute rejection (98 events) in the tacrolimus group. 112 (90%) patients in the ciclosporin group and 108 (87%) in the tacrolimus group had at least one serious adverse event.

INTERPRETATION:

Immunosuppression based on use of tacrolimus once per day significantly reduced the incidence of CLAD compared with use of ciclosporin twice per day. These findings support the use of tacrolimus as the first choice of calcineurin inhibitor after lung transplantation.

FUNDING:

Astellas, the ALF-agreement, Scandiatransplant Organization, and Heart Centre Research Committee, Rigshospitalet, Denmark.

  • Toniato de Rezende Freschi J
  • Cristelli MP
  • Viana LA
  • Ficher KN
  • Nakamura MR
  • et al.
Transplantation. 2024 Jan 1;108(1):261-275 doi: 10.1097/TP.0000000000004749.
CET Conclusion
Reviewer: Mr Keno Mentor, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences University of Oxford
Conclusion: Mammalian target of rapamycin inhibitors (mTORi) have been increasingly investigated as alternative immunosuppressive agents for kidney transplant patients, to reduce the adverse effects of the current standard combination of mycophenolate and a calcineurin inhibitor. Sirolimus and everolimus have distinct pharmacokinetic and pharmacodynamic properties, and this trial sought to compare outcomes in kidney transplant patients utilising immunosuppression regimens based on each of these mTORi agents. A standard-regimen group (mycophenolate/tacrolimus) was also included as a control. The primary outcome was the rate of CMV infection in the first year after transplantation. There were no significant differences between the two mTORi agents in the rate of CMV infection or in the secondary outcomes, including BK virus infection, acute rejection and delayed graft function. However, the study was limited by a small sample size and short follow-up period.
Aims: This study aimed to compare the outcomes of de novo sirolimus (SRL) versus everolimus (EVR) plus reduced-dose tacrolimus among renal transplant patients.
Interventions: Participants were randomised to one of three groups including the SRL group, the EVR group and the mycophenolate sodium (MPS) group.
Participants: 268 first kidney transplant recipients.
Outcomes: The primary outcome was the incidence of the first CMV infection/disease at 12 months posttransplantation. The secondary outcomes included BK polyomavirus (BKPyV) viremia, treatment failure, de novo donor-specific antibodies, 12-month protocol biopsies, kidney function and drug discontinuation.
Follow Up: 12 months
BACKGROUND:

Mammalian target of rapamycin inhibitors (mTORi), sirolimus (SRL) and everolimus (EVR), have distinct pharmacokinetic/pharmacodynamic properties. There are no studies comparing the efficacy and safety of de novo use of SRL versus EVR in combination with reduced-dose calcineurin inhibitor.

METHODS:

This single-center prospective, randomized study included first kidney transplant recipients receiving a single 3 mg/kg antithymocyte globulin dose, tacrolimus, and prednisone, without cytomegalovirus (CMV) pharmacological prophylaxis. Patients were randomized into 3 groups: SRL, EVR, or mycophenolate sodium (MPS). Doses of SRL and EVR were adjusted to maintain whole blood concentrations between 4 and 8 ng/mL. The primary endpoint was the 12-mo incidence of the first CMV infection/disease.

RESULTS:

There were 266 patients (SRL, n = 86; EVR, n = 90; MPS, n = 90). The incidence of the first CMV event was lower in the mTORi versus MPS groups (10.5% versus 7.8% versus 43.3%, P < 0.0001). There were no differences in the incidence of BK polyomavirus viremia (8.2% versus 10.1% versus 15.1%, P = 0.360). There were no differences in survival free from treatment failure (87.8% versus 88.8% versus 93.3%, P = 0.421) and incidence of donor-specific antibodies. At 12 mo, there were no differences in kidney function (75 ± 23 versus 78 ± 24 versus 77 ± 24 mL/min/1.73 m2, P = 0.736), proteinuria, and histology in protocol biopsies. Treatment discontinuation was higher among patients receiving SRL or EVR (18.6% versus 15.6% versus 6.7%, P = 0.054).
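
As a quick illustration of how the three-group comparison of first CMV events could be tested, the snippet below runs a chi-square test on a 3x2 contingency table; the counts are reconstructed approximately from the percentages and group sizes in the abstract and should be treated as illustrative rather than the authors' exact data.

```python
# Approximate 3-group comparison of CMV incidence with a chi-square test.
from scipy.stats import chi2_contingency

# Rows: SRL (n=86), EVR (n=90), MPS (n=90); columns: CMV event yes / no.
# Counts back-calculated roughly from 10.5%, 7.8% and 43.3%.
table = [[9, 77],
         [7, 83],
         [39, 51]]

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")
```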

CONCLUSIONS:

De novo use of SRL or EVR, targeting similar therapeutic blood concentrations, shows comparable efficacy and safety. The reduced incidence of CMV infection/disease and distinct safety profile of mTORi versus mycophenolate were confirmed in this study.

  • Ciancio G
  • Gaynor JJ
  • Guerra G
  • Tabbara MM
  • Roth D
  • et al.
Clin Transl Sci. 2023 Nov;16(11):2382-2393 doi: 10.1111/cts.13639.
CET Conclusion
Reviewer: Mr Simon Knight, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences University of Oxford
Conclusion: This post hoc analysis of a trial of low-dose CNI followed patients for up to 20 years. The authors perform a multivariate analysis to look for factors predicting graft loss, particularly focussing on average trough CNI levels over time. Rejection and older donor age were associated with graft failure, and use of cyclosporine vs tacrolimus was associated with lower eGFR. Trough CNI levels showed no association with graft survival or function. All patients were on a low-dose CNI regimen, and the authors conclude that with such regimens trough levels (within the expected range) do not correlate with outcome. Although long-term studies such as this should be applauded, there are some limitations. Tacrolimus levels were only documented every 6-12 months – given the visit-to-visit variability, it is not clear whether averaging random levels from such disparate timepoints is a valid measure of average exposure over time. No CONSORT diagram is included, so it is difficult to see crossovers and dropouts, but it appears that only 42/150 patients achieved 15-year follow-up, and 29/150 were on their originally assigned CNI at this time.
Aims: This study aimed to report the long-term effects of reduced calcineurin inhibitor (CNI) dosing on renal function in primary kidney transplant recipients with up to 20 years of post-transplant follow-up.
Interventions: Participants were randomised to one of three groups: tacrolimus (TAC)/sirolimus (SRL), TAC/mycophenolate mofetil (MMF), and cyclosporine microemulsion (CSA)/SRL.
Participants: 150 primary kidney transplant recipients (14 to 78 years of age).
Outcomes: The main outcomes of interest were graft survival, mean eGFR ± SE (mL/min per 1.73 m2), percentages of patients alive with a functioning graft at 60, 120, and 180 months post-transplant, distributions of the average CNI trough levels (ng/mL) over 0–60, 0–120, and 0–180 months post-transplant, and graft failure due to chronic CNI toxicity.
Follow Up: 180 months post-transplant.

More favorable clinical outcomes with medium-term follow-up have been reported among kidney transplant recipients receiving maintenance therapy consisting of "reduced-tacrolimus (TAC) dosing," mycophenolate mofetil (MMF), and low-dose corticosteroids. However, it is not clear whether long-term maintenance therapy with reduced-calcineurin inhibitor (CNI) dosing still leads to reduced renal function. A prospectively followed cohort of 150 kidney transplant recipients randomized to receive TAC/sirolimus (SRL) versus TAC/MMF versus cyclosporine microemulsion (CSA)/SRL, plus low-dose maintenance corticosteroids, now has 20 years of post-transplant follow-up. Average CNI trough levels over time among patients who were still alive with functioning grafts at 60, 120, and 180 months post-transplant were determined and ranked from smallest to largest for both TAC and CSA. Stepwise linear regression was used to determine whether these ranked average trough levels were associated with the patient's estimated glomerular filtration rate (eGFR) at those times, particularly after controlling for other significant multivariable predictors. Experiencing biopsy-proven acute rejection (BPAR) and older donor age were the two most significant multivariable predictors of poorer eGFR at 60, 120, and 180 months post-transplant (p < 0.000001 and p = 0.000003 for older donor age at 60 and 120 months; p = 0.00008 and <0.000001 for previous BPAR at 60 and 120 months). Assignment to CSA also implied a significantly poorer eGFR (but with smaller magnitudes of effect) in multivariable analysis at 60 and 120 months (p = 0.01 and 0.002). Higher ranked average CNI trough levels had no association with eGFR at any timepoint in either univariable or multivariable analysis (p > 0.70). Long-term maintenance therapy with reduced-CNI dosing does not appear to cause reduced renal function.
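
The regression described above can be pictured with a minimal ordinary least squares fit of eGFR on donor age, prior BPAR and the ranked average trough level. The sketch below uses simulated data in which the trough rank has no true effect, mirroring the authors' finding; it is not their stepwise procedure or data.

```python
# OLS sketch: eGFR ~ donor age + prior BPAR + ranked average CNI trough
# (hypothetical, simulated data with no true trough effect).
import numpy as np

rng = np.random.default_rng(1)
n = 120
donor_age = rng.normal(45, 12, n)
bpar = rng.integers(0, 2, n)                       # 1 = prior biopsy-proven rejection
trough_rank = rng.permutation(n).astype(float)     # ranked average CNI trough level
egfr = 90 - 0.5 * donor_age - 8 * bpar + rng.normal(0, 10, n)  # no trough effect built in

X = np.column_stack([np.ones(n), donor_age, bpar, trough_rank])
beta, *_ = np.linalg.lstsq(X, egfr, rcond=None)
print(dict(zip(["intercept", "donor_age", "bpar", "trough_rank"], beta.round(3))))
```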

  • Rasaei N
  • Malekmakan L
  • Gholamabbas G
  • Abdizadeh P
Exp Clin Transplant. 2023 Oct;21(10):814-819 doi: 10.6002/ect.2023.0071.
CET Conclusion
Reviewer: Mr Simon Knight, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences University of Oxford
Conclusion: This small single-centre RCT randomised 16 kidney transplant recipients with BK viral infection to receive either IVIG alone, or IVIG in combination with leflunomide. BK viral loads decreased in both groups, with significantly lower viral levels in the leflunomide group. No difference in graft function was seen between groups. The main limitation of this study is the small sample size, with just 8 patients in each group. No a priori sample size calculation is presented. No adverse events/safety data are reported – only BK virus levels and serum creatinine data. Whilst this paper provides some rationale for a larger study, it is far from conclusive.
Aims: The aim of this study was to examine the therapeutic effect of the combination of intravenous immunoglobulin (IVIG) and leflunomide in comparison to IVIG alone for treating BK virus infection in kidney transplant recipients.
Interventions: Participants were randomised to receive either IVIG + leflunomide or IVIG alone.
Participants: 16 kidney transplant recipients with BK virus infection.
Outcomes: The main outcomes of interest were serum levels of the BK virus and creatinine levels.
Follow Up: 3 months
OBJECTIVES:

Nephropathy due to BK virus infection is a major cause of graft dysfunction and loss. No specific treatment has been developed for the BK virus. Here, we compared the combination of intravenous immunoglobulin and leflunomide versus intravenous immunoglobulin to treat BK virus nephropathy after renal transplant.

MATERIALS AND METHODS:

This study was a randomized controlled clinical trial. Sixteen kidney transplant patients with BK virus infection were randomly divided into 2 groups: 1 group received intravenous immunoglobulin, and the other group received leflunomide and intravenous immunoglobulin. P < .05 was considered statistically significant.

RESULTS:

Results of a polymerase chain reaction test for BK virus after 2 months of treatment were negative in 3 patients in the intravenous immunoglobulin group and in 7 patients in the intravenous immunoglobulin + leflunomide group. The amount of BK virus decreased significantly in each group, and a significant difference was observed between the 2 groups after 3 months (P = .014). The average level of creatinine in the intravenous immunoglobulin group at 1, 2, and 3 months after treatment was 1.7 ± 0.23, 1.8 ± 0.5, and 1.5 ± 0.3, respectively, and in the intravenous immunoglobulin + leflunomide group was 2.1 ± 0.75, 1.76 ± 0.37, and 1.4 ± 0.18, respectively (P > .05).

CONCLUSIONS:

Although BK viral load decreased significantly in both groups, there was a significant difference between patients who received intravenous immunoglobulin versus those who received the combination of intravenous immunoglobulin + leflunomide after 3 months. The addition of leflunomide to the intravenous immunoglobulin treatment seems to have a better effect in reducing BK viral load. However, further studies with a larger sample and longer duration are needed.

  • Lloberas N
  • Grinyó JM
  • Colom H
  • Vidal-Alabró A
  • Fontova P
  • et al.
Kidney Int. 2023 Oct;104(4):840-850 doi: 10.1016/j.kint.2023.06.021.
CET Conclusion
Reviewer: Mr Simon Knight, Centre for Evidence in Transplantation, Nuffield Department of Surgical Sciences University of Oxford
Conclusion: This single-centre randomised study compared initial tacrolimus dosing by body weight (control), or by Bayesian prediction (study), following renal transplantation. Patients in the study group had their tacrolimus dosing guided by a Bayesian model incorporating age, haematocrit and CYP3A genotype. The authors demonstrate that a significantly higher proportion of patients in the study arm achieved the therapeutic target, with lower interpatient variability, shorter time to target trough concentrations and fewer dose modifications. Whilst no differences in clinical outcomes were seen, there was a trend towards lower incidence and shorter duration of DGF in the study group. These results are very promising and appear to demonstrate the benefit of personalised dosing using the Bayesian model. The population in this study is from a single centre and is predominantly male and Caucasian. Future studies should confirm these findings in populations with a greater mix of ethnicities, and confirm the potential clinical benefit in a larger sample.
Aims: The aim of this study was to evaluate the clinical applicability of a population pharmacokinetic (PPK) model for achieving the therapeutic trough Tac concentration (Tac Co), compared with dosing according to the manufacturer’s labelling.
Interventions: Participants were randomised to either the PPK group or the control group with patients receiving Tac adjustment according to the manufacturer’s labeling.
Participants: 96 adult renal transplant recipients
Outcomes: The primary outcome was the percentage of patients reaching the Tac Co target (6-10 ng/ml) after the first steady state. The secondary outcomes were the time needed to reach the therapeutic target, the number of dose modifications needed to reach the target, and the clinical outcome.
Follow Up: 90 days posttransplantation

For three decades, tacrolimus (Tac) dose adjustment in clinical practice has been calculated empirically according to the manufacturer's labeling based on a patient's body weight. Here, we developed and validated a Population pharmacokinetic (PPK) model including pharmacogenetics (cluster CYP3A4/CYP3A5), age, and hematocrit. Our study aimed to assess the clinical applicability of this PPK model in the achievement of Tac Co (therapeutic trough Tac concentration) compared to the manufacturer's labelling dosage. A prospective two-arm, randomized, clinical trial was conducted to determine Tac starting and subsequent dose adjustments in 90 kidney transplant recipients. Patients were randomized to a control group with Tac adjustment according to the manufacturer's labeling or the PPK group adjusted to reach target Co (6-10 ng/ml) after the first steady state (primary endpoint) using a Bayesian prediction model (NONMEM). A significantly higher percentage of patients from the PPK group (54.8%) compared with the control group (20.8%) achieved the therapeutic target fulfilling 30% of the established superiority margin defined. Patients receiving PPK showed significantly less intra-patient variability compared to the control group, reached the Tac Co target sooner (5 days vs 10 days), and required significantly fewer Tac dose modifications compared to the control group within 90 days following kidney transplant. No statistically significant differences occurred in clinical outcomes. Thus, PPK-based Tac dosing offers significant superiority for starting Tac prescription over classical labeling-based dosing according to the body weight, which may optimize Tac-based therapy in the first days following transplantation.
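
To give a flavour of the Bayesian (MAP) individualisation that underlies this kind of PPK-guided dosing, the toy sketch below estimates an individual clearance from a single observed concentration under a lognormal prior and then back-calculates a dose for a target exposure. It is a deliberate oversimplification: the authors used a full NONMEM model with CYP3A4/CYP3A5 genotype, age and haematocrit, and every number below is hypothetical.

```python
# Toy MAP (maximum a posteriori) individualisation of tacrolimus clearance.
# All parameter values are hypothetical; the exposure model is a crude
# steady-state average-concentration approximation, not the authors' PPK model.
import numpy as np
from scipy.optimize import minimize_scalar

dose, tau = 5.0, 12.0        # mg per dose, dosing interval in hours (hypothetical)
c_obs = 12.0                 # observed concentration, ng/mL (hypothetical)
cl_pop, omega = 20.0, 0.4    # population clearance (L/h) and log-SD of BSV (hypothetical)
sigma = 1.0                  # residual SD, ng/mL (hypothetical)

def neg_log_posterior(log_cl):
    cl = np.exp(log_cl)
    c_pred = 1000 * dose / (cl * tau)   # mg -> ng/mL via crude steady-state average
    loglik = -0.5 * ((c_obs - c_pred) / sigma) ** 2
    logprior = -0.5 * ((log_cl - np.log(cl_pop)) / omega) ** 2
    return -(loglik + logprior)

res = minimize_scalar(neg_log_posterior, bounds=(np.log(1), np.log(100)), method="bounded")
cl_map = np.exp(res.x)
target = 8.0                                # mid-range target concentration, ng/mL
dose_new = target * cl_map * tau / 1000     # dose (mg) to hit the target under this toy model
print(f"MAP clearance {cl_map:.1f} L/h; suggested dose {dose_new:.1f} mg")
```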