Nephrology Dialysis Transplantation Current Issue
Published: Sat, 01 Apr 2017 00:00:00 GMT
Transplant as a competing risk in the analysis of dialysis patients
Abstract: Time-to-event analyses are frequently used in nephrology research, for instance, when recording time to death or time to peritonitis in dialysis patients. Many papers have pointed out the important issue of competing events (or competing risks) in such analyses. For example, when studying one particular cause of death it can be noted that patients also die from other causes. Such competing events preclude the event of interest from occurring and thereby complicate the statistical analysis. The Kaplan–Meier approach to calculating the cumulative probability of the event of interest yields invalid results in the presence of competing risks, so the alternative cumulative incidence competing risk (CICR) approach has become the standard. However, when kidney transplant is the competing event that prevents observing the outcome of interest, the CICR estimate may not always answer the question of interest. We discuss situations where neither the Kaplan–Meier nor the CICR approach is suitable for the purpose and point out alternative analysis methods for such situations. We also look at the suitability and interpretation of different estimators for relative risks. In the presence of transplant as a competing risk, one should state the research question very clearly and use an analysis method that targets this question.
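The distinction between the two estimators can be illustrated with a small numerical sketch (hypothetical data; the `status` coding and the pure-Python implementation are ours, not the authors'):

```python
# Cumulative probability of death on dialysis, with transplant as a
# competing event. The naive 1 - Kaplan-Meier estimate censors
# transplants; the Aalen-Johansen estimate is the CICR approach.

# (time, status): status 1 = death, 2 = transplant, 0 = censored
data = [(2, 1), (3, 2), (4, 1), (5, 0), (6, 2), (7, 1), (8, 0)]

def one_minus_km(data, event=1):
    """Naive estimate: competing events treated as censoring."""
    at_risk, surv = len(data), 1.0
    for t, s in sorted(data):
        if s == event:
            surv *= 1 - 1 / at_risk
        at_risk -= 1
    return 1 - surv

def aalen_johansen(data, event=1):
    """Cumulative incidence accounting for competing risks (CICR)."""
    at_risk, overall_surv, cif = len(data), 1.0, 0.0
    for t, s in sorted(data):
        if s == event:
            cif += overall_surv * (1 / at_risk)
        if s != 0:               # any event removes the subject from the risk set
            overall_surv *= 1 - 1 / at_risk
        at_risk -= 1
    return cif

print(round(one_minus_km(data), 3))    # naive estimate
print(round(aalen_johansen(data), 3))  # competing-risk estimate
```

With these data the naive estimate exceeds the competing-risk estimate, illustrating how censoring transplants inflates the apparent probability of death.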
Why overestimate or underestimate chronic kidney disease when correct estimation is possible?
AbstractThere is no doubt that the introduction of the Kidney Disease: Improving Global Outcomes (KDIGO) guidelines 14 years ago, and their subsequent updates, have substantially contributed to the early detection of different stages of chronic kidney disease (CKD). Several recent studies from different parts of the world mention a CKD prevalence of 8–13%. However, some editorials and reviews have begun to describe the weaknesses of a substantial number of studies. Maremar (maladies rénales chroniques au Maroc) is a recently published prevalence study of CKD, hypertension, diabetes and obesity in a randomized, representative and high response rate (85%) sample of the adult population of Morocco that strictly applied the KDIGO guidelines. When adjusted to the actual adult population of Morocco (2015), a rather low prevalence of CKD (2.9%) was found. Several reasons for this low prevalence were identified; the tagine-like population pyramid of the Maremar population was a factor, but even more important were the confirmation of proteinuria found at first screening and the proof of chronicity of decreased estimated glomerular filtration rate (eGFR), eliminating false positive results. In addition, it was found that when an arbitrary single threshold of eGFR (<60 mL/min/1.73 m2) was used to classify CKD stages 3, 4 and 5, it lead to substantial ‘overdiagnosis’ (false positives) in the elderly (>55 years of age), particularly in those without proteinuria, haematuria or hypertension. It also resulted in a significant ‘underdiagnosis’ (false negatives) in younger individuals with an eGFR >60 mL/min/1.73 m2 and below the third percentile of their age-/gender-category. The use of the third percentile eGFR level as a cut-off, based on age-gender-specific reference values of eGFR, allows the detection of these false positives and negatives. 
There is an urgent need for additional quality studies of the prevalence of CKD using the recent KDIGO guidelines in the correct way, to avoid overestimation of the true disease state of CKD by ≥50% with potentially dramatic consequences.
When is a meta-analysis conclusive? A guide to Trial Sequential Analysis with an example of remote ischemic preconditioning for renoprotection in patients undergoing cardiac surgery
Regardless of whether a randomized trial finds a statistically significant effect for an intervention or not, readers often wonder if the trial was large enough to be conclusive. To answer this question, we can estimate the required sample size for a trial by considering how commonly the outcome occurs, the smallest effect of clinical importance and the acceptable risk of falsely detecting or rejecting that effect. But when is a meta-analysis conclusive? We explain and illustrate the interpretation of Trial Sequential Analysis (TSA), a method increasingly used to answer this question. We conducted a conventional meta-analysis which suggested that, in adults undergoing cardiac surgery, remote ischemic preconditioning does not provide a statistically significant reduction in acute kidney injury (AKI) [12 trials, 4230 patients; relative risk 0.87 (95% confidence interval 0.74–1.02); P = 0.08; I2 = 35%] or the risk of receiving acute dialysis [5 trials, 2111 patients; relative risk 1.15 (95% confidence interval 0.42–3.19); P = 0.78; I2 = 59%]. TSA demonstrates that even a relative risk reduction in AKI as small as 20% is unlikely. Reliably finding effects on acute dialysis and smaller effects on AKI would require much more evidence. Notably, conventional meta-analyses conducted at one of the two earlier time points may have prematurely declared a statistically significant reduction in AKI, even though at no point in the TSA was there sufficient evidence to support such an effect. With this and other examples, we demonstrate that TSA can prevent premature conclusions from meta-analyses.
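The sample-size reasoning in the opening sentences, which TSA extends from a single trial to the accumulating meta-analysis, can be sketched for two proportions (hypothetical inputs, not values from the review):

```python
# Required sample size per group for comparing two proportions:
# control-arm risk 25%, a 20% relative risk reduction as the smallest
# effect of clinical importance, two-sided alpha = 0.05, 80% power.
import math
from statistics import NormalDist

def per_group_n(p_control, rrr, alpha=0.05, power=0.80):
    p_treated = p_control * (1 - rrr)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # risk of falsely detecting an effect
    z_b = NormalDist().inv_cdf(power)          # risk of falsely rejecting it
    var = p_control * (1 - p_control) + p_treated * (1 - p_treated)
    return math.ceil((z_a + z_b) ** 2 * var / (p_control - p_treated) ** 2)

n = per_group_n(0.25, 0.20)
print(n, "per group,", 2 * n, "in total")
```

TSA applies the analogous "required information size" to the pooled patient total, so a meta-analysis can be judged conclusive or not at each interim look.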
Modeling longitudinal data and its impact on survival in observational nephrology studies: tools and considerations
Abstract: Nephrologists and kidney disease researchers are often interested in monitoring how patients’ clinical and laboratory measures change over time, what factors may impact these changes, and how these changes may lead to differences in morbidity, mortality and other outcomes. When longitudinal data with repeated measures over time in the same patients are available, a number of analytical approaches can be employed to describe the trends and changes in these measures, and to explore the associations of these changes with outcomes. Researchers may choose a streamlined and simplified analytic approach to examine trajectories with subsequent outcomes, such as estimating deltas (subtraction of the first observation from the last observation) or estimating per-patient slopes with linear regression. Conversely, they could more fully address the data complexity by using a longitudinal mixed model to estimate change as a predictor, or by employing a joint model, which can simultaneously model the longitudinal effect and its impact on an outcome such as survival. In this review, we aim to assist nephrologists and clinical researchers by reviewing these approaches to modeling the association of longitudinal change in a marker with outcomes, while appropriately considering the data complexity. Namely, we discuss the use of simplified approaches for creating predictor variables representing change in measurements, including deltas and patient slopes, as well as more sophisticated longitudinal models, including joint models, which can be used in addition to simplified models depending on the indications and objectives of the study.
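The two simplified summaries mentioned above, deltas and per-patient slopes, can be sketched in a few lines (hypothetical eGFR measurements):

```python
# Two simple per-patient summaries of a longitudinal marker before it is
# related to outcomes: the "delta" (last minus first) and the ordinary
# least-squares slope of the marker on time.

def delta(values):
    """Last observation minus first observation."""
    return values[-1][1] - values[0][1]

def slope(values):
    """Per-patient least-squares slope of measurement on time."""
    n = len(values)
    mean_t = sum(t for t, _ in values) / n
    mean_y = sum(y for _, y in values) / n
    num = sum((t - mean_t) * (y - mean_y) for t, y in values)
    den = sum((t - mean_t) ** 2 for t, _ in values)
    return num / den

# (years since baseline, eGFR): a noisy decline of about 4 mL/min/1.73 m2/year
egfr = [(0.0, 52), (0.5, 51), (1.0, 47), (1.5, 46), (2.0, 44)]
print(delta(egfr))              # total change over follow-up
print(round(slope(egfr), 2))    # annual rate of change
```

Both summaries discard within-patient measurement error and informative dropout, which is exactly what the mixed and joint models discussed in the review are designed to handle.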
Non-proteinuric rather than proteinuric renal diseases are the leading cause of end-stage kidney disease
Abstract: Proteinuria is a distinguishing feature of primary and secondary forms of chronic glomerulonephritis, which contribute no more than 20% of the end-stage kidney disease (ESKD) population. The contribution of non-proteinuric nephropathies to the global ESKD burden remains poorly characterized, and few research efforts are dedicated to elucidating the risk factors and mechanistic pathways triggering ESKD in these diseases. We abstracted information on proteinuria in the main renal diseases other than glomerulonephritides that may evolve into ESKD. In type 2 diabetes, non-proteinuric diabetic kidney disease (DKD) is more frequent than proteinuric DKD, and risk factors for non-proteinuric forms of DKD now receive increasing attention. Similarly, proteinuria is most often inconspicuous or absent in the most frequent cause of ESKD, i.e. hypertension-related chronic kidney disease (CKD), as well as in progressive cystic diseases like autosomal dominant polycystic kidney disease and in pyelonephritis/tubulo-interstitial diseases. Maintaining a high degree of attention in the care of CKD patients with proteinuria is fundamental to effectively retarding progression toward kidney failure. However, substantial research efforts are still needed to develop treatment strategies that may help the vast majority of CKD patients who eventually develop ESKD via mechanistic pathways other than proteinuria.
Prediction versus aetiology: common pitfalls and how to avoid them
Abstract: Prediction research is a distinct field of epidemiologic research, which should be clearly separated from aetiological research. Both prediction and aetiology make use of multivariable modelling, but the underlying research aim and interpretation of results are very different. Aetiology aims at uncovering the causal effect of a specific risk factor on an outcome, adjusting for confounding factors that are selected based on pre-existing knowledge of causal relations. In contrast, prediction aims at accurately predicting the risk of an outcome using multiple predictors collectively, where the final prediction model is usually based on statistically significant, but not necessarily causal, associations in the data at hand. In both scientific and clinical practice, however, the two are often confused, resulting in poor-quality publications with limited interpretability and applicability. A major problem is the frequently encountered aetiological interpretation of prediction results, where individual variables in a prediction model are attributed causal meaning. This article stresses the differences in use and interpretation of aetiological and prediction studies, and gives examples of common pitfalls.
Relative risk versus absolute risk: one cannot be interpreted without the other
Abstract: For the presentation of risk, both relative and absolute measures can be used. The relative risk is most often used, especially in studies showing the effects of a treatment. Relative risks have the appealing feature of summarizing two numbers (the risk in one group and the risk in the other) into one. However, this feature also represents their major weakness: the underlying absolute risks are concealed, and readers tend to overestimate the effect when it is presented in relative terms. In many situations, the absolute risk gives a better representation of the actual situation, and from the patient’s point of view absolute risks often give more relevant information. In this article, we explain the concepts of both relative and absolute risk measures. Using examples from the nephrology literature, we illustrate that unless ratio measures are reported with the underlying absolute risks, readers cannot judge the clinical relevance of the effect. We therefore recommend reporting both the relative risk and the absolute risk with their 95% confidence intervals, as together they provide a complete picture of the effect and its implications.
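The article's central point can be shown in a few lines (hypothetical risks):

```python
# The same relative risk can imply very different absolute effects.

def summarize(risk_control, risk_treated):
    rr = risk_treated / risk_control    # relative risk
    arr = risk_control - risk_treated   # absolute risk reduction
    nnt = 1 / arr                       # number needed to treat
    return rr, arr, nnt

# RR = 0.5 in both scenarios, but very different clinical relevance:
print(summarize(0.40, 0.20))    # common outcome: ARR 20%, NNT 5
print(summarize(0.004, 0.002))  # rare outcome: ARR 0.2%, NNT 500
```

Reported alone, "the treatment halves the risk" describes both scenarios; only the absolute risks reveal that one effect matters far more in practice.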
The importance of considering competing treatment affecting prognosis in the evaluation of therapy in trials: the example of renal transplantation in hemodialysis trials
Abstract: Background. During the follow-up in a randomized controlled trial (RCT), participants may receive additional (non-randomly allocated) treatment that affects the outcome. Typically, such additional treatment is not taken into account in the evaluation of the results. Two pivotal trials of the effects of hemodiafiltration (HDF) versus hemodialysis (HD) on mortality in patients with end-stage renal disease reported differing results. We set out to evaluate to what extent methods to take other treatments (i.e. renal transplantation) into account may explain the difference in findings between RCTs. This is illustrated using a clinical example of two RCTs estimating the effect of HDF versus HD on mortality. Methods. Using individual patient data from the Estudio de Supervivencia de Hemodiafiltración On-Line (ESHOL; n = 902) and the Dutch CONvective TRAnsport STudy (CONTRAST; n = 714) trials, five methods for estimating the effect of HDF versus HD on all-cause mortality were compared: intention-to-treat (ITT) analysis (i.e. not taking renal transplantation into account), per-protocol exclusion (PPexcl; exclusion of patients who receive transplantation), PPcens (censoring patients at the time of transplantation), transplantation-adjusted (TA) analysis and an extension of the TA analysis (TAext) with additional adjustment for variables related to both the risk of receiving a transplant and the risk of an outcome (transplantation–outcome confounders). Cox proportional hazards models were applied. Results. Unadjusted ITT analysis of all-cause mortality led to differing results between CONTRAST and ESHOL: hazard ratio (HR) 0.95 (95% CI 0.75–1.20) and HR 0.76 (95% CI 0.59–0.97), respectively, i.e. risk reductions of 5% and 24%.
Similar differences between the two trials were observed for the other unadjusted analytical methods (PPcens, PPexcl, TA). The HRs of HDF versus HD treatment became more similar after adding transplantation as a time-varying covariate and including transplantation–outcome confounders: HR 0.89 (95% CI 0.69–1.13) in CONTRAST and HR 0.80 (95% CI 0.62–1.02) in ESHOL. Conclusions. The apparent differences in estimated treatment effects between two dialysis trials were to a large extent attributable to differences in the methodology applied for taking renal transplantation into account in the final analyses. Our results exemplify the necessity of careful consideration of the treatment effect of interest when estimating the therapeutic effect in RCTs in which participants may receive additional treatments.
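A minimal sketch of the data step behind such a transplantation-adjusted analysis (hypothetical patients, not trial data): follow-up is split at the time of transplant, so transplant status can enter a Cox model as a time-varying covariate rather than being ignored (ITT) or censored (PPcens).

```python
# Episode splitting for a time-varying transplant covariate: each
# patient's follow-up becomes one or two (start, stop) intervals in the
# long format expected by time-varying Cox model software.

def split_episodes(pid, follow_up, event, tx_time=None):
    """Return (id, start, stop, transplanted, event) rows for one patient."""
    if tx_time is None or tx_time >= follow_up:
        return [(pid, 0.0, follow_up, 0, event)]
    return [
        (pid, 0.0, tx_time, 0, 0),            # pre-transplant, no event yet
        (pid, tx_time, follow_up, 1, event),  # post-transplant interval
    ]

rows = (split_episodes("A", 5.0, 1)                   # died, never transplanted
        + split_episodes("B", 7.0, 0, tx_time=2.5))   # transplanted at 2.5 years
for r in rows:
    print(r)
```

The resulting long-format rows can then be fed to any start/stop Cox implementation, with further transplantation–outcome confounders added as columns for the TAext-style analysis.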
Routinely measured iohexol glomerular filtration rate versus creatinine-based estimated glomerular filtration rate as predictors of mortality in patients with advanced chronic kidney disease: a Swedish Chronic Kidney Disease Registry cohort study
Abstract: Background. Estimated glomerular filtration rate (eGFR) becomes less reliable in patients with advanced chronic kidney disease (CKD). Methods. Using the Swedish CKD Registry (2005–11), linked to the national inpatient, dialysis and death registers, we compared the performance of plasma-iohexol measured GFR (mGFR) and urinary clearance measures versus eGFR to predict death in adults with CKD stages 4/5. Performance was assessed using survival and prognostic models. Results. Of the 2705 patients, 1517 had mGFR performed, with the remainder providing 24-h urine clearances. Median eGFR (CKD-EPIcreatinine) was 20 mL/min/1.73 m2 [interquartile range (IQR) 14–26], mGFR 18 mL/min/1.73 m2 (IQR 13–23) and creatinine clearance 23 mL/min (IQR 15–31). Median follow-up was 45 months (IQR 26–59), registering 968 deaths (36%). In fully adjusted Cox models, a rise in mGFR of 1 mL/min/1.73 m2 was associated with a 5.3% fall in all-cause mortality, compared with a corresponding 1.7% fall for eGFR [adjusted hazard ratio (aHR) 0.947 (95% CI 0.930–0.964) versus aHR 0.983 (95% CI 0.970–0.996)]. mGFR was also statistically superior in prognostic models (discrimination using logistic regression and integrated discrimination improvement). Urinary clearance measures showed a stronger aetiological relationship with death than eGFR, but were not statistically superior in the prognostic models. Conclusions. The performance of mGFR was superior to that of eGFR, in both aetiological and prognostic models, in predicting mortality in adults with CKD stages 4/5, demonstrating the importance of GFR per se versus non-GFR determinants of outcome. However, the relatively modest enhancement suggests that eGFR may be sufficient for everyday clinical practice, while mGFR adds important prognostic information where eGFR is believed to be biased.
Accounting for overdispersion when determining primary care outliers for the identification of chronic kidney disease: learning from the National Chronic Kidney Disease Audit
Abstract: Background. Early diagnosis of chronic kidney disease (CKD) facilitates best management in primary care. Testing coverage of those at risk and translation into subsequent diagnostic coding will impact on observed CKD prevalence. Using initial data from 915 general practitioner (GP) practices taking part in a UK national audit, we seek to apply appropriate methods to identify outlying practices in terms of CKD stages 3–5 prevalence and diagnostic coding. Methods. We estimate expected numbers of CKD stages 3–5 cases in each practice, adjusted for key practice characteristics, and further inflate the control limits to account for overdispersion related to unobserved factors (including unobserved risk factors for CKD, and between-practice differences in coding and testing). Results. GP practice prevalence of coded CKD stages 3–5 ranges from 0.04% to 7.8%. Practices differ considerably in coding of CKD in individuals where CKD is indicated following testing (ranging from 0% to 97% of those with a glomerular filtration rate <60 mL/min/1.73 m2). After adjusting for risk factors and overdispersion, the number of ‘extreme’ practices is reduced from 29% to 2.6% for the low-coded CKD prevalence outcome, from 21% to 1% for high-uncoded CKD stage and from 22% to 2.4% for low total (coded and uncoded) CKD prevalence. Thirty-one practices are identified as outliers for at least one of these outcomes. These can then be categorized into practices needing to address testing, coding or data storage/transfer issues. Conclusions. GP practice prevalence of coded CKD shows wide variation. Accounting for overdispersion is crucial in providing useful information about outlying practices for CKD prevalence.
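The adjustment described in the Methods can be sketched as follows (hypothetical counts; a simplified version, since funnel-plot methods of this kind typically also winsorize the z-scores before estimating the overdispersion factor):

```python
# Overdispersion-adjusted outlier detection: each practice's observed
# coded-CKD count is compared with its risk-adjusted expectation, an
# overdispersion factor phi is estimated from the squared z-scores, and
# the control limits are widened by sqrt(phi) before flagging outliers.
import math

observed = [30, 55, 12, 80, 41, 9, 66, 25]                 # coded CKD cases
expected = [40.0, 50.0, 15.0, 60.0, 45.0, 14.0, 52.0, 30.0]  # risk-adjusted

z = [(o - e) / math.sqrt(e) for o, e in zip(observed, expected)]
phi = sum(v * v for v in z) / len(z)          # overdispersion factor

naive = [i for i, v in enumerate(z) if abs(v) > 1.96]
limit = 1.96 * max(1.0, math.sqrt(phi))       # inflated 95% control limit
adjusted = [i for i, v in enumerate(z) if abs(v) > limit]

print(round(phi, 2), naive, adjusted)         # fewer 'extreme' practices
```

With these numbers one practice is flagged under the unadjusted limits but none under the inflated limits, mirroring the sharp drop in 'extreme' practices reported in the Results.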
The public health dimension of chronic kidney disease: what we have learnt over the past decade
Abstract: Much progress has been made in chronic kidney disease (CKD) epidemiology in the last decade to establish CKD as a condition that is common, harmful and treatable. The introduction of the new equations for estimating glomerular filtration rate (GFR) and the publication of international reference standards for creatinine and cystatin measurement paved the way for improved global estimates of CKD prevalence. The addition of albuminuria categories to the staging of CKD paved the way for research linking albuminuria and GFR to a wide range of renal and cardiovascular adverse outcomes. The advent of genome-wide association studies ushered in insights into genetic polymorphisms underpinning some types of CKD. Finally, a number of new randomized clinical trials and meta-analyses have informed evidence-based guidelines for the treatment and prevention of CKD. In this review, we discuss the lessons learnt from epidemiological investigations of the staging, etiology, prevalence and prognosis of CKD between 2007 and 2016.
International differences in chronic kidney disease prevalence: a key public health and epidemiologic research issue
Abstract: In this narrative review, we studied the association of risk factors for chronic kidney disease (CKD) with CKD prevalence at an ecological level and describe potential reasons for international differences in estimated CKD prevalence across European countries. We found substantial variation in risk factors for CKD, such as the prevalence of diabetes mellitus, obesity, raised blood pressure, physical inactivity, current smoking and daily salt intake. In general, the countries with a higher CKD prevalence also had a higher average score on CKD risk factors, and vice versa. There was no association between cardiovascular mortality rates and CKD prevalence. In countries with a high CKD prevalence, the prevention of noncommunicable diseases may be considered important, and, therefore, all five national response systems (e.g. an operational national policy, strategy or action plan to reduce physical inactivity and/or promote physical activity) have been implemented. Furthermore, both the heterogeneity in study methods used to assess CKD prevalence and the international differences in the implementation of lifestyle measures will contribute to the observed variation in CKD prevalence. A robust public health approach to reduce risk factors in order to prevent CKD and reduce the risk of CKD progression is needed and will have co-benefits for other noncommunicable diseases.
Why does quality of life remain an under-investigated issue in chronic kidney disease and why is it rarely set as an outcome measure in trials in this population?
Abstract: The growing importance of quality of life (QoL) measures in health care is reflected by the increased volume and rigor of published research on this topic. The ability to measure and assess patients’ experience of symptoms and functions has transformed the development of disease treatments and interventions. However, QoL remains an under-investigated issue in chronic kidney disease (CKD) and is seldom set as an outcome measure in trials in this population. In this article, we present various challenges in using patient-reported outcome (PRO) end points in CKD trials. We outline the need for additional research to examine more closely patient experiences with specific kidney disease symptoms and conditions, as well as caregiver perspectives of patients’ symptom burden and end-of-life experiences. These efforts will better guide the development or enhancement of PRO instruments that can be used in clinical trials to more effectively assess treatment benefit, and improve therapy and care. Better understanding of health-related QoL issues would enable providers to deliver more patient-centered care and improve the overall well-being of patients. Even small improvements in QoL could have a large impact on the population’s overall health and disease burden.
Risk prediction models for graft failure in kidney transplantation: a systematic review
Abstract: Risk prediction models are useful for identifying kidney recipients at high risk of graft failure, thus optimizing clinical care. Our objective was to systematically review the models that have been recently developed and validated to predict graft failure in kidney transplantation recipients. We used PubMed and Scopus to search for English, German and French language articles published in 2005–15. We selected studies that developed and validated a new risk prediction model for graft failure after kidney transplantation, or validated an existing model with or without updating the model. Data on recipient characteristics and predictors, as well as modelling and validation methods were extracted. In total, 39 articles met the inclusion criteria. Of these, 34 developed and validated a new risk prediction model and 5 validated an existing one with or without updating the model. The most frequently predicted outcome was graft failure, defined as dialysis, re-transplantation or death with functioning graft. Most studies used the Cox model. There was substantial variability in predictors used. In total, 25 studies used predictors measured at transplantation only, and 14 studies used predictors also measured after transplantation. Discrimination performance was reported in 87% of studies, while calibration was reported in 56%. Performance indicators were estimated using both internal and external validation in 13 studies, and using external validation only in 6 studies. Several prediction models for kidney graft failure in adults have been published. Our study highlights the need to better account for competing risks when applicable in such studies, and to adequately account for post-transplant measures of predictors in studies aiming at improving monitoring of kidney transplant recipients.
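For readers less familiar with the discrimination measure most of these studies report, a minimal sketch of Harrell's concordance index (hypothetical data and risk scores; real implementations handle tied times and weighting more carefully):

```python
# Harrell's C: the fraction of comparable patient pairs in which the
# patient with the higher predicted risk fails earlier.

def harrell_c(times, events, risk_scores):
    conc = ties = usable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is usable if i failed before j's event/censoring time
            if events[i] == 1 and times[i] < times[j]:
                usable += 1
                if risk_scores[i] > risk_scores[j]:
                    conc += 1
                elif risk_scores[i] == risk_scores[j]:
                    ties += 1
    return (conc + 0.5 * ties) / usable

times = [2, 4, 5, 7, 9]          # years to graft failure or censoring
events = [1, 1, 0, 1, 0]         # 0 = graft still functioning (censored)
scores = [0.9, 0.6, 0.7, 0.4, 0.2]
print(harrell_c(times, events, scores))
```

A value of 0.5 indicates no discrimination and 1.0 perfect ranking; calibration, the other metric discussed above, must be assessed separately.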
The ascending rank of chronic kidney disease in the global burden of disease study
Abstract: General population-based studies, the chronic kidney disease (CKD) prognosis consortium and renal registries worldwide have contributed to the description of the scale of CKD as a public health problem. Since 1990, CKD has been included in the list of non-communicable diseases investigated by the Global Burden of Disease (GBD) study. The GBD represents a systematic, high-quality, scientific effort to quantify the comparative magnitude of health loss from all major diseases, injuries and risk factors. This article provides an outline of the place of CKD in the ranking of these diseases and the change over time. Whereas age-standardized death and disability-adjusted life years (DALYs) rates due to non-communicable diseases in general have been declining, such favourable trends do not exist for CKD. Altogether, the GBD reports indicate increasing rates of death and DALYs due to CKD, with huge variation across the globe. A substantial component of the observed increase in mortality attributable to CKD relates to that caused by diabetes mellitus and hypertension. For the increase in DALYs, CKD due to diabetes mellitus appears to be the main contributor. It is possible that these trends are in part due to new data becoming available or different coding behaviour over time, including greater specificity of coding. Although some feel there is evidence of overdiagnosis, it seems clear that in many regions CKD and its risk factors are a growing public health problem and in some of them rank very high as a cause of years of life lost and DALYs. Therefore, public health policies to address this problem, as well as secondary prevention in high-risk groups, remain greatly needed.
Validity of estimated prevalence of decreased kidney function and renal replacement therapy from primary care electronic health records compared with national survey and registry data in the United Kingdom
Background: Anonymous primary care records are an important resource for observational studies. However, their external validity for identifying the prevalence of decreased kidney function and renal replacement therapy (RRT) is unknown. We thus compared the prevalence of decreased kidney function and RRT in the Clinical Practice Research Datalink (CPRD) with a nationally representative survey and a national registry.
Methods: Among all people ≥25 years of age registered in the CPRD for ≥1 year on 31 March 2014, we identified patients with an estimated glomerular filtration rate (eGFR) <60 mL/min/1.73 m2, according to their most recent serum creatinine in the past 5 years using the Chronic Kidney Disease Epidemiology Collaboration equation, and patients with recorded diagnoses of RRT. Denominators were the entire population in each age–sex band irrespective of creatinine measurement. The prevalence of eGFR <60 mL/min/1.73 m2 was compared with that in the Health Survey for England (HSE) 2009/2010 and the prevalence of RRT was compared with that in the UK Renal Registry (UKRR) 2014.
Results: We analysed 2 761 755 people in CPRD [mean age 53 (SD 17) years, men 49%], of whom 189 581 (6.86%) had an eGFR <60 mL/min/1.73 m2 and 3293 (0.12%) were on RRT. The prevalence of eGFR <60 mL/min/1.73 m2 in CPRD was similar to that in the HSE, and the prevalence of RRT was close to that in the UKRR across all age groups in men and women, although the small number of younger patients with an eGFR <60 mL/min/1.73 m2 in the HSE might have hampered precise comparison.
Conclusions: UK primary care data have good external validity for the prevalence of decreased kidney function and RRT.
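As a concrete illustration of the eGFR definition used in this study, the 2009 CKD-EPI creatinine equation can be sketched as follows (our implementation; the patient values are hypothetical):

```python
# 2009 CKD-EPI creatinine equation (creatinine in mg/dL). The 2009 form
# includes a race coefficient that later revisions of the equation dropped.

def ckd_epi_2009(scr_mg_dl, age, female, black=False):
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr  # mL/min/1.73 m2

# 60-year-old man with serum creatinine 1.0 mg/dL
print(round(ckd_epi_2009(1.0, 60, female=False), 1))
```

Patients fall below the study's <60 mL/min/1.73 m2 threshold when this value, computed from their most recent creatinine, is under 60.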
Genetic epidemiology in kidney disease
Abstract: Familial aggregation of chronic kidney disease and its component phenotypes—reduced glomerular filtration rate, proteinuria and renal histologic changes—has long been recognized. Rates of severe kidney disease are also known to differ markedly between populations based on ancestry. These epidemiologic observations support the existence of nephropathy susceptibility genes. Several molecular genetic technologies are now available to identify causative loci. The present article summarizes available strategies useful for identifying nephropathy susceptibility genes, including candidate gene association, family-based linkage, genome-wide association and admixture mapping (mapping by admixture linkage disequilibrium) approaches. Examples of loci detected using these techniques are provided. Epigenetic studies and future directions are also discussed. The identification of nephropathy susceptibility genes, coupled with modifiable environmental triggers impacting their function, is likely to improve risk prediction and transform care. Development of novel therapies to prevent progression of kidney disease will follow.
Identifying subgroups of renal function trajectories
Abstract: Background. Renal function in patients with chronic kidney disease (CKD) may follow different trajectory profiles. The aim of this study was to evaluate and illustrate the ability of the latent class linear mixed model (LCMM) to identify clinically relevant subgroups of renal function trajectories within a multicenter hospital-based cohort of CKD patients. Methods. We analysed data from the NephroTest cohort, including 1967 patients with all-stage CKD at baseline who had glomerular filtration rate (GFR) both measured by 51Cr-EDTA renal clearance (mGFR) and estimated by the CKD-EPI equation (eGFR); 1103 patients had at least two measurements. The LCMM was used to identify subgroups of GFR trajectories, and patients’ characteristics at baseline were compared between the subgroups identified. Results. Five classes of mGFR trajectories were identified. Three had a slow linear decline of mGFR over time at different levels. In the other two, patients had a high level of mGFR at baseline with either a strong nonlinear decline over time (n = 11) or a nonlinear improvement (n = 94) of mGFR. Higher levels of proteinuria and blood pressure at baseline were observed in classes with either severely decreased mGFR or a strong mGFR decline over time. Using eGFR provided similar findings. Conclusion. The LCMM allowed us to identify in our cohort five clinically relevant subgroups of renal function trajectories. It could be used in other CKD cohorts to better characterize their different profiles of disease progression, as well as to investigate specific risk factors associated with each profile.
Clinicopathologic correlations of renal pathology in the adult population of Poland
Background: This is the first report on the epidemiology of biopsy-proven kidney diseases in Poland.
Methods: The Polish Registry of Renal Biopsies has collected information on all (n = 9394) native renal biopsies performed in Poland from 2009 to 2014. Patients' clinical data collected at the time of biopsy and the histopathological diagnoses were used for epidemiological and clinicopathologic analysis.
Results: There was a gradual increase in the number of native renal biopsies performed per million people (PMP) per year in Poland in 2009–14, from 36 PMP in 2009 to 44 PMP in 2014. Considerable variability between provinces in the mean number of biopsies performed over this period was found, ranging from 5 to 77 PMP/year. The most common renal biopsy diagnoses in adults were immunoglobulin A nephropathy (IgAN) (20%), focal segmental glomerulosclerosis (FSGS) (15%) and membranous glomerulonephritis (MGN) (11%), whereas in children, minimal change disease (22%), IgAN (20%) and FSGS (10%) were dominant. Due to insufficient data on the paediatric population, the clinicopathologic analysis was limited to patients ≥18 years of age. At the time of renal biopsy, the majority of adult patients presented with nephrotic-range proteinuria (45.2%), followed by urinary abnormalities (38.3%), nephritic syndrome (13.8%) and isolated haematuria (1.7%). Among nephrotic patients, primary glomerulopathies dominated (67.6% in those 18–64 years of age and 62.4% in elderly patients), with the leading diagnoses being MGN (17.1%), FSGS (16.2%) and IgAN (13.0%) in the younger cohort and MGN (23.5%), amyloidosis (18.8%) and FSGS (16.8%) in the elderly cohort. Among nephritic patients 18–64 years of age, the majority (55.9%) suffered from primary glomerulopathies, with a predominance of IgAN (31.3%), FSGS (12.7%) and crescentic GN (CGN) (11.1%). Among elderly nephritic patients, primary and secondary glomerulopathies were equally common (41.9% each), and pauci-immune GN (24.7%), CGN (20.4%) and IgAN (14.0%) were predominant. In both adult cohorts, urinary abnormalities were mostly related to primary glomerulopathies (66.8% in younger and 50% in elderly patients); the leading diagnoses were IgAN (31.4%), FSGS (15.9%) and lupus nephritis (10.7%) in the younger cohort, and FSGS (19.2%), MGN (15.1%) and pauci-immune GN (12.3%) in the elderly cohort.
There were significant differences in clinical characteristics and renal biopsy findings between male and female adult patients.
Conclusions. The registry data shed new light on the epidemiology of kidney diseases in Poland. These data should be used in future follow-up and prospective studies.
Hypertension in chronic kidney disease after the Systolic Blood Pressure Intervention Trial: targets, treatment and current uncertainties
Abstract. Hypertension is the number one cardiovascular (CV) risk factor, and its treatment represents one of the most important interventions in patients at high risk for CV events. Patients with chronic kidney disease (CKD) are at high CV risk, yet as a group they have been excluded from most major blood pressure (BP)-lowering trials examining CV and mortality end points. The paucity of randomized clinical trial evidence for BP lowering in CKD patients is compounded by the fact that the association between BP levels and clinical outcomes in patients with CKD suggests the presence of a J-curve, which makes extrapolations from general population studies especially difficult. The recent completion of the Systolic Blood Pressure Intervention Trial (SPRINT), which enrolled a large number of patients with mild to moderate CKD, has raised hope for much-needed clarity about the ideal systolic BP target in this patient population. This review discusses the epidemiology of hypertension in CKD and the pathophysiologic underpinnings of the distinct associations between BP levels and clinical outcomes in patients with CKD, and it examines the applicability of the SPRINT results to the general CKD population.
The association between Dietary Approaches to Stop Hypertension and incidence of chronic kidney disease in adults: the Tehran Lipid and Glucose Study
Background. This study was conducted to examine the association of adherence to the Dietary Approaches to Stop Hypertension (DASH)-style diet with incident chronic kidney disease (CKD) among an Iranian population.
Methods. We followed 1630 participants of the Tehran Lipid and Glucose Study (50.5% women, mean age 42.8 years), initially free of CKD, for 6.1 years. Baseline diet was assessed using a valid and reliable 168-item food frequency questionnaire. A DASH-style diet score, based on eight components (fruits, vegetables, whole grains, nuts and legumes, low-fat dairy, red and processed meats, sweetened beverages and sodium), was used. Estimated glomerular filtration rate (eGFR) was calculated using the Modification of Diet in Renal Disease Study equation and CKD was defined as eGFR <60 mL/min/1.73 m2. Odds ratios (ORs) for the association of incident CKD with the DASH-style diet score were estimated using multivariable logistic regression.
Results. The incidence of CKD among those in the top quintile of the DASH-style diet score was 30%, which was 18% lower than that among those in the bottom quintile. After controlling for age, sex, smoking, total energy intake, body mass index, eGFR, triglycerides, physical activity, hypertension and diabetes, adherence to the DASH-style diet was found to be inversely associated with incident CKD (OR: 0.41; 95% confidence interval: 0.24–0.70). In addition, higher component scores for fruits, whole grains, nuts and legumes, sweetened beverages and sodium were inversely associated with incidence of CKD.
Conclusion. After 6.1 years of follow-up, adherence to the DASH-style diet was associated with a lower risk of incident CKD among adults.
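The CKD definition above hinges on the estimated GFR. As a rough sketch, the 4-variable MDRD Study equation can be computed as follows; this uses the IDMS-traceable 175-coefficient form (the abstract does not state which calibration the study used), and the function name and example inputs are illustrative, not taken from the paper:

```python
def egfr_mdrd(scr_mg_dl, age_years, female, black=False):
    """Estimated GFR (mL/min/1.73 m2) via the 4-variable MDRD Study
    equation, IDMS-traceable form. scr_mg_dl is serum creatinine in
    mg/dL; the CKD threshold used in the study above is eGFR < 60."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742   # sex adjustment factor
    if black:
        egfr *= 1.212   # race adjustment factor in the original equation
    return egfr

# Illustrative call: a 40-year-old man with serum creatinine 1.0 mg/dL
value = egfr_mdrd(1.0, 40, female=False)
```

Note that eGFR falls as serum creatinine rises, so the same creatinine value maps to different CKD classifications at different ages.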
Pharmacoepidemiology for nephrologists: do proton pump inhibitors cause chronic kidney disease?
Abstract. Pharmacoepidemiology studies are increasingly used for research into safe prescribing in chronic kidney disease (CKD). Typically, patients prescribed a drug are compared with patients who are not on the drug, and outcomes are compared to draw conclusions about the drug's effects. This review article aims to provide the reader with a framework to critically appraise such research. A key concern in pharmacoepidemiology studies is confounding: patients in worse health are prescribed more drugs or different agents, and their worse outcomes are then attributed to the drugs rather than to their health status. It may be challenging to adjust for this using statistical methods unless a comparison group with similar health status that is prescribed a different (comparison) drug is identified. Another challenge in pharmacoepidemiology is outcome misclassification, as people who are more ill engage more often with the health service, leading to earlier diagnosis in frequent attenders. Finally, using replication cohorts with the same methodology in the same type of health system does not ensure that findings are more robust. We use two recent papers that investigated the association of proton pump inhibitor drugs with CKD as a device to review the main pitfalls of pharmacoepidemiology studies and how to mitigate the potential biases that can occur.
Marginal structural models in clinical research: when and how to use them?
Abstract. Marginal structural models are a multi-step estimation procedure designed to control for confounding variables that change over time and are themselves affected by previous treatment. When a time-varying confounder is affected by prior treatment, standard methods for confounding control are inappropriate, because over time the covariate plays both the role of confounder and mediator of the effect of treatment on outcome. Marginal structural models first calculate a weight to assign to each observation. These weights reflect the extent to which observations with certain characteristics (covariate values) are under-represented or over-represented in the sample with respect to a target population in which these characteristics are balanced across treatment groups. Then, marginal structural models estimate the outcome of interest taking these weights into account. Marginal structural models are a powerful method for confounding control in longitudinal study designs that collect time-varying information on exposure, outcome and other covariates.
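The weighting step described above can be made concrete with a minimal sketch of stabilized inverse-probability-of-treatment weights. Here the probabilities are estimated by simple stratified counts rather than the regression models a real analysis would fit, and all variable names and data are hypothetical:

```python
from collections import Counter

def stabilized_weights(treatment, confounder):
    """Stabilized IPT weights: sw_i = P(A = a_i) / P(A = a_i | L = l_i).
    Probabilities are estimated by stratified counts; a real marginal
    structural model analysis would use, e.g., logistic regression."""
    n = len(treatment)
    marginal_counts = Counter(treatment)           # counts of each treatment value
    strata_counts = Counter(confounder)            # counts per confounder level
    joint_counts = Counter(zip(confounder, treatment))  # counts per (L, A) cell
    weights = []
    for a, l in zip(treatment, confounder):
        marginal = marginal_counts[a] / n                      # P(A = a)
        conditional = joint_counts[(l, a)] / strata_counts[l]  # P(A = a | L = l)
        weights.append(marginal / conditional)
    return weights

# Toy data: confounder L = 1 makes treatment A = 1 more likely
A = [1, 1, 0, 1, 0, 0]
L = [1, 1, 1, 0, 0, 0]
w = stabilized_weights(A, L)
```

Observations over-represented in their treatment arm given their covariates get weights below 1, and under-represented ones get weights above 1; with stabilization, the weights sum to approximately the sample size.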
Transition of care from pre-dialysis prelude to renal replacement therapy: the blueprints of emerging research in advanced chronic kidney disease
Abstract. In patients with advanced (estimated glomerular filtration rate <25 mL/min/1.73 m2) non-dialysis-dependent chronic kidney disease (CKD) the optimal transition of care to renal replacement therapy (RRT), i.e. dialysis or transplantation, is not known. Mortality and hospitalization risk are extremely high upon transition and in the first months following the transition to dialysis. Major knowledge gaps persist pertaining to differential or individualized transitions across different demographics and clinical measures during the ‘prelude’ period prior to the transition, particularly in several key areas: (i) the best timing for RRT transition; (ii) the optimal RRT type (dialysis versus transplant), and in the case of dialysis, the best modality (hemodialysis versus peritoneal dialysis), format (in-center versus home), frequency (infrequent versus thrice-weekly versus more frequent) and vascular access preparation; (iii) the post-RRT impact of pre-RRT prelude conditions and events such as blood pressure and glycemic control, acute kidney injury episodes, and management of CKD-specific conditions such as anemia and mineral disorders; and (iv) the impact of the above prelude conditions on end-of-life care and RRT decision-making versus conservative management of CKD. Given the enormous changes occurring in the global CKD healthcare landscape, as well as the high costs of transitioning to dialysis therapy with persistently poor outcomes, there is an urgent need to answer these important questions. This review describes the key concepts and questions related to the emerging field of ‘Transition of Care in CKD’, systematically defines six main categories of CKD transition, and reviews approaches to data linkage and novel prelude analyses along with clinical applications of these studies.
Seasonal variations in transition, mortality and kidney transplantation among patients with end-stage renal disease in the USA
Background. Seasonal variations may exist in transitioning to dialysis, kidney transplantation and related outcomes among end-stage renal disease (ESRD) patients. Elucidating these variations may have major clinical and healthcare policy implications for better resource allocation across seasons.
Methods. Using the United States Renal Data System database from 1 January 2000 to 31 December 2013, we calculated monthly counts of transitioning to dialysis or first transplantation and deaths. Crude monthly transition fraction was defined as the number of new ESRD patients divided by all ESRD patients on the first day of each month. Similar fractions were calculated for all-cause and cause-specific mortality and transplantation.
Results. The increasing trend of the annual transition to ESRD plateaued during 2009–2012 (n = 126 264) and dropped drastically in 2013 (n = 117 372). Independent of secular trends, monthly transition to ESRD was lowest in July (1.65%) and highest in January (1.97%) of each year. All-cause, cardiovascular and infectious mortalities were lowest in July or August (1.32, 0.58 and 0.15%, respectively) and highest in January (1.56, 0.71 and 0.19%, respectively). Kidney transplantation was highest in June (0.33%), and this peak was mainly attributed to living kidney transplantation in summer months. Transplant failure showed a seasonal variation similar to that of naïve transition, peaking in January (0.65%) and reaching its nadir in September (0.56%).
Conclusions. Transitioning to ESRD and adverse events among ESRD patients were more frequent in winter and less frequent in summer, whereas kidney transplantation showed the reverse trend. The potential causes and implications of these consistent seasonal variations warrant further investigation.
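The crude monthly transition fraction defined in the Methods is simple arithmetic over registry counts. A minimal sketch with invented numbers (the counts below are toy values chosen only to reproduce the January-peak, summer-nadir pattern, not the study's data):

```python
def crude_monthly_fractions(counts):
    """counts maps month -> (new ESRD patients during the month,
    prevalent ESRD patients on the first day of the month).
    Returns the crude monthly transition fraction for each month,
    as defined in the study's Methods."""
    return {month: new / prevalent
            for month, (new, prevalent) in counts.items()}

# Hypothetical counts illustrating a January peak and a July nadir
data = {"Jan": (1970, 100000), "Jul": (1650, 100000)}
frac = crude_monthly_fractions(data)
peak_month = max(frac, key=frac.get)   # month with the highest fraction
```

The same division applied to monthly death or transplant counts yields the cause-specific mortality and transplantation fractions the abstract reports.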
Do we still need cross-sectional studies in Nephrology? Yes we do!
Abstract. Cross-sectional studies represent the second line of evidence (after case reports) in the ladder of evidence aimed at defining disease aetiology. This study design is used to generate hypotheses about the determinants of a given disease, but also to investigate the accuracy of diagnostic tests and to assess the burden of a given disease in a population. The intrinsic limitation of cross-sectional studies, when applied to generate aetiological hypotheses, is that both the exposure under investigation and the disease of interest are measured at the same point in time. For this reason, the cross-sectional design generally does not provide definitive proof of cause-and-effect relationships. An advantage of cross-sectional studies in aetiological and diagnostic research is that they allow researchers to consider many different putative risk factors/diagnostic markers at the same time. For example, in a hypothetical study aimed at generating hypotheses about the risk factors for left ventricular hypertrophy (LVH) in patients with chronic kidney disease, investigators could look at several risk factors as potential determinants of LVH (age, gender, cholesterol, blood pressure, inflammation, etc.) with minimal or no additional costs. In this article, we present examples from the nephrology literature to show the usefulness of cross-sectional studies in clinical and epidemiological research.
Statistical significance versus clinical relevance
Abstract. In March this year, the American Statistical Association (ASA) posted a statement on the correct use of P-values, in response to a growing concern that the P-value is commonly misused and misinterpreted. We aim to translate these warnings given by the ASA into a language more easily understood by clinicians and researchers without a deep background in statistics. Moreover, we intend to illustrate the limitations of P-values, even when used and interpreted correctly, and bring more attention to the clinical relevance of study findings, using two recently reported studies as examples. We argue that P-values are often misinterpreted. A common mistake is to conclude that P < 0.05 means that the null hypothesis is false and that P ≥ 0.05 means that the null hypothesis is true. The correct interpretation of a P-value of 0.05 is that if the null hypothesis were indeed true, a similar or more extreme result would occur 5% of the time upon repeating the study in a similar sample. In other words, the P-value informs about the likelihood of the data given the null hypothesis, and not the other way around. A possible alternative related to the P-value is the confidence interval (CI). It provides more information on the magnitude of an effect and the imprecision with which that effect was estimated. However, there is no magic bullet to replace P-values and stop erroneous interpretation of scientific results. Scientists and readers alike should familiarize themselves with the correct, nuanced interpretation of statistical tests, P-values and CIs.
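The interpretation above — the probability, assuming the null hypothesis is true, of a result at least as extreme as the one observed — can be made concrete with a small simulation. The setup (a two-sided test of a mean with known σ, and a Wald-style CI) is a hypothetical illustration, not taken from either study discussed in the article:

```python
import math
import random
import statistics

def p_value_by_simulation(observed_mean, n, sigma, reps=20000, seed=1):
    """Two-sided P-value for H0: mu = 0, obtained by simulating the sampling
    distribution of the sample mean under the null. The P-value is the
    fraction of null replications at least as extreme as the observed mean,
    i.e. the probability of the data given H0, not of H0 given the data."""
    rng = random.Random(seed)
    extreme = 0
    for _ in range(reps):
        sim_mean = statistics.fmean(rng.gauss(0.0, sigma) for _ in range(n))
        if abs(sim_mean) >= abs(observed_mean):
            extreme += 1
    return extreme / reps

def wald_ci(observed_mean, n, sigma, z=1.96):
    """95% confidence interval for a mean with known sigma: conveys both the
    magnitude of the effect and the imprecision of its estimate."""
    half_width = z * sigma / math.sqrt(n)
    return observed_mean - half_width, observed_mean + half_width
```

An observed mean of exactly 0 yields a P-value of 1 (every null replication is at least as extreme), while a mean many standard errors from 0 yields a P-value near 0; the CI adds what the P-value alone hides, namely how large and how precisely estimated the effect is.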
Restricted mean survival time over 15 years for patients starting renal replacement therapy
Abstract. Background. The restricted mean survival time (RMST) estimates life expectancy up to a given time horizon and can thus express the impact of a disease. The aim of this study was to estimate the 15-year RMST of a hypothetical cohort of incident patients starting renal replacement therapy (RRT), according to their age, gender and diabetes status, and to compare it with the expected RMST of the general population. Methods. Using data from 67 258 adult patients in the French Renal Epidemiology and Information Network (REIN) registry, we estimated the RMST of a hypothetical patient cohort (and its subgroups) for the first 15 years after starting RRT (cRMST) and used general population mortality tables to estimate the expected RMST (pRMST). Results were expressed in three ways: the cRMST itself, which gives the years of life gained under the hypothesis of 100% death without RRT treatment; the difference between the pRMST and the cRMST (the years lost); and a ratio expressing the percentage reduction of the expected RMST: (pRMST − cRMST)/pRMST. Results. Over their first 15 years of RRT, the RMST of end-stage renal disease (ESRD) patients decreased with age, ranging from 14.3 years in patients without diabetes aged 18 years at ESRD to 1.8 years for those aged 90 years, and from 12.7 to 1.6 years, respectively, for those with diabetes; the expected RMST varied from 15.0 to 4.1 years between 18 and 90 years. The number of years lost in all subgroups followed a bell curve that peaked for patients aged 70 years. After the age of 55 years in patients with diabetes and 70 years in patients without, the reduction of the expected RMST was >50%. Conclusion. While neither a clinician nor a survival curve can predict with absolute certainty how long a patient will live, providing estimates of years gained or lost, or of the percentage reduction of expected RMST, may improve the accuracy of the prognostic estimates that influence clinical decisions and the information given to patients.
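Given a Kaplan–Meier-type step survival curve, the RMST is simply the area under the curve up to the horizon, and the study's ratio is (pRMST − cRMST)/pRMST. A minimal sketch — the function names and the toy curve are illustrative, not the REIN data:

```python
def rmst(drop_times, surv_after_drop, horizon):
    """Restricted mean survival time: area under a step survival curve up
    to `horizon`. `drop_times` are the times at which the curve steps down
    and `surv_after_drop` the survival probability just after each step;
    the curve starts at S(0) = 1."""
    area, prev_time, prev_surv = 0.0, 0.0, 1.0
    for t, s in zip(drop_times, surv_after_drop):
        if t >= horizon:
            break
        area += prev_surv * (t - prev_time)    # flat segment before this drop
        prev_time, prev_surv = t, s
    area += prev_surv * (horizon - prev_time)  # final flat segment to horizon
    return area

def expected_rmst_reduction(p_rmst, c_rmst):
    """Percentage reduction of the expected RMST: (pRMST - cRMST) / pRMST."""
    return (p_rmst - c_rmst) / p_rmst

# Toy curve: survival drops to 0.5 at year 5 and stays there until year 15,
# giving an RMST of 1.0 * 5 + 0.5 * 10 = 10 years
c = rmst([5.0], [0.5], horizon=15.0)
```

With the paper's figures for 18-year-olds without diabetes (cRMST 14.3 years against an expected 15.0), the reduction is (15.0 − 14.3)/15.0 ≈ 4.7%.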
Screening for elevated albuminuria and subsequently for hypertension identifies subjects in whom treatment may be warranted to prevent renal function decline
Abstract. Background. We investigated whether initial population screening for elevated albuminuria, with subsequent screening for hypertension when albuminuria is elevated, may help identify subjects at risk of accelerated decline in kidney function. Methods. We included subjects who participated in the PREVEND observational, general population-based cohort study and had two or more estimated glomerular filtration rate (eGFR) measurements available during follow-up. Elevated albuminuria was defined as an albumin concentration ≥20 mg/L in a first morning urine sample, confirmed by an albumin excretion ≥30 mg/day in two 24-h urine collections. Hypertension was defined as systolic blood pressure ≥140 mmHg, diastolic blood pressure ≥90 mmHg or use of blood pressure-lowering drugs. eGFR was estimated with the CKD-EPI creatinine–cystatin C equation. Results. Overall, 6471 subjects were included, with a median of 4 [95% confidence interval (CI) 2–5] eGFR measurements during a follow-up of 11.3 (95% CI 4.0–13.7) years. Decline in eGFR was greater in the subgroups with elevated albuminuria. This held true not only in subjects with known hypertension (−1.84 ± 2.27 versus −1.16 ± 1.45 mL/min/1.73 m2 per year, P < 0.05), but also in subjects with newly diagnosed hypertension (−1.59 ± 1.55 versus −1.14 ± 1.38 mL/min/1.73 m2 per year, P < 0.05) and in subjects with normal blood pressure (−1.18 ± 1.85 versus −0.81 ± 1.02 mL/min/1.73 m2 per year, P < 0.05). This effect was most pronounced in subjects ≥55 years of age and in male subjects. In addition, subjects with elevated albuminuria had higher blood pressure than subjects with normoalbuminuria, and in subjects with elevated albuminuria, as-yet-undiagnosed hypertension was twice as prevalent as diagnosed hypertension. Conclusions. Initial screening for elevated albuminuria followed by screening for hypertension may help to detect subjects at increased risk for a steeper decline in kidney function.