Abstract
Prognostic models can strongly support individualized care provision and well-informed shared decision making. There has been an upsurge of prognostic research in the field of nephrology, but the uptake of prognostic models in clinical practice remains limited. Therefore, we map out the research field of prognostic models for kidney patients and provide directions on how to proceed from here. We performed a scoping review of studies developing, validating, or updating a prognostic model for patients with CKD. We searched all published models in PubMed and Embase and report predicted outcomes, methodological quality, and validation and/or updating efforts. We found 602 studies, of which 30.1% concerned CKD populations, 31.6% dialysis populations, and 38.4% kidney transplantation populations. The most frequently predicted outcomes were mortality (n=129), kidney disease progression (n=75), and kidney graft survival (n=54). Most studies provided discrimination measures (80.4%), but far fewer reported calibration results (43.4%). Of the 415 development studies, 28.0% did not perform any validation and 57.6% performed only internal validation. Moreover, only 111 models (26.7%) were externally validated either in the development study itself or in an independent external validation study. Finally, in 45.8% of development studies no useable version of the model was reported. To conclude, many prognostic models have been developed for patients with CKD, mainly for outcomes related to kidney disease progression and patient/graft survival. To bridge the gap between prediction research and kidney patient care, patient-reported outcomes, methodological rigor, complete reporting of prognostic models, external validation, updating, and impact assessment urgently need more attention.
Keywords: CKD, clinical epidemiology, dialysis, epidemiology and outcomes, kidney transplantation
Introduction
Individuals with CKD and those receiving RRT experience numerous symptoms and are confronted with an increased risk of mortality and a multitude of comorbidities, such as cardiovascular disease and kidney failure.1–3 Unsurprisingly, quality of life (QoL) is reported to be lower in patients with CKD compared with that of the general population, and treatment-related burden is often high.4–6 The disease trajectory varies across individuals, causing patients to go through many phases of uncertainty regarding their prognosis.
A personalized approach is needed to tailor care to the variable disease trajectory and personal preferences. Prognostic models can be powerful tools to enhance patient-centered care provision in nephrology because they can provide individualized prognostic information. Previous research has shown that most patients with CKD and RRT patients have an explicit wish to receive more information about their future with the disease. Prognostic models can help patients gain more knowledge on the course of their disease, which in turn may help them regain a sense of control and help them cope with living with the disease.7,8
Moreover, patients with CKD and their health care professionals have to make many decisions over the course of the disease. Prognostic models can be used to support this process of shared decision making and its timing by identifying patients at high risk of certain outcomes and by providing both the clinician and the patient with more information on the individual disease trajectory.8–10
Despite the potential of these prognostic models, their clinical use and impact in nephrology lag behind other medical fields.11–13 Only a handful of models are actually implemented in clinical practice, and it is unclear how many prognostic models exist for the CKD population, which outcomes they predict, and what the status of these models is in terms of methodological robustness. Several systematic reviews of prognostic studies have previously been conducted, but all were aimed at specific outcomes, such as mortality and kidney failure or specific populations, and a broad overview of all studies dedicated to developing, validating, or updating a prognostic model has not yet been presented.14–17 The aim of this scoping review was to map out existing studies that develop, validate, or update a prognostic model for patients with CKD or RRT patients, detailing their methodological rigor and their range in populations and outcomes. This comprehensive overview of the entire field is needed to see where we currently stand, where knowledge gaps remain, and how to progress from here to bridge the gap between promising prognostic models and their implementation in clinical care.
Methods
This study was conducted in line with the methodological framework for scoping reviews as proposed by Arksey and O'Malley in 2005.18 In addition, the Preferred Reporting Items for Systematic Reviews and Meta-analyses extension for Scoping Reviews was followed to ensure transparent reporting (Supplemental Table 1).19,20 Scoping reviews are aimed at mapping the body of literature on a particular research topic and generally concern a broader research question than those used in systematic reviews.21
Study Selection
To identify studies that describe prognostic prediction models for all outcomes relevant for patients with CKD at any stage and RRT populations, a literature search was performed in PubMed and Embase. All studies published before January 1, 2022, were considered. Studies were included if they met the following criteria: (1) The study must aim to develop, validate, or update at least one formal multivariable prognostic model and (2) the study population must consist of adults with CKD, patients receiving chronic dialysis, or kidney transplantation patients (waitlisted patients, recipients, and donors). Studies concerning patients undergoing a combined transplantation procedure (e.g., combined liver–kidney transplantation or pancreas–kidney transplantation) or patients with AKI were excluded. Furthermore, studies with a methodological or statistical aim, diagnostic models, pharmacokinetic explanatory models, studies solely testing for associations, and prognostic finding studies were excluded. Only English-language studies were included, and no lower date limit was set. The complete search strategy is presented in the Supplemental Material. All identified studies were screened for relevance by J. Milders and independently by a second reviewer (C.L. Ramspek, R.J. Janse, or M. van Diepen). Discrepancies on study inclusion were resolved by consulting a third reviewer (M. van Diepen). Reference lists of the included studies and relevant reviews were manually screened to identify any additional studies for inclusion.
Data Extraction
Relevant data were extracted from the included studies by J. Milders using a data extraction form. First, general characteristics of all studies were extracted, including year of publication, continent of publication, study type (development, validation, update, or a combination), the prognostic model of interest, the study population (CKD, dialysis, or kidney transplantation), and the predicted outcome. Second, data were extracted on sample size, whether the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) guideline was referenced, and whether discrimination, calibration, or reclassification measures were reported.22,23 In addition, for the studies in which a prognostic model was developed, data were extracted on how the prognostic model was presented (e.g., full regression formula or risk score), whether internal and/or external validation was performed, and the model derivation method (conventional regression modeling [e.g., linear, logistic, or Cox proportional hazards regression] or machine learning). Elaboration on the extracted data and on choices made regarding classification of studies is available in the Supplemental Material.
Analysis
All findings were summarized using descriptive statistics. Continuous variables are presented as mean values with SDs or median values with interquartile ranges, depending on their distribution. Categorical variables are presented as numbers with percentages. R version 4.2.2 (R Foundation for Statistical Computing, Vienna, Austria) was used for all analyses and figures included in this study.
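A minimal sketch of this reporting convention, using a simple moment-based skewness estimate with an arbitrary illustrative cutoff of 1.0 (shown in Python for illustration; the study's own analyses were performed in R, and this code does not come from the study):

```python
import statistics

def summarize_continuous(values, skew_threshold=1.0):
    """Report mean (SD) for roughly symmetric variables and median (IQR)
    for skewed ones; the skewness cutoff is an illustrative choice."""
    mean = statistics.fmean(values)
    sd = statistics.stdev(values)
    # simple moment-based sample skewness estimate
    skew = sum((v - mean) ** 3 for v in values) / (len(values) * sd ** 3)
    if abs(skew) < skew_threshold:
        return f"mean {mean:.1f} (SD {sd:.1f})"
    q = statistics.quantiles(values, n=4)  # returns [Q1, median, Q3]
    return f"median {q[1]:.1f} (IQR {q[0]:.1f}-{q[2]:.1f})"
```

In practice, the choice between summaries is often made by visual inspection of the distribution rather than a fixed cutoff.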
Results
Study Selection
In total, 4169 potentially relevant studies were identified through PubMed and Embase, and their titles and abstracts were screened. Of these, 776 studies were selected for full-text review. Through screening of the references of included studies, 36 additional studies were identified. Finally, 602 studies met the inclusion criteria and were included in the scoping review. A Preferred Reporting Items for Systematic Reviews and Meta-analyses flow diagram of the study selection process is presented in Figure 1, and an overview of all included studies and their extracted data is included in the Supplemental Material.
Figure 1.

PRISMA flow diagram of the study selection process. PRISMA, preferred reporting items for systematic reviews and meta-analyses.
General Characteristics of Included Studies
Of the included studies, the first was published in 1982.24 Since then, there has been a steep increase in the number of prognostic modeling studies published per year, with a record of 96 studies in 2021 (Figure 2). Moreover, there has been an increase in all types of prediction studies (development, validation, and update).
Figure 2.

Number of studies published per year. Development=studies of which the main aim is to develop a novel prognostic model. This category may also include studies that, besides developing a model, also externally validate an existing model. External validation=studies that solely externally validate an already existing model. Update=studies of which the main aim is to update an existing model. This category may also include studies that, besides updating a model, also externally validate that same model.
Studies included mainly patients from Europe (n=221, 34.2%), North America (n=210, 32.5%), and Asia (n=181, 28.0%). Very few studies included patients from South America (n=17, 2.6%) and Australia (n=15, 2.3%), and only three studies (0.5%) included patients from Africa. The world map in Supplemental Figure 1 shows the distribution of the included studies over the continents. Of the 602 included studies, 181 (30.1%) concerned a CKD population, 190 (31.6%) a dialysis population, and 231 (38.4%) a kidney transplantation population. Figure 3 shows how many development, validation, update, or combination studies were published. In short, 415 studies developed a prognostic model, 205 studies externally validated an already existing model, and 62 studies updated a model. Studies with multiple aims were counted in all applicable study type categories (e.g., a study in which a model was developed and also an existing model was externally validated was counted in both the development and the validation category).
Figure 3.

Euler diagram of study types (development, validation, and update). Development studies in which a model was also externally validated are only counted in the development group. The external validation category consists of independent external validation studies in which an existing model is validated.
In the 415 development studies, 447 prognostic models were proposed, covering a wide variety of predicted outcomes. To provide a comprehensible overview, the outcomes were divided into 22 categories. In Supplemental Table 2, an explanation of the outcome categories can be found. Overall, outcomes that were most often predicted were mortality (n=129), kidney disease progression (n=75), and kidney graft survival (n=54) (Figure 4).
Figure 4.
Number of prognostic models per outcome category. Studies in which multiple outcomes were predicted were counted in all applicable outcome categories. For example, a study in which both kidney disease progression and mortality were predicted was included in both outcome categories. Therefore, the total number of models adds up to more than the total number of development studies. *New-onset diabetes mellitus after transplantation. KTx, kidney transplantation.
Methodology and Reporting of Studies
Sample sizes of the included studies varied widely, with a median (interquartile range [IQR]) of 746 (262–3180) (Figure 5). A total of 141 (23.4%) studies used a sample size of <250 participants, and 112 studies (18.6%) included more than 5000 participants.
Figure 5.

Sample size of included studies.
Of the studies published after the TRIPOD publication date in 2015 (n=399), only 56 (14.0%) referred to the reporting guideline. In 2021, there was a steep increase, and nearly 30% of the studies referenced it (Supplemental Figure 2).
In our sample of studies, 484 (80.4%) studies presented a measure of discrimination, and only 261 (43.4%) presented a measure of calibration. Over time, more studies reported measures of discrimination, while reporting of calibration did not increase (Supplemental Figure 3). Examples of alternative performance measures that were presented include the likelihood ratio test, mean absolute/squared error, and sensitivity and specificity for a specific cutoff (mainly for artificial intelligence-based models). Of the 62 studies that reported on the updating of a model, 27 (43.5%) reported the net reclassification index and/or the integrated discrimination improvement. Update studies that did not include a net reclassification index and/or integrated discrimination improvement usually did include performance measures, such as c-statistics, calibration plots, and likelihood ratio tests.
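For readers less familiar with these two performance aspects, a minimal Python sketch of a rank-based c-statistic and of calibration-in-the-large, the simplest calibration measure, may help; this is purely illustrative and not the implementation used by any of the included studies:

```python
def c_statistic(y, p):
    """Concordance (c-statistic / AUC): the probability that a randomly
    chosen event has a higher predicted risk than a randomly chosen
    non-event; ties count as 0.5."""
    events = [pi for yi, pi in zip(y, p) if yi == 1]
    nonevents = [pi for yi, pi in zip(y, p) if yi == 0]
    concordant = sum(1.0 if e > n else 0.5 if e == n else 0.0
                     for e in events for n in nonevents)
    return concordant / (len(events) * len(nonevents))

def calibration_in_the_large(y, p):
    """Mean observed risk minus mean predicted risk; close to zero for a
    model that is well calibrated on average."""
    return sum(y) / len(y) - sum(p) / len(p)
```

A c-statistic of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation. Note that calibration-in-the-large near zero only means average predicted risk matches average observed risk; calibration plots or slope and intercept assessments remain necessary for a full picture.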
Most development studies (n=312, 75.2%) used conventional regression modeling techniques, and 59 (14.2%) studies used a form of machine learning. The remaining studies (n=44, 10.6%) developed both types of models and generally aimed to compare the performance of the newer machine learning techniques with that of a conventional logistic regression model.
Finally, of the 415 development studies, only 120 (28.9%) reported their full model formulas so that absolute risks could be calculated from the model. Furthermore, 71 (17.1%) studies presented a risk score, and 34 (8.2%) presented both. The remaining studies (n=190, 45.8%) did not report a useable version of their models, meaning that they reported neither an intercept with the regression coefficients nor any information on how to calculate risks on the basis of the developed model. Of the 59 studies in which machine learning techniques were used to develop a model, only 10 (16.9%) provided a useable format of the model. The data of this section are summarized in Table 1.
Table 1.
Methodological and reporting quality of all studies
| Measures Pertaining to Methodological and Reporting Quality | All (N=602) | Development (n=415) | Validation (n=205) | Update (n=62) |
|---|---|---|---|---|
| Sample size, No. (%) | | | | |
| Median (IQR) | 746 (262–3180) | 830 (276–3702) | 633 (261–2328) | 669 (345–3763) |
| 0–250 | 141 (23.4) | 97 (23.4) | 46 (22.4) | 8 (12.9) |
| 251–1500 | 242 (40.2) | 155 (37.3) | 90 (43.9) | 34 (54.8) |
| 1501–5000 | 107 (17.8) | 76 (18.3) | 40 (19.5) | 8 (12.9) |
| >5000 | 112 (18.6) | 87 (21.0) | 29 (14.1) | 12 (19.4) |
| Reference to TRIPOD statement, No. (%) | | | | |
| Published before TRIPOD | 203 (33.7) | 150 (36.1) | 58 (28.3) | 18 (29.0) |
| Yes | 56 (9.3) | 36 (8.7) | 24 (11.7) | 0 (0) |
| No | 343 (57.0) | 229 (55.2) | 123 (60.0) | 44 (71.0) |
| Performance aspects, No. (%) | | | | |
| Discrimination | 484 (80.4) | 335 (80.7) | 170 (82.9) | 50 (80.6) |
| Calibration | 261 (43.4) | 186 (44.8) | 90 (43.9) | 27 (43.5) |
| Reclassification measures, No. (%) | N.A. | N.A. | N.A. | |
| NRI | | | | 8 (12.9) |
| IDI | | | | 4 (6.5) |
| Both | | | | 15 (24.2) |
| Model derivation method, No. (%) | N.A. | | N.A. | N.A. |
| Conventional regression model | | 312 (75.2) | | |
| Machine learning | | 59 (14.2) | | |
| Both | | 44 (10.6) | | |
| Model presentation, No. (%) | N.A. | | N.A. | N.A. |
| Full regression formula | | 120 (28.9) | | |
| Risk score | | 71 (17.1) | | |
| Both | | 34 (8.2) | | |
| Neither | | 190 (45.8) | | |
Total No. of development, validation, and update studies adds up to more than the original number of included studies (n=602) because studies that combine multiple aims (e.g., development and external validation) were counted in all applicable categories. IDI, integrated discrimination improvement; IQR, interquartile range; N.A., not applicable; NRI, net reclassification index; TRIPOD, Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis.
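To illustrate why a "full regression formula" (intercept plus coefficients) is needed to obtain absolute risks, consider this sketch of a logistic model; the intercept, coefficient values, and predictor names are entirely hypothetical placeholders, not taken from any published kidney model:

```python
import math

# Hypothetical logistic model: intercept and coefficients below are
# illustrative placeholders, not values from any published model.
INTERCEPT = -3.2
COEFS = {"age_per_10y": 0.30, "egfr_per_10": -0.45, "diabetes": 0.80}

def predicted_risk(age_per_10y, egfr_per_10, diabetes):
    """Absolute risk from a full logistic regression formula:
    risk = 1 / (1 + exp(-(intercept + sum of beta * x)))."""
    lp = (INTERCEPT
          + COEFS["age_per_10y"] * age_per_10y
          + COEFS["egfr_per_10"] * egfr_per_10
          + COEFS["diabetes"] * diabetes)
    return 1.0 / (1.0 + math.exp(-lp))
```

A risk score without the intercept, by contrast, can only rank patients and cannot produce an absolute risk, which is why models reported as "neither" cannot be validated or implemented.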
Data on methodology and reporting of the included studies did not differ between the three populations (CKD, dialysis, and kidney transplantation) (Supplemental Table 3).
In Supplemental Figure 4, we present a flowchart that shows how many models were presented in a useable format, how many were accompanied by a measure of discrimination and calibration, how many models were validated internally and externally, and how many models were developed in a sample size larger than 250 patients. In the end, only 49 (11.0%) models remain for which all the abovementioned steps were completed.
Validation and Updating
Within-Study Validation
In the 415 development studies, most researchers only performed internal validation (n=239, 57.6%), and in more than a quarter of these studies, no validation steps were undertaken at all (n=116, 28.0%). A small number of studies performed external validation of their developed models within the same study (n=60, 14.5%), of which 37 (8.9%) performed both internal and external validation. Over the past 15 years, no clear increase in validation efforts was observed (Supplemental Figure 5).
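Internal validation can take several forms; one common approach, bootstrap optimism correction, is sketched below with a deliberately simple one-variable "model" whose only fitted element is the direction of a biomarker. The data and refitting step are illustrative assumptions, not drawn from any included study:

```python
import random

def auc(y, score):
    """Rank-based c-statistic; event/non-event ties count as 0.5."""
    ev = [s for yi, s in zip(y, score) if yi == 1]
    ne = [s for yi, s in zip(y, score) if yi == 0]
    conc = sum(1.0 if e > n else 0.5 if e == n else 0.0 for e in ev for n in ne)
    return conc / (len(ev) * len(ne))

def fit(y, x):
    """Toy 'model': the score is the biomarker itself, with its direction
    (sign) learned from the training data."""
    ev = [xi for yi, xi in zip(y, x) if yi == 1]
    ne = [xi for yi, xi in zip(y, x) if yi == 0]
    sign = 1.0 if sum(ev) / len(ev) >= sum(ne) / len(ne) else -1.0
    return lambda xs: [sign * v for v in xs]

def bootstrap_corrected_auc(y, x, n_boot=200, seed=1):
    """Optimism correction: refit on each bootstrap sample, compute
    optimism = (performance on bootstrap sample) - (performance on the
    original data), then subtract mean optimism from the apparent AUC."""
    rng = random.Random(seed)
    apparent = auc(y, fit(y, x)(x))
    optimism, valid = 0.0, 0
    for _ in range(n_boot):
        idx = [rng.randrange(len(y)) for _ in y]
        yb, xb = [y[i] for i in idx], [x[i] for i in idx]
        if 0 < sum(yb) < len(yb):  # both classes needed to refit
            model = fit(yb, xb)
            optimism += auc(yb, model(xb)) - auc(y, model(x))
            valid += 1
    return apparent - optimism / valid
```

With a real model, the refitting step repeats the entire modeling process (including variable selection) in each bootstrap sample; the correction estimates how much the apparent performance overstates performance in new patients.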
External Validation and Updating
From the development studies that were included in this review, we identified which models were repeatedly validated and updated. Of the 415 development studies and their models, only 111 (26.7%) were externally validated either in the development study itself or in subsequent independent external validation studies. Of these, 43 (38.7%) were solely validated externally within the development study itself (Supplemental Table 4). If validated at all, most models were externally validated only once (n=70). The remainder (n=41) were validated more often, with a median (IQR) of 3.0 (2.0–4.0). The number of models that were externally validated once, two to five times, and more than five times, stratified by population, is presented in Figure 6. Moreover, 17 development studies and their corresponding models were updated in a separate study. Most models (n=11) were updated only once, and the remainder (n=6) were updated with a median (IQR) of 2.5 (2.0–4.5). By far the most validated and updated model was the kidney failure risk equation (KFRE) with 34 external validation studies and eight update studies.25
Figure 6.

Number of times models were externally validated.
Validation of Prognostic Models from Other Populations
Of the included validation studies, 39.0% concerned prognostic models that were not originally intended for a CKD/RRT population, such as general comorbidity scores; for update studies, this was 48.4%. The model that was validated (n=19) and updated (n=14) most often was the Framingham risk score for cardiovascular events.26
Comparative External Validation Studies
We identified 35 comparative external validation studies in which the predictive abilities of more than two prognostic models were compared through validation within the same cohort. Two main types of comparative external validation studies were identified: (1) studies in which a newly developed model is compared with multiple already existing models and (2) studies that aimed to systematically search for and externally validate all or the most widely used models for a certain outcome. Three (8.6%) studies were of the first type, and the remaining 32 (91.4%) were of the second type. The number of models that were compared per study varied, with a median (IQR) of 4.0 (3.0–5.0) models. In three of these studies, more than ten models were validated.14,15,27
CKD Models
In total, 124 studies were included that presented models developed in a CKD population; most of these (n=75) predicted kidney disease progression, which encompasses outcomes such as dialysis initiation, eGFR decline, or an increase in serum creatinine. The second and third largest outcome categories for patients with CKD were mortality (n=18) and cardiovascular events/death (n=13). Two studies included patients receiving conservative care, of which one aimed to predict cardiovascular events and the other mortality.28,29
Of the 75 CKD models predicting kidney disease progression, most (n=47, 62.7%) were never externally validated. Of the 18 models predicting mortality, 5 (27.7%) were externally validated once within the development study itself, and of the cardiovascular event models, 2 (15.4%) were externally validated within the development study. For CKD populations, the most validated and updated model was the aforementioned KFRE with 34 external validations and eight update studies, followed by the Framingham risk score, which was externally validated in eight studies and updated in ten.
Dialysis Models
For dialysis populations, we identified 151 models, most of which predicted mortality (n=80), cardiovascular events (n=20), and vascular access–related outcomes, such as arteriovenous fistula failure or maturation (n=16).
Of the models predicting mortality, cardiovascular events, and vascular access–related outcomes, 72.2%, 90.0%, and 87.5%, respectively, were never externally validated. Furthermore, 6.3%, 5.0%, and 0.0%, respectively, were externally validated solely within the development study itself. Three models predicting mortality were updated, as were one model predicting cardiovascular events and one model for vascular access–related outcomes.
In the dialysis population, the Geriatric Nutritional Risk Index (GNRI) and the Charlson Comorbidity Index (CCI) were validated most often. The GNRI and the CCI were validated in ten and nine separate studies, respectively. Both models were originally developed in different populations to predict long-term mortality.30,31 In addition, the GNRI and CCI were both updated once.
Kidney Transplantation Models
Of the 178 models developed for kidney transplantation populations, the majority predicted graft survival (n=54), recipient mortality (n=31), and delayed graft function (n=24).
Most models for graft survival (64.2%), recipient mortality (74.2%), and delayed graft function (66.7%) were never externally validated. Moreover, 18.9%, 12.9%, and 8.3%, respectively, were only externally validated within their development study. Four models for graft survival, one model for mortality, and one model for delayed graft function were updated.
The model that was most often validated and updated was the Kidney Donor Risk Index (KDRI) with a total of 26 external validation studies and five update studies. The KDRI was originally developed by Rao et al. in 2009 to predict recipient and graft survival after kidney transplantation.32 A nomogram to predict delayed graft function after kidney transplantation by Irish et al. and its updated version were externally validated in 11 separate external validation studies.33,34 Three other models predicting patient and graft survival and delayed graft function were externally validated and updated relatively frequently.35–37
Discussion
This scoping review provides a comprehensive overview of 602 studies describing the development, validation, or updating of prognostic models in patients with CKD and/or RRT, including information on their reporting and methodology, predicted outcome definitions and external validation, and updating of existing models.
Populations
Many prediction models exist for a wide variety of patients with CKD. However, certain patient groups are currently severely underrepresented. For example, only two studies included patients receiving conservative care as opposed to RRT. Moreover, very few studies include patients from South America, Australia, and Africa. Relevant predictors and model performance vary considerably depending on differences in populations and their health care systems. It therefore remains unknown whether any of the existing prediction models are generalizable to patients from these underrepresented continents.
Outcomes
From our findings it becomes apparent that current prognostic research mainly focuses on traditional clinical outcomes, such as kidney disease progression, mortality, graft survival, and cardiovascular events. These outcomes are instinctively of great importance to both clinicians and patients themselves, and they have the potential to positively contribute to patient care by identifying high-risk patients and treatment opportunities and by supporting prognostic information provision. However, the patient perspective remains rather unexplored in prognostic research. The importance of patient-reported outcomes (PROs), such as health-related QoL (HRQoL) and symptom burden, and their potential for use in clinical practice is increasingly recognized. Although this is a relatively novel take on prognostic research in all medical fields, some models have been developed for PROs like HRQoL in other populations, proving that it is feasible to predict outcomes other than mortality and treatment initiation.38,39 In 2018, a Kidney Disease Improving Global Outcomes (KDIGO) report also urged prognostic model development for outcomes other than kidney failure, such as functional status and hospitalizations.40 However, on the basis of our findings, we can conclude that methodologically robust, validated models for such outcomes currently do not exist in the field of nephrology. An important barrier to the development of prognostic models to predict PROs is the limited uptake of such outcomes in data sets. Therefore, we recommend increasing the inclusion of PROs in novel data sources.
Methodology and Reporting of Studies
Methodology and reporting of prediction studies is often inadequate, and from this review we can conclude that prognostic research for kidney patients is no exception.41,42 First, many studies included a relatively small sample of patients. When developing a prognostic model, it is crucial to use a sufficient sample size to reduce the risk of overfitting and consequently inaccurate risk predictions.43 Recently, new ways of calculating the required sample size for the development of prognostic models were proposed.44–46 Second, in our sample of studies published after the release of the TRIPOD statement in 2015, only about 14% referenced the reporting guideline. The TRIPOD checklist was published to aid researchers in the reporting of their prediction modeling studies and consequently improve the transparency of reporting.22 Besides the reporting aspect, advice on robust methodology is given in the Explanation and Elaboration document of the TRIPOD.23 Because artificial intelligence–based models are rapidly gaining popularity, a TRIPOD guideline specifically for the reporting of artificial intelligence model studies is currently in development.47 Third, when validating a prognostic model, both discrimination and calibration are essential performance measures to assess its predictive abilities. Although most studies in our sample reported on the discriminative abilities of the model, far fewer also reported a form of calibration.
This is in line with previous systematic reviews of non-nephrological studies, which also found that calibration often receives little attention.48,49 Not assessing and reporting measures of calibration poses risks for clinical practice when a model is used for decision making because a poorly calibrated model can lead to misleading risk predictions and misclassification.50 Fourth, only a little more than half of the development studies reported a useable prognostic model, consisting of either a full regression formula or a risk score; without this information, validation or further implementation of the model is impossible. Naturally, to combat research waste and to make it possible for a model to be implemented into clinical practice, it is vital to report the information necessary to use the model to estimate risks. Finally, algorithmic bias remains a neglected topic in prediction modeling. Algorithmic bias refers to systematic errors that lead to different prediction performance in different groups of people. Algorithmic bias may arise as a result of systematic and institutional biases represented in data (e.g., people of color are less likely to receive indicated medication) or during the modeling process (e.g., due to biases of the person building the model). Because of reduced human control, artificial intelligence–based models are more prone to algorithmic biases because they may unintentionally reproduce biases in the data. It is important to take measures to prevent algorithmic bias, for example, by ensuring that the data used to train the algorithm are diverse and representative, by determining whether there is differential model performance between subgroups (e.g., men versus women), and by knowing what variables the model uses to make a prediction.
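Checking for differential performance between subgroups can be as simple as stratifying a performance measure. A minimal sketch using calibration-in-the-large per subgroup (the grouping variable and data are hypothetical, for illustration only):

```python
def calibration_by_subgroup(y, p, group):
    """Mean observed risk minus mean predicted risk within each subgroup;
    values far from zero in one group but not another flag differential
    miscalibration, one detectable symptom of algorithmic bias."""
    result = {}
    for g in set(group):
        ys = [yi for yi, gi in zip(y, group) if gi == g]
        ps = [pi for pi, gi in zip(p, group) if gi == g]
        result[g] = sum(ys) / len(ys) - sum(ps) / len(ps)
    return result
```

The same stratification can be applied to discrimination measures; a model can discriminate equally well in two subgroups yet be systematically miscalibrated in one of them.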
Over time, an increase in attention for methodology and reporting can be seen: More studies reported measures of discrimination and referenced the TRIPOD guideline. However, reporting of calibration and the performance of external validation still lag behind.
Validation and Updating
To bring prognostic models one step closer to their implementation, external validation is crucial. Not adequately validating prognostic models that are intended to be used in clinical decision making may negatively influence patient outcomes. For example, overpredicting or underpredicting the risk of a certain outcome may lead to erroneous treatment decisions.51
In this review, we have found that of the 447 models for kidney patients, the majority have either never been tested in a new set of patients or have been tested only a few times, meaning that their generalizability and real-world applicability remain unknown. Moreover, the world and the people in it are constantly subject to change. Therefore, validation, and updating where necessary, in the target population in which a prognostic model is intended to be used is crucial for ensuring its accuracy, reliability, and clinical utility and should be performed more frequently.52
More recently, comparative comprehensive external validation studies have been introduced, in which multiple prognostic models are compared. In this review, we identified 35 comparative external validation studies. This type of study allows for a direct comparison of the performance of different models and can provide insight into the strengths and weaknesses of each model. Besides testing the generalizability of the included models, comparative external validation studies can serve two other major aims: (1) They can help researchers and clinicians to identify the best performing model for a given outcome, allowing us to choose the most promising model for use in clinical practice, and (2) they can help to identify whether a new model is significantly better than already implemented models, making it possible to determine whether the effort and resources required to actually implement the new model are justified. Currently, most comparative external validation studies compare only a few prognostic models. To really be able to identify the best model for a specific clinical question, we recommend performing a systematic search to identify all relevant models for that research question. These models can then be compared in one study, and the best model can be presented. Such comprehensive comparative external validation studies are not often performed yet, but good examples exist also in the field of nephrology.14,17,27
Impact Assessment
After the development, validation, and updating stages, it is time to look into implementation of prognostic models. An important step in this process is assessing their impact on patient care by performing so-called impact studies. The aim of an impact study is to quantify the effect of using a prognostic model in clinical practice on decision making and, consequently, patient outcomes. The preferred setting for an impact study is a cluster randomized controlled trial (RCT) in which one cluster (e.g., a hospital) is randomized to using the prognostic model and the other to not using it. However, because an RCT is often not feasible, another option for impact assessment is to compare outcomes before and after the implementation of a model in an observational study.53 Impact studies are currently rarely performed, and they remain uncharted territory, despite their potential to further support clinical implementation of prognostic models. Although our search was not specifically designed to find impact studies, we came across five observational studies that performed some form of impact assessment.54–58 In addition, we found two protocols for RCTs of which the results have not yet been published.59,60 It is important to note that numerous models may be used in clinical practice without formal reporting of this usage in the literature. Implementation of prognostic models could thus also be assessed using surveys among nephrologists and/or patients.8
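As an illustration of the before-after observational design, the analysis can be as simple as comparing an outcome proportion across the two periods. The counts and the outcome below are entirely hypothetical, and the example assumes SciPy is available:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts (all numbers invented): late nephrologist referrals
# before and after a prognostic model was introduced in a clinic.
#        [late referral, timely referral]
before = [120, 280]   # 400 patients pre-implementation
after = [80, 320]     # 400 patients post-implementation

# 2x2 chi-square test of whether the late-referral proportion changed
chi2, p_value, dof, expected = chi2_contingency([before, after])

rate_before = before[0] / sum(before)
rate_after = after[0] / sum(after)
print(f"late-referral rate: {rate_before:.2f} -> {rate_after:.2f}, p = {p_value:.4f}")
```

Such a crude pre-post contrast is vulnerable to secular trends and case-mix changes, which is exactly why a cluster RCT remains the preferred design where feasible.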
Clinical Use of Prognostic Models
We believe prognostic models have the potential to be of significant value in kidney patient care for a number of reasons. First, studies have shown that most patients with CKD experience feelings of uncertainty and anxiety regarding their future and their disease progression.61 Prognostic uncertainty is a recurring theme in this fragile population, and patients with CKD have explicitly described a wish for more information about their future because they value knowing what to expect when it comes to living with the disease.7,61,62 Examples are important clinical end points, such as kidney failure and mortality, but also PROs, such as HRQoL and symptom burden. By providing patients with such information on the expected course of the disease, these models can foster a sense of control over their condition while also facilitating their active involvement in shared decision making. Second, when nephrologists inform their patients about their expected disease trajectory and advise on treatment options, they are implicitly making a risk assessment for each individual patient. This risk assessment is not a formal calculation but an intuitive assessment or extrapolation of a certain variable, such as the eGFR slope. Prognostic models cannot replace the clinical judgment of a nephrologist, but they may be a valuable addition to it because they provide a formal, individualized predicted risk on the basis of a multitude of patient characteristics and can also help less experienced clinicians develop a sense of risk assessment. Third, prediction models have proven to be of value in health care planning and referral strategies. Because kidney disease is a chronic illness with widely varying trajectories and treatment options, in which shared decision making plays a large role, many choices may be aided by prognostic models that help provide patients with a realistic outlook on their journey with CKD.
Prognostic models are used to assess the risk of progression in early CKD stages and to guide nephrologist referral. In addition, prognostic models are currently used in the United States to support kidney transplant allocation by predicting the risk of kidney graft failure in a specific recipient. In the future, prognostic models could provide realistic insights into post-kidney transplantation QoL, aligning patient expectations with potential outcomes. Other specific examples of the use of prognostic models include predicting kidney failure for timely vascular access surgery referral and mortality risk prediction for different forms of RRT, which helps patients make more informed choices between treatment modalities. Finally, it is worth mentioning the concept of counterfactual prediction. Counterfactual prediction aims to predict how a patient might fare on various alternative treatment plans. This entails combining prediction and etiology and ideally provides individualized risks for different treatment options, such as conservative care versus dialysis.29 Counterfactual prediction is still largely uncharted territory and a methodological challenge due to confounding, but it is an important research area to explore in the years to come, particularly for the nephrological population.
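One naive way to sketch the idea of counterfactual prediction is to fit a single outcome model that includes treatment as a covariate and then predict each patient's risk under every treatment option (sometimes called an "S-learner"). Everything below is synthetic and, importantly, the treatment is randomized in this toy example; in observational data this approach would be biased by confounding, which is exactly the methodological challenge mentioned above:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Synthetic cohort (all variables invented): one covariate, a binary
# treatment (0 = conservative care, 1 = dialysis), and 2-year mortality.
# Treatment is randomized here, so there is no confounding by design.
n = 2000
x = rng.normal(size=n)
treat = rng.binomial(1, 0.5, size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.7 * x - 0.5 * treat - 0.5))))

# Single outcome model with treatment as a covariate ("S-learner")
X = np.column_stack([x, treat])
model = LogisticRegression().fit(X, y)

# Counterfactual risks: predict the SAME patients under both options
risk_conservative = model.predict_proba(np.column_stack([x, np.zeros(n)]))[:, 1]
risk_dialysis = model.predict_proba(np.column_stack([x, np.ones(n)]))[:, 1]

# Individualized predicted risk difference between the two plans
risk_diff = risk_dialysis - risk_conservative
print("mean predicted risk difference:", round(risk_diff.mean(), 3))
```

With observational data, confounding adjustment (e.g., via causal inference methods) would be needed before such per-patient risk differences could be interpreted as treatment effects.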
Implemented Models for Kidney Patients
Although the uptake of prognostic models in nephrology seems limited compared with other medical fields, such as oncology, a few models have actually been implemented in clinical practice. First, the previously mentioned KFRE has been extensively validated, and previous impact studies have shown the benefit of using the model in patient care.25,63,64 The model was originally intended to predict kidney failure requiring RRT among patients with CKD stage 3–5 but is now also recommended for use by general practitioners to help them decide whether patients need to be referred to a nephrologist and for RRT planning. The KFRE is now endorsed in several guidelines, such as the UK National Institute for Health and Care Excellence guidelines, the KDIGO guidelines, and the Kidney Disease Outcomes Quality Initiative guidelines. Another model that is recommended in one of the KDIGO guidelines is a model proposed by Grams et al., which predicts the risk of ESKD in healthy people who consider donating their kidney.65,66 Finally, the current US kidney transplant allocation system is partly based on the KDRI, which predicts the quality of kidney grafts and recipient life expectancy.32,58,67
Prognostic models that are endorsed in clinical guidelines, such as the KFRE and the KDRI, tend to see better uptake in clinical practice. The inclusion of a prognostic model in clinical guidelines strengthens its credibility, enhancing health care providers' trust in and adoption of the model.
Implementation Barriers
Besides the abovementioned shortcomings of current prediction research for kidney patients, there are other barriers that hinder the implementation of prognostic models. First, the potential of prognostic models is not yet widely recognized by clinicians. A recent study has shown that almost half of nephrologists did not use prognostic models. Reasons for this included not knowing how to use such models, not knowing where to find them, and thinking that current models are not reliable enough. In addition, nephrologists worried that prediction models may give patients false expectations, do not give enough information about the individual in front of them, and require too much time.8 Instead of continuously developing novel models, more attention needs to be paid to these barriers and how to overcome them. Second, novel models or updates to existing models frequently introduce predictors that are hard to obtain in routine clinical practice, such as specialized biochemical markers or complex imaging techniques, which may pose practical difficulties for integration into standard clinical workflows. This limitation underscores the need to develop prognostic models that rely on readily accessible predictors, facilitating their implementation in everyday patient care. Finally, another perceived barrier to the implementation of prognostic models that deserves more attention is the classification of prognostic models as medical devices under the Medical Devices Regulations.68
Implementation Opportunities and Future Suggestions
There are various opportunities to further implement prognostic models in kidney patient care. These suggestions, described below, are summarized in Box 1.
Box 1.
Implementation opportunities and future suggestions.
Opportunities to further implement prognostic models in kidney patient care exist. In this review, several approaches are described.
More focus on rigorous methodological and reporting quality when conducting a prediction study. To do so, we recommend adherence to the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) guideline.
New models should be developed for patient-reported outcomes.
Instead of continuously developing new models for the same outcomes, existing models can be externally validated and updated to improve their predictive abilities in the CKD and RRT population.
Two important steps should be undertaken more often: (1) comparative external validation studies and (2) impact studies.
Promising models with good predictive abilities and positive impact on clinical practice should be endorsed in clinical guidelines.
Barriers to the implementation of prognostic models, such as limited trust from nephrologists, and ways to overcome them should be further investigated.
First, researchers should focus on rigorous methodology and reporting when conducting a prediction study. To do so, we recommend adherence to the TRIPOD guideline. Second, to combat research waste and to build on the knowledge obtained from previous studies, promising models should be externally validated and, if necessary, updated to improve their predictive abilities in the CKD and RRT population. Third, to reduce the gap between research and clinical practice, two important steps should be undertaken: (1) comprehensive comparative external validation studies to guide clinicians and researchers in choosing the most appropriate prognostic model and to facilitate direct comparison between existing models and (2) impact studies in which the effect of such prognostic models on patient care is evaluated. Fourth, prognostic models with good predictive abilities and a positive impact on clinical practice should be endorsed in clinical guidelines. For example, the use of prognostic models to predict kidney disease progression for timely referral to a nephrologist is currently encouraged in the KDIGO 2012 Clinical Practice Guideline for the Evaluation and Management of CKD, and three prognostic models are referred to.25,69–71 In addition, the KDIGO guideline on the evaluation and care of living kidney donors advises the use of a publicly available online risk calculator to predict the risk of end-stage renal disease in healthy people who consider donating their kidney.65 Fifth, even models that perform well and have been extensively validated are often not used in clinical practice because of limited trust and other barriers experienced by nephrologists. To overcome these barriers, more research is needed into why they exist and what is needed for nephrologists to feel ready to use prognostic models. Sixth, new models should be developed for PROs.
By doing so, these new models can be used to improve information provision in clinical consultations and, ultimately, to support the shared decision-making process and patient-centered care provision. Finally, well-performing models should be used in clinical practice to inform patients and to support treatment decisions. Guidance on how risk predictions should be communicated exists.72
Strengths and Limitations
This scoping review has several strengths, including its broad systematic search and thorough screening process with two independent reviewers. Although the large scale of this study is a major strength, it also comes with some potential drawbacks. First, the vast number of studies that were screened may have led to some unjustified inclusions or exclusions because studies were mainly included on the basis of their abstract. However, given the large number of included studies, it is unlikely that these influenced the results and conclusions in a significant manner. Second, formal risk of bias assessment of the included studies, using, for example, the Prediction model Risk of Bias Assessment Tool, was not feasible because of the scope of this review.73 Thus, despite our efforts to report on the methodological and reporting quality of the studies, this information is limited. Finally, although most papers are published in English, the restriction to English language studies may have inadvertently led to underrepresentation of research from non–English-speaking regions.
In conclusion, the body of literature on prognostic modeling in the field of nephrology has grown immensely, and to achieve more cohesion within the field, it is crucial that we take inventory of the work that has been done over the past 40 years. In this scoping review, we have provided the reader with a comprehensive overview of all prognostic modeling studies for kidney patients, their reporting and methodology, predicted outcome definitions, and potential implementation opportunities. A plethora of prognostic models now exist for patients with CKD, of which a small part is promising for clinical uptake. Using this review, we can reflect on what has and has not been achieved so far, and more harmonious plans for the future can be made. This could include not only setting up large consortia to validate models on a larger, structural scale, with attention to the representation of various patient groups and ethnicities, but also policy on which outcomes are missing from available models and how we can make a joint effort to give patients the individualized prognostic information that is essential to furthering shared decision making and their own understanding of their disease.
Supplementary Material
Acknowledgments
We would like to thank Ilse Jansma, head of the medical information specialists of the Leiden University library, for her assistance with developing a search string for our systematic search. The funders of this study did not influence in any way the study design, data collection and analysis, interpretation of the results, writing process, or the decision to publish the paper.
Footnotes
See related article, “Predicting Outcomes in Nephrology: Lots of Tools, Limited Uptake: How Do We Move Forward?,” on pages 361–363.
Disclosures
W.J.W. Bos reports Research Funding: Dutch Kidney Foundation, Leading the Change, St. Antonius Research Fund, and ZonMW; and Advisory or Leadership Role: Chair ICHOM Working Group CKD, various committees on Value Based Healthcare of the Association of Dutch University Hospitals (NFU), various committees of the Dutch Nephrology Quality Initiative “Nefrovisie,” and Linnean Initiative. F.W. Dekker reports Research Funding: Astellas, Chiesi, and Vifor. J.I. Rotmans reports Consultancy: Xeltis BV; Advisory or Leadership Role: Advisory Board Nextkidney and President Vascular Access Society; and Other Interests or Relationships: Chair Thematic Working Group Vascular Tissue Engineering at TERMIS, Member Guideline Committee Dutch Society of Nephrology, and President-Elect Vascular Access Society. M. van Diepen reports Employer: Leiden University Medical Center; spouse: ING Bank. All remaining authors have nothing to disclose.
Funding
J. Milders, C.L. Ramspek, R.J. Janse, and M. van Diepen are supported by a grant from the Dutch Kidney Foundation (20OK016).
Author Contributions
Conceptualization: Willem Jan W. Bos, Friedo W. Dekker, Roemer J. Janse, Jet Milders, Chava L. Ramspek, Joris I. Rotmans, Merel van Diepen.
Data curation: Jet Milders, Chava L. Ramspek, Merel van Diepen.
Formal analysis: Jet Milders, Merel van Diepen.
Funding acquisition: Merel van Diepen.
Investigation: Friedo W. Dekker, Jet Milders, Merel van Diepen.
Methodology: Friedo W. Dekker, Jet Milders, Merel van Diepen.
Project administration: Jet Milders, Merel van Diepen.
Resources: Jet Milders, Merel van Diepen.
Software: Roemer J. Janse, Jet Milders, Merel van Diepen.
Supervision: Friedo W. Dekker, Merel van Diepen.
Validation: Jet Milders, Merel van Diepen.
Visualization: Friedo W. Dekker, Roemer J. Janse, Jet Milders, Merel van Diepen.
Writing – original draft: Jet Milders.
Writing – review & editing: Willem Jan W. Bos, Friedo W. Dekker, Roemer J. Janse, Chava L. Ramspek, Joris I. Rotmans, Merel van Diepen.
Data Sharing Statement
An Excel file with all extracted data accompanied by a data dictionary is publicly available on https://github.com/jmilders.
Supplemental Material
This article contains the following supplemental material online at http://links.lww.com/JSN/E564.
Detailed description of the search strategies for the scoping review.
Search string for PubMed.
Search string for Embase.
Supplemental Table 1. Preferred reporting items for systematic reviews and meta-analyses extension for scoping reviews (PRISMA-ScR) checklist.
Explanation of the definitions that were used during the data extraction process.
Supplemental Figure 1. Continents from which patient populations were derived.
Supplemental Table 2. Outcome categories identified per population.
Supplemental Figure 2. Number of studies that referenced the TRIPOD statement.
Supplemental Figure 3. Percentage of studies that reported measures of discrimination and calibration over time.
Supplemental Figure 4. Flow chart depicting how many models were presented in a useable format, how many were accompanied by a measure of discrimination and calibration, how many models were validated internally and externally, and how many models were developed in a sample size larger than 250 patients.
Supplemental Figure 5. Percentage of studies that performed internal and external validations within the same study over time.
Supplemental Table 3. Methodological and reporting quality of included studies per population.
Supplemental Table 4. External validation of models proposed in the development studies.
References
1. Go AS, Chertow GM, Fan D, McCulloch CE, Hsu CY. Chronic kidney disease and the risks of death, cardiovascular events, and hospitalization. N Engl J Med. 2004;351(13):1296–1305. doi: 10.1056/NEJMoa041031
2. van der Velde M, Matsushita K, Coresh J, et al.; Chronic Kidney Disease Prognosis Consortium. Lower estimated glomerular filtration rate and higher albuminuria are associated with all-cause and cardiovascular mortality. A collaborative meta-analysis of high-risk population cohorts. Kidney Int. 2011;79(12):1341–1352. doi: 10.1038/ki.2010.536
3. Astor BC, Matsushita K, Gansevoort RT, et al.; Chronic Kidney Disease Prognosis Consortium. Lower estimated glomerular filtration rate and higher albuminuria are associated with mortality and end-stage renal disease. A collaborative meta-analysis of kidney disease population cohorts. Kidney Int. 2011;79(12):1331–1340. doi: 10.1038/ki.2010.550
4. Perlman RL, Finkelstein FO, Liu L, et al. Quality of life in chronic kidney disease (CKD): a cross-sectional analysis in the Renal Research Institute-CKD study. Am J Kidney Dis. 2005;45(4):658–666. doi: 10.1053/j.ajkd.2004.12.021
5. Schmidt IM, Hübner S, Nadal J, et al. Patterns of medication use and the burden of polypharmacy in patients with chronic kidney disease: the German Chronic Kidney Disease study. Clin Kidney J. 2019;12(5):663–672. doi: 10.1093/ckj/sfz046
6. Al-Mansouri A, Al-Ali FS, Hamad AI, et al. Assessment of treatment burden and its impact on quality of life in dialysis-dependent and pre-dialysis chronic kidney disease patients. Res Social Adm Pharm. 2021;17(11):1937–1944. doi: 10.1016/j.sapharm.2021.02.010
7. de Jong Y, van der Willik EM, Milders J, et al. Person centred care provision and care planning in chronic kidney disease: which outcomes matter? A systematic review and thematic synthesis of qualitative studies: care planning in CKD: which outcomes matter? BMC Nephrol. 2021;22(1):309. doi: 10.1186/s12882-021-02489-6
8. van der Horst DEM, Engels N, Hendrikx J, et al. Predicting outcomes in chronic kidney disease: needs and preferences of patients and nephrologists. BMC Nephrol. 2023;24(1):66. doi: 10.1186/s12882-023-03115-3
9. Vogenberg FR. Predictive and prognostic models: implications for healthcare decision-making in a modern recession. Am Health Drug Benefits. 2009;2(6):218–222.
10. Engels N, de Graav GN, van der Nat P, van den Dorpel M, Stiggelbout AM, Bos WJ. Shared decision-making in advanced kidney disease: a scoping review. BMJ Open. 2022;12(9):e055248. doi: 10.1136/bmjopen-2021-055248
11. Lerner B, Desrochers S, Tangri N. Risk prediction models in CKD. Semin Nephrol. 2017;37(2):144–150. doi: 10.1016/j.semnephrol.2016.12.004
12. Kadatz MJ, Lee ES, Levin A. Predicting progression in CKD: perspectives and precautions. Am J Kidney Dis. 2016;67(5):779–786. doi: 10.1053/j.ajkd.2015.11.007
13. Forzley B, Chiu HHL, Djurdjev O, et al. A survey of Canadian nephrologists assessing prognostication in end-stage renal disease. Can J Kidney Health Dis. 2017;4:2054358117725294. doi: 10.1177/2054358117725294
14. Ramspek CL, Evans M, Wanner C, et al.; EQUAL Study Investigators. Kidney failure prediction models: a comprehensive external validation study in patients with advanced CKD. J Am Soc Nephrol. 2021;32(5):1174–1186. doi: 10.1681/ASN.2020071077
15. de Jong Y, Ramspek CL, van der Endt VHW, et al. A systematic review and external validation of stroke prediction models demonstrates poor performance in dialysis patients. J Clin Epidemiol. 2020;123:69–79. doi: 10.1016/j.jclinepi.2020.03.015
16. Tangri N, Kitsios GD, Inker LA, et al. Risk prediction models for patients with chronic kidney disease: a systematic review. Ann Intern Med. 2013;158(8):596–603. doi: 10.7326/0003-4819-158-8-201304160-00004
17. Ramspek CL, Voskamp PW, van Ittersum FJ, Krediet RT, Dekker FW, van Diepen M. Prediction models for the mortality risk in chronic dialysis patients: a systematic review and independent external validation study. Clin Epidemiol. 2017;9:451–464. doi: 10.2147/CLEP.S139748
18. Arksey H, O'Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32. doi: 10.1080/1364557032000119616
19. Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009;339:b2700. doi: 10.1136/bmj.b2700
20. Tricco AC, Lillie E, Zarin W, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169(7):467–473. doi: 10.7326/M18-0850
21. Munn Z, Peters MDJ, Stern C, Tufanaru C, McArthur A, Aromataris E. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med Res Methodol. 2018;18(1):143. doi: 10.1186/s12874-018-0611-x
22. Collins GS, Reitsma JB, Altman DG, Moons KG. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD Statement. BMC Med. 2015;13:1. doi: 10.1186/s12916-014-0241-z
23. Moons KG, Altman DG, Reitsma JB, et al. Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD): explanation and elaboration. Ann Intern Med. 2015;162(1):W1–W73. doi: 10.7326/M14-0698
24. Hutchinson TA, Thomas DC, MacGibbon B. Predicting survival in adults with end-stage renal disease: an age equivalence index. Ann Intern Med. 1982;96(4):417–423. doi: 10.7326/0003-4819-96-4-417
25. Tangri N, Stevens LA, Griffith J, et al. A predictive model for progression of chronic kidney disease to kidney failure. JAMA. 2011;305(15):1553–1559. doi: 10.1001/jama.2011.451
26. Wilson PW, D'Agostino RB, Levy D, Belanger AM, Silbershatz H, Kannel WB. Prediction of coronary heart disease using risk factor categories. Circulation. 1998;97(18):1837–1847. doi: 10.1161/01.cir.97.18.1837
27. van Rijn MHC, van de Luijtgaarden M, van Zuilen AD, et al. Prognostic models for chronic kidney disease: a systematic review and external validation. Nephrol Dial Transplant. 2021;36(10):1837–1850. doi: 10.1093/ndt/gfaa155
28. Nemcsik J, Tabák Á, Batta D, Cseprekál O, Egresits J, Tislér A. Integrated central blood pressure-aortic stiffness risk score for cardiovascular risk stratification in chronic kidney disease. Physiol Int. 2018;105(4):335–346. doi: 10.1556/2060.105.2018.4.29
29. Ramspek CL, Verberne WR, van Buren M, Dekker FW, Bos WJW, van Diepen M. Predicting mortality risk on dialysis and conservative care: development and internal validation of a prediction tool for older patients with advanced chronic kidney disease. Clin Kidney J. 2021;14(1):189–196. doi: 10.1093/ckj/sfaa021
30. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40(5):373–383. doi: 10.1016/0021-9681(87)90171-8
31. Bouillanne O, Morineau G, Dupont C, et al. Geriatric nutritional risk index: a new index for evaluating at-risk elderly medical patients. Am J Clin Nutr. 2005;82(4):777–783. doi: 10.1093/ajcn/82.4.777
32. Rao PS, Schaubel DE, Guidinger MK, et al. A comprehensive risk quantification score for deceased donor kidneys: the kidney donor risk index. Transplantation. 2009;88(2):231–236. doi: 10.1097/TP.0b013e3181ac620b
33. Irish WD, McCollum DA, Tesi RJ, et al. Nomogram for predicting the likelihood of delayed graft function in adult cadaveric renal transplant recipients. J Am Soc Nephrol. 2003;14(11):2967–2974. doi: 10.1097/01.ASN.0000093254.31868.85
34. Irish WD, Ilsley JN, Schnitzler MA, Feng S, Brennan DC. A risk prediction model for delayed graft function in the current era of deceased donor renal transplantation. Am J Transplant. 2010;10(10):2279–2286. doi: 10.1111/j.1600-6143.2010.03179.x
35. Nyberg SL, Matas AJ, Rogers M, et al. Donor scoring system for cadaveric renal transplantation. Am J Transplant. 2001;1(2):162–170. doi: 10.1034/j.1600-6143.2001.10211.x
36. Nyberg SL, Matas AJ, Kremers WK, et al. Improved scoring system to assess adult donors for cadaver renal transplantation. Am J Transplant. 2003;3(6):715–721. doi: 10.1034/j.1600-6143.2003.00111.x
37. Molnar MZ, Nguyen DV, Chen Y, et al. Predictive score for posttransplantation outcomes. Transplantation. 2017;101(6):1353–1364. doi: 10.1097/TP.0000000000001326
38. Jang KS, Jeon GS. Prediction model for health-related quality of life in hospitalized patients with pulmonary tuberculosis. J Korean Acad Nurs. 2017;47(1):60–70. doi: 10.4040/jkan.2017.47.1.60
39. Lee SK, Son YJ, Kim J, et al. Prediction model for health-related quality of life of elderly with chronic diseases using machine learning techniques. Healthc Inform Res. 2014;20(2):125–134. doi: 10.4258/hir.2014.20.2.125
40. Eckardt KU, Bansal N, Coresh J, et al.; Conference Participants. Improving the prognosis of patients with severely decreased glomerular filtration rate (CKD G4+): conclusions from a Kidney Disease: Improving Global Outcomes (KDIGO) Controversies Conference. Kidney Int. 2018;93(6):1281–1292. doi: 10.1016/j.kint.2018.02.006
41. Bouwmeester W, Zuithoff NP, Mallett S, et al. Reporting and methods in clinical prediction research: a systematic review. PLoS Med. 2012;9(5):1–12. doi: 10.1371/journal.pmed.1001221
42. Andaur Navarro CL, Damen JAA, Takada T, et al. Risk of bias in studies on prediction models developed using supervised machine learning techniques: systematic review. BMJ. 2021;375:n2281. doi: 10.1136/bmj.n2281
43. Christodoulou E, van Smeden M, Edlinger M, et al. Adaptive sample size determination for the development of clinical prediction models. Diagn Progn Res. 2021;5(1):6. doi: 10.1186/s41512-021-00096-5
44. Riley RD, Ensor J, Snell KIE, et al. Calculating the sample size required for developing a clinical prediction model. BMJ. 2020;368:m441. doi: 10.1136/bmj.m441
45. Riley RD, Snell KI, Ensor J, et al. Minimum sample size for developing a multivariable prediction model: PART II - binary and time-to-event outcomes. Stat Med. 2019;38(7):1276–1296. doi: 10.1002/sim.7992
46. Riley RD, Snell KIE, Ensor J, et al. Minimum sample size for developing a multivariable prediction model: Part I - continuous outcomes. Stat Med. 2019;38(7):1262–1275. doi: 10.1002/sim.7993
47. Collins GS, Dhiman P, Andaur Navarro CL, et al. Protocol for development of a reporting guideline (TRIPOD-AI) and risk of bias tool (PROBAST-AI) for diagnostic and prognostic prediction model studies based on artificial intelligence. BMJ Open. 2021;11(7):e048008. doi: 10.1136/bmjopen-2020-048008
48. Van Calster B, McLernon DJ, van Smeden M, Wynants L, Steyerberg EW; Topic Group ‘Evaluating diagnostic tests and prediction models’ of the STRATOS initiative. Calibration: the Achilles heel of predictive analytics. BMC Med. 2019;17(1):230. doi: 10.1186/s12916-019-1466-7
49. Collins GS, de Groot JA, Dutton S, et al. External validation of multivariable prediction models: a systematic review of methodological conduct and reporting. BMC Med Res Methodol. 2014;14:40. doi: 10.1186/1471-2288-14-40
50. Van Calster B, Vickers AJ. Calibration of risk prediction models: impact on decision-analytic performance. Med Decis Making. 2015;35(2):162–169. doi: 10.1177/0272989X14547233
51. Ramspek CL, Jager KJ, Dekker FW, Zoccali C, van Diepen M. External validation of prognostic models: what, why, how, when and where? Clin Kidney J. 2021;14(1):49–58. doi: 10.1093/ckj/sfaa188
52. Sperrin M, Riley RD, Collins GS, Martin GP. Targeted validation: validating clinical prediction models in their intended population and setting. Diagn Progn Res. 2022;6(1):24. doi: 10.1186/s41512-022-00136-8
53. Moons KG, Altman DG, Vergouwe Y, Royston P. Prognosis and prognostic research: application and impact of prognostic models in clinical practice. BMJ. 2009;338:b606. doi: 10.1136/bmj.b606
54. Bae S, Massie AB, Luo X, Anjum S, Desai NM, Segev DL. Changes in discard rate after the introduction of the kidney donor profile index (KDPI). Am J Transplant. 2016;16(7):2202–2207. doi: 10.1111/ajt.13769
55. Barbour SJ, Canney M, Coppo R, et al.; International IgA Nephropathy Network. Improving treatment decisions using personalized risk assessment from the International IgA Nephropathy Prediction Tool. Kidney Int. 2020;98(4):1009–1019. doi: 10.1016/j.kint.2020.04.042
56. Calisa V, Craig JC, Howard K, et al. Survival and quality of life impact of a risk-based allocation algorithm for deceased donor kidney transplantation. Transplantation. 2018;102(9):1530–1537. doi: 10.1097/TP.0000000000002144
57. Cannon RM, Brock GN, Marvin MR, Slakey DP, Buell JF. The contribution of donor quality to differential graft survival in African American and Caucasian renal transplant recipients. Am J Transplant. 2012;12(7):1776–1783. doi: 10.1111/j.1600-6143.2012.04091.x
58. Philipse E, Lee APK, Bracke B, et al. Does Kidney Donor Risk Index implementation lead to the transplantation of more and higher-quality donor kidneys? Nephrol Dial Transplant. 2017;32(11):1934–1938. doi: 10.1093/ndt/gfx257
59. Harasemiw O, Drummond N, Singer A, et al. Integrating risk-based care for patients with chronic kidney disease in the community: study protocol for a cluster randomized trial. Can J Kidney Health Dis. 2019;6:2054358119841611. doi: 10.1177/2054358119841611
60. Foucher Y, Meurette A, Daguin P, et al. A personalized follow-up of kidney transplant recipients using video conferencing based on a 1-year scoring system predictive of long term graft failure (TELEGRAFT study): protocol for a randomized controlled trial. BMC Nephrol. 2015;16:6. doi: 10.1186/1471-2369-16-6
61. Lopez-Vargas PA, Tong A, Phoon RK, Chadban SJ, Shen Y, Craig JC. Knowledge deficit of patients with stage 1-4 CKD: a focus group study. Nephrology (Carlton). 2014;19(4):234–243. doi: 10.1111/nep.12206
- 62.Tong A Sainsbury P Chadban S, et al. Patients' experiences and perspectives of living with CKD. Am J Kidney Dis. 2009;53(4):689–700. doi: 10.1053/j.ajkd.2008.10.050 [DOI] [PubMed] [Google Scholar]
- 63.Hingwala J Wojciechowski P Hiebert B, et al. Risk-based triage for nephrology referrals using the kidney failure risk equation. Can J Kidney Health Dis. 2017;4:2054358117722782. doi: 10.1177/2054358117722782 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 64. Hong R, Pirabhahar S, Turner K, Katz IJ. Triage system for nephrology referrals using the kidney failure risk equation (KFRE) score. Nephrology. 2020;25:53.
- 65. Grams ME, Garg AX, Lentine KL. Kidney-failure risk projection for the living kidney-donor candidate. N Engl J Med. 2016;374(21):2094–2095. doi: 10.1056/NEJMc1603007
- 66. The International Society of Nephrology (ISN). KDIGO 2017 Clinical Practice Guideline on the Evaluation and Care of Living Kidney Donors. 2017. Accessed June 12, 2023. https://kdigo.org/wp-content/uploads/2017/07/2017-KDIGO-LD-GL.pdf
- 67. Israni AK, Salkowski N, Gustafson S, et al. New national allocation policy for deceased donor kidneys in the United States and possible effect on patient outcomes. J Am Soc Nephrol. 2014;25(8):1842–1848. doi: 10.1681/ASN.2013070784
- 68. European Medicines Agency. Medical Devices. 2023. https://www.ema.europa.eu/en/human-regulatory/overview/medical-devices
- 69. Keane WF, Zhang Z, Lyle PA, et al.; RENAAL Study Investigators. Risk scores for predicting outcomes in patients with type 2 diabetes and nephropathy: the RENAAL study. Clin J Am Soc Nephrol. 2006;1(4):761–767. doi: 10.2215/CJN.01381005
- 70. Halbesma N, Jansen DF, Heymans MW, Stolk RP, de Jong PE, Gansevoort RT; PREVEND Study Group. Development and validation of a general population renal risk score. Clin J Am Soc Nephrol. 2011;6(7):1731–1738. doi: 10.2215/CJN.08590910
- 71. The International Society of Nephrology (ISN). KDIGO 2012 Clinical Practice Guideline for the Evaluation and Management of Chronic Kidney Disease. 2012. Accessed April 4, 2023. https://kdigo.org/wp-content/uploads/2017/02/KDIGO_2012_CKD_GL.pdf
- 72. University of Cambridge. Winton Centre for Risk and Evidence Communication. https://wintoncentre.maths.cam.ac.uk/resources/medicine/
- 73. Wolff RF, Moons KGM, Riley RD, et al.; PROBAST Group. PROBAST: a tool to assess the risk of bias and applicability of prediction model studies. Ann Intern Med. 2019;170(1):51–58. doi: 10.7326/M18-1376
Data Availability Statement
An Excel file with all extracted data, accompanied by a data dictionary, is publicly available at https://github.com/jmilders.

