Abstract
Purpose of review:
The universal adoption of electronic health records, improvements in technology, and the availability of continuous monitoring have generated large quantities of healthcare data. Machine learning is increasingly adopted by nephrology researchers to analyze these data in order to improve the care of their patients.
Recent findings:
In this review, we provide a broad overview of the different types of machine learning algorithms currently available and how researchers have applied these methods in nephrology research. Current applications include prediction of acute kidney injury, chronic kidney disease, and progression of kidney disease. Researchers have demonstrated the ability of machine learning to read kidney biopsy samples, identify patient outcomes from unstructured data, and identify subtypes in complex diseases; we also discuss its potential benefits for drug discovery. We end with a discussion of the ethics and potential pitfalls of machine learning.
Summary:
Machine learning provides researchers with the ability to analyze data that were previously inaccessible. While the field is still burgeoning, several studies show promising results that will enable researchers to perform larger-scale studies and clinicians to provide more personalized care. However, we must ensure that implementation aids providers and does not harm patients.
Keywords: Machine learning, nephrology, AKI, CKD
Introduction:
The use of information technology has increased exponentially in healthcare. Near-universal adoption of electronic health records (EHR), increasing use of continuous monitoring, and cost reductions in the collection, storage, and retrieval of patient information have generated unprecedented amounts of data and will usher in the age of big data in medicine.(1) Care protocols are increasingly data driven, and clinicians have more data to contextualize.(2) Artificial intelligence (AI), with its roots in theoretical computer science, is used in every aspect of our lives and may offer a way to leverage this ‘data deluge’ to improve the lives of our patients.
Machine Learning (ML) is a subset of AI comprising algorithms capable of adapting to and improving with repeated exposure to data. A formal description of such behavior is: “A machine learns with respect to a particular task (T), performance metric (P), and type of experience (E), if the system reliably improves its P at T, following multiple E’s”.(3) Below we review the basic aspects of ML/AI, use cases and recent papers on their use in nephrology, and future directions.
Machine Learning Algorithms
There is a broad range of ML algorithms differing in their mathematics/computational logic. Consequently, they vary in their applicability to diverse data, use cases, and in performance when applied to the same data. A popular way of classifying ML algorithms is by how much human supervision they require when training (Figure 1).(4)
Figure 1.
Classification of Machine Learning algorithms
Supervised
Training data contains information about desired outcomes (i.e., the data are labeled). Algorithms can be trained on the basis of this pre-populated knowledge. This includes both Classification – categorization of data – and Regression – prediction of a value for new data. The most important supervised learning algorithms are summarized in Table 1, and a brief example follows the table.
Table 1.
Supervised Learning Algorithms
Algorithm | Description |
---|---|
k-Nearest Neighbors | Classification on the basis of the closest data points in high dimensional space |
Decision Trees | Classification or Regression on the basis of hierarchical splitting in training data |
Random Forests | Combinations of Decision Trees working together |
Support Vector Machines | Classification on the basis of a separating hyperplane in high dimensional space |
Neural Networks | Classification or Regression on the basis of linear functions wrapped in non-linear activation functions. |
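To make the supervised setting concrete, here is a minimal sketch (using scikit-learn and synthetic, labeled data, not data from any study cited here) that trains two of the algorithms in Table 1 and scores them on held-out samples.

```python
# Minimal supervised-learning sketch (illustrative only; synthetic data).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic "labeled" data: 1,000 samples, 10 features, binary outcome.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

for model in (KNeighborsClassifier(n_neighbors=5),
              RandomForestClassifier(n_estimators=100, random_state=0)):
    model.fit(X_train, y_train)                                 # learn from labeled examples
    print(type(model).__name__, model.score(X_test, y_test))    # accuracy on held-out data
```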
Unsupervised
Training data has no information about desired outcomes (it is unlabeled). Algorithms find similarities within data (Clustering) or outliers (Anomaly detection). Such algorithms are also used to create simpler representations of complex data (Dimensionality reduction). A non-exhaustive list of such algorithms is given in Table 2, and a brief example follows the table.
Table 2.
Unsupervised Learning Algorithms
Algorithms | Description |
---|---|
Clustering | |
K-Means | Clustering on the basis of proximity to a central point |
Spectral clustering | Clustering on the basis of the eigenvalues and eigenvectors of the graph Laplacian of a similarity matrix |
Density-Based Spatial Clustering of Applications with Noise (DBSCAN) | Clustering on the basis of how closely packed data points are in higher dimensional spaces |
Anomaly Detection | |
One-class SVM | Learns a boundary (hyperplane) that separates the bulk of the data from the origin with maximal margin; points falling on the wrong side of the boundary are flagged as outliers |
Isolation Forest | Identification of outliers on the basis of the number of splits in a decision tree required to identify a datapoint |
Dimensionality Reduction | |
Principal Component Analysis | Orthogonal transformation of possibly correlated variables in higher dimensional space into linearly uncorrelated variables in lower dimensional space |
Factor Analysis | Transformation of possibly correlated variables in higher dimensional spaces on the basis of latent variables or factors that account for that correlation |
t-distributed Stochastic Neighbor Embedding | Visualization technique suitable for reduction of higher dimensional data into 2 or 3 dimensions on the basis of similarity to nearby points |
Autoencoders | Neural networks with an equivalent number of input and output nodes, and an informational bottleneck. The bottleneck can be used to derive meaningful representations of higher dimensional data. |
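As an illustration of the unsupervised setting, the sketch below (scikit-learn, synthetic data) clusters unlabeled points with k-means and reduces their dimensionality with principal component analysis; no outcome labels are used at any point.

```python
# Minimal unsupervised-learning sketch (illustrative only; synthetic data).
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA

# Unlabeled data: 500 samples in 10 dimensions with 3 latent groups.
X, _ = make_blobs(n_samples=500, n_features=10, centers=3, random_state=0)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)  # clustering
X_2d = PCA(n_components=2).fit_transform(X)                              # dimensionality reduction
print(labels[:10], X_2d.shape)   # cluster assignments and a 2-D representation
```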
Semi-Supervised
A combination of supervised and unsupervised methods. Some of the training data is labeled, and this is used for drawing conclusions about and classifying the rest of the data. An example is the Deep Belief Network, which has been applied to EEG interpretation and drug discovery.(5)
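A minimal sketch of the semi-supervised idea, assuming scikit-learn's self-training wrapper and synthetic data in which most labels are deliberately hidden; it illustrates the concept rather than any particular study.

```python
# Minimal semi-supervised sketch (illustrative only; synthetic data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
y_partial = y.copy()
rng = np.random.default_rng(0)
y_partial[rng.random(len(y)) < 0.9] = -1   # hide 90% of labels (-1 means "unlabeled")

model = SelfTrainingClassifier(LogisticRegression(max_iter=1000))
model.fit(X, y_partial)                    # learns from the few labels plus the unlabeled data
print((model.predict(X) == y).mean())      # agreement with the true (hidden) labels
```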
Reinforcement learning
An algorithm is given information about the environment it will operate in and a set of actions it is allowed to perform. Correct actions are rewarded, and incorrect actions are penalized. Successive iterations lead to the development of a self-learned policy applicable to automated future performance. This has been utilized very successfully in gaming and other applications.(6)
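The toy sketch below illustrates this reward-driven loop with tabular Q-learning on an invented five-state "corridor" environment; the states, actions, and reward values are made up purely for illustration.

```python
# Toy Q-learning sketch (illustrative only; invented 5-state corridor environment).
import numpy as np

n_states, n_actions = 5, 2           # actions: 0 = left, 1 = right; reward only at the last state
Q = np.zeros((n_states, n_actions))  # table of learned action values
rng = np.random.default_rng(0)
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(500):
    state = 0
    while state != n_states - 1:
        q_row = Q[state]
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))                              # explore
        else:
            action = int(rng.choice(np.flatnonzero(q_row == q_row.max())))     # exploit (random tie-break)
        next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
        reward = 1.0 if next_state == n_states - 1 else 0.0                    # "correct" actions rewarded
        # Temporal-difference update nudges Q toward reward + discounted future value.
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(Q.argmax(axis=1))  # learned policy: expect 1 ("move right") for the non-terminal states
```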
Neural Networks
Neural networks are modeled on biological nervous systems. A biological neuron collects signals using its dendrites, creates and processes a rough summation of these signals, and depending on the result, either does or does not depolarize – leading to signal propagation down its axon. An artificial neuron functions in much the same way, except that the importance (weights) of the input signals is adjusted over repeated iterations on the basis of a calculation (the loss function) of how different the current output is from the desired output.(7, 8)
Sequentially connected layers of such artificial neurons form Artificial Neural Networks (ANNs).(8) ANNs have at minimum three layers: an input layer that accepts incoming data, a middle (hidden) layer, and an output layer that returns the result of its inner calculations. Calculations are performed in the middle/hidden layers, so named because their state is not observable outside the ANN. Shallow Learning refers to ANNs with only one hidden layer; similarly, Deep Learning describes ANNs with more than one hidden layer.
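A minimal sketch of a shallow network with a single hidden layer, using scikit-learn's multi-layer perceptron on synthetic data; the weights are adjusted iteratively by minimizing a loss function, as described above.

```python
# Minimal shallow neural network sketch (illustrative only; synthetic data).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer of 16 neurons; weights are updated by backpropagating a loss function.
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
net.fit(X_train, y_train)
print(net.score(X_test, y_test))   # accuracy on held-out data

# A "deep" variant simply adds hidden layers, e.g. hidden_layer_sizes=(16, 16, 16).
```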
The architecture of inter-neuron connections within a neural network can be adjusted for specialized tasks. Recurrent Neural Networks remember state by feeding the output of a neuron back into itself and are suited to learning time series data, handwriting and speech recognition, and Natural Language Processing (NLP).(9, 10) Convolutional Neural Networks are modeled after the structure of the visual cortex and are suited to image/video classification.(11, 12) Figure 2 shows common NN architectures.(13) Medical applications of NNs are summarized in Table 3.(14)
Figure 2. Common neural network architectures.
Straight lines represent neuron inputs and outputs. Mathematically, these are n-dimensional arrays of numbers or Tensors. Multi-Layer Perceptron: Network with one or more hidden layers between input and output layers. Recurrent Neural Network: Hidden layer outputs feed into hidden layer inputs leading to retention of memory. Convolutional Neural Network: Layers are not fully connected. Each neuron in a hidden convolutional layer processes a part of total input data and passes it forward. Autoencoder: Recreates data presented to the input layer in the output layer. The hidden layer is a compressed or lower dimensional representation of this data.
Table 3.
Medical applications of neural networks
Architecture | Usage |
---|---|
Multi-Layer Perceptrons | Useful when there’s no natural ordering of inputs – as in gene expression measurements, outcome prediction |
Recurrent Neural Networks | Natural language processing – patient notes, time series data such as lab investigations, genome sequencing |
Convolutional Neural Networks | Histopathology and radiology image recognition, DNA sequencing |
Autoencoders | Analysis of gene expression data and EHR data |
Restricted Boltzmann Machine | Combination of -omics data (DNA methylation, mRNA expression, etc.) |
Deep Belief Networks | EEG interpretation, drug discovery |
Generative Adversarial Networks | Generation of microscopic images |
An approach unique to neural networks is Transfer Learning. Transfer learning refers to the process of using a pre-trained model as a starting point for related tasks on similar data.(15, 16) The expectation is improved performance and generalization at reduced cost.
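A schematic sketch of transfer learning, assuming PyTorch with a recent torchvision (for the pre-trained weights API): a network pre-trained on natural images is reused as a frozen feature extractor, and only a new output layer is trained for a hypothetical two-class task.

```python
# Transfer-learning sketch (illustrative only): reuse a network pre-trained on
# natural images as a starting point for a hypothetical 2-class classification task.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pre-trained backbone
for param in model.parameters():
    param.requires_grad = False                 # freeze the pre-trained feature extractor
model.fc = nn.Linear(model.fc.in_features, 2)   # new task-specific output layer (trainable)

# Only the new layer's parameters are passed to the optimizer for fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```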
Metrics of ML algorithm performance
For binary classification problems, performance can be measured from the 2×2 table referred to as the confusion matrix.(17) This matrix compares the negative and positive predictions of the algorithm to ground-truth negative and positive values. All of the following metrics take values between 0 and 1, with values closer to 1 indicating better performance. These metrics are summarized in Table 4, and a worked example follows the table. For multi-class classification problems, performance can be assessed with the Logarithmic Loss.(18)
Table 4.
Metrics for assessing ML algorithm performance. TN: True Negative, TP: True Positive, FP: False Positive, FN: False Negative
Metric | Description | Equation |
---|---|---|
Accuracy | Ratio of the total number of correct predictions to the total number of samples in the dataset. | (TP + TN) / (TP + TN + FP + FN) |
Sensitivity | Ratio of the number of correct positive predictions to the total number of positives in the dataset. Sensitivity is also referred to as Recall or the True Positive Rate. | TP / (TP + FN) |
Specificity | Ratio of the number of correct negative predictions to the total number of negatives in the dataset. Specificity is also referred to as the True Negative Rate. The False Positive Rate can be derived by subtracting the specificity from 1. | TN / (TN + FP) |
Positive Predictive Value | Ratio of the number of correct positive predictions to the total number of positive predictions. It is also referred to as Precision. | TP / (TP + FP) |
Negative Predictive Value | Ratio of the number of correct negative predictions to the total number of negative predictions. | TN / (TN + FN) |
F1 Score | Harmonic mean of Precision and Recall. This is a useful metric for problems with large negative/positive class imbalances, since metrics like Accuracy tend to be skewed. | 2 × (Precision × Recall) / (Precision + Recall) |
Area Under the Receiver Operating Characteristic curve (AUROC) | The ROC curve for an algorithm is created by plotting the True Positive Rate against the False Positive Rate at various probability thresholds. The area under this curve indicates how well a model can differentiate negatives from positives. | Area under the plot of TPR versus FPR |
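To make the definitions in Table 4 concrete, the sketch below computes each metric directly from a 2×2 confusion matrix and an AUROC from predicted probabilities; the counts and scores are arbitrary illustrative numbers.

```python
# Metric calculations from a 2x2 confusion matrix (illustrative counts only).
import numpy as np
from sklearn.metrics import roc_auc_score

tp, fp, fn, tn = 80, 20, 10, 90    # arbitrary example counts

accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)       # recall / true positive rate
specificity = tn / (tn + fp)       # true negative rate
ppv         = tp / (tp + fp)       # precision
npv         = tn / (tn + fn)
f1          = 2 * ppv * sensitivity / (ppv + sensitivity)
print(accuracy, sensitivity, specificity, ppv, npv, f1)

# AUROC needs predicted probabilities rather than hard labels:
y_true  = np.array([0, 0, 1, 1, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2])
print(roc_auc_score(y_true, y_score))
```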
Application of AI/ML in Kidney Disease
While AI/ML is prevalent in our daily lives, from finding optimal driving routes to facial recognition, uptake in nephrology research has been slow. A PubMed search for “machine learning” and “kidney” restricted to human studies returned only 207 results, half of which were published in the past two years, and most do not comprise ML in its classical sense (Figure 1). Although a comprehensive review of all articles is out of scope, we review pertinent recent literature on the broader themes of AI/ML applications in nephrology.
ML for prediction:
Acute kidney injury (AKI) is common and is associated with morbidity and mortality. A recently published paper by Google DeepMind used RNNs on the electronic health records of 703,782 adults from the US Department of Veterans Affairs to predict AKI at 48 hours.(19) Information was grouped into 6-hour periods, and data from each period were used to predict AKI 48 hours later. They found that 56% of inpatient AKI events were predicted up to 48 hours in advance, with higher sensitivity in those who required dialysis (84%); this corresponds to an area under the curve (AUC) of 0.92. RNNs performed better than other, simpler models. This can be compared with another paper that used a gradient boosting machine model to predict stage 2 AKI, which had a sensitivity of 0.87 at 48 hours.(20) While these results are promising, both studies have generalizability issues, as each was done at a single center/single healthcare system. Additionally, while they did separate their cohorts into validation and test sets, there was no external validation. Lastly, 94% of patients were male; it is unclear whether these results would replicate in a more balanced cohort, since there are sex differences in AKI.(21)
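As a schematic of this kind of model (not a reproduction of either cited study), the sketch below fits a gradient boosting classifier to synthetic, EHR-like features to predict a binary AKI label and reports the AUROC; all feature names and data are fabricated for illustration.

```python
# Gradient-boosting AKI prediction sketch (illustrative only; synthetic data,
# not the models described in the cited studies).
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = pd.DataFrame({                       # hypothetical time-window summary features
    "creatinine": rng.normal(1.0, 0.4, n),
    "urine_output": rng.normal(1.2, 0.5, n),
    "age": rng.integers(20, 90, n),
    "vasopressor_use": rng.integers(0, 2, n),
})
# Synthetic outcome loosely tied to creatinine, for demonstration purposes only.
y = (X["creatinine"] + rng.normal(0, 0.3, n) > 1.4).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))   # discrimination (AUROC)
```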
Type 2 diabetes mellitus (T2D) is the leading cause of kidney failure in the United States; however, only 20–40% of patients with T2D will develop CKD.(22) While there is no cure, several measures exist to slow progression, and thus early risk stratification is needed. To do this, Ravizza et al. used EHR data from 417,912 people with diabetes (type 1 and 2) to create a prediction model for CKD development.(23) They selected 7 features for inclusion and used a logistic regression model. They found an AUC of 0.79 on their validation set, with a sensitivity of 79% and a specificity of 71% at their optimal algorithm cutoff. However, these results need to be interpreted in light of several critiques, including the simplistic approach to missing values (replacement with mean values) and the use of a logistic regression model over more advanced modeling techniques.(24) The authors justify these methods by citing simplicity and the ability to interpret features medically, as opposed to the “black box” of deep learning. Also surprising was that the AUC of their independent validation set was higher than the AUC of the training and verification sets (IBM Explorys), which were from the same cohort. While the similar AUC between the training/verification cohorts is understandable, since they are part of the same dataset, AUC usually decreases on external validation because of inherent differences in data structure. Finally, how imputation of missing values affected these results is unclear. Despite the limitations of both studies, they show how machine learning can improve on the prediction of AKI and CKD compared with traditional models.
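The critique of mean imputation can be illustrated in a few lines: the sketch below contrasts simple mean imputation with a multivariate (iterative) imputer on a toy array with one missing value; the numbers are arbitrary.

```python
# Mean vs multivariate imputation of missing values (illustrative only).
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables IterativeImputer)
from sklearn.impute import IterativeImputer, SimpleImputer

X = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, np.nan], [4.0, 8.0]])  # second column = 2x the first

print(SimpleImputer(strategy="mean").fit_transform(X))    # fills with the column mean (~4.7)
print(IterativeImputer(random_state=0).fit_transform(X))  # models the missing value from the other column (~6)
```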
Prediction using AI/ML may also be applied to disease progression. Chen et al. used a gradient boosting tree (GBT) model to predict progression at 5 years in patients with biopsy-proven IgA nephropathy.(25) The GBT model had the highest AUC (0.89 and 0.84 for the derivation and validation cohorts, respectively) compared to standard Cox models. However, the study was conducted only in a Chinese population, and therefore generalizability is unclear. Finally, Makino et al. used EHR data from patients with type 2 diabetes to predict progression.(26) They included laboratory tests and ICD-10 codes, incorporated text extracted by natural language processing (NLP) into their models, and found an AUC of 0.74. Again, this was a single-center study, so generalizability is unclear.
Pattern recognition: Images and biopsies
The kidney biopsy is the gold standard for the diagnosis of kidney disease. However, agreement among nephropathologists is poor, and ML can potentially increase reproducibility and assist in clinical trials where large numbers of biopsy readings are necessary. The ability to quickly digitize entire tissue sections down to the microscopic level enables the use of AI/ML for automated biopsy analysis. Two papers in the Journal of the American Society of Nephrology provide excellent examples.
The first paper analyzed digitized kidney biopsies from patients with diabetic kidney disease (DKD).(27) The goals were to identify glomerular locations, identify and discretize glomerular components, quantify glomerular components, and classify glomerular features. The authors used a combination of traditional classifiers and deep learning (CNN and RNN); the RNN treated the sequence of glomerular features as temporal data. They also calculated feature importance by measuring the change in prediction after feature removal, which helps address some of the “black box” concerns of DL methods. Results for identifying glomeruli were promising, with an accuracy of 0.94, and the models achieved high sensitivity/specificity for detection of nuclei, capillaries, Bowman’s space, mesangium, and basement membranes. Finally, the ML algorithm achieved a kappa of 0.55 with the most experienced nephropathologist. Unfortunately, the sample size was small (only 54 human and 25 mouse samples), which forced the authors to split biopsies for analysis; this may introduce bias and limited analysis by DKD stage.
Hermsen et al. developed a CNN model to segment the different kidney compartments.(28) They used a combination of sources, including kidney transplants (n=101), living donor kidneys (n=10), and nephrectomies for renal cell carcinoma (n=15), with samples drawn from two institutions. They performed internal cross-validation to prevent overfitting prior to external testing. The reference standard was pathologist reading assisted by ASAP (annotation software). Performance was assessed with the Dice coefficient (DC), a measure of how similar two images are.(29) High DCs were found for glomeruli, followed by interstitium, capsule, and proximal tubules. However, the algorithm did not correctly classify sclerotic glomeruli and empty Bowman’s space, which may be due to the low sample size. Additionally, the authors calculated intraclass correlation coefficients (ICC) and Spearman correlation coefficients between the CNN and pathologists. A high ICC was found for the number of glomeruli, but the Spearman coefficient for grading of interstitial fibrosis and tubular atrophy (IFTA), an important predictor of renal transplant outcomes, was only 0.58. Limitations included the need to split sections of the samples, leading to bias, and certain classification assumptions that may not be generalizable.
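For reference, the Dice coefficient of two binary segmentation masks can be computed as below; the masks are toy arrays, not biopsy data.

```python
# Dice coefficient between two binary segmentation masks (toy example).
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """2 * |A intersect B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(pred, truth))   # 2*2 / (3+3) = 0.67
```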
Natural Language Processing
Natural language processing, a branch of AI, allows for the extraction of important concepts from narrative text. While not ML in the traditional sense, many NLP platforms use ML in their algorithms. NLP can be used in multiple ways, including identification of new predictors of kidney failure and identification of patient-centered outcomes. As a recent example, Chan et al. used NLP to identify seven symptoms in the clinical narratives of chronic hemodialysis patients.(30) Overall, NLP outperformed ICD codes for symptom identification. However, this needs to be tested further for generalizability.
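As a deliberately simplified illustration of the idea (not the pipeline used in the cited study), the sketch below flags symptom mentions in a made-up snippet of dialysis-note text using keyword matching with a crude negation check.

```python
# Toy symptom extraction from clinical text (illustrative only; not the cited pipeline).
import re

SYMPTOMS = ["fatigue", "nausea", "cramping", "pruritus", "anorexia", "pain", "dyspnea"]
NEGATIONS = re.compile(r"\b(no|denies|without)\b[^.]*$", re.IGNORECASE)

note = "Patient reports fatigue and cramping during dialysis. Denies nausea."

found = set()
for sentence in note.split("."):
    for symptom in SYMPTOMS:
        if symptom in sentence.lower():
            prefix = sentence.lower().split(symptom)[0]
            if not NEGATIONS.search(prefix):     # crude negation handling
                found.add(symptom)
print(found)   # {'fatigue', 'cramping'}
```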
Finding Subtypes in Complex Diseases
Clinicians have long recognized that complex diseases are not homogeneous, and ML can identify subtypes with differing outcomes and architecture. While several examples of this exist outside of the nephrology literature, including in cancer, diabetes, and sepsis, only one study was found in the nephrology literature.(31–34) In that study, researchers used machine learning to identify CKD progression subtypes from EHR data. Additional subtyping studies in nephrology may be useful for risk assessment, identification of individuals for clinical trials, and better understanding of the pathogenesis of kidney disease.
Drug Discovery
Drug discovery is slow, with an average time from development to testing of 12 years.(35) AI/ML may be applied to drug development in several ways, including identification of novel drug targets, drug screening and design, and biomarker discovery.(36) At the time of this review, no published data on ML for drug discovery in nephrology were identified; however, this is a promising area for the future.
Ethics, translation into clinical practice, and preventing bias
As more hospitals partner with private-sector machine learning groups, care is needed to ensure that patient privacy is maintained. Considerable controversy has emerged over companies profiting from patient data, at times without patients’ knowledge. We must ensure that research is carried out in an ethical manner that upholds patients’ rights to privacy, data security, and confidentiality.
EHRs were meant to simplify physicians’ lives; however, most physicians feel that EHRs bring more challenges than benefits. Researchers must ensure that incorporating ML into physicians’ daily work enhances their performance rather than hampering it.(37) By simplifying the vast amount of data and automating tasks, AI has the potential to improve doctor-patient relationships and optimize clinical workflow; however, this has yet to be realized.(38)
ML algorithms built on homogeneous data may not generalize to a different population. Additionally, because AI learns from data, it may perpetuate biases (demographic or otherwise) prevalent in healthcare. This needs to be carefully considered so that biases are not propagated and patient safety is not compromised.(39) Additionally, healthcare data are siloed, so an increased focus on validation, on calibrating models to individual institutions, and on federated machine learning (training shared models locally and sharing only model updates/weights) is necessary.(40)
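The parenthetical description of federated learning can be sketched as follows, assuming scikit-learn and synthetic per-site data: each site trains locally and shares only its model coefficients, which a central server averages in a simplified, FedAvg-style manner.

```python
# Simplified federated-averaging sketch (illustrative only; synthetic site data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Three "institutions", each with local data that never leaves the site.
sites = [make_classification(n_samples=500, n_features=10, random_state=s) for s in range(3)]

coefs, intercepts = [], []
for X, y in sites:
    local = LogisticRegression(max_iter=1000).fit(X, y)   # local training only
    coefs.append(local.coef_)                             # only model weights are shared
    intercepts.append(local.intercept_)

# A central server averages the shared weights into a global model update.
global_coef = np.mean(coefs, axis=0)
global_intercept = np.mean(intercepts, axis=0)
print(global_coef.round(2), global_intercept.round(2))
```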
Finally, the field of AI/ML needs robust prospective validation, in close collaboration with regulatory agencies, patient advocates and diverse stakeholders, along with ‘techno-vigilance’ similar to pharmacovigilance, to ensure that the full potential of AI to change medicine and healthcare is met, without causing harm to patients.(41)
Conclusions:
AI/ML is increasingly being used in kidney research, but, as demonstrated above, the results of these studies are limited by generalizability concerns and a lack of prospective validation. Therefore, there is a need for the development of consortia and the funding of prospective trials for eventual clinical implementation and transformation. We should all consider a recent quote: “Machines will not replace physicians, but physicians using AI will soon replace those not using it”.(42) Thus, it behooves the nephrology community to play a leading role in this transformation to improve the care of our vulnerable patients with kidney disease.
Key Points:
Machine learning methods can be leveraged to process and interpret the large amounts of data generated by the near universal adoption of electronic health records in healthcare systems.
In this paper we reviewed common machine learning algorithms and discussed how they have been used in nephrology research.
Machine learning has been used to predict kidney disease progression and to classify tissue on kidney biopsies.
Implementation of machine learning into clinical practice must ensure protection of patient privacy, enhancement of physician workflow, and representation of the diversity of the patients we care for.
Acknowledgements:
Financial support and sponsorship: GNN is supported by a career development award from the National Institutes of Health (NIH) (K23DK107908) and is also supported by R01DK108803, U01HG007278, U01HG009610, and 1U01DK116100–01 grants.
Footnotes
Conflicts of interest: GNN receives financial compensation as a consultant and is an advisory board member for RenalytixAI and is a co-founder of and owns equity in RenalytixAI. GNN has received operational funding from Goldfinch Bio and consulting fees from BioVie Inc, AstraZeneca, Reata and GLG consulting in the past three years. The remaining authors have no conflicts of interest.
References:
- 1. Butte BN, Benjamin SG, Atul J. A call for deep-learning healthcare. Nature Medicine. 2019;25(1):14–5.
- 2. Chawla NV, Davis DA. Bringing big data to personalized healthcare: a patient-centered framework. J Gen Intern Med. 2013;28 Suppl 3:S660–5.
- 3. Mitchell TM, Carbonell JG, Michalski RS. Machine Learning: A Guide to Current Research. Boston: Kluwer Academic Publishers; 1986.
- 4. Ayodele TO. Types of Machine Learning Algorithms. In: New Advances in Machine Learning. IntechOpen; 2010.
- 5. Voulodimos A, Doulamis N, Doulamis A, Protopapadakis E. Deep Learning for Computer Vision: A Brief Review. Comput Intell Neurosci. 2018;2018:7068349.
- 6. Mnih V, Kavukcuoglu K, Silver D, Rusu AA, Veness J, Bellemare MG, et al. Human-level control through deep reinforcement learning. Nature. 2015;518(7540):529–33.
- 7. Harmon LD. Studies with artificial neurons, I: properties and functions of an artificial neuron. Kybernetik. 1961;1(3):89–101.
- 8. Gershenson C. Artificial Neural Networks for Beginners. 2003. Available from: https://arxiv.org/abs/cs/0308031.
- 9. Boden M. A guide to recurrent neural networks and backpropagation. The Dallas Project; 2002.
- 10. Luo Y. Recurrent neural networks for classifying relations in clinical notes. J Biomed Inform. 2017;72:85–95.
- 11. Howard AG. Some Improvements on Deep Convolutional Neural Network Based Image Classification. 2013.
- 12. Li Q, Weidong C, Xiaogang W, Zhou Y, Dagan FD, Mei C. Medical image classification with convolutional neural network. 2014 13th International Conference on Control Automation Robotics & Vision (ICARCV), Singapore; 2014. Available from: https://ieeexplore.ieee.org/document/7064414.
- 13. The Neural Network Zoo. The Asimov Institute; 2016.
- 14. Ching T, Himmelstein DS, Beaulieu-Jones BK, Kalinin AA, Do BT, Way GP, et al. Opportunities and obstacles for deep learning in biology and medicine. 2018.
- 15. Pan SJ, Yang Q. A Survey on Transfer Learning. IEEE Transactions on Knowledge and Data Engineering. 2010;22(10):1345–59. Available from: https://ieeexplore.ieee.org/document/5288526.
- 16. Torrey L, Shavlik J. Transfer Learning. IGI Global. doi:10.4018/978-1-60566-766-9.ch011.
- 17. Sokolova M, Japkowicz N, Szpakowicz S. Beyond Accuracy, F-Score and ROC: A Family of Discriminant Measures for Performance Evaluation. In: AI 2006: Advances in Artificial Intelligence. Springer; 2006. p. 1015–21.
- 18. Rosasco L, De Vito E, Caponnetto A, Piana M, Verri A. Are loss functions all the same? Neural Comput. 2004;16(5):1063–76.
- **19. Tomašev N, Glorot X, Rae JW, Zielinski M, Askham H, Saraiva A, et al. A clinically applicable approach to continuous prediction of future acute kidney injury. Nature. 2019;572(7767):116–9. A paper published by Google DeepMind that used electronic health record data to create a dynamic model for acute kidney injury prediction.
- 20. Koyner JL, Carey KA, Edelson DP, Churpek MM. The development of a machine learning inpatient acute kidney injury prediction model. Critical Care Medicine. 2018;46:1070–7.
- 21. Boddu R, Fan C, Rangarajan S, Sunil B, Bolisetty S, Curtis LM. Unique sex- and age-dependent effects in protective pathways in acute kidney injury. Am J Physiol Renal Physiol. 2017;313(3):F740–F755.
- 22. Gheith O, Farouk N, Nampoory N, Halim MA, Al-Otaibi T. Diabetic kidney disease: world wide difference of prevalence and risk factors. J Nephropharmacol. 2016;5(1):49–56.
- 23. Ravizza S, Huschto T, Adamov A, Böhm L, Büsser A, Flöther FF, et al. Predicting the early risk of chronic kidney disease in patients with diabetes using real-world data. Nature Medicine. 2019;25(1):57–9.
- 24. Schmitt P, Mandel J, Guedj M. A Comparison of Six Methods for Missing Data Imputation. Journal of Biometrics & Biostatistics. 2015;6(1).
- 25. Chen T, Li X, Li Y, Xia E, Qin Y, Shaoshan L, et al. Prediction and Risk Stratification of Kidney Outcomes in IgA Nephropathy. Am J Kidney Dis. 2019.
- 26. Makino M, Yoshimoto R, Ono M, Itoko T, Katsuki T, Koseki A, et al. Artificial intelligence predicts the progression of diabetic kidney disease using big data machine learning. Scientific Reports. 2019;9(1):1–9.
- *27. Ginley B, Lutnick B, Jen K-Y, Fogo AB, Jain S, Rosenberg A, et al. Computational Segmentation and Classification of Diabetic Glomerulosclerosis. J Am Soc Nephrol. 2019. Used several machine learning algorithms, including deep learning, to classify glomerular features in kidney biopsy samples from patients with diabetic kidney disease with high accuracy.
- *28. Hermsen M, Bel Td, Boer Md, Steenbergen EJ, Kers J, Florquin S, et al. Deep Learning–Based Histopathologic Assessment of Kidney Tissue. J Am Soc Nephrol. 2019. Using a CNN, the authors performed multiclass segmentation of kidney tissue from multiple centers.
- 29. Zou KH, Warfield SK, Bharatha A, Tempany CM, Kaus MR, Haker SJ, et al. Statistical validation of image segmentation quality based on a spatial overlap index. Acad Radiol. 2004;11(2):178–89.
- 30. Chan L, Beers K, Yau A, Chauhan K, Duffy A, Chaudhary K, et al. Natural language processing of electronic health records is superior to billing codes to identify symptom burden in hemodialysis patients. Kidney International. 2019.
- 31. Ahlqvist E, Storm P, Käräjämäki A, Martinell M, Dorkhan M, Carlsson A, et al. Novel subgroups of adult-onset diabetes and their association with outcomes: a data-driven cluster analysis of six variables. The Lancet Diabetes and Endocrinology. 2018;6:361–9.
- 32. Seymour CW, Kennedy JN, et al. Derivation, Validation, and Potential Treatment Implications of Novel Clinical Phenotypes for Sepsis. JAMA. 2019;321(20):2003–17.
- 33. Chen R, Yang L, Goodison S, et al. Deep-learning approach to identifying cancer subtypes using high-dimensional genomic data. Bioinformatics. 2019.
- 34. Huopaniemi I, Nadkarni G, Nadukuru R, Lotay V, Ellis S, Gottesman O, et al. Disease progression subtype discovery from longitudinal EMR data with a majority of missing values and unknown initial time points. AMIA Annu Symp Proc. 2014;2014:709–18.
- 35. Van Norman GA. Drugs, Devices, and the FDA: Part 1: An Overview of Approval Processes for Drugs. JACC: Basic to Translational Science. 2016;1(3):170–9.
- 36. Zhavoronkov A, Ivanenkov YA, Aliper A, Veselov MS, Aladinskiy VA, Aladinskaya AV, et al. Deep learning enables rapid identification of potent DDR1 kinase inhibitors. Nat Biotechnol. 2019;37(9):1038–40.
- 37. Coiera E, Kocaballi B, Halamka J, Laranjo L. The digital scribe. NPJ Digit Med. 2018;1:58.
- 38. Topol E. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books; 2019.
- 39. Challen R, Denny J, Pitt M, Gompels L, Edwards T, Tsaneva-Atanasova K. Artificial intelligence, bias and clinical safety. BMJ Qual Saf. 2019;28(3):231–7.
- 40. Brisimi TS, Chen R, Mela T, Olshevsky A, Paschalidis IC, Shi W. Federated learning of predictive models from federated Electronic Health Records. Int J Med Inform. 2018;112:59–67.
- 41. Cabitza F, Zeitoun JD. The proof of the pudding: in praise of a culture of real-world validation for medical artificial intelligence. Ann Transl Med. 2019;7:161.
- 42. Di Ieva A. AI-augmented multidisciplinary teams: hype or hope? Lancet. 2019;394(10211):1801.