Author manuscript; available in PMC: 2017 Sep 13.
Published in final edited form as: Proceedings (IEEE Int Conf Bioinformatics Biomed). 2015 Dec 17;2015:1316–1321. doi: 10.1109/BIBM.2015.7359869

Deep Learning of Tissue Fate Features in Acute Ischemic Stroke

Noah Stier 1, Nicholas Vincent 1, David Liebeskind 1, Fabien Scalzo 1
PMCID: PMC5597003  NIHMSID: NIHMS844893  PMID: 28919983

Abstract

In acute ischemic stroke treatment, prediction of tissue survival outcome plays a fundamental role in the clinical decision-making process, as it can be used to assess the balance of risk vs. possible benefit when considering endovascular clot-retrieval intervention. For the first time, we construct a deep learning model of tissue fate based on randomly sampled local patches from the hypoperfusion (Tmax) feature observed in MRI immediately after symptom onset. We evaluate the model with respect to the ground truth established by an expert neurologist four days after intervention. Experiments on 19 acute stroke patients evaluated the accuracy of the model in predicting tissue fate. Results show the superiority of the proposed regional learning framework over a single-voxel-based regression model.

I. Introduction

Recent clinical trials in acute ischemic stroke [2] have illustrated the possible benefits of endovascular clot-retrieval interventions, which have been shown to be effective overall in a randomly sampled population. However, stroke is heterogeneous, and it is important to weigh the risk against the possible benefit of invasive treatment under the specific conditions of each patient. Methods for analyzing this balance in different patient sub-groups are under ongoing development. Given the state of the field, accurate data-driven models for assessing tissue damage could play a major role in the development of decision support systems for stroke clinicians. Deep learning algorithms show potential in this area, as they have been able to capture complex imaging features in the context of big data while remaining robust to significant levels of noise.

Within the last 30 years, brain imaging has revolutionized our understanding of stroke pathophysiology and paved the way for effective treatments. During an acute ischemic stroke, an area of the brain is deprived of blood circulation due to a clot in the supplying artery. Current treatments aim at maximizing the recovery of injured brain tissue by restoring blood flow in the affected artery. There is a recognized need for accurate clinical decision tools to identify, as early as possible, stroke patients with still-salvageable brain tissue who could benefit from such a treatment.

Infarct growth in ischemic stroke is a complex process that varies greatly across patients and remains poorly understood. Several factors (such as degree of reperfusion, quality of collateral flow, and age) have been shown to influence lesion growth [15], but their exact relationship over time remains to be clarified. In current clinical practice, the spatial mismatch [12], [6], [26], [5] between diffusion (DWI) and perfusion-weighted (PWI) magnetic resonance imaging (MRI) is often used to differentiate irreversible infarction from salvageable tissue. However, the DWI-PWI mismatch as a diagnostic parameter is not perfect, as tissue damage detectable using DWI may still be reversible. In addition, hypoperfusion is often analyzed using a simple threshold on a time-to-maximum-perfusion (Tmax) image, which is commonly derived from PWI [7]. A more sophisticated approach to Tmax analysis warrants investigation.

Predictive models of tissue fate based on computer vision and pattern recognition techniques may provide a more reliable and objective insight. Early models have been trained on a voxel-by-voxel basis using multimodal perfusion imaging. Among the techniques that have been studied are generalized linear model (GLM) [30], [31], Gaussian models [22] trained on multiple parameters (DWI, CBF, CBV, mean transit time (MTT)), logistic regression [32], and ISODATA clustering [28].

More recently, regional models of tissue fate that incorporate data from the area surrounding the target location have been shown to improve predictive accuracy relative to single-voxel-based methods. These approaches exploit spatial correlation between voxels [18], a prior map of the spatial frequency of infarct [27], artificial neural networks [10], and spatio-temporal maps [21]. Regional models based on spectral regression also showed promising results in our previous studies [25], [24], [23] using different PWI parameters acquired from acute stroke patients: Tmax, MTT, and time-to-peak (TTP).

In the last two decades, the explosion of digital images and the availability of data storage and computational power have enabled researchers in many areas to approach big data from new perspectives. Deep learning has emerged as a highly versatile set of techniques and has been applied to various problems, such as handwritten character recognition [9], face detection and recognition [29], speech recognition [8], and image classification [13]. Machine learning algorithms, and deep learning in particular, will likely revolutionize medical care, driving significant progress toward precise and individualized medicine. In the near future, machine learning algorithms may provide decision support comparable to that of experts in medical image analysis and radiology, supplying valuable input to physicians via computer-aided diagnosis, image segmentation, image annotation and retrieval, image registration, and multimodal image analysis. So far, however, deep learning is primarily used as a research tool, and it has not yet been successfully translated to clinical practice.

In this paper, we follow these findings and trends to introduce a regional predictive model of tissue fate based on deep learning. The model uses tissue information available at symptom onset, specifically the Tmax parameter of PWI, to predict the tissue outcome at the time of the follow-up MRI (usually four days after intervention). Fluid-attenuated inversion recovery (FLAIR) images from patients' follow-up MRIs are used to determine which tissue survives, providing the ground truth for training the model, as FLAIR is regarded as the current gold standard in neurology to discern irreversible lesions [4].

II. Methods

A. Data Acquisition

The imaging data were collected from acute ischemic stroke patients admitted to the University of California, Los Angeles Medical Center. The use of these data was approved by the local Institutional Review Board (IRB) and introduced in our previous study [23]. Inclusion criteria were: (1) diagnosis of acute stroke, (2) last known well time within six hours, and (3) perfusion MRI of the brain performed before recanalization therapy and approximately four days later. A total of 25 patients (mean age, 56 ± 21 years; age range, 27 to 89; 15 women; average NIHSS, 14 ± 6.3) satisfied these criteria and underwent MRI on a 1.5 Tesla echo-planar MR imaging scanner (Siemens Medical Systems). PWI scanning was performed with a timed contrast-bolus passage technique (0.1 mg/kg contrast administered intravenously at a rate of 5 cm3/s) and with the following parameters: repetition time (TR), 2000 ms (21 cases), 1920 ms (2), 1770 ms (1), 2890 ms (1); average echo time (TE), 44 ± 10.4 ms. The FLAIR sequence was acquired with the following parameters: TR, 10000 ms (19 cases), 8800 ms (4), 8160 ms (1), 7000 ms (1); TE, 105 ms (20), 89 ms (3), 123 ms (1), 82 ms (1); inversion time (TI), 2400 ms (20), 2500 ms (4), 2472 ms (1). All images were resized using bilinear interpolation to a resolution of 1 × 1 × 5 mm per voxel. Average lesion size was 10.35 ± 13.1 cm3 at onset and 43.6 ± 18.2 cm3 at follow-up (as measured in FLAIR images at day 4). Revascularization success was determined for all patients using the Thrombolysis in Cerebral Infarction (TICI) scale: TICI scores 3, 2b, and 2a in 5, 7, and 13 patients, respectively.

The FLAIR images taken approximately four days after symptom onset will be referred to as the follow-up images, and the PWI images taken during the presentation of acute symptoms will be referred to as the acute images.

B. Data Preprocessing

1) Slice Extraction

The slice thickness of the perfusion MRI scans included in this study was between 5 and 7 mm and therefore would not provide enough granularity to model the relationship between slices. We therefore visually identify the transverse plane exhibiting the greatest lesion size in the acute data for each patient. We then extract that plane as a two-dimensional slice from the acute and from the follow-up images, to be used in the study.

2) Brain Volume Segmentation

This step was performed because non-brain tissue in the images can interfere with the subsequent image registration step. The FSL Brain Extraction Tool (BET) was used to remove the skull and non-brain tissue. BET estimates an intensity threshold to discriminate between brain and non-brain voxels, determines the center of gravity of the head, defines a sphere centered at that point, and finally deforms the sphere's surface toward the brain surface.

3) Image Registration

It was necessary to register the acute and follow-up images in order to match the tissue fate labels to their proper anatomical locations. Co-registration was performed for each patient independently. Because the follow-up FLAIR images may exhibit large anatomical deformations due to the changes in tissue perfusion, pressure, and lesion growth caused by the stroke, several attempts to use automatic image registration methods failed to accurately align the volumes. Instead, our framework used five landmark points placed manually at specific anatomical locations (the center, plus the four main cardinal directions) on the slice of the brain with the largest ventricular area. An affine transformation estimated from these landmarks was then used to project the follow-up FLAIR and acute Tmax onto the original FLAIR image.
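The landmark-based affine step can be sketched as a least-squares fit between corresponding points. The following is an illustrative Python/NumPy re-implementation, not the authors' code; the function names are ours.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src landmarks onto dst.

    src, dst: (N, 2) arrays of corresponding points (N >= 3; here N = 5).
    Returns a 2x3 matrix A such that dst ~= [x, y, 1] @ A.T.
    """
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # homogeneous coordinates
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)  # solve X @ A = dst
    return A.T                                   # 2x3 affine matrix

def apply_affine(A, pts):
    """Apply a 2x3 affine matrix to an (N, 2) array of points."""
    X = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return X @ A.T
```

With five non-collinear landmarks the system is over-determined, so the least-squares solution also absorbs small placement errors in the manual landmarks.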

4) FLAIR Image Normalization and Ground Truth

Because the follow-up images were acquired with different settings and originated from different patients, their intensity values were not directly comparable. To allow for inter-patient comparisons, follow-up images were normalized with respect to the average intensities within the contralateral white matter. The normal-appearing white matter was delineated manually by an experienced researcher for both onset and follow-up brain volumes.
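The normalization itself reduces to dividing each image by the mean intensity inside the manually drawn contralateral white-matter mask. A minimal sketch (the function name is ours, not from the paper's software):

```python
import numpy as np

def normalize_to_contralateral_wm(image, wm_mask):
    """Scale an image by the mean intensity inside a boolean mask of
    contralateral normal-appearing white matter, so that intensity
    values become comparable across patients and scanner settings."""
    reference = image[wm_mask].mean()
    return image / reference
```

After this step, normal white matter has an expected normalized intensity of 1.0 in every patient, so a single infarct-delineation convention can be applied across the cohort.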

We obtained the ground truth tissue outcomes from an expert neurologist at UCLA, who was asked to precisely delineate the infarcts manually by comparing the affected hemisphere with the contralateral hemisphere. Outlining was performed with the help of the commercially available medical imaging software 3D Doctor (http://www.ablesw.com/). At each pixel, the ground truth was set to 1 for infarction and 0 for no infarction.

5) Tmax Features

The Tmax parameter was computed on the acute images using software developed at UCLA, the Stroke Cerebral Analysis 2 (SCAN 2) package. SCAN 2 expresses the tissue contrast agent concentration C(t) as a convolution of the arterial input function (AIF), identified from the contralateral middle cerebral artery (MCA), with the residue function R(t) [4],

C(t) = CBF × (AIF(t) ⊗ R(t)),

where CBF is the cerebral blood flow and ⊗ denotes convolution. The residue function is obtained by deconvolution, and the time at which it reaches its maximum value defines Tmax. Tmax therefore measures the arrival delay between the AIF and C(t).
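The deconvolution step can be sketched with truncated-SVD deconvolution, a common choice in perfusion analysis. This is an illustrative stand-in for SCAN 2, whose internals are not described here; the function name and the default truncation threshold are our assumptions.

```python
import numpy as np

def tmax_from_curves(conc, aif, tr, sv_thresh=0.2):
    """Estimate Tmax (seconds) by deconvolving the tissue concentration
    curve with the arterial input function (AIF).

    conc, aif : 1D time curves sampled every `tr` seconds.
    The scaled residue function CBF * R(t) is recovered by truncated-SVD
    deconvolution; singular values below sv_thresh * s_max are discarded
    (the threshold here is illustrative, not the value used by SCAN 2).
    """
    n = len(aif)
    # lower-triangular convolution matrix A, so that conc = A @ (CBF * R)
    A = tr * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > sv_thresh * s[0], 1.0 / s, 0.0)
    residue = Vt.T @ (s_inv * (U.T @ conc))  # scaled residue, CBF * R(t)
    return float(np.argmax(residue)) * tr    # time of the residue peak
```

On noise-free synthetic curves the peak of the recovered residue lands at the imposed arrival delay; on real PWI data the truncation threshold trades noise suppression against temporal accuracy.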

6) Orientation Normalized Patch Sampling

In order to be able to predict tissue fate on a per-pixel basis, we randomly sampled a set of pixels from each image, and extracted a square patch of size 23 × 23 centered at each sampled pixel (as illustrated in Figure 1). Because the pose of the head varies from image to image, patches from different images are oriented at various angles relative to the head. Normalized orientation is desirable when comparing local image features across images [16], so we normalized the patches with respect to their orientation.

Fig. 1. Tissue fate is predicted by our Convolutional Neural Network (CNN) using local patches sampled across the Tmax feature obtained from perfusion MRI.

The patches were normalized with respect to the direction θ of the image gradient using a rotation performed with bilinear interpolation (similarly to other works in computer vision [3]),

c_M = imrotate(c_M, θ_M),   θ_M = tan⁻¹( L^σ_{y,M}(i,j) / L^σ_{x,M}(i,j) )   (1)

where L^σ_{x,M} and L^σ_{y,M} are the Gaussian derivatives in the x and y directions at the point (i, j) in the perfusion map M,

L^σ_{x,M} = (∂/∂x) G_σ ∗ M,   L^σ_{y,M} = (∂/∂y) G_σ ∗ M   (2)

where G_σ is a 2D isotropic Gaussian filter with standard deviation σ (σ = 3 in our experiments).

Each patch was then paired with the label corresponding to its central pixel in the ground truth image. The resulting dataset consists of a set of orientation-normalized patches and their corresponding tissue fate labels.
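The orientation normalization of Eqs. (1)-(2) can be sketched with Gaussian derivative filters and a bilinear rotation. This illustrative Python version (using SciPy's `gaussian_filter` and `rotate` in place of Matlab's `imrotate`) is ours, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, rotate

def orientation_normalized_patch(img, center, half=11, sigma=3.0):
    """Extract a (2*half+1)^2 patch (23 x 23 by default) rotated so the
    local gradient at `center` points along +x, per Eqs. (1)-(2)."""
    # Gaussian derivatives of the map (Eq. 2); axis 0 is y (rows), axis 1 is x
    gy = gaussian_filter(img, sigma, order=(1, 0))
    gx = gaussian_filter(img, sigma, order=(0, 1))
    i, j = center
    theta = np.degrees(np.arctan2(gy[i, j], gx[i, j]))  # Eq. 1
    # rotate a padded neighborhood with bilinear interpolation (order=1),
    # then crop the central patch to avoid corner clipping
    pad = 2 * half
    neigh = img[i - pad:i + pad + 1, j - pad:j + pad + 1]
    rot = rotate(neigh, theta, reshape=False, order=1)
    c = pad
    return rot[c - half:c + half + 1, c - half:c + half + 1]
```

Sampling locations here are assumed to be far enough from the image border for the padded neighborhood to exist; a production version would need explicit boundary handling.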

C. Predictive Model

1) Deep Learning

Deep learning is a field of machine learning that uses multiple layers of nonlinear processing units to model complex, nonlinear features in data [1]. It can produce very accurate results in tasks such as speech and image recognition [17], and it has been used successfully to approach problems in neuroimaging [20]. In a deep learning architecture, each layer models progressively higher-level features of the data. These features are learned during the process of training the model, and need not be specified ahead of time. Thus, deep learning allows for the automatic generation of a very high-level, abstract representation of data, even when it is not clear from the outset which features will be relevant.

2) Convolutional Neural Networks

In this study, a deep learning method called a Convolutional Neural Network (CNN) [14] was used to generate a predictive model of tissue fate. The CNN is a deep learning architecture inspired by the human visual system and specifically adapted to image recognition. It is designed to minimize training time and model complexity, and to provide translational invariance to image features. It achieves these design goals by feeding input images, or maps, forward through alternating convolutional layers and pooling layers. Convolutional layers convolve the input maps with a set of filters, or kernels, producing an output map for each kernel, for each input map. The parameters of these kernels are initialized to very small non-zero values; during training, they are adjusted via back-propagation over labeled examples to maximize the accuracy of the network. This is where the learning takes place: the network learns features by which it can distinguish images from the different output classes. Pooling layers downsample the maps they are fed. This is not a learning step; it speeds up the training process without adding any parameters to the model, and it is this downsampling that provides translational invariance. The final layer, often a fully-connected neural network, assigns a likelihood to each output class. Each cycle of one feed-forward pass and one back-propagation pass is referred to as a learning epoch.

3) Our Model

To construct our CNNs, we used DeepLearnToolbox, an open-source set of Matlab libraries for deep learning algorithms [19]. In the deep learning community, there is no canonical method for determining the proper CNN configuration, that is, the optimal values of hyper-parameters such as the number of layers and the size of the kernels for a given problem. We selected hyper-parameters based on manual variation and observation and, to maximize reproducibility and practical value, chose values that minimize computational complexity where possible. The model developed in this study consists of two convolutional layers and two pooling layers, as illustrated in Figure 2. The kernels are 6 × 6. The output of the second pooling layer is unwound into a feature vector, and a linear classifier on that feature vector is used as the final layer. 100 learning epochs are performed. The model's input maps are the sampled Tmax patches; for each map, the model outputs the probability that the map falls into each output class. A binary prediction is then obtained by selecting the output class with the higher probability.
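The shape of the data as it flows through the two convolutional and two pooling layers can be traced with a minimal NumPy sketch. This is not DeepLearnToolbox code: it uses a single kernel per layer and mean pooling, and it omits the nonlinearity and the learned parameters, purely to illustrate how a 23 × 23 patch reduces to the final feature vector.

```python
import numpy as np

def conv_valid(x, k):
    """'Valid' 2D cross-correlation of one map with one kernel."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def mean_pool(x, s=2):
    """Non-overlapping s x s mean pooling (trailing rows/cols dropped)."""
    h, w = x.shape[0] // s, x.shape[1] // s
    return x[:h * s, :w * s].reshape(h, s, w, s).mean(axis=(1, 3))

# shape trace for one 23 x 23 Tmax patch through the four layers:
# 23 -> conv 6x6 -> 18 -> pool 2x2 -> 9 -> conv 6x6 -> 4 -> pool 2x2 -> 2
rng = np.random.default_rng(0)
patch = rng.standard_normal((23, 23))
k1 = rng.standard_normal((6, 6))
k2 = rng.standard_normal((6, 6))
features = mean_pool(conv_valid(mean_pool(conv_valid(patch, k1)), k2))
```

The flattened output of the last pooling layer (2 × 2 per kernel) is the feature vector fed to the final linear classifier.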

Fig. 2. Illustration of the Convolutional Neural Network (CNN) architecture used as part of our predictive model.

D. Experiments

To quantify the performance of the deep learning model, it was used to predict the tissue outcomes of the 19 patients, and its results were evaluated against the manually determined ground truths. As a point of comparison, a linear regression model was trained and evaluated in parallel.

For each patient, a network was trained on the combined labeled data of all other patients, in a leave-one-patient-out cross validation. It was then used to predict the tissue outcome for each of the patient's sampled Tmax patches, generating a prediction at each location in the patient's Tmax feature map. Those predictions were then evaluated against the ground truths.

A linear regression model was also trained for each patient, using the same leave-one-patient-out cross-validation scheme. For this model, the ground truth labels were regressed against the mean values of their respective Tmax patches.
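The leave-one-patient-out protocol with the mean-Tmax linear baseline can be sketched as follows. This is an illustrative Python version (function name ours); binarizing the regression output at 0.5 is our assumption for how continuous predictions are turned into infarct/no-infarct labels.

```python
import numpy as np

def lopo_baseline(features, labels, patient_ids):
    """Leave-one-patient-out evaluation of the single-value regression baseline.

    features: (N,) mean Tmax value of each patch; labels: (N,) 0/1 tissue fate;
    patient_ids: (N,) patient index of each patch. Returns per-patient accuracy.
    """
    accs = {}
    for p in np.unique(patient_ids):
        test = patient_ids == p                       # held-out patient
        X = np.column_stack([features[~test], np.ones((~test).sum())])
        w, *_ = np.linalg.lstsq(X, labels[~test].astype(float), rcond=None)
        pred = features[test] * w[0] + w[1] > 0.5     # threshold regression output
        accs[p] = float((pred == labels[test].astype(bool)).mean())
    return accs
```

The CNN evaluation follows the same loop, with the regression fit replaced by training a network on all patches from the remaining patients.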

III. Results

After data pre-processing, there was evidence of lesions from previous strokes and/or unsuccessful perfusion processing in 6 of the 25 patients. These lesions had the potential to confound the results of the model, so the data for these patients was discarded.

The predictions of each model are visualized in Figure 3, in which the intensity of each pixel corresponds to the predicted probability of infarction at that location. Each image is labeled with its predictive accuracy, defined as the number of correct binary predictions divided by the total number of Tmax patches. The mean accuracy over the 19 patients was 85.3 ± 9.1% for the deep learning model and 78.3 ± 5.5% for the linear regression model.

Fig. 3. FLAIR (col. 1) and Tmax perfusion at onset (col. 2) are illustrated with the corresponding predictions of the CNN model (col. 4) for six patients. During our evaluation, predictions were compared to the ground truth (col. 5) manually established in follow-up FLAIR at day 4.

IV. Discussion

A. Model Performance

In terms of raw accuracy, this deep learning model of tissue fate outperforms a linear, single-voxel-based model by a significant margin when evaluated against the ground truth set by an expert neurologist.

We suspect that the heightened performance of this method is primarily a result of two factors. (1) At each point, it makes use of data from the surrounding region. This is important when analyzing noisy data, where predictions benefit greatly from being able to take the context of each voxel into account. (2) The nature of the neural network allows it to model a non-linear relationship between Tmax and tissue fate, and it is apparent from visually comparing the acute and follow-up images that their actual relationship is not entirely linear.

The CNN holds an advantage over other non-linear models in that the precise mathematical relationship between acute Tmax and tissue fate does not need to be specified ahead of time. One typical disadvantage of the neural network approach is that training times can be long. For this application in a clinical setting, however, training time is not a major concern, as the network only needs to be trained once before it is deployed; once trained, CNNs are computationally efficient to apply.

B. Future Directions

The model proposed here makes predictions based on two-dimensional input maps, as CNNs were specifically adapted to do. However, MRI provides data in three dimensions, and this additional information could potentially be used to increase predictive power, using a modified 3D CNN [11]. Furthermore, extensions of CNNs that take multi-channel data as inputs, RGB images for example, have been very successful [13]; in the case of MRI data, additional variables, such as FLAIR at symptom onset, could be incorporated into our model as channels alongside Tmax.

As CNNs have shown their potential not just for research but for clinical practice, there is an increasingly strong need for systematic investigation and standardization of hyper-parameter settings, which will accelerate development of applied deep learning methods. The evolution of the method developed here will continue to provide insight into the behavior of CNNs with respect to hyper-parameters in the medical imaging domain.

V. Conclusion

Our results indicate that tissue fate in ischemic stroke can be accurately forecasted by a relatively simple CNN at the time of symptom onset. We anticipate that the proposed improvements to our method will yield a clinically viable predictive tool to aid in stroke diagnosis and treatment decisions, ensuring that treatments such as clot-retrieval interventions are administered with confident knowledge of the associated risk and potential benefit.

References

  • 1.Bengio Y. Learning deep architectures for AI. Foundations and trends in Machine Learning. 2009;2(1):1–127. [Google Scholar]
  • 2.Berkhemer OA, et al. A randomized trial of intraarterial treatment for acute ischemic stroke. New England Journal of Medicine. 2015;372(1):11–20. doi: 10.1056/NEJMoa1411587. [DOI] [PubMed] [Google Scholar]
  • 3.Brown M, Szeliski R, Winder S. Multi-image matching using multi-scale oriented patches. CVPR. 2005:510–517. [Google Scholar]
  • 4.Calamante F, Christensen S, Desmond PM, Ostergaard L, Davis SM, Connelly A. The physiological significance of the time-to-maximum (Tmax) parameter in perfusion MRI. Stroke. 2010 Jun;41:1169–1174. doi: 10.1161/STROKEAHA.110.580670. [DOI] [PubMed] [Google Scholar]
  • 5.Chen F, Liu Q, Wang H, Suzuki Y, Nagai N, Yu J, Marchal G, Ni Y. Comparing two methods for assessment of perfusion-diffusion mismatch in a rodent model of ischaemic stroke: a pilot study. Br J Radiol. 2008 Mar;81:192–198. doi: 10.1259/bjr/70940134. [DOI] [PubMed] [Google Scholar]
  • 6.Fisher M, Ginsberg M. Current concepts of the ischemic penumbra: introduction. Stroke. 2004;35:2657–2658. [Google Scholar]
  • 7.Heiss W, Sobesky J. Can the penumbra be detected: MR versus PET imaging. J Cereb Blood Flow Metab. 2005;25:S702. [Google Scholar]
  • 8.Hinton G, Deng L, Yu D, Dahl GE, Mohamed A.-r., Jaitly N, Senior A, Vanhoucke V, Nguyen P, Sainath TN, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE. 2012;29(6):82–97. [Google Scholar]
  • 9.Hinton GE, Osindero S, Teh Y-W. A fast learning algorithm for deep belief nets. Neural computation. 2006;18(7):1527–1554. doi: 10.1162/neco.2006.18.7.1527. [DOI] [PubMed] [Google Scholar]
  • 10.Huang S, Shen Q, Duong TQ. Artificial neural network prediction of ischemic tissue fate in acute stroke imaging. J Cerebr Blood F Met. 2010 Apr; doi: 10.1038/jcbfm.2010.56. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Ji S, Xu W, Yang M, Yu K. 3d convolutional neural networks for human action recognition. Pattern Analysis and Machine Intelligence, IEEE Transactions on. 2013;35(1):221–231. doi: 10.1109/TPAMI.2012.59. [DOI] [PubMed] [Google Scholar]
  • 12.Kidwell CS, Alger JR, Saver JL. Evolving Paradigms in Neuroimaging of the Ischemic Penumbra. Stroke. 2004;35:2662–2665. doi: 10.1161/01.STR.0000143222.13069.70. [DOI] [PubMed] [Google Scholar]
  • 13.Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems. 2012:1097–1105. [Google Scholar]
  • 14.LeCun Y, Boser BE, Denker JS, Henderson D, Howard R, Hubbard WE, Jackel LD. Handwritten digit recognition with a back-propagation network. Advances in Neural Information Processing Systems. 1990:396–404. [Google Scholar]
  • 15.Liebeskind DS, Liu D, Sanossian N, Sheth SA, Liang C, Johnson MS, Ali LK, Kim D, Hinman JD, Rao NM, Starkman S, Jahan R, Gonzalez NR, Tateshima S, Duckwiler GR, Saver JL, Yoo B, Alger JR, Scalzo F. Time is brain on the collateral clock! collaterals and reperfusion determine tissue injury. Stroke. 2015;46(Suppl 1):A182. [Online]. Available: http://stroke.ahajournals.org/content/46/Suppl_1/A182.abstract. [Google Scholar]
  • 16.Mikolajczyk K, Schmid C. A performance evaluation of local descriptors. IEEE Trans Pattern Anal Mach Intell. 2005 Oct;27:1615–1630. doi: 10.1109/TPAMI.2005.188. [DOI] [PubMed] [Google Scholar]
  • 17.Ngiam J, Khosla A, Kim M, Nam J, Lee H, Ng AY. Multimodal deep learning. Proceedings of the 28th international conference on machine learning (ICML-11) 2011:689–696. [Google Scholar]
  • 18.Nguyen V, Pien H, Menenzes N, Lopez C, Melinosky C, Wu O, Sorensen A, Cooperman G, Ay H, Koroshetz W, Liu Y, Nuutinen J, Aronen H, Karonen J. Stroke Tissue Outcome Prediction Using A Spatially-Correlated Model. PPIC. 2008 [Google Scholar]
  • 19.Palm RB. Prediction as a candidate for learning deep hierarchical models of data. Master's thesis, Technical University of Denmark; 2012. [Google Scholar]
  • 20.Plis SM, Hjelm DR, Salakhutdinov R, Allen EA, Bockholt HJ, Long JD, Johnson HJ, Paulsen JS, Turner JA, Calhoun VD. Deep learning for neuroimaging: a validation study. Frontiers in neuroscience. 2014;8 doi: 10.3389/fnins.2014.00229. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Rekik I, Allassonniere S, Carpenter TK, Wardlaw JM. Using longitudinal metamorphosis to examine ischemic stroke lesion dynamics on perfusion-weighted images and in relation to final outcome on T2-w images. Neuroimage Clin. 2014;5:332–340. doi: 10.1016/j.nicl.2014.07.009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Rose S, Chalk J, Griffin M, Janke A, Chen F, McLachan G, Peel D, Zelaya F, Markus H, Jones D, Simmons A, OSullivan M, Jarosz J, Strugnell W, Doddrell D, Semple J. MRI based diffusion and perfusion predictive model to estimate stroke evolution. JMRI. 2001;19(8):1043–1053. doi: 10.1016/s0730-725x(01)00435-0. [DOI] [PubMed] [Google Scholar]
  • 23.Scalzo F, Hao Q, Alger JR, Hu X, Liebeskind DS. Regional prediction of tissue fate in acute ischemic stroke. Ann Biomed Eng. 2012 Oct;40(10):2177–2187. doi: 10.1007/s10439-012-0591-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Scalzo F, Hu X, Liebeskind DS. Neuroimaging. 2012 In Tech. ch. Tissue Fate Prediction from Regional Imaging Features in Acute Ischemic Stroke. [Google Scholar]
  • 25.Scalzo F, Hao Q, Alger J, Hu X, Liebeskind D. Tissue fate prediction in acute ischemic stroke using cuboid models. ISVC. 2010;6454:292–301. [Google Scholar]
  • 26.Schlaug G, Benfield A, Baird AE, Siewert B, Lovblad KO, Parker RA, Edelman RR, Warach S. The ischemic penumbra: operationally defined by diffusion and perfusion MRI. Neurology. 1999 Oct;53:1528–1537. doi: 10.1212/wnl.53.7.1528. [DOI] [PubMed] [Google Scholar]
  • 27.Shen Q, Duong T. Quantitative Prediction of Ischemic Stroke Tissue Fate. NMR Biomedicine. 2008;21:839–848. doi: 10.1002/nbm.1264. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Shen Q, Ren H, Fisher M, Duong T. Statistical prediction of tissue fate in acute ischemic brain injury. J Cereb Blood Flow Metab. 2005;25:1336–1345. doi: 10.1038/sj.jcbfm.9600126. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Sun Y, Wang X, Tang X. Deep learning face representation from predicting 10,000 classes. Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on. IEEE. 2014:1891–1898. [Google Scholar]
  • 30.Wu O, Koroshetz W, Ostergaard L, Buonanno F, Copen W, Gonzalez R, Rordorf G, Rosen B, Schwamm L, Weisskoff R, Sorensen A. Predicting tissue outcome in acute human cerebral ischemia using combined diffusion- and perfusion-weighted MR imaging. Stroke. 2001 Apr;32(4):933–42. doi: 10.1161/01.str.32.4.933. [DOI] [PubMed] [Google Scholar]
  • 31.Wu O, Sumii T, Asahi M, Sasamata M, Ostergaard L, Rosen B, Lo E, Dijkhuizen R. Infarct prediction and treatment assessment with MRI-based algorithms in experimental stroke models. J Cerebr Blood F Met. 2007;27:196–204. doi: 10.1038/sj.jcbfm.9600328. [DOI] [PubMed] [Google Scholar]
  • 32.Yoo AJ, Barak ER, Copen WA, Kamalian S, Gharai LR, Pervez MA, Schwamm LH, Gonzalez RG, Schaefer PW. Combining acute diffusion-weighted imaging and mean transmit time lesion volumes with NIHSSS improves the prediction of acute stroke outcome. Stroke. 2010 Aug;41:1728–1735. doi: 10.1161/STROKEAHA.110.582874. [DOI] [PubMed] [Google Scholar]
