Abstract
The performance of current machine learning methods to detect heterogeneous pathology is limited by the quantity and quality of pathology in medical images. A possible solution is anomaly detection: an approach that can detect all abnormalities by learning what ‘normal’ tissue looks like. In this work, we propose an anomaly detection method using a neural network architecture for the detection of chronic brain infarcts on brain MR images. The neural network was trained to learn the visual appearance of normal-appearing brains of 697 patients. We evaluated its performance on the detection of chronic brain infarcts in 225 patients, for whom the infarcts had previously been labeled. Our proposed method detected 374 chronic brain infarcts (68% of the total number of brain infarcts), which represented 97.5% of the total infarct volume. Additionally, 26 new brain infarcts were identified that were originally missed by the radiologist during radiological reading. Our proposed method also detected white matter hyperintensities, anomalous calcifications, and imaging artefacts. This work shows that anomaly detection is a powerful approach for the detection of multiple brain abnormalities, and can potentially be used to improve radiological workflow efficiency by guiding radiologists to brain anomalies that would otherwise remain unnoticed.
Subject terms: Brain imaging, Magnetic resonance imaging, Neurological disorders, Image processing, Machine learning, Cerebrovascular disorders
Introduction
In clinical practice, radiologists acquire and assess magnetic resonance (MR) images of the brain for the diagnosis of various brain pathologies. Unfortunately, the process of reading brain MR images is laborious and observer dependent1–5. To reduce observer dependence, and to improve workflow efficiency and diagnostic accuracy, automated (machine learning/‘artificial intelligence’) methods have been proposed to assist the radiologist6–13. A common drawback of these methods is their ‘point solution’ design: each is focused on a specific type of brain pathology. Furthermore, the performance of supervised machine learning solutions depends on the quantity and quality of available examples of pathology. In cerebral small vessel disease, for example, the development of such solutions is challenging because the parenchymal damage is heterogeneous in image contrast, morphology, and size14,15.
A solution that breaks with this conventional approach is anomaly detection: a machine learning approach that can identify all anomalies based solely on features that describe normal data. Because the features of possible anomalies are not learnt, anomalies stand out from the ordinary and can subsequently be detected. Anomaly detection methods are particularly useful when there is an interest in the detection of anomalous events, but their manifestation is unknown a priori and their occurrence is limited16,17. Examples of applications include credit card fraud detection18, IT intrusion detection19, monitoring of aerospace engines during flight20, heart monitoring21, detection of illegal objects in airport luggage22, and the detection of faulty semiconductor wafers23.
In medical imaging, variational autoencoders and generative adversarial networks have been proposed for anomaly detection tasks. Schlegl et al. have developed a generative adversarial network architecture for the detection of abnormalities on optical coherence tomography images24,25. For brain MRI, models have been developed for the detection of tumor tissue26–28, white matter hyperintensities29,30, multiple sclerosis lesions31,32, and acute brain infarcts28.
One of the manifestations of cerebral small vessel disease is the chronic brain infarct, including cortical, subcortical, and lacunar infarcts, each with a different appearance on MRI14,15. Identification of these infarcts is important, because their occurrence is associated with vascular dementia, Alzheimer’s disease, and overall cognitive decline33,34. Because chronic brain infarcts are heterogeneous in appearance, location, and morphology on MR imaging, anomaly detection is a possible solution for their identification.
In this study, we constructed an anomaly detection method using a neural network architecture for the detection of chronic brain infarcts from MRI.
Materials and methods
MR acquisition
In this retrospective study, we used MR image data from the SMART-MR study35, a prospective study on the determinants and course of brain changes on MRI, in which all eligible patients newly referred to our hospital with manifestations of coronary artery disease, cerebrovascular disease, peripheral arterial disease, or an abdominal aortic aneurysm were included after written informed consent was acquired. This study was conducted in accordance with national guidelines and regulations, and was approved by the University Medical Center Utrecht Medical Ethics Review Committee (METC). In total, 967 patients were included in the current study, including 270 patients with brain infarcts (see Table 1 for patient demographics and Fig. 1 for exclusion criteria). The imaging data were acquired at 1.5 T (Gyroscan ACS-NT, Philips, Best, the Netherlands) and consisted of a T1-weighted gradient-echo sequence (repetition time (TR) = 235 ms; echo time (TE) = 2 ms) and a T2-weighted fluid-attenuated inversion recovery (T2-FLAIR) sequence (TR = 6000 ms; TE = 100 ms; inversion time = 2000 ms) (example given in Fig. 2). Both MRI sequences had a reconstructed resolution of 0.9 × 0.9 × 4.0 mm³, consisted of 38 contiguous transversal slices, and were coregistered35. A reasonable request for access to the image data can be sent to Mirjam Geerlings (see author list). The code that supports the findings of this study is publicly available from Bitbucket (https://bitbucket.org/KeesvanHespen/lesion_detection).
Table 1. Patient demographics.

| Characteristics | Patients without brain infarcts | Patients with brain infarcts |
|---|---|---|
| No. of patients | 697 | 270 |
| Age (years) | 57 (± 10) | 62 (± 10) |
| Male sex | 550 | 213 |
All chronic brain infarcts in these images (including cortical infarcts, lacunar infarcts, large subcortical infarcts, and infratentorial infarcts) were manually delineated by a neuroradiologist with more than 30 years of experience, as described in more detail by Geerlings et al.35.
Image preprocessing
The images of both acquisitions were preprocessed by applying N4 bias field correction36. Additionally, image intensities were normalized such that the 5th percentile of pixel values within an available brain mask was set to zero, and the 95th percentile was set to one.
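A minimal sketch of this normalization step, assuming the image and brain mask are already loaded as NumPy arrays (function and variable names are illustrative, not taken from the published code):

```python
import numpy as np

def normalize_intensities(image: np.ndarray, brain_mask: np.ndarray) -> np.ndarray:
    """Scale intensities so that the 5th/95th percentiles inside the brain mask map to 0/1."""
    brain_voxels = image[brain_mask > 0]
    p5, p95 = np.percentile(brain_voxels, [5, 95])
    return (image - p5) / (p95 - p5)
```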
Two-dimensional image patches (smaller subimages of the original image) were sampled within an available brain mask37, at corresponding locations in both acquisitions, for four datasets: a training set, a training-validation set, a validation set, and a test set. For the training set, one million transversal image patches (15 × 15 voxels) were sampled from the images of all but ten patients without brain infarcts. The remaining ten patients without brain infarcts were used for the training-validation set, which was used to assess potential overfitting of the network on the training set. For the training-validation set, 100,000 image patches were randomly sampled. The patches for the training and training-validation sets were augmented at each training epoch by performing random horizontal and vertical flips of the image patches.
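The patch sampling and flip augmentation could be implemented as in the following sketch, which assumes the coregistered T1 and T2-FLAIR slices and the brain mask are available as 2D NumPy arrays; the helper names are chosen for illustration only:

```python
import numpy as np

def sample_patch(t1_slice, flair_slice, brain_mask, patch_size=15, rng=None):
    """Sample one 15x15 two-channel patch at a random location inside the brain mask."""
    rng = rng or np.random.default_rng()
    half = patch_size // 2
    ys, xs = np.nonzero(brain_mask)
    # keep only centres whose patch fits entirely inside the slice
    valid = (ys >= half) & (ys < brain_mask.shape[0] - half) & \
            (xs >= half) & (xs < brain_mask.shape[1] - half)
    idx = rng.integers(valid.sum())
    y, x = ys[valid][idx], xs[valid][idx]
    patch = np.stack([t1_slice[y - half:y + half + 1, x - half:x + half + 1],
                      flair_slice[y - half:y + half + 1, x - half:x + half + 1]])
    return patch  # shape (2, 15, 15): T1 and T2-FLAIR channels

def augment(patch, rng=None):
    """Random horizontal and vertical flips, applied anew at every epoch."""
    rng = rng or np.random.default_rng()
    if rng.random() < 0.5:
        patch = patch[:, :, ::-1]
    if rng.random() < 0.5:
        patch = patch[:, ::-1, :]
    return patch.copy()
```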
The performance of our method on the detection of brain infarcts was evaluated on the validation and test sets. The validation set, which was used to evaluate several model design choices, consisted of 45 randomly selected patients with brain infarcts. The remaining 225 patients were included in the test set, on which the final network performance was evaluated. The validation and test sets contained 93 and 553 brain infarcts, with median volumes of 0.4 ml (range 0.072–282 ml) and 0.44 ml (range 0.036–156 ml), respectively. For these sets, the entire brain was sampled, using a stride of four voxels.
Network architecture
We implemented a neural network architecture based on the GANomaly architecture22 in PyTorch v1.1.038, which ran on the GPU of a standard workstation (Intel Xeon E-1650v3, 32 GB RAM, Nvidia Titan Xp). The neural network (Fig. 3) consisted of a generator (bottom half) and a discriminator (top half). The network had two input channels, one for the T1-weighted and one for the T2-FLAIR image patches. The generator and discriminator consisted of encoder and decoder parts, each containing three sequential (transposed) convolutional layers, interleaved with (leaky) rectified linear unit (ReLU) activation and batch normalization. The generator was trained to encode the input image patches into a latent representation z and to realistically reconstruct the input images from z into the reconstructed image x̂; the reconstruction x̂ was in turn encoded into a second latent representation ẑ. The discriminator was used to help the generator create realistic reconstructions x̂. The latent representations z and ẑ were used to calculate an anomaly score per image patch.
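A condensed PyTorch sketch of such an architecture is given below. It follows the GANomaly layout described above (generator = encoder, decoder, second encoder; discriminator = encoder-style classifier), but the layer widths and kernel sizes are illustrative assumptions rather than the published configuration:

```python
import torch
import torch.nn as nn

def encoder(in_ch, latent_dim):
    """Three convolutions with batch norm and leaky ReLU, ending in a latent vector."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.BatchNorm2d(32), nn.LeakyReLU(0.2),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2),
        nn.Conv2d(64, latent_dim, 4),  # spatial size: 15 -> 8 -> 4 -> 1
        nn.Flatten(),
    )

def decoder(latent_dim, out_ch):
    """Three transposed convolutions mirroring the encoder, reconstructing a 15x15 patch."""
    return nn.Sequential(
        nn.Unflatten(1, (latent_dim, 1, 1)),
        nn.ConvTranspose2d(latent_dim, 64, 4), nn.BatchNorm2d(64), nn.ReLU(),
        nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1),
        nn.BatchNorm2d(32), nn.ReLU(),
        nn.ConvTranspose2d(32, out_ch, 3, stride=2, padding=1), nn.Tanh(),
    )

class Generator(nn.Module):
    """Encoder -> decoder -> second encoder, as in GANomaly."""
    def __init__(self, channels=2, latent_dim=100):
        super().__init__()
        self.enc1 = encoder(channels, latent_dim)
        self.dec = decoder(latent_dim, channels)
        self.enc2 = encoder(channels, latent_dim)

    def forward(self, x):
        z = self.enc1(x)          # latent representation of the input patch
        x_hat = self.dec(z)       # reconstructed patch
        z_hat = self.enc2(x_hat)  # latent representation of the reconstruction
        return x_hat, z, z_hat

class Discriminator(nn.Module):
    """Encoder-style network that scores patches as real or reconstructed."""
    def __init__(self, channels=2, latent_dim=100):
        super().__init__()
        self.features = encoder(channels, latent_dim)
        self.classifier = nn.Sequential(nn.Linear(latent_dim, 1), nn.Sigmoid())

    def forward(self, x):
        f = self.features(x)
        return self.classifier(f), f  # 'real' probability plus features for the adversarial loss
```

A forward pass on a minibatch of patches of shape (64, 2, 15, 15) returns the reconstruction x̂ and the two latent vectors z and ẑ.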
During training of the generator, three error terms were minimized. First, the reconstruction error L_rec was computed as the mean difference between the input image x and the reconstructed image x̂ (L_rec = ‖x − x̂‖₁). Second, the encoding error L_enc was given by the L2 loss between the latent space vectors z and ẑ (L_enc = ‖z − ẑ‖₂). Last, the adversarial loss L_adv was computed as the L2 loss between the features from the first to the last layer of the discriminator, given the input image and the reconstructed image (L_adv = ‖f(x) − f(x̂)‖₂, with f(·) the discriminator features). To balance the optimization of the network, the generator loss was computed as a weighted combination of the aforementioned losses, with weights of 70, 10, and 1 for L_rec, L_enc, and L_adv, respectively.
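Expressed in code, the generator loss could be computed as in the sketch below, reusing the (hypothetical) Generator and Discriminator modules above. Two simplifications are assumptions: an L1 distance for the "mean difference" reconstruction term, and final-layer discriminator features rather than features from every layer:

```python
import torch.nn.functional as F

def generator_loss(x, x_hat, z, z_hat, disc, w_rec=70.0, w_enc=10.0, w_adv=1.0):
    """Weighted sum of reconstruction, encoding, and adversarial (feature-matching) losses."""
    l_rec = F.l1_loss(x_hat, x)                  # L_rec: difference between input and reconstruction
    l_enc = F.mse_loss(z_hat, z)                 # L_enc: L2 loss between latent vectors z and z_hat
    _, f_real = disc(x)
    _, f_fake = disc(x_hat)
    l_adv = F.mse_loss(f_fake, f_real.detach())  # L_adv: L2 loss between discriminator features
    return w_rec * l_rec + w_enc * l_enc + w_adv * l_adv
```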
For the discriminator, two error terms were minimized during training40. The first term is the binary cross entropy between the discriminator predictions for the input images and their target labels, and the second term is the binary cross entropy between the predictions for the reconstructed images and their target labels. To prevent vanishing gradients in the discriminator, a soft labeling method was chosen, in which the labels for the reconstructed and input images were drawn uniformly between 0 and 0.2, and between 0.8 and 1, respectively.
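A sketch of the corresponding discriminator update with soft labels follows; pairing the input images with the high (real) labels and the reconstructions with the low (fake) labels is the conventional reading of the description above, not a detail stated explicitly in the text:

```python
import torch
import torch.nn.functional as F

def discriminator_loss(x, x_hat, disc):
    """Binary cross-entropy with soft labels: input patches ~U[0.8, 1], reconstructions ~U[0, 0.2]."""
    p_real, _ = disc(x)
    p_fake, _ = disc(x_hat.detach())   # detach: do not backpropagate into the generator here
    real_labels = torch.empty_like(p_real).uniform_(0.8, 1.0)
    fake_labels = torch.empty_like(p_fake).uniform_(0.0, 0.2)
    return F.binary_cross_entropy(p_real, real_labels) + \
           F.binary_cross_entropy(p_fake, fake_labels)
```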
The network was trained on the training image patches, which were fed to the network in minibatches of 64. A learning rate of 0.001 was used, with Adam as optimizer41. Training continued until the generator loss on the training-validation set had not decreased for ten epochs. The network weights of the epoch with the lowest generator loss were used for testing.
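Putting the pieces together, the training loop with early stopping on the training-validation generator loss could look like this sketch; it assumes the loss helpers and modules from the previous snippets, and data loaders that yield minibatches of 64 two-channel patches:

```python
import copy
import torch

def train(gen, disc, train_loader, trainval_loader, max_epochs=500, patience=10):
    """Adam optimizers (lr=0.001); stop once the training-validation generator loss has not
    improved for `patience` epochs, and keep the weights of the best epoch."""
    opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
    best_loss, best_state, stale = float("inf"), None, 0
    for epoch in range(max_epochs):
        gen.train(); disc.train()
        for x in train_loader:                     # x: (64, 2, 15, 15) augmented patches
            x_hat, z, z_hat = gen(x)
            opt_d.zero_grad()
            discriminator_loss(x, x_hat, disc).backward()
            opt_d.step()
            opt_g.zero_grad()
            generator_loss(x, x_hat, z, z_hat, disc).backward()
            opt_g.step()
        gen.eval(); disc.eval()
        with torch.no_grad():                      # generator loss on the training-validation set
            val_loss = 0.0
            for xv in trainval_loader:
                xv_hat, zv, zv_hat = gen(xv)
                val_loss += generator_loss(xv, xv_hat, zv, zv_hat, disc).item()
        if val_loss < best_loss:
            best_loss, best_state, stale = val_loss, copy.deepcopy(gen.state_dict()), 0
        else:
            stale += 1
            if stale >= patience:
                break
    gen.load_state_dict(best_state)
    return gen
```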
Anomaly scoring
An anomaly score was calculated per image patch as the modified Z-score, a measure of how many median absolute deviations a value lies away from the median. During training, a median and median absolute deviation were calculated per element of the difference vector z − ẑ, over all training image patches. These values were used to calculate the modified Z-score for all difference vector elements of the validation/test image patches. The anomaly score for each image patch was calculated by taking the Nth percentile of the modified Z-score values over all vector elements. The value of the percentile was determined in the first experiment.
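The anomaly score computation could be implemented as in the NumPy sketch below. Note that the conventional 0.6745 scaling constant of the modified Z-score is omitted here, since the text defines the score simply as the number of median absolute deviations from the median; the exact definition in the authors' implementation may differ:

```python
import numpy as np

def fit_reference_stats(train_diffs):
    """train_diffs: (n_patches, latent_dim) array of difference vectors z - z_hat computed
    on all training image patches. Returns per-element median and median absolute deviation."""
    med = np.median(train_diffs, axis=0)
    mad = np.median(np.abs(train_diffs - med), axis=0)
    return med, mad

def anomaly_score(diff, med, mad, percentile=50):
    """Modified Z-score per vector element, summarised as the Nth percentile over all elements."""
    modified_z = np.abs(diff - med) / (mad + 1e-12)  # epsilon avoids division by zero
    return np.percentile(modified_z, percentile)
```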
The anomaly scores of all image patches were projected back onto the original brain image, where areas with an anomaly score larger than 3 were flagged as suspected anomalies. Spurious activations, where only a single isolated patch had an anomaly score larger than 3, were filtered out from the final result.
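A sketch of this filtering step is shown below, assuming the per-patch scores have first been arranged on the stride-4 sampling grid; how suspected anomalies are exactly grouped is not specified in the text, so the connected-component grouping used here is an assumption:

```python
import numpy as np
from scipy import ndimage

def filter_isolated_patches(patch_score_grid, threshold=3.0):
    """patch_score_grid: anomaly scores arranged on the patch-sampling grid.
    Flags grid positions with a score above the threshold and removes connected
    components that consist of a single isolated patch (spurious activations)."""
    flagged = patch_score_grid > threshold
    labels, n_components = ndimage.label(flagged)
    sizes = ndimage.sum(flagged, labels, index=np.arange(1, n_components + 1))
    keep_labels = np.flatnonzero(sizes > 1) + 1  # keep components of more than one patch
    return np.isin(labels, keep_labels)
```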
Experiments
Latent vector size and anomaly score calculation
We investigated the effect of the size of the latent vectors z and ẑ (Fig. 3), and of the anomaly score calculation, on the detection of brain infarcts. We trained several neural network instances with varying latent vector sizes: 50, 75, 100, 150, 200, 300, and 400 vector elements. Additionally, we varied the percentile N used for the anomaly score calculation between 10 and 75, in steps of 5 percentage points. For both parameters, we computed the sensitivity and the average number of suspected anomalies per image, as well as the volume fraction of the detected brain infarcts relative to the total brain infarct volume.
Suspected anomaly classification
We used the optimal parameters to evaluate the performance of our proposed method on the test set. We computed the sensitivity and the average number of suspected anomalies per image. In addition, we analyzed the origin of the remaining suspected anomalies: a neuroradiologist with more than 10 years of experience (JWD) classified these suspected anomalies into one of seven classes, namely normal tissue, unannotated brain infarct, white matter hyperintensity, blood vessel, calcification, bone, and image artefact.
Missed brain infarcts
We investigated why our proposed method missed some brain infarcts by evaluating the volume and location of these missed brain infarcts. Additionally, we performed a nearest neighbor analysis, in which we analyzed which training image patches were similar to the test image patches with missed brain infarcts.
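The nearest neighbor analysis could be performed as in this sketch; the distance metric (Euclidean distance on flattened patch intensities) and the use of scikit-learn are assumptions, since the text does not specify how patch similarity was measured:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def nearest_training_patches(train_patches, missed_patches, k=5):
    """For each image patch containing a missed brain infarct, find the k most similar
    training patches. Both inputs are arrays of shape (n, 2, 15, 15)."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_patches.reshape(len(train_patches), -1))
    distances, indices = nn.kneighbors(missed_patches.reshape(len(missed_patches), -1))
    return distances, indices
```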
Results
Latent vector size and anomaly score calculation
Based on the validation set, a latent vector size of 100 showed the highest sensitivity and detected brain infarct volume fraction for the same number of suspected anomalies over almost the entire range, compared to all other latent vector sizes (Fig. 4). Similarly, using the 50th percentile yielded an optimal tradeoff between sensitivity and fraction of detected brain infarct volume against the number of suspected anomalies. Given our validation dataset, the optimal parameters for the detection of brain infarcts were therefore the 50th percentile with a latent vector size of 100.
Suspected anomaly classification
We used the optimal parameters to evaluate the performance of our proposed method on the test set, on which our proposed method found on average nine suspected anomalies per image (total: 1953; examples are given in Fig. 5). In total, 374 out of 553 brain infarcts were detected by our model (sensitivity: 68%), representing 19.2% of all suspected anomalies (Table 2). These detected brain infarcts represented 97.5% of the total brain infarct volume. Eight hundred and sixty-five (44.3%) suspected anomalies were caused by white matter hyperintensities. Image artefacts, e.g. due to patient motion, accounted for 115 (5.9%) of all suspected anomalies. Normal healthy tissue accounted for 563 (28.8%) of all suspected anomalies; in most cases, these normal tissue false positives were located at tissue-cerebrospinal fluid boundaries. Most interestingly, 26 (1.3%) of all suspected anomalies corresponded to unannotated brain infarcts, which were oftentimes located in the cerebellum and the most cranial image slices.
Table 2. Categorization of the suspected anomalies in the test set.

| Suspected anomaly category | Count (%) |
|---|---|
| White matter hyperintensity | 865 (44.3) |
| Normal tissue | 563 (28.8) |
| Annotated brain infarcts | 374 (19.2) |
| Image artefact | 115 (5.9) |
| Unannotated brain infarct | 26 (1.3) |
| Blood vessel | 6 (0.3) |
| Calcification | 2 (0.1) |
| Bone | 2 (0.1) |
The suspected anomalies were categorized into eight categories. The categorization was performed manually by a trained radiologist, except for the ‘annotated brain infarcts’, for which annotations were already available from the image dataset used.
Missed brain infarcts
We performed an additional experiment to better understand why our proposed method missed 179 small brain infarcts (2.5% of the total brain infarct volume). Volume analysis revealed that the missed brain infarcts were in almost all cases smaller than 1 ml (median volume = 0.23 ml). Almost half of the missed brain infarcts (75) were located near the ventricles, at the level of the basal ganglia. Twenty-three missed brain infarcts were located in the cerebellum and 26 in the brain stem. The remaining 55 were located in the cerebral cortex. Automated analysis, in which we used a nearest neighbor algorithm to determine which training image patches were similar to the missed brain infarct image patches, revealed that these patches were oftentimes closely related to training image patches containing sulci or tissue-cerebrospinal fluid boundaries (see Fig. 6). Furthermore, missed brain infarcts in the brainstem were mostly linked to training image patches in the cortical region: image patches in the brainstem that border on the cerebrospinal fluid looked similar to cortical image patches with cerebrospinal fluid.
Discussion
In this paper, we proposed an anomaly detection method for the detection of chronic brain infarcts on brain MR images. Our proposed method detected brain infarcts that accounted for 97.5% of the total brain infarct volume in 225 patients. In most cases, the missed brain infarcts had a volume smaller than 1 ml; they were mostly located in the brain stem, the cerebellum, and next to the ventricles. White matter hyperintensities, anomalous calcifications, and imaging artefacts accounted for 44.3%, 0.1%, and 5.9% of suspected anomalies, respectively. Additionally, our proposed method identified additional brain infarcts that were previously missed by the radiologist during radiological reading (1.3% of all suspected anomalies).
The additional brain infarcts that our method found indicate that it properly evaluated regions that are more easily overlooked by a trained radiologist. The percentage of unannotated brain infarcts that our proposed method found (5% of all brain infarcts) is in accordance with the literature, which suggests that reading errors occur in 3–5% of cases in day-to-day radiological practice. These errors can occur because of inattentional bias, where radiologists focus on the center of an image while overlooking findings at the edges of the acquired image3. As the workload in radiology is ever increasing due to a growing volume of radiological images to be assessed, it becomes even easier to overlook brain pathologies42. Our automated analysis method can potentially alleviate part of this assessment by suggesting anomalous areas for the radiologist to look at, or by guiding the radiologist to potentially overlooked brain areas.
Besides suggesting anomalous areas during assessment of the scans by a radiologist, anomaly detection can also be used during image acquisition. If an important pathology is suspected, this information can be used instantly to adapt the acquisition protocol, by relocating the field of view or by adding an acquisition that is important for subsequent analysis of the suspected pathology.
Cerebral small vessel disease is a disease with multiple manifestations on MR images. Our proposed method has shown that it can detect at least two of these manifestations, namely chronic brain infarcts (including cortical infarcts, lacunar infarcts, large subcortical infarcts, and infratentorial infarcts) and anomalous white matter hyperintensities. This is in contrast to other methods6,43, which are trained for the detection of a single, homogeneous type of brain pathology. Similarly, other heterogeneous brain pathologies, such as brain tumors or other manifestations of cerebral small vessel disease, can likely also be detected using anomaly detection.
Anomaly detection, which has been used for several years in various fields such as banking, aerospace, IT, and the manufacturing industry, can be explored further in the medical imaging field. Potential other applications of anomaly detection include the detection of lung nodules on chest CT images, anomalous regions in the retina, breast cancer in mammograms, calcifications in breast MRI, liver tumor metastases, or areas with low fiber tract integrity in diffusion tensor imaging25,44–47. Anomaly detection can potentially also be beneficial for the detection of artefacts in MR spectroscopy or of motion artefacts in MR images48,49. For example, by analyzing the acquired k-space data for motion artefacts during patient scanning, a decision can be made more quickly to redo (parts of) the acquisition.
In future work, the use of 2.5D or 3D contextual information can potentially mitigate problems related to the interpretation of small brain infarcts, preferably using MR images with an isotropic voxel size. This approach would mimic the behavior of human readers, who also use contextual information by scrolling through images when reading a scan. In addition to improving performance on small brain infarcts, adding contextual information can potentially mitigate false positive detections of normal tissue.
Limitations
Our method has several limitations. First, its performance on brain infarcts smaller than 1 ml is limited. The detection of small lesions is a common problem that is also present in other medical image analysis applications50–55. Ghafoorian et al. have aimed to improve the detection of small lesions by splitting image processing into pathways finetuned to large and small lesions56. For our approach, other network architectures might be investigated. Our current architecture seems to predominantly find large anomalies with a relatively large contrast difference compared to the surrounding brain tissue. Other approaches (e.g. a recurrent convolutional neural network51) might be able to put more emphasis on finding smaller anomalies with a lower contrast compared to the background. Besides improvements to image analysis, changes in image acquisition may also contribute to better lesion detection. The detection of small lesions is expected to improve with more up-to-date scanning protocols, which oftentimes have a higher signal-to-noise ratio and/or higher spatial resolution. Van Veluw et al. have shown that microinfarcts can be detected at 7 T MRI57, and later work has shown that these small lesions can be imaged at 3 T MRI as well58. The current work was done on 1.5 T data with limited sensitivity for detecting small lesions.
Design choices for the neural network architecture were made based on its performance on the validation set, which included brain infarcts. This does not reflect pure anomaly detection, in which anomalies should be completely unknown; however, such an approach is infeasible in practice. Other anomaly detection methods have also used validation sets to tune hyperparameters59–61. The use of our validation dataset had no influence on the training of the network, because the network weights of the training epoch with the lowest training-validation loss were used, as opposed to the network weights of the training epoch with the best brain infarct detection performance on the validation set.
Anomaly detection commonly does not involve classification of anomalies, but only their localization. In case classification is needed, automated methods or manual inspection should be performed after analysis by our proposed method. We envision a workflow in routine radiology where anomaly detection locates possible lesions, a secondary system classifies these lesions or labels them as ‘unknown’, and the results are finally presented to a radiologist for inspection.
The method was developed and evaluated on data from a single cohort study. The performance on scans acquired with other scanners, from other vendors, and at different field strengths is therefore unknown and a topic of future work. The method could be made applicable to other scanners by partial or full retraining, or by applying transfer learning techniques. In the latter case, a relatively small new dataset might be needed.
The number of false positive detections is relatively high (28.8% of all detections). However, we believe that suggesting a few healthy brain locations to the radiologist is less of a problem than missing pathology. Moreover, our model has shown its added value by suggesting infarcts that were overlooked by the radiologist.
Lastly, our training set potentially contains unannotated brain infarcts, similar to the test set, in which 5% additional brain infarcts were detected. The effect of abnormalities being present in the training data on the final detection performance is likely small: training is dominated by the numerous image patches of normal appearing brain tissue, so any abnormalities will have minimal impact on the anomaly score calculation.
In conclusion, we developed an anomaly detection model for detecting chronic brain infarcts on MR images, which recovered 97.5% of the total brain infarct volume. Additionally, we showed that our proposed method finds other brain abnormalities, some of which were missed by the radiologist. This supports the use of anomaly detection as an automated tool for computer-aided image analysis.
Acknowledgements
The Titan Xp used for this research was donated by the NVIDIA Corporation. This work has been made possible by the Dutch Heart Foundation and the Netherlands Organisation for Scientific Research (NWO), as part of their joint strategic research programme "Earlier recognition of cardiovascular diseases". This project is partially financed by the PPP Allowance made available by Top Sector Life Sciences & Health to the Dutch Heart Foundation to stimulate public-private partnerships [Grant numbers 14729 and 2015B028]. J.J.M.Z. was funded by the European Research Council under the European Union’s Horizon 2020 Programme (H2020)/ERC [Grant number 841865] (SELMA). J.H. was funded by the European Research Council [Grant number 637024] (HEARTOFSTROKE). H.J.K. was funded by ZonMW [Grant number 451001007].
Author contributions
K.M.H., J.J.Z., J.H., and H.J.K. were involved in the conceptualization of the work. M.I.G. was involved in the acquisition of the study data. Data curation and analysis was performed by K.M.H. and J.W.D. The manuscript was written by K.M.H., J.J.Z., and H.J.K. All authors reviewed and approved the manuscript.
Data availability
A reasonable request for access to the image data can be sent to Mirjam Geerlings (see author list). The code that supports the findings of this study is publicly available from Bitbucket (https://bitbucket.org/KeesvanHespen/lesion_detection).
Competing interests
The authors declare no competing interests.
Footnotes
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1. Hagens MHJ, et al. Impact of 3 Tesla MRI on interobserver agreement in clinically isolated syndrome: A MAGNIMS multicentre study. Mult. Scler. J. 2019;25:352–360. doi: 10.1177/1352458517751647.
- 2. Geurts BHJ, Andriessen TMJC, Goraj BM, Vos PE. The reliability of magnetic resonance imaging in traumatic brain injury lesion detection. Brain Inj. 2012;26:1439–1450. doi: 10.3109/02699052.2012.694563.
- 3. Busby LP, Courtier JL, Glastonbury CM. Bias in radiology: The how and why of misses and misinterpretations. Radiographics. 2018;38:236–247. doi: 10.1148/rg.2018170107.
- 4. Brady AP. Error and discrepancy in radiology: Inevitable or avoidable? Insights Imaging. 2017;8:171–182. doi: 10.1007/s13244-016-0534-1.
- 5. Lee CS, Nagy PG, Weaver SJ, Newman-Toker DE. Cognitive and system factors contributing to diagnostic errors in radiology. Am. J. Roentgenol. 2013;201:611–617. doi: 10.2214/AJR.12.10375.
- 6. Guerrero R, et al. White matter hyperintensity and stroke lesion segmentation and differentiation using convolutional neural networks. NeuroImage Clin. 2018;17:918–934. doi: 10.1016/j.nicl.2017.12.022.
- 7. Atlason HE, Love A, Sigurdsson S, Gudnason V, Ellingsen LM. SegAE: Unsupervised white matter lesion segmentation from brain MRIs using a CNN autoencoder. NeuroImage Clin. 2019;24:102085. doi: 10.1016/j.nicl.2019.102085.
- 8. Gabr RE, et al. Brain and lesion segmentation in multiple sclerosis using fully convolutional neural networks: A large-scale study. Mult. Scler. J. 2020;26:1217–1226. doi: 10.1177/1352458519856843.
- 9. Devine J, Sahgal A, Karam I, Martel AL. Automated metastatic brain lesion detection: a computer aided diagnostic and clinical research tool. In: Tourassi GD, Armato SG, editors. Medical Imaging 2016: Computer-Aided Diagnosis. International Society for Optics and Photonics; 2016.
- 10. van Wijnen KMH, et al. Automated lesion detection by regressing intensity-based distance with a neural network. In: Shen D, et al., editors. Medical Image Computing and Computer Assisted Intervention—MICCAI 2019. Springer Verlag; 2019. pp. 234–242.
- 11. Ain Q, Mehmood I, Naqi SM, Jaffar MA. Bayesian classification using DCT features for brain tumor detection. In: Setchi R, Jordanov I, Howlett RJ, Jain LC, editors. Knowledge-Based and Intelligent Information and Engineering Systems. KES 2010. Lecture Notes in Computer Science. Springer; 2010. pp. 340–349.
- 12. Shen S, Szameitat AJ, Sterr A. Detection of infarct lesions from single MRI modality using inconsistency between voxel intensity and spatial location—A 3-D automatic approach. IEEE Trans. Inf. Technol. Biomed. 2008;12:532–540. doi: 10.1109/TITB.2007.911310.
- 13. Cabezas M, et al. Automatic multiple sclerosis lesion detection in brain MRI by FLAIR thresholding. Comput. Methods Programs Biomed. 2014;115:147–161. doi: 10.1016/j.cmpb.2014.04.006.
- 14. Wardlaw JM, et al. Neuroimaging standards for research into small vessel disease and its contribution to ageing and neurodegeneration. Lancet Neurol. 2013;12:822–838. doi: 10.1016/S1474-4422(13)70124-8.
- 15. Pantoni L. Cerebral small vessel disease: From pathogenesis and clinical characteristics to therapeutic challenges. Lancet Neurol. 2010;9:689–701. doi: 10.1016/S1474-4422(10)70104-6.
- 16. Pimentel MAF, Clifton DA, Clifton L, Tarassenko L. A review of novelty detection. Signal Process. 2014;99:215–249. doi: 10.1016/j.sigpro.2013.12.026.
- 17. Chandola V, Banerjee A, Kumar V. Anomaly detection. ACM Comput. Surv. 2009;41:1–58. doi: 10.1145/1541880.1541882.
- 18. Phua C, Alahakoon D, Lee V. Minority report in fraud detection. ACM SIGKDD Explor. Newsl. 2004;6:50–59. doi: 10.1145/1007730.1007738.
- 19. Jyothsna V, Prasad VVR, Prasad KM. A review of anomaly based intrusion detection systems. Int. J. Comput. Appl. 2011;28:26–35.
- 20. Clifton DA, Bannister PR, Tarassenko L. A framework for novelty detection in jet engine vibration data. Key Eng. Mater. 2007;347:305–310. doi: 10.4028/www.scientific.net/KEM.347.305.
- 21. Lemos AP, Tierra-Criollo CJ, Caminhas WM. ECG anomalies identification using a time series novelty detection technique. In: Müller-Karger C, Wong S, La Cruz A, editors. IFMBE Proceedings. Springer Verlag; 2007. pp. 65–68.
- 22. Akcay S, Atapour-Abarghouei A, Breckon TP. GANomaly: Semi-supervised anomaly detection via adversarial training. In: Jawahar CV, et al., editors. Computer Vision - ACCV 2018, Vol. 11363 LNCS. Springer International Publishing; 2019. pp. 622–637.
- 23. Kim D, Kang P, Cho S, Lee H, Doh S. Machine learning-based novelty detection for faulty wafer detection in semiconductor manufacturing. Expert Syst. Appl. 2012;39:4075–4083. doi: 10.1016/j.eswa.2011.09.088.
- 24. Schlegl T, Seeböck P, Waldstein SM, Schmidt-Erfurth U, Langs G. Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. In: Niethammer M, et al., editors. Information Processing in Medical Imaging. Springer International Publishing; 2017. pp. 146–157.
- 25. Schlegl T, Seeböck P, Waldstein SM, Langs G, Schmidt-Erfurth U. f-AnoGAN: Fast unsupervised anomaly detection with generative adversarial networks. Med. Image Anal. 2019;54:30–44. doi: 10.1016/j.media.2019.01.010.
- 26. Chen X, Konukoglu E. Unsupervised detection of lesions in brain MRI using constrained adversarial auto-encoders. Preprint at http://arxiv.org/abs/1806.04972 (2018).
- 27. Sun L, et al. An adversarial learning approach to medical image synthesis for lesion detection. IEEE J. Biomed. Health Inform. 2020;24:2303–2314. doi: 10.1109/JBHI.2020.2964016.
- 28. Alex V, Safwan KPM, Chennamsetty SS, Krishnamurthi G. Generative adversarial networks for brain lesion detection. In: Styner MA, Angelini ED, editors. Medical Imaging 2017: Image Processing. International Society for Optics and Photonics; 2017.
- 29. Bowles C, et al. Brain lesion segmentation through image synthesis and outlier detection. NeuroImage Clin. 2017;16:643–658. doi: 10.1016/j.nicl.2017.09.003.
- 30. Kuijf HJ, et al. Supervised novelty detection in brain tissue classification with an application to white matter hyperintensities. In: Styner MA, Angelini ED, editors. Medical Imaging 2016: Image Processing. International Society for Optics and Photonics; 2016.
- 31. Baur C, Wiestler B, Albarqouni S, Navab N. Deep autoencoding models for unsupervised anomaly segmentation in brain MR images. In: Crimi A, et al., editors. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Springer Verlag; 2019. pp. 161–169.
- 32. Baur C, Graf R, Wiestler B, Albarqouni S, Navab N. SteGANomaly: Inhibiting CycleGAN steganography for unsupervised anomaly detection in brain MRI. In: Martel AL, et al., editors. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Springer Science and Business Media Deutschland GmbH; 2020. pp. 718–727.
- 33. van Veluw SJ, et al. Detection, risk factors, and functional consequences of cerebral microinfarcts. Lancet Neurol. 2017;16:730–740. doi: 10.1016/S1474-4422(17)30196-5.
- 34. Saczynski JS, et al. Cerebral infarcts and cognitive performance. Stroke. 2009;40:677–682. doi: 10.1161/STROKEAHA.108.530212.
- 35. Geerlings MI, et al. Brain volumes and cerebrovascular lesions on MRI in patients with atherosclerotic disease. The SMART-MR study. Atherosclerosis. 2010;210:130–136. doi: 10.1016/j.atherosclerosis.2009.10.039.
- 36. Tustison NJ, et al. N4ITK: Improved N3 bias correction. IEEE Trans. Med. Imaging. 2010;29:1310–1320. doi: 10.1109/TMI.2010.2046908.
- 37. Anbeek P, Vincken KL, van Bochove GS, van Osch MJP, van der Grond J. Probabilistic segmentation of brain tissue in MR imaging. Neuroimage. 2005;27:795–804. doi: 10.1016/j.neuroimage.2005.05.046.
- 38. Paszke A, et al. PyTorch: An imperative style, high-performance deep learning library. In: Wallach H, et al., editors. Advances in Neural Information Processing Systems. Curran Associates, Inc.; 2019. pp. 8024–8035.
- 39. Iqbal H. HarisIqbal88/PlotNeuralNet v1.0.0. 2018. doi: 10.5281/zenodo.2526396.
- 40. Goodfellow IJ, et al. GAN (generative adversarial nets). J. Japan Soc. Fuzzy Theory Intell. Inform. 2017;29:177–177.
- 41. Kingma DP, Ba J. Adam: A method for stochastic optimization. In: Lehman J, Stanley KO, editors. Genetic and Evolutionary Computation. ACM Press; 2014. pp. 103–110.
- 42. McDonald RJ, et al. The effects of changes in utilization and technological advancements of cross-sectional imaging on radiologist workload. Acad. Radiol. 2015;22:1191–1198. doi: 10.1016/j.acra.2015.05.007.
- 43. Guo D, et al. Automated lesion detection on MRI scans using combined unsupervised and supervised methods. BMC Med. Imaging. 2015;15:50. doi: 10.1186/s12880-015-0092-x.
- 44. Zhang X, et al. Characterization of white matter changes along fibers by automated fiber quantification in the early stages of Alzheimer’s disease. NeuroImage Clin. 2019;22:101723. doi: 10.1016/j.nicl.2019.101723.
- 45. Wang J, et al. Detecting cardiovascular disease from mammograms with deep learning. IEEE Trans. Med. Imaging. 2017;36:1172–1181. doi: 10.1109/TMI.2017.2655486.
- 46. Ouardini K, et al. Towards practical unsupervised anomaly detection on retinal images. In: Wang Q, et al., editors. Domain Adaptation and Representation Transfer and Medical Image Learning with Less Labels and Imperfect Data. Springer Verlag; 2019. pp. 225–234.
- 47. Tarassenko L, Hayton P, Cerneaz N, Brady M. Novelty detection for the identification of masses in mammograms. In: 4th International Conference on Artificial Neural Networks, Vol. 1995. IET; 1995. pp. 442–447.
- 48. Kyathanahally SP, Döring A, Kreis R. Deep learning approaches for detection and removal of ghosting artifacts in MR spectroscopy. Magn. Reson. Med. 2018;80:851–863. doi: 10.1002/mrm.27096.
- 49. Küstner T, et al. Automated reference-free detection of motion artifacts in magnetic resonance images. Magn. Reson. Mater. Phys. Biol. Med. 2018;31:243–256. doi: 10.1007/s10334-017-0650-z.
- 50. Kuijf HJ, et al. Standardized assessment of automatic segmentation of white matter hyperintensities and results of the WMH segmentation challenge. IEEE Trans. Med. Imaging. 2019;38:2556–2568. doi: 10.1109/TMI.2019.2905770.
- 51. Sudre CH, et al. 3D multirater RCNN for multimodal multiclass detection and characterisation of extremely small objects. In: Cardoso MJ, et al., editors. Proceedings of Machine Learning Research. PMLR; 2019. pp. 447–456.
- 52. Ngo D-K, Tran M-T, Kim S-H, Yang H-J, Lee G-S. Multi-task learning for small brain tumor segmentation from MRI. Appl. Sci. 2020;10:7790. doi: 10.3390/app10217790.
- 53. Binczyk F, et al. MiMSeg—An algorithm for automated detection of tumor tissue on NMR apparent diffusion coefficient maps. Inf. Sci. (Ny) 2017;384:235–248. doi: 10.1016/j.ins.2016.07.052.
- 54. Schmidt P, et al. An automated tool for detection of FLAIR-hyperintense white-matter lesions in multiple sclerosis. Neuroimage. 2012;59:3774–3783. doi: 10.1016/j.neuroimage.2011.11.032.
- 55. Fartaria MJ, et al. Automated detection and segmentation of multiple sclerosis lesions using ultra–high-field MP2RAGE. Invest. Radiol. 2019;54:356–364. doi: 10.1097/RLI.0000000000000551.
- 56. Ghafoorian M, et al. Automated detection of white matter hyperintensities of all sizes in cerebral small vessel disease. Med. Phys. 2016;43:6246–6258. doi: 10.1118/1.4966029.
- 57. van Veluw SJ, et al. In vivo detection of cerebral cortical microinfarcts with high-resolution 7T MRI. J. Cereb. Blood Flow Metab. 2013;33:322–329. doi: 10.1038/jcbfm.2012.196.
- 58. Ferro DA, van Veluw SJ, Koek HL, Exalto LG, Biessels GJ. Cortical cerebral microinfarcts on 3 Tesla MRI in patients with vascular cognitive impairment. J. Alzheimer’s Dis. 2017;60:1443–1450. doi: 10.3233/JAD-170481.
- 59. Atlason HE, Love A, Sigurdsson S, Gudnason V, Ellingsen LM. Unsupervised brain lesion segmentation from MRI using a convolutional autoencoder. Preprint at http://arxiv.org/abs/1811.09655 (2018).
- 60. Vasilev A, et al. q-Space novelty detection with variational autoencoders. In: Bonet-Carne E, et al., editors. Computational Diffusion MRI. Springer International Publishing; 2020. pp. 113–124.
- 61. Alaverdyan Z, Jung J, Bouet R, Lartizien C. Regularized siamese neural network for unsupervised outlier detection on brain multiparametric magnetic resonance imaging: Application to epilepsy lesion screening. Med. Image Anal. 2020;60:101618. doi: 10.1016/j.media.2019.101618.