BJR Open. 2019 Nov 28; 2(1): 20190031. doi: 10.1259/bjro.20190031

The role of artificial intelligence in medical imaging research

Xiaoli Tang
PMCID: PMC7594889  PMID: 33178962

Abstract

Without doubt, artificial intelligence (AI) is the most discussed topic today in medical imaging research, both diagnostic and therapeutic. For diagnostic imaging alone, the number of publications on AI has increased from about 100–150 per year in 2007–2008 to 1000–1100 per year in 2017–2018. Researchers have applied AI to automatically recognize complex patterns in imaging data and to provide quantitative assessments of radiographic characteristics. In radiation oncology, AI has been applied to the different imaging modalities used at different stages of treatment, e.g. tumor delineation and treatment assessment. Radiomics, the high-throughput extraction of a large number of image features from radiation images, is one of the most popular topics today in medical imaging research. AI is the essential engine for processing massive numbers of medical images and therefore for uncovering disease characteristics that cannot be appreciated by the naked eye. The objectives of this paper are to review the history of AI in medical imaging research, its current role, the challenges that need to be resolved before AI can be widely adopted in the clinic, and its potential future.

A brief overview of the history

During the summer of 1956, a handful of scientists from a variety of fields (mathematics, psychology, engineering, economics and political science) gathered at a workshop on the campus of Dartmouth College to discuss the possibility of creating an artificial brain. This meeting, widely known as the Dartmouth Workshop, founded the field of artificial intelligence (AI).1 The field then went through several cycles of peaks and valleys. MIT cognitive scientist Marvin Minsky, along with other attendees of the Dartmouth Workshop, was extremely optimistic about AI’s future, believing that the problem of AI would be substantially solved within a generation. However, no significant progress was made. After several critical reports and ongoing pressure from Congress, government funding and interest dropped off, and 1974–80 became the first AI winter. In the 1980s, AI revived, spurred in part by competition from British and Japanese government programs. The second major AI winter, 1987–93, coincided with the collapse of the market for specialized AI computing hardware, which led to the withdrawal of funding once again. Research began to pick up again after that. One well-known event was IBM’s Deep Blue becoming, in 1997, the first computer to beat a reigning world chess champion. In 2011, the computer giant’s question-answering system Watson won the quiz show Jeopardy!, marking the newest wave of the AI boom. In parallel, over roughly the past decade the amount of imaging data in medical imaging research has grown exponentially. This has increased the burden on physicians, who need to read images with higher efficiency while maintaining the same or better accuracy. At the same time, fortunately, computational power has also grown exponentially. These challenges and opportunities have formed the perfect foundation for AI to blossom in medical imaging research.

Researchers have successfully applied AI in radiology to identify findings both detectable and undetectable by the human eye. Radiology is now moving from a subjective perceptual skill to a more objective science.2,3 In radiation oncology, AI has been successfully applied to automatic tumor and organ segmentation4–8 and to tumor monitoring during the course of treatment for adaptive therapy. In 2012, the Dutch researcher Philippe Lambin proposed the concept of “radiomics” for the first time, defining it as the extraction of a large number of image features from radiation images with a high-throughput approach.9 As AI has become more popular and more medical images than ever are being generated, there is good reason for radiomics to evolve rapidly. Radiomics is a novel approach toward precision medicine. These studies have demonstrated the great potential of AI in medical imaging. In fact, this has sparked an ongoing discussion: will AI replace clinicians entirely? We believe it will not. In the short term, AI is constrained by a lack of high-quality, high-volume, longitudinal outcomes data, a constraint further exacerbated by the competing need for strict privacy protection.10 Approaches such as distributed learning have been proposed to address the privacy threat. However, a 2017 paper argued that any distributed, federated, or decentralized deep learning approach is susceptible to attacks that reveal information about participants in the training set.11 In the long term, we believe that AI will continue to underperform human-level accuracy in medical decision making. Fundamentally, medicine is an art, not a science. AI might be able to outperform humans on quantitative tasks. The overall medical decision, however, will still depend on human evaluation to achieve the optimal result for a given patient.

Current role of AI in radiology

Machine learning, a subset of AI also called traditional AI, has been applied to diagnostic imaging since the 1980s.12 Users first predefine explicit parameters and features of the images based on expert knowledge. For instance, the shape, area, and pixel-intensity histogram of a region-of-interest (e.g. a tumor region) can be extracted. Typically, part of the available data is used for training and the rest is reserved for testing. A machine learning algorithm is selected and trained to learn the features; examples of such algorithms include principal component analysis (PCA), support vector machines (SVM), and convolutional neural networks (CNN). Then, given a test image, the trained algorithm is expected to recognize the features and classify the image, as in the sketch below.
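
As a rough illustration of this workflow, the following Python sketch extracts a few handcrafted features from segmented regions-of-interest and trains an SVM with scikit-learn. The feature choices, toy data, and function names here are our own illustrative assumptions, not a clinical method.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def handcrafted_features(roi):
    # Predefined, expert-chosen features: size, intensity statistics,
    # and an intensity histogram of the region-of-interest.
    inside = roi[roi > 0]
    area = np.count_nonzero(roi)
    hist, _ = np.histogram(inside, bins=8, range=(0, 1), density=True)
    return np.concatenate([[area, inside.mean(), inside.std()], hist])

# Toy stand-ins for segmented tumor regions with known class labels.
rng = np.random.default_rng(0)
rois = [rng.uniform(0, 1, (64, 64)) * (rng.uniform(0, 1, (64, 64)) > 0.5)
        for _ in range(100)]
labels = rng.integers(0, 2, 100)

# Split the available data into training and testing parts.
X = np.stack([handcrafted_features(r) for r in rois])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.3,
                                                    random_state=0)

# Train the chosen algorithm (here an SVM) on the feature vectors,
# then classify the held-out test images.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))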

One problem with traditional machine learning is that users must select the features that define the class an image belongs to, which may miss some contributing factors.12,13 For instance, lung tumor diagnosis requires the user to segment the tumor region to obtain structural features, and due to patient and user variation, the consistency of manual feature selection has always been a challenge. Deep learning, by contrast, does not require explicit user input of features. As its name suggests, deep learning learns from significantly larger amounts of data. It uses deep artificial neural network models whose multiple layers progressively extract higher-level features from the raw image input, disentangling the abstractions and picking out the features that improve performance. The concept of deep learning was proposed decades ago, but only in the past decade has its application become feasible, owing to the enormous number of medical images being produced and to advances in hardware such as graphics processing units (GPUs).14 However, with machine learning gaining relevance and importance every day, even GPUs have become a bottleneck. To address this, Google developed an AI accelerator integrated circuit for its TensorFlow AI framework: the tensor processing unit (TPU). The TPU is designed specifically for neural network machine learning and has the potential to be applied to medical imaging research as well.
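
To make the contrast with the pipeline above concrete, the PyTorch sketch below shows a network that learns its features directly from raw pixels through stacked convolutional layers, with no handcrafted feature step. The architecture and sizes are arbitrary toy choices of our own, not from any cited paper.

import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # low level: edges, textures
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # mid level: local patterns
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),                   # high level: larger structures
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        f = self.features(x).flatten(1)   # features learned from data, not designed by hand
        return self.classifier(f)

model = TinyCNN()
scan = torch.randn(4, 1, 128, 128)        # a batch of raw single-channel images
logits = model(scan)
print(logits.shape)                       # torch.Size([4, 2])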

The main research area in diagnostic imaging is detection. Researchers started developing computer-aided detection (CAD) systems in the 1980s, applying traditional machine learning algorithms to modalities such as CT, MRI, and mammography. Despite substantial research effort, real clinical applications were not promising. Several large trials concluded that CAD has at best delivered no benefit15 and at worst has actually reduced radiologists' accuracy,16 resulting in higher recall and biopsy rates.17,18

The new era of AI, deep learning, has so far demonstrated promising improvements over traditional machine learning in this research area. As an example, Ardila et al proposed a deep learning algorithm that uses a patient's current and prior CT volumes to predict the risk of lung cancer.19 The model achieved state-of-the-art performance (94.4% area under the curve) on 6716 National Lung Screening Trial cases and performed similarly on an independent clinical validation set of 1139 cases. For comparison, conventional screening by low-dose CT carries several associated harms, per cancer.gov:20 false-positive exams, overdiagnosis, and complications of diagnostic evaluation. One example provided on the website put the false-positive rate at 60%, and overdiagnosis was estimated at 67%. There is also a radiation-induced risk of developing lung cancer or other types of cancer later in life. AI-based diagnosis could reduce these risks.

In fact, deep learning algorithms have become a methodology of choice for radiology image analysis.20 This includes different image modalities (CT, MRI, PET, ultrasonography, etc) and different tasks (tumor detection, segmentation, disease prediction, etc). Research has shown that AI/deep learning-based methods achieve substantial performance improvements over conventional machine learning algorithms.21 Similar to human learning, deep learning learns from an enormous number of image examples. However, it may take much less time, as it depends solely on curated data and the corresponding metadata rather than on domain expertise, which usually takes years to develop.12 Because traditional AI requires predefined features and has shown plateauing performance in recent years, and given the current success of deep learning in imaging research, it is expected that AI will further dominate imaging research in radiology.

Current role of AI in radiation oncology

In radiation oncology imaging research, AI has been applied to organ and lesion segmentation, image registration, fiducial/marker detection, radiomics, etc. As in radiology, the field started with traditional AI and has now moved to deep learning.3,22–26 In a recent issue of the Medical Physics journal (May 2019, Volume 46, Issue 5), 16 of 51 papers concerned deep learning-based imaging research. Imaging research is only one subsection of radiation oncology research as a whole, so this large share of published deep learning articles demonstrates the important role AI now plays in the field.

For organ and lesion segmentation, the main goal is to automatically segment the organs at risk for treatment planning. Deep learning algorithms have been applied to segment head and neck organs, brain, lung, prostate, kidney, pelvis, etc. Lesion segmentation applications include bladder, breast, bone, brain, head and neck, liver, lung, lymph node, and rectum lesions. Sahiner et al23 have summarized the segmentation targets, the deep learning methods used, the data sets used, and the corresponding performance. One frequently used algorithm is the U-Net.27 Unlike traditional AI, U-Nets consist of several convolution layers followed by deconvolution layers, with skip connections between the opposing convolution and deconvolution layers. The network can therefore analyze the entire image during training and directly produce segmentation likelihood maps, as in the sketch below.
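
A much-simplified sketch of this architecture follows (in PyTorch; the depth and channel counts are our own simplifications, not those of the original paper). It makes the encoder-decoder structure and the skip connections between opposing levels explicit.

import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc1, self.enc2 = block(1, 32), block(32, 64)     # contracting path
        self.pool = nn.MaxPool2d(2)
        self.bottom = block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)    # deconvolution
        self.dec2 = block(128, 64)                             # 64 skip + 64 upsampled
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)                # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottom(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))    # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))   # skip connection
        return self.head(d1)        # segmentation likelihood map (logits)

print(TinyUNet()(torch.randn(1, 1, 128, 128)).shape)  # torch.Size([1, 2, 128, 128])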

Dong et al applied a U-Net generative adversarial network (U-Net-GAN) to train deep neural networks for the segmentation of multiple organs on thoracic CT images.28 U-Net-GAN jointly trains a set of U-Nets as generators and fully convolutional networks (FCNs) as discriminators. The generator and discriminator compete against each other in an adversarial learning process to produce the optimal segmentation map of multiple organs. The proposed algorithm was shown to be feasible and reliable in segmenting five different organs. Similarly, Feng et al successfully applied deep convolutional neural networks (DCNNs) to the segmentation of thoracic organs at risk using cropped three-dimensional images.29 CNNs have also been used for head and neck organ segmentation.30
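
The schematic training loop below illustrates the adversarial segmentation idea: a segmenter ("generator") and a small convolutional discriminator trained in competition. This is our own sketch of the general technique, not the authors' implementation; TinyUNet is the toy U-Net from the sketch above, and the image and contour tensors are placeholders.

import torch
import torch.nn as nn
import torch.nn.functional as F

segmenter = TinyUNet(n_classes=1)                        # generator: produces mask logits
discriminator = nn.Sequential(                           # judges (image, mask) pairs
    nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
)
opt_g = torch.optim.Adam(segmenter.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

image = torch.randn(2, 1, 128, 128)                      # placeholder CT slices
true_mask = (torch.rand(2, 1, 128, 128) > 0.5).float()   # placeholder expert contours

for step in range(2):                                    # schematic iterations
    # Discriminator: distinguish expert masks from generated masks.
    fake_mask = torch.sigmoid(segmenter(image)).detach()
    d_real = discriminator(torch.cat([image, true_mask], 1))
    d_fake = discriminator(torch.cat([image, fake_mask], 1))
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: match the expert masks AND fool the discriminator.
    logits = segmenter(image)
    d_gen = discriminator(torch.cat([image, torch.sigmoid(logits)], 1))
    loss_g = (F.binary_cross_entropy_with_logits(logits, true_mask)
              + F.binary_cross_entropy_with_logits(d_gen, torch.ones_like(d_gen)))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()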

Holistically nested networks (HNNs) use side outputs of the convolutional layers; they have been applied to prostate and brain tumor segmentation.31,32

In radiation therapy, there is often a need to register one image modality to another (multimodal registration) or an image acquired on one day to one acquired on another day (monomodal registration). To avoid the handcrafted features required by traditional AI, an unsupervised deep learning feature selection framework was proposed; it implemented a convolutional stacked auto-encoder network to identify the intrinsic features in image patches.33 The algorithm demonstrated better Dice ratio scores compared to the state of the art, and it can be applied to both multimodal and monomodal image registration. Sloan et al34 proposed a novel image registration method that regresses the transformation parameters using a convolutional neural network (CNN), applied to both mono- and multimodal registration (see the sketch below). With the promising results AI has demonstrated so far in the research domain, we hope AI-based image registration can be applied in the clinic soon. This is an important step towards real-time adaptive treatment planning and delivery.
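
In the spirit of registration by regression, the sketch below shows a CNN that takes a fixed/moving image pair and regresses rigid transformation parameters (here a rotation angle and a 2D translation). The architecture, parameterization, and training data are illustrative assumptions of our own, not the cited method.

import torch
import torch.nn as nn

class RigidRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),  # pair stacked as 2 channels
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, 3),            # outputs [angle, tx, ty]
        )

    def forward(self, fixed, moving):
        return self.net(torch.cat([fixed, moving], dim=1))

model = RigidRegressor()
fixed = torch.randn(8, 1, 128, 128)              # e.g. planning CT slices
moving = torch.randn(8, 1, 128, 128)             # e.g. daily images to align
target = torch.zeros(8, 3)                       # known transformation parameters
loss = nn.functional.mse_loss(model(fixed, moving), target)
loss.backward()                                  # one supervised regression step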

Automatic fiducial/marker detection is needed for real-time tracking of the treatment area during delivery. Most common methods require prior knowledge of the marker properties to construct a template. A recently proposed deep learning CNN framework requires no prior knowledge of marker properties and no additional learning period to segment cylindrical and arbitrarily shaped fiducial markers,22 and it achieved high classification performance.

Radiomics, one of the most advanced AI applications in medical imaging research, is a novel approach toward precision medicine.35 Radiomics consists of two steps. The first step is feature extraction. Images from multiple modalities might be included; image segmentation algorithms are applied to segment the volumes of interest, after which features are extracted. Common features include texture, geometric information, tumor volume, shape, density, pixel intensity, etc. The second step is to incorporate the extracted features into mathematical models that decode the phenotype of the tumor for treatment outcome prediction (both steps are sketched below). A successful outcome prediction can provide valuable information for precise treatment design. For instance, different lung cancer patients might share many similarities, such as histology and age; however, the images of their tumors might appear different, and their survival times might differ greatly.36 If radiomics can take the image information, decode the phenotype, and thereby predict survival time or prognosis prior to treatment, different treatment regimens might be chosen. This is called personalized or precision medicine.35 Traditionally, precision medicine depended on biomarkers to estimate a patient's prognosis or disease subtype, which usually required an invasive biopsy. Radiomics, on the other hand, does not require invasive procedures. It has been shown that features extracted from CT images of lung cancer patients alone correlate well with gene mutations and have prognostic power.37 The success of radiomics can potentially avoid the undesirable complications caused by biopsy38,39 while achieving the same or better prediction outcomes.
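
A minimal sketch of the two steps follows: (1) extract quantitative features from a segmented volume, (2) feed them into an outcome model. The handful of features and the toy cohort here are simple stand-ins of our own for the large, standardized feature sets used in real radiomics studies.

import numpy as np
from sklearn.linear_model import LogisticRegression

def radiomic_features(volume, mask):
    # A few first-order intensity and crude shape features of the tumor.
    tumor = volume[mask > 0]
    vol = float(mask.sum())                          # tumor volume (voxels)
    mean, std = tumor.mean(), tumor.std()
    skew = ((tumor - mean) ** 3).mean() / (std ** 3 + 1e-8)
    surface = float(np.abs(np.diff(mask.astype(float), axis=0)).sum())
    compactness = surface / (vol ** (2 / 3) + 1e-8)  # rough shape descriptor
    return np.array([vol, mean, std, skew, compactness])

# Step 1: feature extraction over a toy cohort of segmented volumes.
rng = np.random.default_rng(1)
X = []
for _ in range(60):
    volume = rng.normal(0, 1, (32, 32, 32))
    mask = np.zeros_like(volume)
    mask[8:20, 8:20, 8:20] = 1                       # placeholder segmentation
    X.append(radiomic_features(volume, mask))
X = np.array(X)
outcome = rng.integers(0, 2, 60)                     # e.g. a survival label

# Step 2: a mathematical model linking features to outcome.
model = LogisticRegression(max_iter=1000).fit(X, outcome)
print("predicted risk:", model.predict_proba(X[:3])[:, 1])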

Aerts et al37 built a radiomic signature and assessed it on an independent lung cancer data set, demonstrating the translational capability of radiomics across different cancers. The authors also showed significant associations between radiomic features and gene-expression patterns. Other researchers have built radiomics models using positron emission tomography (PET) images,40 PET/CT, or PET/MRI.41 Most applications have been in lung cancer, with others in head and neck42 and prostate cancers.43 All these models have achieved reasonable predictive power.

Challenges that need to be resolved before clinical implementation

Despite the excitement AI has generated in medical imaging research, challenges remain before it can become more robust and be widely adopted in the clinic. AI is constrained by a lack of high-quality, high-volume, longitudinal outcomes data. Even for the same imaging modality and the same disease site, imaging parameters and protocols might differ across clinical settings. Each set of images is associated with a clinical scenario, and the number of potential clinical scenarios, multiplied by the variety of tasks each image might involve, is astronomical, likely impossible for any one organization to tackle with any AI algorithm. Each clinic's patient cohort is different, as is the way each clinic practices. How to organize the data generated from different practices in a more standard way is a major challenge for AI-based medical imaging research; medical imaging data organization might itself deserve to be a major research field.

There are also challenges associated with medical imaging data curation.44,45 Data curation is an important step, and accurate labeling is key. With the exponential growth in the number of images, clinicians struggle to process them with the same efficiency and accuracy, and it usually takes years to train people to become experts. The resulting inability to keep up with labeling enormous numbers of images therefore limits data curation.

At the policy level, there are increasing concerns about patient privacy. Patient-related health information is protected by tight privacy policies, which limit cross-institution image sharing. Recently, several healthcare data breaches and security attacks have made headline news. As a result, hospitals are more concerned than ever about security and liability and have tightened their security and data-sharing policies. However, successful implementation of AI needs large amounts of data from multiple institutions. How to share images without compromising security is a challenge.

The future of AI in medical imaging research

Two challenges need to be resolved before AI can be more widely implemented in medical imaging research. First, how should data generated by different institutions be organized and pre-processed? Miotto et al stated in their breakthrough work “deep patient” that challenges in summarizing and representing patient data prevent widespread practice of predictive modeling using electronic health records. They presented a novel unsupervised deep feature learning method to derive a general-purpose patient representation from electronic health record data that facilitates clinical predictive modeling.46 The authors successfully derived, from a large-scale data set, patient representations that were not optimized for any specific task and can fit different clinical applications. However, their data came from a single institution. Tackling data sets from multiple institutions is in fact a much more challenging task: even for the same procedure, different institutions might implement it differently, and patient cohorts might also differ. All of this will need to be addressed when pre-processing data for AI algorithms.
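
As a minimal sketch of the unsupervised building block behind such representations (the cited work stacked denoising autoencoders; the input here is a placeholder patient-feature matrix of our own, not real EHR data):

import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, n_features=500, n_hidden=100):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.Sigmoid())
        self.decoder = nn.Linear(n_hidden, n_features)

    def forward(self, x):
        corrupted = x * (torch.rand_like(x) > 0.2).float()  # randomly mask 20% of inputs
        z = self.encoder(corrupted)                         # learned patient representation
        return self.decoder(z), z

patients = torch.rand(256, 500)                 # placeholder patient feature vectors
model = DenoisingAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(3):                              # schematic training steps
    recon, _ = model(patients)
    loss = nn.functional.mse_loss(recon, patients)  # reconstruct the clean input
    opt.zero_grad(); loss.backward(); opt.step()

representation = model.encoder(patients)        # general-purpose, task-agnostic features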

Second, at the policy or infrastructure level, how to encourage more image data sharing is also a challenge. Currently, image data sharing is very limited: HIPAA compliance is one concern, and lack of infrastructure is another. Medical data security needs to be reconciled with the emerging need for data sharing, and the corresponding infrastructure needs to be built.

In the long run, whether AI can become truly “intelligent” at the human level is key to the question of whether AI can replace humans in medical imaging. Unlike purely quantitative tasks, the knowledge involved in medical imaging decision making requires life experience and philosophy. For machines to behave at a human level, there are challenges not only in data collection and algorithm development but also in ethical regulation.

Conclusions

AI is playing a significant role in medical imaging research. It has changed the way people process the enormous number of images being generated. Nevertheless, challenges remain to be resolved before AI can fully impact clinical practice.

REFERENCES

  • 1.Kaplan A, Haenlein M. Siri, Siri, in my hand: who's the fairest in the land? On the interpretations, illustrations and implications of artificial intelligence. Business Horizons 2019; 62: 15–25. [Google Scholar]
  • 2.Chartrand G, Cheng PM, Vorontsov E, Drozdzal M, Turcotte S, Pal CJ, et al. . Deep learning: a primer for radiologists. Radiographics 2017; 37: 2113–31. doi: 10.1148/rg.2017170077 [DOI] [PubMed] [Google Scholar]
  • 3.Lakhani P, Prater AB, Hutson RK, Andriole KP, Dreyer KJ, Morey J, et al. . Machine learning in radiology: applications beyond image interpretation. Journal of the American College of Radiology 2018; 15: 350–9. doi: 10.1016/j.jacr.2017.09.044 [DOI] [PubMed] [Google Scholar]
  • 4.Rastgarpour M, Shanbehzadeh J. Application of AI techniques in medical image segmentation and novel categorization of available methods and tools. Proceedings of the International MultiConference of Engineers and Computer Scientists, Hong Kong; 2011. [Google Scholar]
  • 5.Roth HR. Deep learning and its application to medical image segmentation. Medical Imaging Technology 2018; 36: 63–71. [Google Scholar]
  • 6.Tang X, Wang B, Rong Y. Artificial intelligence will reduce the need for clinical medical physicists. J Appl Clin Med Phys 2018; 19: 6–9. doi: 10.1002/acm2.12244 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Mirajkar G, Barbadekar B. Automatic segmentation of brain tumors from MR images using undecimated wavelet transform and Gabor wavelets. IEEE 2011; 4. [Google Scholar]
  • 8.Magalhães Barros Netto S, Corrêa Silva A, Acatauassú Nunes R, Gattass M, Netto S, Silva A. Automatic segmentation of lung nodules with growing neural gas and support vector machine. Comput Biol Med 2012; 42: 1110–21. doi: 10.1016/j.compbiomed.2012.09.003 [DOI] [PubMed] [Google Scholar]
  • 9.Lambin P, Rios-Velazquez E, Leijenaar R, Carvalho S, van Stiphout RGPM, Granton P, et al. . Radiomics: extracting more information from medical images using advanced feature analysis. Eur J Cancer 2012; 48: 441–6. doi: 10.1016/j.ejca.2011.11.036 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Thompson RF, Valdes G, Fuller CD, Carpenter CM, Morin O, Aneja S, et al. . Artificial intelligence in radiation oncology imaging. Int J Radiat Oncol Biol Phys 2018; 102: 1159–61. doi: 10.1016/j.ijrobp.2018.05.070 [DOI] [PubMed] [Google Scholar]
  • 11.Hitaj B, Ateniese G, Perez-Cruz F. Deep models under the GAN: information leakage from collaborative deep learning. Cryptography and Security 2017; arXiv:1702.07464. [Google Scholar]
  • 12.Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJWL. Artificial intelligence in radiology. Nat Rev Cancer 2018; 18: 500–10. doi: 10.1038/s41568-018-0016-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Pesapane F, Codari M, Sardanelli F. Artificial intelligence in medical imaging: threat or opportunity? radiologists again at the forefront of innovation in medicine. Eur Radiol Exp 2018; 2: 35. doi: 10.1186/s41747-018-0061-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Lee J-G, Jun S, Cho Y-W, Lee H, Kim GB, Seo JB, et al. . Deep learning in medical imaging: general overview. Korean J Radiol 2017; 18: 570–84. doi: 10.3348/kjr.2017.18.4.570 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Lehman CD, Wellman RD, Buist DSM, Kerlikowske K, Tosteson ANA, Miglioretti DL. Diagnostic accuracy of digital screening mammography with and without computer-aided detection. JAMA Intern Med 2015; 175: 1828–37. doi: 10.1001/jamainternmed.2015.5231 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Fenton JJ, Taplin SH, Carney PA, Abraham L, Sickles EA, D'Orsi C, et al. . Influence of computer-aided detection on performance of screening mammography. N Engl J Med 2007; 356: 1399–409. doi: 10.1056/NEJMoa066099 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Gilbert FJ, Astley SM, Gillan MGC, Agbaje OF, Wallis MG, James J, et al. . Single reading with computer-aided detection for screening mammography. N Engl J Med 2008; 359: 1675–84. doi: 10.1056/NEJMoa0803545 [DOI] [PubMed] [Google Scholar]
  • 18.Oakden-Rayner L. The rebirth of CAD: how is modern AI different from the CAD we know? Radiology: artificial intelligence 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Ardila D, Kiraly A, Bharadwaj S, Choi B, Reicher J, Peng L, et al. End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nat Med 2019; 25: 954–61. [DOI] [PubMed] [Google Scholar]
  • 20.Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, et al. . A survey on deep learning in medical image analysis. Med Image Anal 2017; 42: 60–88. doi: 10.1016/j.media.2017.07.005 [DOI] [PubMed] [Google Scholar]
  • 21.Paul R, Hawkins SH, Balagurunathan Y, Schabath MB, Gillies RJ, Hall LO, et al. . Deep feature transfer learning in combination with traditional features predicts survival among patients with lung adenocarcinoma. Tomography 2016; 2: 388–95. doi: 10.18383/j.tom.2016.00211 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Mylonas A, Keall PJ, Booth JT, Shieh C-C, Eade T, Poulsen PR, et al. . A deep learning framework for automatic detection of arbitrarily shaped fiducial markers in intrafraction fluoroscopic images. Med Phys 2019; 46: 2286–97. doi: 10.1002/mp.13519 [DOI] [PubMed] [Google Scholar]
  • 23.Sahiner B, Pezeshk A, Hadjiiski LM, Wang X, Drukker K, Cha KH, et al. . Deep learning in medical imaging and radiation therapy. Med Phys 2019; 46: e1–36. doi: 10.1002/mp.13264 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Schreier J, Attanasi F, Laaksonen H. A Full-Image deep Segmenter for CT images in breast cancer radiotherapy treatment. Front Oncol 2019; 9: 677. doi: 10.3389/fonc.2019.00677 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Krishnan R, Hermann E, Wolff R, Zimmermann M, Seifert V, Raabe A. Automated fiducial marker detection for patient registration in image-guided neurosurgery. Comput Aided Surg 2003; 8: 17–23. doi: 10.3109/10929080309146098 [DOI] [PubMed] [Google Scholar]
  • 26.Sahiner B, Pezeshk A, Hadjiiski LM, Wang X, Drukker K, Cha KH, et al. . Deep learning in medical imaging and radiation therapy. Med Phys 2019; 46: e1–36. doi: 10.1002/mp.13264 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Cham; 2015. [Google Scholar]
  • 28.Dong X, Lei Y, Wang T, Thomas M, Tang L, Curran WJ, et al. . Automatic multiorgan segmentation in thorax CT images using U-net-GAN. Med Phys 2019; 46: 2157–68. doi: 10.1002/mp.13458 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Feng X, Qing K, Tustison NJ, Meyer CH, Chen Q. Deep convolutional neural network for segmentation of thoracic organs-at-risk using cropped 3D images. Med Phys 2019; 46: 2169–80. doi: 10.1002/mp.13466 [DOI] [PubMed] [Google Scholar]
  • 30.Chan JW, Kearney V, Haaf S, Wu S, Bogdanov M, Reddick M, et al. . A convolutional neural network algorithm for automatic segmentation of head and neck organs at risk using deep lifelong learning. Med Phys 2019; 46: 2204–13. doi: 10.1002/mp.13495 [DOI] [PubMed] [Google Scholar]
  • 31.Cheng R, Roth HR, Lay N, Lu L, Turkbey B, Gandler W, et al. . Automatic magnetic resonance prostate segmentation by deep learning with holistically nested networks. J Med Imaging 2017; 4: 041302. doi: 10.1117/1.JMI.4.4.041302 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Zhuge Y, Krauze AV, Ning H, Cheng JY, Arora BC, Camphausen K, et al. . Brain tumor segmentation using holistically nested neural networks in MRI images. Med Phys 2017; 44: 5234–43. doi: 10.1002/mp.12481 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Wu G, Kim M, Wang Q, Munsell BC, Shen D. Scalable high-performance image registration framework by unsupervised deep feature representations learning. IEEE Trans Biomed Eng 2016; 63: 1505–16. doi: 10.1109/TBME.2015.2496253 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Sloan J. Learning rigid image registration: utilizing convolutional neural networks for medical image registration. In: Int. Joint Conf. on Bio. Eng. Syst. and Tech, Funchal, Portugal; 2018. [Google Scholar]
  • 35.Baumann M, Krause M, Overgaard J, Debus J, Bentzen SM, Daartz J, et al. . Radiation oncology in the era of precision medicine. Nat Rev Cancer 2016; 16: 234–49. doi: 10.1038/nrc.2016.18 [DOI] [PubMed] [Google Scholar]
  • 36.Arimura H, Soufi M, Kamezawa H, Ninomiya K, Yamada M. Radiomics with artificial intelligence for precision medicine in radiation therapy. J Radiat Res 2019; 60: 150–7. doi: 10.1093/jrr/rry077 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Aerts HJWL, Velazquez ER, Leijenaar RTH, Parmar C, Grossmann P, Carvalho S, et al. . Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach. Nat Commun 2014; 5: 4006. doi: 10.1038/ncomms5006 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Nishihara M. Morbidity of stereotactic biopsy for intracranial lesions. Kobe J Med Sci 2011; 56: 148–53. [PubMed] [Google Scholar]
  • 39.Fukagai T, Namiki T, Namiki H, Carlile RG, Shimada M, Yoshida H. Discrepancies between Gleason scores of needle biopsy and radical prostatectomy specimens. Pathol Int 2001; 51: 364–70. doi: 10.1046/j.1440-1827.2001.01207.x [DOI] [PubMed] [Google Scholar]
  • 40.Ohri N, Duan F, Snyder BS, Wei B, Machtay M, Alavi A, et al. . Pretreatment 18F-FDG PET textural features in locally advanced non-small cell lung cancer: secondary analysis of ACRIN 6668/RTOG 0235. J Nucl Med 2016; 57: 842–8. doi: 10.2967/jnumed.115.166934 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Naqa IE. The role of quantitative PET in predicting cancer treatment outcomes. Clin Transl Imaging 2014; 2: 305–20. doi: 10.1007/s40336-014-0063-1 [DOI] [Google Scholar]
  • 42.Marcu L. Feeding the data monster: data science in head and neck cancer for personalized therapy. J Am Coll Radiol 2019; 19: s1546–50. [DOI] [PubMed] [Google Scholar]
  • 43.Perez-Lopez R, Tunariu N, Padhani AR, Oyen WJG, Fanti S, Vargas HA, et al. . Imaging diagnosis and follow-up of advanced prostate cancer: clinical perspectives and state of the art. Radiology 2019; 292: 1148 10.1148/radiol.2019181931 [DOI] [PubMed] [Google Scholar]
  • 44.Prevedello L. Challenges related to artificial intelligence research in medical imaging and the importance of image analysis competitions. Radiology: Artificial Intelligence 2019; 1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Prevedello L. Challenges related to artificial intelligence research in medical imaging and the importance of image analysis competitions. Radiology: Artificial Intelligence 2019; 1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.Miotto R, Li L, Kidd BA, Dudley JT. Deep patient: an unsupervised representation to predict the future of patients from the electronic health records. Sci Rep 2016; 6: 26094. [DOI] [PMC free article] [PubMed] [Google Scholar]
