Abstract
This editorial introduces the Special Issue on Simulation and Synthesis in Medical Imaging. In this editorial, we define the so-far ambiguous terms of simulation and synthesis in medical imaging. We also briefly discuss the synergistic importance of mechanistic (hypothesis-driven) and phenomenological (data-driven) models of medical image generation. Finally, we introduce the twelve papers published in this issue, covering both mechanistic (5) and phenomenological (7) approaches to medical image generation. This rich selection of papers covers applications in cardiology, retinopathy, histopathology, neurosciences, and oncology. It also covers all mainstream diagnostic medical imaging modalities. We conclude the editorial with a personal view on the field and highlight some existing challenges and future research opportunities.
Keywords: Data-driven, hypothesis-driven, machine learning, modeling
I. INTRODUCTION
The medical imaging community has always been fascinated by the possibility of creating simulated or synthetic data upon which to understand, develop, assess, and validate image analysis and reconstruction algorithms. From very basic digital phantoms all the way to very realistic in silico models of medical imaging and physiology, our community has progressed enormously in the available techniques and their applications. For instance, mechanistic models (imaging simulations) emulating the geometrical and physical aspects of the acquisition process have been in use for a long time. Advances in computational anatomy and physiology have further enhanced the potential of such simulation platforms by incorporating structural and functional realism into the simulations, which can now account for complex spatio-temporal dynamics due to changes in anatomy, physiology, disease progression, patient and organ motion, etc.
More recently, developments in machine learning, together with the growing availability of ever larger-scale databases, have provided the theoretical underpinning and the practical data access needed to develop phenomenologic models (image synthesis) that learn directly from data associations across subjects, time, modalities, resolutions, etc. These techniques may provide ways to address challenging tasks in medical image analysis such as cross-cohort normalization, image imputation in the presence of missing or corrupted data, and the transfer of knowledge across imaging modalities, views, or domains.
To date, however, these two main research avenues (simulation and synthesis) remain independent efforts despite sharing common challenges. For instance, both modeling approaches involve large-scale optimization problems (e.g. in learning processes or physical equations); both rely on regularization and priors (based on either mathematical or physical properties); both need to generalize well, adapt to new scenarios, and degrade gracefully beyond the original learning set or modeling assumptions; both require the definition of meaningful figures of merit to assess the quality, accuracy, or realism of simulated/synthesized data; and in both there is a growing emphasis on open-source implementations, open data benchmarks, and evaluation challenges, just to name a few. These and other challenges have been discussed at the successful SASHIMI Satellite Workshop1 held in conjunction with the Medical Image Computing and Computer Assisted Intervention (MICCAI) Conference in 2016 (Athens, Greece) and 2017 (Quebec, Canada). We look forward to future editions of this Workshop as a forum for identifying new research challenges and avenues, and tackling them as a community.
This special issue provides an overview of the state-of-the-art methods and algorithms at the bleeding edge of synthesis and simulation in and for medical imaging research. We hope this collection will stimulate new ideas leading to theoretical links, practical synergies, and best practices in evaluation and assessment common to these two research directions. We solicited contributions from cross-disciplinary teams with expertise in, among others, machine learning, statistical modeling, information theory, computational mechanics, computational physics, computer graphics, and applied mathematics.
In the sequel, we first aim to formally define simulation and synthesis in medical imaging and then discuss similarities and differences between simulation (mechanistic) and synthesis (phenomenologic) approaches. We then give the main highlights of the papers published within this issue and conclude by offering our perspective on some trends and challenges, pointing out some open problems awaiting future research.
II. CONTEXT AND DEFINITIONS
It is helpful at this point to be specific about the concepts of simulation and synthesis in this special issue, that is, in medical imaging and medical image computing. We found that the concept of simulation is, in general, very broad and not specific to medical imaging, and that there was virtually no formal definition of medical image synthesis. We could find neither of these terms defined in the Dictionary of Computer Vision and Image Processing [item 1) in the Appendix].
The concepts of image simulation and synthesis can be ambiguous (or even interchangeable) if one relies on the dictionary definitions of these terms given by authoritative references such as Oxford (OED)2 and Merriam-Webster (MWD)3:
Simulation [OED] n • 3. The technique of imitating the behaviour of some situation or process (whether economic, military, mechanical, etc.) by means of a suitably analogous situation or apparatus, esp. for the purpose of study or personnel training.
Simulation [MWD] n • 3a: the imitative representation of the functioning of one system or process by means of the functioning of another – a computer simulation of an industrial process; b: examination of a problem often not subject to direct experimentation by means of a simulating device.
Synthesis [OED] n • 1. Logic, Philos., etc.: a. The action of proceeding in thought from causes to effects, or from laws or principles to their consequences. (Opposed to analysis n. 3).
Synthesis [MWD] n • 1 a : the composition or combination of parts or elements so as to form a whole.
The concept of synthesis currently in use in computer vision and medical image analysis contrasts strikingly with, and is almost opposite to, that traditionally used in philosophy or science.4 In computer graphics, the “goal in realistic image synthesis is to generate an image that evokes from the visual perception system a response indistinguishable from that evoked by the actual environment” [item 2) in the Appendix], [item 3) in the Appendix]. Computer graphics, however, is focused on perceptual accuracy. Glassner, in his classic book, states: “our job as image synthesists is to create an illusion of reality – to make a picture that carries our message, not necessarily one that matches some objective standard. It’s a creative job” [item 4) in the Appendix]. While medical imaging does not neglect visual realism (e.g. for conventional radiographic assessment this remains important), the key concern is the quantitative accuracy of the synthesised images or, at least, their accuracy in terms of figures of merit that are meaningful for the intended task (e.g. diagnostics, planning, prognosis, etc.). In the sequel, we attempt to draw a distinction between, and propose a definition for, the concepts of image synthesis and image simulation based on the literature and praxis of our medical imaging community.
At one level, in using the concepts of simulation and synthesis, our community usually makes a fundamental ontological distinction best described by referring to mechanistic and phenomenologic models, respectively. In simulation, we usually adopt first principles for image generation, while in synthesis we start off with abundant data (with the notion of abundance changing through the years). We also usually assume behind these concepts a natural information processing direction: from data to models with synthesis, and from models to data with simulation (Fig. 1). Simulation implies the existence of an abstraction of the knowledge we possess, usually in the form of first principles, that is used to derive instances of that knowledge in a scenario fully controlled by the selection of simulation parameters. Synthesis, on the contrary, implies the ability to abstract or summarise (synthesise) knowledge from a collection of exemplars that are representative of a wider population, phenotype or phenomenon. This is usually accomplished through statistical or phenomenologic models. If a mechanistic model is available, one can perform data assimilation or parameter identification resulting in a customised or individualised mechanistic model. Conversely, one can simulate new image (or shape) examples from an image (or shape) synthesis method, but we then speak of data-driven models, which are usually phenomenologic in nature. At this point, we make explicit that the notion of “medical image” we use here refers to any spatially (or spatio-temporally) resolved mapping or function [item 5) in the Appendix] to any physical or physiological parameter space, even if that space is non-measurable and hence derived from a computer-based synthesis or simulation. In this case, we can refer to “virtual” or “in silico” medical imaging [item 6) in the Appendix]. This has the side effect that, while phenomenologic models can issue forecasts (i.e. they are regressive or extrapolative), only mechanistic models are truly predictive (Latin: præ-, “before,” and dicere, “to say”).
Fig. 1.
Data-Information-Knowledge-Wisdom (DIKW) pyramid and how phenomenologic and mechanistic approaches relate to it. Adapted from [item 6) in the Appendix]
Here, we offer these two definitions:
(Image) Synthesis [ours] n • The generation of visually realistic and quantitatively accurate images through learned phenomenologic models, with application to problems such as interpolation, super-resolution, image normalisation, modality propagation, data augmentation, etc.
(Image-based) Simulation [ours] n • The application of mechanistic first principles from imaging physics, organ physiology, and/or their interaction to produce virtual images that are informed by individualised data; these result in images that are both visually realistic and physically/clinically plausible, and are generated under controlled hypothetical imaging conditions.
Synthetic images are generally useful in structuring information and capturing knowledge from vast image data sets when little is known about the underlying mechanisms. They are particularly useful as a modeling approach when data is abundant and we have few hypotheses to make about the underlying mechanisms. They are hypothesis-free but data-driven: this means the extracted knowledge must be cautiously interpreted in light of the way the data has been collected (e.g. what population does this sample represent? which inclusion and exclusion criteria underlie the data?). Image-based simulations, in turn, produce virtual images with strong mechanistic priors and are a great approach when acquiring (large amounts of) images is impractical, ethically unjustifiable, or simply impossible. Here, the data generated from simulations must also be cautiously interpreted by checking the epistemological validity of the underlying modeling assumptions and mechanisms. In brief, both approaches have strengths and limitations. Synthetic images play a key role in data-driven information processing and knowledge discovery, while image-based simulations are valuable in hypothesis-driven research in image-based diagnosis and treatment.
III. MECHANISTIC OR PHENOMENOLOGIC?
It is beyond the scope of this editorial to review the considerable progress made over the past decades in both physical models of image formation and in machine learning techniques for image synthesis. This special issue is a modern and exciting excerpt of the most recent developments. We would like, however, to put the two approaches underpinning this special issue in the wider context of current trends in science and data science.
There are opportunities and limitations in approaching image generation from a mechanistic or a phenomenologic standpoint, some of epistemological reach. Some would argue that, with the increasing availability of big data, computational resources, and breakthroughs in artificial intelligence, data-driven phenomenologic models will eventually supersede the need for mechanistic theories [item 7) in the Appendix], while others seriously contest this viewpoint [item 8) in the Appendix]. The complexity of the image generation process, the need to model the geometry and physics of imaging accurately and in detail, and the variability and uncertainty associated with anatomical and physiological factors all seem to favour those challenging the need or feasibility of generating truly accurate medical images from first principles. In Chapter 12 of his book, Helbing [item 9) in the Appendix] presents an interesting cautionary argument that contrasts with Anderson’s vision of Big Data (assuming that we will no longer need theory and science). Fig. 2 shows Helbing’s model of digital growth, with computational resources doubling about every 18 months (Moore’s law) and data resources doubling about every 12 months (soon every 12 hours!). While these two resources follow an exponential growth, the complexity of the processes that these resources help to elucidate or decide on (e.g. parametric complexity of the computational methods, ontological complexity of health data) follows a factorial growth, as it arises from combinatorial parameter combinations and system networks, respectively. This implies the problem of “dark data”: the share of data we cannot process increases with time. As a consequence, we must know what data to process and how, which requires hypothesis-driven science and an understanding of the underlying mechanisms relating data and phenomena so that algorithmic complexity can be dealt with tractably.
Fig. 2.
Helbing’s model for digital growth where systemic complexity (e.g. algorithmic parametric complexity and complexity of health data) grows at a factorial rate compared to the exponential rate of data and computing resources. Courtesy of D Helbing. Reprinted with permission.
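To convey the gist of this argument numerically, the short Python sketch below contrasts exponentially growing resources with factorially growing complexity. The doubling times follow the figures quoted above, but the assumed linear growth of interacting components and all absolute numbers are our own illustrative choices, not Helbing's data.

import math

# Illustrative sketch of Helbing's qualitative argument: resources grow
# exponentially, systemic complexity grows factorially, so the share of the
# problem space that our data can cover ("non-dark" data) shrinks over time.
for years in range(0, 25, 4):
    compute = 2 ** (years / 1.5)             # doubling every ~18 months (Moore's law)
    data = 2 ** years                        # doubling every ~12 months
    components = years + 1                   # assumed linear growth of interacting components
    complexity = math.factorial(components)  # combinatorial (factorial) growth
    coverage = data / complexity             # fraction of the complexity our data can cover
    print(f"t={years:2d} y  compute x{compute:12.0f}  data x{data:12.0f}  "
          f"complexity {complexity:9.2e}  coverage {coverage:9.2e}")

Running this, the coverage column decreases monotonically, which is the "dark data" effect Fig. 2 illustrates.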
IV. SPECIAL ISSUE STATISTICS
Twenty-four manuscripts were received for this special issue. Two were immediately rejected, while another ten were rejected after a revision round. Twelve papers were finally accepted after peer review, covering both mechanistic (5) and phenomenologic (7) modeling and data generation. This rich selection of papers covers applications in cardiology, retinopathy, histopathology, neurosciences, and oncology. It also covers all mainstream diagnostic medical imaging modalities. Two manuscripts were handled by Associate Editors Mehrdad Gangeh and Hayit Greenspan to avoid potential conflicts of interest. Each paper was reviewed by at least three expert reviewers.
V. SPECIAL ISSUE OVERVIEW
This special issue comprises 12 papers covering both image-based simulation and synthesis.
A. Image-Based Simulation
Simulation papers focus either on devising computational phantoms of anatomy and physiology in health and disease, or on developing computational models of image formation.
In the first category of simulation papers, Segars et al. start off by reviewing what is arguably one of the most widespread digital phantoms of the computational anatomy and physiology of the human thorax. The authors overview the four-dimensional (4D) eXtended CArdiac-Torso (XCAT) series of phantoms, which cover a vast population of phantoms of varying ages from newborn to adult, each including parametrised models of cardiac and respiratory motion. This paper illustrates how these phantoms have found great use in radiation dosimetry, radiation therapy, medical device design, and even the security and defence industry. Abadi et al. extend the capabilities of the XCAT series of computational phantoms by proposing a detailed lung architecture including airways and pulmonary vasculature. Eleven XCAT phantoms of varying anatomy were used to characterize the lung architecture, and the XCAT phantoms were then used to simulate CT images for validation against real clinical data. As the number of organs described as numerical phantoms within the XCAT models increases, so does the potential of such models as a tool to virtually evaluate current and emerging medical imaging technologies. Polycarpou et al. propose a digital phantom to synthesise 3D+t PET data using a fast analytic method. The proposed method combines models of cardiac and respiratory motion, based on real respiratory signals derived from PET-CT images, with MRI-derived motion modeling and high-resolution MRI images. In addition, this study incorporates changes in lung attenuation at different positions of the respiratory cycle. The proposed methodology and the derived simulated datasets can be useful in the development and benchmarking of motion-compensated PET reconstruction algorithms by providing the associated ground truth for various controlled imaging scenarios.
Others consider the role of models in disease processes. For example, in the paper by García et al., the authors consider the challenging task of evaluating the correlation of parenchymal patterns (i.e. local breast density) as provided by mammography with MRI volume information. Differences in intensity distributions (MRI versus X-ray) and the radical deformations present (due to how the breast is imaged during mammography and MRI) also make this problem relevant from a registration perspective. In tackling this challenge, the authors employ a subject-specific biomechanical model of the breast to assist the registration of MRI volumes to X-ray mammograms. Once converged, a direct projection of the MRI-derived glandular tissue permits comparison with the corresponding mammogram. Along the same theme, Roque et al. propose a reaction-diffusion model of tumour growth. Predicting tumour growth (based on models), and particularly its response to therapy, is a critical aspect of cancer care and a challenge in cancer research. In this work, the authors derive an image-driven reaction-diffusion model of avascular tumour growth, which permits proliferation, death and spread of tumour cells, and accounts for nutrient distribution and hypoxia. The model parameters are learned (and evaluated) from longitudinal time series of DCE-MRI images. Rodrigo et al. study the influence of anatomical inaccuracy on the reconstruction of electrocardiographic imaging (ECGI) for the non-invasive diagnosis of cardiac arrhythmias. The precise position of the heart inside the body is important for accurate reconstructions but is often not accurately known. They explore the curvature of the L-curve from the Tikhonov regularization approach, one methodology used to solve the inverse problem, and find that optimizing for maximum curvature minimizes inaccuracies in atrial position and orientation. Such an automatic method for removing inaccuracies in atrial position improves ECGI results. Moreover, it allows ECGI technology to be applied even when the electrical recordings, usually acquired via body surface potential mapping (BSPM), and the anatomical CT/MRI images are not recorded one after the other, which could potentially expand ECGI use to a larger group of patients.
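For readers less familiar with these two ingredients, the generic forms below illustrate them; they are standard textbook formulations, and the exact models used by Roque et al. and Rodrigo et al. may differ in detail. An image-driven reaction-diffusion (Fisher-Kolmogorov type) model evolves a tumour cell density $c(\mathbf{x},t)$ as

\[
\frac{\partial c}{\partial t} = \nabla \cdot \big( D(\mathbf{x})\, \nabla c \big) + \rho\, c \left( 1 - \frac{c}{K} \right),
\]

where $D(\mathbf{x})$ is a spatially varying diffusivity (often informed by imaging), $\rho$ a proliferation rate, and $K$ a carrying capacity. For ECGI, the Tikhonov-regularised inverse problem reads

\[
\hat{\mathbf{x}}_{\lambda} = \arg\min_{\mathbf{x}} \; \|\mathbf{A}\mathbf{x} - \mathbf{b}\|_2^2 + \lambda^2 \|\mathbf{L}\mathbf{x}\|_2^2,
\]

where $\mathbf{A}$ is the forward transfer matrix relating cardiac-surface potentials $\mathbf{x}$ to body-surface potentials $\mathbf{b}$ and $\mathbf{L}$ is a regularisation operator; the L-curve plots $\log\|\mathbf{L}\hat{\mathbf{x}}_{\lambda}\|$ against $\log\|\mathbf{A}\hat{\mathbf{x}}_{\lambda} - \mathbf{b}\|$, and its point of maximum curvature is commonly used to select $\lambda$.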
B. Image Synthesis
This issue also comprises several papers using phenomenologic or data-driven methods for image synthesis or for generating annotated reference datasets.
It is interesting to see that some methods are hybrid, i.e. they combine data-driven with mechanistic approaches. Zhou et al., for instance, undertake to generate realistic synthetic cardiac images, of ultrasound (US) as well as cine and tagged magnetic resonance imaging (MRI), corresponding to the same virtual patient. This method develops a synthesis-by-registration approach where an initial dataset is segmented, transformed and warped (as needed) to generate a motion- and deformation-informed set of cine MRI, tagged MRI, and US images. Only the motion model in this method is derived from an actual physical model, while the image intensity is created by mapping reference values from the literature. In a related paper, Duchateau et al. also focus on the automatic generation of a large database of annotated cardiac MRI image sequences. Their approach, like that of Zhou et al., combines mechanistic models of cardiac electromechanics with anatomical augmentation via data-driven non-rigid deformations. The proposed method requires a small database of cine CMR sequences that serves as a seed from which to augment the anatomical variability by creating simulations of cardiac electromechanics under diverse conditions. Augmented data is created by warping image intensities in the original sequence through the electromechanical simulation. This method ensures that the material point correspondence between frames complies with a mechanistic electromechanical model, while image appearance is not altered compared to that of the original dataset. The authors apply this approach to generate a database of subjects with myocardial infarction under controlled conditions of infarct location and size. Finally, Mattausch and Goksel’s paper focuses on how to reconstruct the distribution of ultrasound image scatterers of tissue samples non-invasively. The recovered scatterer map can then inform a realistic ultrasound image simulation under different viewing angles or transducer profiles. The robustness of this technique relies on obtaining images from multiple viewpoints to accurately assess the scatterer distribution, without which the forward problem is not accurately solved. Besides an inversion strategy, the authors contribute a novel beam-steering technique to insonify the tissue rapidly and conveniently, acquiring multiple images of the same tissue. The authors also demonstrate that the scatterer map offers a new tissue representation that can be edited to create controlled variations.
Several papers focus on machine learning for image synthesis to tackle problems as diverse as generating benchmark data, image normalisation, super resolution, or cross-modality synthesis, to name just a few. One technique prominent among several submissions is adversarial learning. For instance, Costa et al. propose a combination of adversarial networks and adversarial auto-encoders to generate synthetic retinal colour images. Adversarial auto-encoders are used to learn a latent representation of retinal vascular trees and to generate corresponding retinal vascular tree masks. Adversarial learning, in turn, is used to map these vascular masks into colour retinographies. The authors present a learning approach that jointly learns the parameters of the adversarial network and the auto-encoder, and they extensively validate the quality of their synthetic images. The data produced can help in the generation of valuable labelled ground-truth data for testing or training retinal image analysis methods. Ben Taieb and Hamarneh also use adversarial learning, to address the problem of histopathology stain normalisation. Recognizing the large variability between staining processes in different histopathology laboratories, the authors propose a method that aims to emulate the stain characteristics of one laboratory in another. Treating this as a style transfer problem (to adopt the term from the computer vision literature), the authors propose a deep neural network that learns to map input images to output images that best match the distribution characteristics of a reference set of data, thus achieving stain normalization. A combination of generative, discriminative and task-specific networks, jointly optimized, achieves the desired objective of finding stain normalizations suitable for segmentation or classification tasks.
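As a common point of reference, adversarial learning in its basic (unconditional) form trains a generator $G$ against a discriminator $D$ via the minimax objective

\[
\min_{G} \max_{D} \; \mathbb{E}_{\mathbf{x} \sim p_{\mathrm{data}}}\big[\log D(\mathbf{x})\big] + \mathbb{E}_{\mathbf{z} \sim p_{\mathbf{z}}}\big[\log\big(1 - D(G(\mathbf{z}))\big)\big].
\]

The papers above build conditional, auto-encoding, and task-specific variants of this basic objective rather than using it verbatim.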
Chartsias et al. propose an approach to MRI synthesis that is both multi-input and multi-output and uses fully convolutional neural networks. The model has two interesting properties: it is robust to missing data, and, while it benefits from additional input modalities, it does not require them. The model was evaluated on the ISLES and BRATS datasets and demonstrated statistically significant improvements over state-of-the-art methods for single-input tasks. Using dictionary learning, Huang et al. present a method that can synthesize data across modalities using paired and unpaired data. Relying on the power of cross-modal dictionaries, they establish matching functions that can discover cross-modal sparse embeddings even when only unpaired and unregistered data are available. To account for the different distributions that may be present across modalities, a manifold geometry term is included in the formulation. They extensively evaluate their method on two publicly available brain MRI datasets.
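For orientation, a typical coupled dictionary learning formulation for paired cross-modal data learns modality-specific dictionaries $\mathbf{D}_1, \mathbf{D}_2$ that share sparse codes $\mathbf{A}$:

\[
\min_{\mathbf{D}_1, \mathbf{D}_2, \mathbf{A}} \; \|\mathbf{X}_1 - \mathbf{D}_1 \mathbf{A}\|_F^2 + \|\mathbf{X}_2 - \mathbf{D}_2 \mathbf{A}\|_F^2 + \lambda \|\mathbf{A}\|_1,
\]

where $\mathbf{X}_1, \mathbf{X}_2$ hold corresponding patches from the two modalities. Huang et al. relax the requirement of such pairing and add a manifold geometry term, so this sketch should be read only as the baseline their method generalises.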
C. Outlook and Conclusions
We hope that with this special issue we have successfully consolidated current efforts in image-based simulation and synthesis, and that it will stimulate future research. Image-based simulation and image synthesis will only gain relevance in the years to come: consider the tsunami of healthcare data [item 10) in the Appendix], emerging large-scale population imaging and its analytics [item 11) in the Appendix], [item 12) in the Appendix], and the growing role of machine learning [item 13) in the Appendix]–[item 15) in the Appendix] and computational medicine [item 16) in the Appendix], [item 17) in the Appendix], just to name a few trends. As perhaps never before, intensive industrial innovation in this area fuels the translation of these technologies into clinical applications and commercial products. Tractica [item 18) in the Appendix], for instance, forecasts that global software revenue from 21 key healthcare AI use cases will grow from $165 million in 2017 to $5.6 billion annually by 2025 (Fig. 3). Including the hardware and services sales driven by these software implementations, the firm anticipates that the total revenue opportunity for the healthcare AI market will reach $19.3 billion by 2025.
By unambiguously defining these terms and putting them in context, we will be in a better position to see the research gaps and synergies, address common challenges, and better track the evolution of these methods. With data becoming pervasive and machine learning a commodity, we expect image synthesis research to grow. As our discussion above shows, mechanistic understanding and interpretation of the available data will have to develop on par with data-driven approaches. Mechanism-driven priors will remain a foundation of Bayesian inference and physics-based approaches to data interpretation and reconstruction. Some of the methods presented do in fact combine mechanistic and data-driven models, but the gap still exists and more research is needed here.
Evaluation of machine learning and computational modeling remains crucial if these models are to percolate to the clinical community with credibility. As machine learning, artificial intelligence, computational medicine, etc. turn into buzzwords even among clinicians and market analysts [item 19) in the Appendix], [item 20) in the Appendix], and the threshold to access and (mis)use these technologies lowers, they become commodities [item 21) in the Appendix], [item 22) in the Appendix] with the potential risk of confusing reality with fiction. Well-designed community challenges5 for performance assessment and cross-algorithmic benchmarking should keep us grounded in reality and will grow in importance. For these challenges to succeed in this aim, larger and more diverse datasets must be developed and made openly available, along with standards ensuring transparent analysis and reporting protocols.
More benchmark data only partly addresses the problem. Preprocessing, training, and testing largely remain ad hoc processes with a non-negligible impact on performance comparisons. Standardised evaluation protocols are as key as standardised datasets. There are insufficient reference implementations of key algorithms for everyone to use in open benchmarks. This leads to considerable algorithmic re-implementation, further obfuscating genuine contributions and the origin of improved performance. Reference open-source implementations of benchmark protocols are helpful but still remain the exception rather than the norm (e.g. only a fraction of the papers in this special issue offer them). Of course, this challenge holds for both simulation and synthesis approaches.
Computational sciences are increasingly pervasive in our lives. It is reassuring to see growing awareness of the importance of model verification and validation across engineering [item 23) in the Appendix], [item 24) in the Appendix], medicine [item 25) in the Appendix], [item 26) in the Appendix], and biology [item 27) in the Appendix]. While recent years have seen very positive initiatives in this arena [item 28) in the Appendix]–[item 30) in the Appendix], our community of medical imaging and medical image computing will have to give even more consideration to these topics and develop and promote best practices in the assessment and benchmarking of simulation and synthesis methods.
One other area we believe is worth investigating is the definition of appropriate evaluation criteria. Numerical fidelity measures (e.g. mean square error and its variants) are common in reconstruction, yet do not necessarily translate into the best visual results. In computer vision research, human observers are recruited via crowdsourcing to visually score the results of image synthesis. In our domain (medical imaging), this would ideally require the involvement of clinical experts, which is costly and time consuming. Perhaps more suitable evaluations are application-driven ones, i.e. those that assess whether simulated/synthesised data can be used in lieu of real data in an analysis task (or several tasks). Some papers in this special issue did in fact use such application-driven evaluations, but these approaches are not standardised across methods or applications, which adds another layer of obfuscation to the assessment of performance.
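To make the contrast concrete, the minimal Python sketch below computes two such numerical fidelity measures, MSE and PSNR, between a reference and a synthesised image; the arrays and noise level are purely illustrative, and neither measure speaks to visual or task-based quality.

import numpy as np

def mse(reference: np.ndarray, synthesised: np.ndarray) -> float:
    """Mean square error between a reference and a synthesised image."""
    diff = reference.astype(np.float64) - synthesised.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(reference: np.ndarray, synthesised: np.ndarray, data_range: float) -> float:
    """Peak signal-to-noise ratio in dB, a common MSE variant."""
    err = mse(reference, synthesised)
    return float('inf') if err == 0 else float(10.0 * np.log10(data_range ** 2 / err))

# Toy example with random "images"; in practice these would be a real image
# and its simulated/synthesised counterpart.
rng = np.random.default_rng(0)
ref = rng.random((128, 128))
syn = ref + 0.05 * rng.standard_normal((128, 128))
print(f"MSE = {mse(ref, syn):.4f}, PSNR = {psnr(ref, syn, data_range=1.0):.1f} dB")

An application-driven evaluation would instead feed ref and syn to the same downstream analysis task (e.g. segmentation) and compare the task outcomes.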
In summary, simulation and synthesis are evolving areas in our field. Thankfully, specialised workshops such as the MICCAI SASHIMI series can facilitate cross-disciplinary exchange, showcase the progress made, and advance on the challenges described earlier.
Fig. 3.
Revenue from the top five healthcare artificial intelligence use cases, world markets: 2016–2025. Medical image analysis has the lion’s share of revenue; other use cases are likely to also involve image analytics of some sort. Courtesy of Tractica [item 18) in the Appendix]. Reprinted with permission.
ACKNOWLEDGEMENTS
The authors thank the Associate Editors Gangeh and Greenspan and the numerous reviewers for their contribution to the selection process of the manuscripts in this special issue. They also thank Prof. D. Helbing from ETH Zürich for pointing them to interesting literature and sharing his views with them.
Footnotes
The Oxford English Dictionary provides contextual quotes that illustrate this contrast. For instance, from T. Hobbes in Elements Philos. iii. xx. 230, 1656: “Synthesis is Ratiocination from the first causes of the Construction, continued through all the middle causes till we come to the thing itself which is constructed or generated.”, and from I. Newton in Opticks (ed. 2) iii. i. 380, 1718: “The Synthesis consists in assuming the Causes discover’d, and establish’d as Principles, and by them explaining the Phænomena proceeding from them.” Source: http://www.oed.com/view/Entry/196574.
APPENDIX RELATED WORK
- 1.) Fisher RB et al., Dictionary of Computer Vision and Image Processing, 2nd ed. Hoboken, NJ, USA: Wiley, 2013.
- 2.) Hall RA and Greenberg DP, “A testbed for realistic image synthesis,” IEEE Comput. Graph. Appl., vol. 3, no. 8, pp. 10–20, Nov. 1983.
- 3.) Magnenat-Thalmann N and Thalmann D, “An indexed bibliography on image synthesis,” IEEE Comput. Graph. Appl., vol. 7, no. 8, pp. 27–38, Aug. 1987.
- 4.) Glassner A, Principles of Digital Image Synthesis. San Mateo, CA, USA: Morgan Kaufmann, 1995.
- 5.) Clapham C and Nicholson J, The Concise Oxford Dictionary of Mathematics, 5th ed. Oxford, U.K.: Oxford Univ. Press, 2014.
- 6.) Frangi AF, Taylor ZA, and Gooya A, “Precision imaging: More descriptive, predictive and integrative imaging,” Med. Image Anal., vol. 33, pp. 27–32, Oct. 2016.
- 7.) Anderson C. (Jul. 23, 2008). The end of theory: The data deluge makes the scientific method obsolete. Wired. [Online]. Available: http://archive.wired.com/science/discoveries/magazine/16-07/pb_theory
- 8.) Mazzocchi F, “Could big data be the end of theory in science? A few remarks on the epistemology of data-driven science,” EMBO Rep., vol. 16, no. 10, pp. 1250–1255, Oct. 2015.
- 9.) Helbing D, Thinking Ahead—Essays on Big Data, Digital Revolution, and Participatory Market Society. Cham, Switzerland: Springer, 2015.
- 10.) Andreu-Perez J, Poon CCY, Merrifield RD, Wong STC, and Yang G-Z, “Big data for health,” IEEE J. Biomed. Health Inform., vol. 19, no. 4, pp. 1193–1208, Jul. 2015.
- 11.) Petersen SE et al., “Imaging in population science: Cardiovascular magnetic resonance in 100,000 participants of UK Biobank—Rationale, challenges and approaches,” J. Cardiovascular Magn. Reson., vol. 15, p. 46, May 2013.
- 12.) Alfaro-Almagro F et al., “Image processing and quality control for the first 10,000 brain imaging datasets from UK Biobank,” NeuroImage, vol. 166, pp. 400–424, Feb. 2018.
- 13.) Suzuki K, “Overview of deep learning in medical imaging,” Radiol. Phys. Technol., vol. 10, no. 3, pp. 257–273, Sep. 2017.
- 14.) Litjens G et al., “A survey on deep learning in medical image analysis,” Med. Image Anal., vol. 42, pp. 60–88, Dec. 2017.
- 15.) Ravi D et al., “Deep learning for health informatics,” IEEE J. Biomed. Health Inform., vol. 21, no. 1, pp. 4–21, Jan. 2017.
- 16.) Winslow RL, Trayanova N, Geman D, and Miller MI, “Computational medicine: Translating models to clinical care,” Sci. Transl. Med., vol. 4, no. 158, p. 158rv11, Oct. 2012.
- 17.) Viceconti M and Hunter P, “The virtual physiological human: Ten years after,” Annu. Rev. Biomed. Eng., vol. 18, pp. 103–123, Jul. 2016.
- 18.) Kirkpatrick K and Kaul A, Artificial Intelligence for Healthcare Applications, Market Analysis and Forecast, Tractica, Boulder, CO, USA, Sep. 2017.
- 19.) Mayo RC and Leung J, “Artificial intelligence and deep learning—Radiology’s next frontier?” Clin. Imag., vol. 49, pp. 87–88, May-Jun 2017.
- 20.) Dreyer KJ and Geis JR, “When machines think: Radiology’s next frontier,” Radiology, vol. 285, no. 3, pp. 713–718, Dec. 2017.
- 21.) Kohli M, Prevedello LM, Filice RW, and Geis JR, “Implementing machine learning in radiology practice and research,” Amer. J. Roentgenol., vol. 208, no. 4, pp. 754–760, Apr. 2017.
- 22.) Deo RC, “Machine learning in medicine,” Circulation, vol. 132, no. 20, pp. 1920–1930, Nov. 2015.
- 23.) Roy CJ and Oberkampf WL, “A comprehensive framework for verification, validation, and uncertainty quantification in scientific computing,” Comput. Methods Appl. Mech. Eng., vol. 200, no. 25, pp. 2131–2144, 2011.
- 24.) Japkowicz N and Shah M, Evaluating Learning Algorithms: A Classification Perspective. New York, NY, USA: Cambridge Univ. Press, 2011.
- 25.) Pathmanathan P and Gray RA, “Verification of computational models of cardiac electro-physiology,” Int. J. Numer. Method Biomed. Eng., vol. 30, no. 5, pp. 525–544, May 2014.
- 26.) Anderson AE, Ellis BJ, and Weiss JA, “Verification, validation and sensitivity studies in computational biomechanics,” Comput. Methods Biomech. Biomed. Eng., vol. 10, no. 3, pp. 171–184, Jun. 2007.
- 27.) Patterson EA and Whelan MP, “A framework to establish credibility of computational models in biology,” Prog. Biophys. Mol. Biol., vol. 129, pp. 13–19, Oct. 2017.
- 28.) Kauppi T et al., “Constructing benchmark databases and protocols for medical image analysis: Diabetic retinopathy,” Comput. Math. Methods Med., vol. 2013, May 2013, Art. no. 368514, doi: 10.1155/2013/368514.
- 29.) Jannin P, Krupinski E, and Warfield SK, “Validation in medical image processing,” IEEE Trans. Med. Imag., vol. 25, no. 11, pp. 1405–1409, Nov. 2006.
- 30.) Jannin P, Fitzpatrick JM, Hawkes DJ, Pennec X, Shahidi R, and Vannier MW, “Validation of medical image processing in image-guided therapy,” IEEE Trans. Med. Imag., vol. 21, no. 12, pp. 1445–1449, Dec. 2002.