Abstract
Deep learning has recently gained considerable interest in ophthalmology due to its ability to detect clinically significant features for diagnosis and prognosis. Despite these significant advances, little is known about the ability of various deep learning systems to be embedded within ophthalmic imaging devices, allowing automated image acquisition. In this work, we review existing and future directions for ‘active acquisition’–embedded deep learning, which yields high-quality images with little intervention by the human operator. In clinical practice, the improved image quality should translate into more robust deep learning–based clinical diagnostics. Embedded deep learning will be enabled by constantly improving, low-cost hardware. We also briefly review possible computation architectures within larger clinical systems; these can be organized in a three-layer framework composed of edge, fog, and cloud layers, with the edge layer operating at the device level. Improved edge-layer performance via ‘active acquisition’ serves as an automatic data curation operator, translating to better-quality data in electronic health records and on the cloud layer for improved deep learning–based clinical data mining.
Keywords: artificial intelligence, deep learning, embedded devices, medical devices, ophthalmic devices, ophthalmology
Introduction
Recent years have seen an explosion in the use of deep learning algorithms for medical imaging,1–4 including ophthalmology.5–9 Deep learning has been very efficient in detecting clinically significant features for ophthalmic diagnosis9,10 and prognosis.11,12 Recently, Google Brain demonstrated how one can, surprisingly, predict a subject’s cardiovascular risk, age, and sex from a fundus image,13 a task impossible for an expert clinician.
Research effort has so far focused on the development of post hoc deep learning algorithms for already acquired data sets.9,10 There is, however, growing interest in embedding deep learning at the medical device level itself for real-time image quality optimization with little or no operator expertise. Most of the clinically available fundus cameras and optical coherence tomography (OCT) devices require a skilled operator in order to achieve image quality satisfactory for clinical diagnosis. Ophthalmic images display inherent quality variability due to both technical limitations of the imaging devices and individual ocular characteristics. Recent studies in hospital settings have shown that 38% of nonmydriatic fundus images for diabetic screening14 and 42–43% of spectral domain (SD)-OCT scans acquired from patients with multiple sclerosis15 did not have acceptable image quality for clinical evaluation.
Desktop retinal cameras have increasingly been replaced by portable fundus cameras in standalone format16–18 or as smartphone add-ons,19 making retinal imaging less expensive and accessible to various populations. The main drawback of current-generation portable fundus cameras is their lower image quality. Some imaging manufacturers have started to include image quality assessment algorithms that provide feedback prompting the operator to either re-acquire the image or accept it.20 To the best of our knowledge, no current commercial system automatically reconstructs ‘the best possible image’ from multiframe acquisitions.
Embedding more advanced algorithms and high computational power at the camera level can be referred to as ‘smart camera architectures’,21 with or without the use of deep learning. For example, Google launched its Clips camera, and Amazon Web Services (AWS) its DeepLens camera, both capable of running deep learning models within the camera itself without relying on external processing. Verily, the life sciences research organization of Alphabet Inc., partnered with Nikon and Optos to integrate deep learning algorithms for fundus imaging and diabetic retinopathy screening (https://verily.com/projects/interventions/retinal-imaging/). Similar implementation of ‘intelligence’ at the device level is happening in various other medical fields,22 including portable medical ultrasound imaging, where more of the traditional signal processing is being accelerated by graphics processing units (GPUs)23 and deep learning is being integrated at the device level.24
There are various ways of distributing the signal processing from data acquisition to clinical diagnostics. For example, the use of fundus cameras in remote locations with no Internet access requires all the computations to be performed within the device itself, a system that has been implemented by SocialEyes for retinal screening on GPU-accelerated tablets.25 This computing paradigm, known as edge computing,26 is based on locally performed computations, on the ‘edge’,27,28 as opposed to cloud computing, in which the fundus image is transmitted over the Internet to a remote cloud GPU server for subsequent image classification. In some situations, when there is a need for multilayer computational load distribution, additional nodes are inserted between the edge device and the cloud – a computation paradigm known as mist29 or fog computing.30 This situation typically applies to Internet-of-things (IoT) medical sensors, which often have very little computational capability.31
The main aim of the current review is to summarize the current knowledge related to device-level (edge computing) deep learning. We will refer to this as ‘active acquisition’, for improved ophthalmic diagnosis via optimization of image quality (Figure 1). We also provide an overview of how various computing platforms can be integrated into the typical clinical workflow, with a focus on standard retinal imaging techniques (i.e. fundus photography and OCT).
Figure 1.
Comparison between traditional passive acquisition and intelligent active acquisition approaches for fundus imaging. (Top-left) In passive acquisition, the healthcare professional manually aligns the camera and decides the best moment for image acquisition. This acquisition often has to be repeated, especially if the patient is not compliant, if the pupils are not dilated, or if there are media opacities, for example, corneal scar or cataract. (Top-right) In an ‘intelligent’ active acquisition process, the device is able to vary imaging parameters and automatically iterates frames until the deep learning network is able to reconstruct an image of satisfactory quality. (Bottom) This intelligent acquisition serves as an automated data curation operator for diagnostic deep learning networks (C)9,10 leading to better class separation (healthy D vs disease E). In traditional passive acquisition, the image quality is less consistent, leading to many false negatives [patient from disease population B (cyan) is classified as healthy A (red)] and false positives [patient from healthy population A (red) is classified as disease B (cyan)]. The gray line represents the decision boundary of the classifier,32 and each point represents one patient.
Embedded ophthalmic devices
Emerging intelligent retinal imaging
The increased prevalence of ophthalmic conditions affecting the retinas and optic nerves of vulnerable populations calls for improved access to ophthalmic care both in developed33 and developing countries.34 This translates into an increased need for more efficient screening, diagnosis, and disease management technology, operated with little or no training in clinical settings or even at home.16 Although paraprofessionals with technical training are currently able to acquire fundus images, a third of these images may not be of satisfactory quality, being nongradable35 due to reduced transparency of the ocular media.
Acquisition of such images may be even more difficult in nonophthalmic settings, such as emergency departments.36 Recent attempts have aimed to automate the retinal imaging process using a clinical robotic platform, InTouch Lite (InTouch Technologies, Inc., Santa Barbara, CA, USA),37 or by integrating a motor into the fundus camera for automated pupil tracking (Nexy; Next Sight, Pordenone, Italy).38 These approaches have not been validated clinically and are based on relatively slow motors, possibly not adapted to clinically challenging situations. Automated acquisition becomes even more important with the recent surge of smartphone-based fundus imagers.39 Due to the pervasiveness of smartphones, this approach would represent a perfect tool for non-eye specialists.40
Similar to fundus imaging, OCT systems are becoming more portable and inexpensive and would benefit from easier and more robust image acquisition.17,18,41 Kim and colleagues41 developed a low-cost experimental OCT system, at a cost of US$ 7200, using a microelectromechanical system (MEMS) mirror42 with a tunable variable-focus liquid lens to simplify the design of the scanning optics, with an inexpensive Arduino Uno microcontroller43 and a GPU-accelerated mini PC handling the image processing. The increased computing power from GPUs enables some of the hardware design compromises to be offset through computational techniques.44,45 For example, Tang and colleagues46 employed three GPU units for a real-time computational adaptive optics (AO) system, and recently Maloca and colleagues47 employed GPUs for volumetric OCT rendering in a virtual reality environment for enhanced visualization in medical education.
Active data acquisition
The computationally heavier algorithms made possible by the increased hardware performance can be roughly divided into two categories: (1) ‘passive’ single-frame processing and (2) ‘active’ multiframe processing. In our nomenclature, the ‘passive’ techniques refer to the standard way of acquiring ophthalmic images, in which an operator takes an image that is subsequently subjected to various image enhancement algorithms, either before being analyzed by a clinician or before being graded automatically by an algorithm.48 In ‘active’ image acquisition, multiple frames of the same structure are obtained, either with automatic reconstruction or with interactive operator-assisted reconstruction of the image. In this review, we focus on the ‘active’ paradigm, in which clinically meaningful images are reconstructed automatically from multiple acquisitions of varying image quality.
One example of active acquisition in retinal imaging is the ‘lucky imaging’ approach,49,50 in which multiple frames are acquired in quick succession, assuming that at least some of the frames are of good quality. In magnetic resonance imaging (MRI), a ‘prospective gating scheme’ has been proposed in which motion-free image acquisition takes place between cardiac and respiratory artifacts, iterating the imaging until a satisfactory result is achieved.51 For three-dimensional (3D) computed tomography (CT), an active reinforcement learning–based algorithm was used to detect missing anatomical structures from incomplete volume data52 and to re-acquire the missing parts instead of relying only on postacquisition inpainting.53 In other words, active acquisition paradigms have some level of knowledge of acquisition completeness or uncertainty relative to ideal images, for example, via an ‘active learning’ framework54 or via the recently proposed Generative Query Networks (GQNs).55
To implement active data acquisition on an ophthalmic imaging device, we need to define a loss function (error term for the deep learning network to minimize) that quantifies the ‘goodness’ of the image, either directly from the image or using auxiliary sensors and actuators, to drive the automatic reconstruction process. For example, eye movement artifacts during OCT acquisition can significantly degrade the image quality,56 and we would like to quantify the retinal motion either from the acquired frames themselves57 or using auxiliary sensors such as a digital micromirror device (DMD).58 The latter approach has also been applied for correction of light scatter by opaque media.59 Due to the scanning nature of OCT, one can re-acquire the same retinal volume and merge only the subvolumes that were sampled without artifacts.60,61
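The sketch below illustrates the overall control flow of such an active acquisition loop: acquire a frame, reconstruct, score with a learned quality metric, and stop or adjust imaging parameters. The camera interface (grab_frame, set_params), the quality network, and the reconstruct function are hypothetical placeholders and do not correspond to any specific commercial device API.

```python
# Minimal sketch of an 'active acquisition' loop (assumed, hypothetical
# camera/quality-network interfaces; not a specific vendor API).
def active_acquire(camera, quality_net, reconstruct, min_quality=0.9, max_frames=20):
    """Iterate acquisition until the reconstructed image is judged acceptable."""
    frames = []
    image, quality = None, 0.0
    for i in range(max_frames):
        frames.append(camera.grab_frame())        # single raw frame
        image = reconstruct(frames)               # e.g. register + merge frames
        quality = float(quality_net(image))       # learned quality score in [0, 1]
        if quality >= min_quality:
            break                                 # good enough: stop acquiring
        camera.set_params(exposure_scale=1.0 + 0.1 * i)   # vary imaging parameters
    return image, quality                         # best effort after max_frames
```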
Deep learning–based retinal image processing
Traditional single-frame OCT signal processing pipelines have employed GPUs to allow real-time signal processing.62,63 GPUs have been increasingly used in medical image processing even before the recent popularity of deep learning,64 and they are becoming essentially obligatory with contemporary high-speed OCT systems.65 Traditional image restoration pipelines exploit the intrinsic characteristics of the image in tasks such as denoising66 and deblurring67 without considering the image statistics of a larger data set.
Traditionally, these multiframe reconstruction algorithms have been applied after the acquisition without real-time consideration of the image quality of the individual frames. Retinal multiframe acquisition such as fundus videography can exploit the redundant information across the consecutive frames and improve the image degradation model over single-frame acquisition.68,69 Köhler and colleagues70 demonstrated how a multiframe super-resolution framework can be used to reconstruct a single high-resolution image from sequential low-resolution video frames. Stankiewicz and colleagues71 implemented a similar framework for reconstructing super-resolved volumetric OCT stacks from several low-quality volumetric OCT scans. Neither of these approaches, however, applied the reconstruction in real time.
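A minimal sketch of the registration-and-merge step underlying such multiframe pipelines is shown below, assuming single-channel fundus video frames; OpenCV’s ECC alignment is used for illustration (its Python signature has varied slightly across versions), and true super-resolution as in Köhler and colleagues70 would additionally upsample the fused result rather than simply average it.

```python
# Sketch: register consecutive grayscale fundus frames to a reference with
# OpenCV's ECC algorithm and average them to reduce noise.
import cv2
import numpy as np

def register_and_average(frames):
    ref = frames[0].astype(np.float32)
    accum, n = ref.copy(), 1
    for frame in frames[1:]:
        mov = frame.astype(np.float32)
        warp = np.eye(2, 3, dtype=np.float32)     # affine warp initialisation
        try:
            _, warp = cv2.findTransformECC(ref, mov, warp, cv2.MOTION_AFFINE)
        except cv2.error:
            continue                              # skip frames that fail to align
        aligned = cv2.warpAffine(mov, warp, (ref.shape[1], ref.shape[0]),
                                 flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
        accum += aligned
        n += 1
    return accum / n                              # 'lucky-imaging'-style merged frame
```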
In practice, all of the traditional image processing algorithms can be recast within a deep learning framework (Figure 2). The ‘passive’ approaches using input–output pairs to learn image processing operators range from updating individual processing blocks,74 to joint optimization of multiple processing blocks,75,76 to training an end-to-end network such as DeepISP (ISP, Image Signal Processor) that handles the image pipeline from the raw image to the final edited image.77 The DeepISP network was developed as an offline algorithm,77 with no real-time optimization of camera parameters during acquisition. Sitzmann and colleagues78 extended the idea even further by jointly optimizing the imaging optics and the image processing for extended depth of field and super-resolution.
Figure 2.
Typical image processing operators used in retinal image processing, illustrated with 2D fundus images for simplicity. (a) Multiple frames are acquired in quick succession and then registered (aligned), with semantic segmentation of clinically meaningful structures such as vasculature (in blue) and optic disc (in green). (b) Region-of-interest (ROI) zoom on the optic disc of the registered image. The image is denoised with shape priors from the semantic segmentation to help the denoising keep sharp edges. The noise residual is normalized for visualization, showing some removal of structural information. The denoised image is decomposed72 into a base layer that contains the texture-free structure (edge-aware smoothing) and a detail layer that contains the residual texture without the vasculature and optic disc. (c) An example of how the decomposed parts can be edited ‘layer-wise’73 and recombined into a detail-enhanced image, in order to allow optimized visualization of the features of interest.
With deep learning, many deep image restoration networks have been proposed to replace traditional algorithms. These networks are typically trained with input versus synthetic corruption image pairs, with the goodness of the restoration measured as the network’s capability to correct this synthetic degradation. Plötz and Roth79 demonstrated that this synthetic degradation model has significant limitations: the traditional state-of-the-art denoising algorithm BM3D80 was still shown to outperform many deep denoising networks when the synthetic noise was replaced with real photographic noise. This highlights the need for multiframe databases of multiple modalities from multiple device manufacturers for realistic evaluation of image restoration networks in general, as was done by Mayer and colleagues,81 who provided a freely available multiframe OCT data set obtained from ex vivo pig eyes.
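For concreteness, the commonly criticized synthetic-pair strategy amounts to corrupting a clean image and asking the network to undo the corruption, as in the short sketch below (additive Gaussian noise is assumed purely for illustration; real sensor noise is signal-dependent and more complex, which is precisely the limitation discussed above).

```python
# Minimal sketch of building (clean, corrupted) training pairs with synthetic
# noise - the practice whose realism Plötz and Roth questioned.
import numpy as np

def make_pair(clean, sigma=0.05, rng=None):
    """clean: image scaled to [0, 1]; returns (network input, target)."""
    rng = rng or np.random.default_rng(0)
    noisy = clean + rng.normal(0.0, sigma, clean.shape)   # additive Gaussian noise
    return np.clip(noisy, 0.0, 1.0), clean
```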
Image restoration
Most of the literature on multiframe–based deep learning has focused on super-resolution and denoising. Super-resolution algorithms aim to improve the spatial resolution of the reconstructed image beyond what could be obtained from a single input frame.82 Tao and colleagues83 implemented a deep learning ‘subpixel motion compensation’ network for video input capable of learning the inter-frame alignment (i.e. image registration) and motion compensation needed for video super-resolution. In retinal imaging, especially with OCT, typical problems for efficient super-resolution are the retinal motion, lateral resolution limits set by the optical media, and image noise. Wang and colleagues84 demonstrated using photographic video that motion compensation can be learned from the data, simplifying data set acquisition for retinal deep learning training.
Deblurring (or deconvolution), closely related to denoising, allows the computational removal of static and motion blur from acquired images. In most cases, the exact blurring point spread function (PSF) is not known and has to be estimated (blind deconvolution) from a single acquired image85 or from sequential images.86 In retinal imaging, the most common sources of blur are retinal motion,56 scattering caused by ocular media opacities,87 and optical aberrations caused by the optical characteristics of the human eye itself.88 This estimation problem falls under the umbrella term of inverse problems, which have recently been addressed with deep learning.89
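As a point of reference for what deep networks aim to replace, the sketch below shows classical non-blind deconvolution with the Richardson–Lucy algorithm from scikit-image; a Gaussian PSF is assumed here purely for illustration, whereas in the blind setting discussed above the PSF would first have to be estimated from the data.

```python
# Sketch of non-blind Richardson-Lucy deconvolution (scikit-image), with an
# assumed Gaussian PSF standing in for the unknown retinal blur kernel.
import numpy as np
from skimage.restoration import richardson_lucy

def gaussian_psf(size=15, sigma=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def deblur(image, sigma=2.0, iterations=30):
    """image: grayscale float array in [0, 1]."""
    return richardson_lucy(image, gaussian_psf(sigma=sigma), iterations)
```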
Physical estimation and correction of the image degradation
Efficient PSF estimation in retinal imaging can be augmented with auxiliary sensors that measure the factors causing the retina to move during acquisition. Retinal vessel pulsations due to pressure fluctuations during the cardiac cycle can impact the quality; gating allows imaging during diastole, when pressure remains almost stable.90 Optical methods exist for measuring retinal movement directly using, for example, DMDs58 and AO systems that measure the dynamic wavefront aberrations caused, for instance, by tear film fluctuations.88
All these existing physical methods can be combined with deep learning by providing the measured movements as intermediate targets for the network to optimize.91 Examples of such approaches are the works by Bollepalli and colleagues,92 who trained a network for robust heartbeat detection, and Li and colleagues,93 who estimated the blur PSF of light scattered through a glass diffuser simulating the degradation caused by cataract in retinal imaging.
Fei and colleagues94 used pairs of uncorrected and AO-corrected adaptive optics scanning laser ophthalmoscopy (AOSLO) images for learning a ‘digital AO’ correction. In practice, this type of AO-driven network training might be very useful, providing a cost-effective version of super-resolution imaging. For example, Jian and colleagues95 proposed to replace deformable mirrors with a wavefront-correcting lens, lowering the cost and simplifying the optical design,95 Carpentras and Moser96 demonstrated a see-through scanning ophthalmoscope without AO correction, and very recently a handheld AOSLO imager based on miniature MEMS mirrors was demonstrated by DuBose and colleagues.97
In practice, not all the discussed hardware and software corrections are applied simultaneously, that is, image restoration is rarely optimized jointly with image classification.75 The aim of these operations is to achieve image restoration without loss of clinical information.
High-dynamic-range ophthalmic imaging
In ophthalmic applications requiring absolute or relative pixel intensity values for quantitative analysis, as in fundus densitometry98 or Purkinje imaging for crystalline lens absorption measurements,99 it is desirable to extend the intensity dynamic range by combining multiple differently exposed frames, an approach called high-dynamic-range (HDR) imaging.100 OCT modalities requiring phase information, such as motion measurement, can benefit from higher bit depths.101 Even in simple fundus photography, the boundary between optic disc and cup can be hard to delineate in some cases due to an overexposed optic disc compared with the surrounding tissue, as illustrated by Köhler and colleagues70 in their multiframe reconstruction pipeline. A recent feasibility study by Ittarat and colleagues102 showed that HDR acquisition with tone mapping100 of fundus images, visualized on standard displays, increased the sensitivity but reduced the specificity of glaucoma detection by glaucoma experts. In multimodal or multispectral acquisition, visible light acquisition can be enhanced by a high-intensity near-infrared (NIR) strobe103 if the visible light spectral bands do not provide sufficient illumination for motion-free exposure. The vasculature can be imaged clearly with the NIR strobe for estimating the motion blur between successive visible light frames.104
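A simple way to combine differently exposed frames is exposure fusion, sketched below with OpenCV’s Mertens merge; this produces a detail-preserving display image directly, whereas true radiometric HDR reconstruction would require known exposure times (e.g. via cv2.createMergeDebevec). The frames are assumed to be already aligned 8-bit images.

```python
# Sketch of exposure fusion of differently exposed, pre-aligned fundus frames.
import cv2
import numpy as np

def fuse_exposures(frames_8bit):
    """frames_8bit: list of aligned uint8 images taken at different exposures."""
    merge = cv2.createMergeMertens()
    fused = merge.process(frames_8bit)            # float32 result, roughly in [0, 1]
    return np.clip(fused * 255, 0, 255).astype(np.uint8)
```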
Customized spectral filter arrays
Another operation handled by the ISP is demosaicing,105 which involves interpolation of the color channels. Most color RGB (red-green-blue) cameras, including fundus cameras, use sensors with a filter grid called a Bayer array, composed of a 2 × 2 pixel grid with two green, one blue, and one red filter. In fundus imaging, the red channel has very little contrast, and hypothetically, custom demosaicing algorithms for fundus ISPs may allow better visualization of clinically relevant ocular structures. Furthermore, the network training could be supervised by custom illumination based on light-emitting diodes (LEDs) for pathology-specific imaging. Bartczak and colleagues106 showed that with pathology-optimized illumination, the contrast of diabetic lesions is enhanced by 30–70% compared with traditional red-free illumination imaging.
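To make the Bayer layout described above concrete, the sketch below separates the color planes of a raw mosaic; an RGGB ordering is assumed (sensor layouts vary), and a demosaicing algorithm would then interpolate each subsampled plane back to full resolution.

```python
# Minimal sketch of separating the colour planes of a raw Bayer mosaic
# (RGGB ordering assumed for illustration).
import numpy as np

def split_rggb(raw):
    """raw: 2-D array holding the sensor's Bayer mosaic."""
    r  = raw[0::2, 0::2].astype(np.float32)
    g1 = raw[0::2, 1::2].astype(np.float32)
    g2 = raw[1::2, 0::2].astype(np.float32)
    b  = raw[1::2, 1::2].astype(np.float32)
    return r, (g1 + g2) / 2.0, b                  # subsampled R, averaged G, B planes
```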
Recently, commercial sensors with more than three color channels have been released; the Omnivision (Santa Clara, CA, USA) OV4682, for example, replaces one green filter of the Bayer array with an NIR filter. In practice, one could acquire continuous fundus video without pupil constriction using just the NIR channel for the video illumination, while capturing a fundus snapshot simultaneously with a flash of visible light in addition to the NIR.
The number of spectral bands on the sensor’s filter array has been extended up to 32 bands by Imec (Leuven, Belgium), enabling snapshot multispectral fundus imaging for retinal oximetry.107 These additional spectral bands or custom illuminants could also be used to aid the image processing itself before clinical diagnostics.108 For example, segmenting the macular region becomes easier with a spectral band around 460 nm (blue), as the macular pigment absorbs strongly at that wavelength and appears darker than its background on this band.109
Depth-resolved fundus photography
Traditionally, depth-resolved fundus photography has been done via stereo imaging of the posterior pole, which requires either dual-path optics, increasing the design complexity, or operator skill to take sequential pictures with a single camera.110 There are alternatives for depth-resolved fundus imaging in a compact form factor, such as plenoptic fundus imaging, which was shown to provide a higher degree of stereopsis than traditional stereo fundus photography using an off-the-shelf Lytro Illum (acquired by Google, Mountain View, CA, USA) consumer light field camera.111 Plenoptic cameras, however, trade spatial resolution for angular resolution; for example, the Lytro Illum has over 40 million pixels, but the final fundus spatial resolution is only 635 × 433 pixels. A simpler optical arrangement for depth imaging with no spatial resolution trade-off is possible with depth-from-focus algorithms,112 which reconstruct a depth map from a sequence of images taken at different focus distances (a z-stack). This rapid switching of focus distances can be achieved in practice, for example, using variable-focus liquid lenses, as demonstrated for retinal OCT imaging by Cua and colleagues.113
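The basic depth-from-focus idea can be sketched as follows: for each pixel, pick the focal plane at which a local sharpness measure peaks. The variance of the Laplacian used here is one common focus measure chosen for illustration, not necessarily the measure used in the cited work.

```python
# Sketch of a depth-from-focus estimate over a focal stack (z-stack).
import cv2
import numpy as np

def depth_from_focus(z_stack, ksize=9):
    """z_stack: list of grayscale frames acquired at increasing focus distances."""
    sharpness = []
    for frame in z_stack:
        lap = cv2.Laplacian(frame.astype(np.float32), cv2.CV_32F)
        local = cv2.blur(lap * lap, (ksize, ksize))   # local focus measure per pixel
        sharpness.append(local)
    depth_index = np.argmax(np.stack(sharpness, axis=0), axis=0)
    return depth_index                                # index of sharpest focal plane
```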
Compressed sensing
Especially with OCT imaging, and scanning-based imaging techniques in general, there is a possibility to use compressed sensing to speed up the acquisition and reduce the data rate.114 Compressed sensing is based on the assumption that the sampled signal is sparse in some domain and can thus be undersampled and reconstructed to match the resolution of the dense sampling grid. Most of the work combining compressed sensing and deep learning has been on MRI brain scans.115 OCT angiography (OCTA) is a variant of OCT that acquires volumetric images of the retinal and choroidal vasculature through motion contrast imaging. OCTA acquisition is very sensitive to motion and would benefit from sparse sampling with an optimized scan pattern.116
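The sketch below shows the core sparse-recovery idea on a toy one-dimensional problem, reconstructing a sparse signal from random undersampled measurements with iterative soft-thresholding (ISTA). Real OCT or OCTA applications use far more elaborate sampling patterns, sparsifying transforms, and reconstruction models.

```python
# Toy compressed-sensing recovery with ISTA: solve
# min_x 0.5*||A x - y||^2 + lam*||x||_1 by proximal gradient descent.
import numpy as np

def ista(A, y, lam=0.05, n_iter=200):
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / Lipschitz constant of A^T A
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))            # gradient step on data term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(0)
x_true = np.zeros(256)
x_true[rng.choice(256, 10, replace=False)] = 1.0      # sparse ground-truth signal
A = rng.normal(size=(64, 256)) / np.sqrt(64)          # random undersampling operator
x_hat = ista(A, A @ x_true)                           # recover from 64 measurements
```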
Defining cost functions
The design of a proper cost function to define the suboptimal parts of an image is not trivial. Early retinal processing work by Köhler and colleagues117 used retinal vessel contrast as a proxy measure for image quality, which was later implemented as a fast real-time algorithm by Bendaoudi and colleagues.118 Saha and colleagues119 developed a structure-agnostic, data-driven deep learning network for flagging fundus images either as acceptable for diabetic retinopathy screening or as needing recapture. In practice, however, the cost function used for deep learning training can be defined in multiple ways, as reviewed by Zhao and colleagues.120 They compared different loss functions for image restoration and showed that the most commonly used ℓ2 norm (squared error, as in ridge regression) was clearly outperformed in terms of perceptual quality by the multiscale structural similarity index (MS-SSIM).121 The result improved slightly further when the authors combined MS-SSIM with the ℓ1 norm (absolute deviation, as in lasso regression). One could hypothesize that a data-driven quality indicator reflecting the diagnostic differentiation capability of the image, combined with perceptual quality, would be optimal particularly for fundus images.
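A mixed structural-plus-ℓ1 loss in the spirit of Zhao and colleagues120 can be sketched as below; for brevity this sketch uses single-scale SSIM with a uniform window rather than the multiscale MS-SSIM of the paper, and the weighting alpha is an illustrative choice rather than a recommended value.

```python
# Sketch of a mixed restoration loss: alpha*(1 - SSIM) + (1 - alpha)*L1.
import torch
import torch.nn.functional as F

def ssim(x, y, window=11, c1=0.01**2, c2=0.03**2):
    """Single-scale SSIM with a uniform window; x, y are (N, C, H, W) in [0, 1]."""
    pad = window // 2
    mu_x = F.avg_pool2d(x, window, 1, pad)
    mu_y = F.avg_pool2d(y, window, 1, pad)
    var_x = F.avg_pool2d(x * x, window, 1, pad) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, window, 1, pad) - mu_y ** 2
    cov   = F.avg_pool2d(x * y, window, 1, pad) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()

def mixed_loss(pred, target, alpha=0.84):
    return alpha * (1.0 - ssim(pred, target)) + (1.0 - alpha) * F.l1_loss(pred, target)
```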
Physics-based ground truths
The unrealistic performance of image restoration networks trained with synthetic noise and the lack of proper real-noise benchmark data sets are major limitations at the moment. Plötz and Roth79 created their noise benchmark by varying the ISO setting of the camera and taking the lowest ISO setting as the ground truth ‘noise-free’ image. In retinal imaging, construction of a good-quality ground truth requires some special effort. Mayer and colleagues81 acquired multiple OCT frames of ex vivo pig eyes to avoid motion artifacts between acquisitions for speckle denoising.
In humans, commercially available laser speckle reducers can be used to acquire image pairs with two different levels of speckle noise122,123 (Figure 3). A similar pair for deblurring network training could be acquired with and without AO correction125 (see Figure 3). In phase-sensitive OCT applications such as elastography, angiography, and vibrometry, a dual-beam setup could be used, with a highly phase-stable laser as the ground truth and an ‘ordinary’ laser as the input to be enhanced.126
Figure 3.
High-level schematic representation of an adaptive optics retinal imaging system. The wavefront from (a) the retina is distorted mainly by (b) the cornea and crystalline lens, and is corrected in our example by (c) a lens-based actuator designed for compact imaging systems.95 (d) The imaging optical system88 is illustrated with a single lens for simplicity. The corrected wavefront on (e) the image sensor yields (h) a sharper version of the image that would be of (f) lower quality without (c) the wavefront correction. The ‘digital adaptive optics’ (g) universal function approximator maps the distorted image (f) to the corrected image (h); the network (g) was trained with such image pairs (uncorrected and corrected). For simplicity, we have omitted the wavefront sensor from the schematic and estimated the distortion in a sensorless fashion.88
Images (f) and (h) are courtesy of Professor Stephen A. Burns (School of Optometry, Indiana University) from AOSLO off-axis illumination scheme for retinal vasculature imaging.124
Emerging multimodal techniques, such as combined OCT and SLO,127 and OCT combined with photoacoustic microscopy (PAM), optical Doppler tomography (ODT),128 or fluorescence microscopy,129 enable interesting joint training from complementary modalities, each with different strengths. For example, in practice, the lower-quality but inexpensive modality could be computationally enhanced.130
Inter-vendor differences could be further addressed by repeating each measurement with different OCT machines, as was taken into account in the clinical diagnosis network by De Fauw and colleagues.9 All these hardware-driven signal restorations could be further combined with existing traditional filters, with the filter output used as targets for so-called ‘copycat’ filters that estimate existing filters.131
Quantifying uncertainty
Within the automatic ‘active acquisition’ scheme, it is important to be able to localize the quality problems in an image or in a volume.132,133 Leibig and colleagues134 investigated the commonly used Monte Carlo dropout method132 for estimating uncertainty in fundus images for diabetic retinopathy screening and its effect on the quality of clinical referral decisions. The Monte Carlo dropout method improved the identification of substandard images that were either unusable or carried large uncertainty at the model’s classification boundaries. Such an approach should allow rapid identification of patients with suboptimal fundus images for further clinical evaluation by an ophthalmologist.
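The mechanics of Monte Carlo dropout are simple enough to sketch: dropout layers are kept stochastic at test time and the model is run several times, with the spread of the predictions serving as the uncertainty estimate used for flagging. The model and the number of passes below are placeholders, not the configuration of the cited study.

```python
# Sketch of Monte Carlo dropout inference for a classification network.
import torch

def enable_dropout(model):
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()                              # keep dropout stochastic at test time

def mc_dropout_predict(model, image, T=25):
    model.eval()
    enable_dropout(model)
    with torch.no_grad():
        preds = torch.stack([torch.softmax(model(image), dim=1) for _ in range(T)])
    return preds.mean(dim=0), preds.std(dim=0)     # predictive mean and uncertainty
```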
A similar approach was taken for per-patch uncertainty estimation in 3D super-resolution135 and for voxel-wise segmentation uncertainty.136 Cobb and colleagues137 demonstrated an interesting extension, termed ‘loss-calibrated approximate inference’, that allows the incorporation of a utility function into the network. This utility function was used to model the asymmetric clinical implications of false-negative and false-positive predictions.
The financial and quality-of-life cost of an uncertain patch in an image leading to a false-negative decision might be much larger than that of a false positive, which might just lead to an additional checkup by an ophthalmologist. The same utility function could be expanded to cover disease prevalence,138 enabling end-to-end screening performance to be modeled; screening for low-prevalence diseases such as glaucoma requires very high performance in order to be cost-efficient.139
The regional uncertainty can then be exploited during active acquisition by guiding the acquisition iteration to only the area containing the uncertainty. For example, some CMOS sensors (e.g. Sony IMX250) allow readout of only a part of the image, faster than for the full frame. One scenario for smarter fundus imaging could, for example, involve initial imaging with the whole field of view (FOV) of the device, followed by multiframe acquisition of only the optic disc area to ensure that the cup and disc are well distinguishable and that the depth information is of good quality (Figure 4). A similar active acquisition paradigm is in use, for example, in drone-based operator-free photogrammetry, in which the drone can autonomously reconstruct a 3D building model from multiple views, recognize which areas it has not yet scanned, and fly to those locations to scan more.141
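Converting a per-pixel uncertainty map into a sensor ROI for re-acquisition can be as simple as thresholding the map and taking the bounding box of the most uncertain region, as sketched below; the camera.capture_roi call is a hypothetical device interface for ROI-capable readout.

```python
# Sketch: derive a re-acquisition ROI (x, y, width, height) from an uncertainty map.
import numpy as np

def uncertainty_roi(unc_map, quantile=0.98):
    mask = unc_map >= np.quantile(unc_map, quantile)   # most uncertain pixels
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None                                    # nothing exceeds the threshold
    return (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))

# roi = uncertainty_roi(std_map); camera.capture_roi(*roi)   # hypothetical device call
```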
Figure 4.
(a) Example of re-acquisition using a region of interest (ROI) defined from the initial acquisition (the full frame). The ROI contains 9% of the pixels of the full frame, making the ROI acquisition much faster if the image sensor allows ROI-based readout. (b) Multiframe ROI re-acquisition is illustrated with three low-dynamic-range (8-bit LDR) frames with simulated low-quality camera intensity compression. The underexposed frame (b, left) exposes the optic disc correctly with fewer details visible in the darker regions of the image, as illustrated by the clipped dark values in the histogram (c, left, clipped values at 0), whereas the overexposed frame (b, right) exposes the dark vasculature with detail while overexposing (c, right, clipped values at 255) the bright regions such as the optic disc. The normally exposed frame (b, center) is a compromise (c, center) between these two extreme exposures. (d) When the three LDR frames are combined using an exposure fusion technique140 into a high-dynamic-range (HDR) image, all the relevant clinical features are correctly exposed, possibly improving diagnostics.102
Distributing the computational load
In typical postacquisition disease classification studies with deep learning,10 the network training has been done on large GPU clusters, either locally or using cloud-based GPU servers. However, when embedding deep learning within devices, different design trade-offs need to be taken into account. Both in hospital and remote healthcare settings, a proper Internet connection might be lacking due to technical infrastructure or institutional policy limitations. The latency requirements of real-time signal processing often make the use of cloud services impossible.142 For example, a lag due to a poor Internet connection is unacceptable in intensive care units (ICUs), where those seconds can affect human lives, and the computing hardware needs to be placed next to the sensing device.143
Edge computing
In recent years, the concept of edge computing (Figure 5) has emerged as a complement or alternative to cloud computing, in which computations are done centrally, that is, away from the ‘edge’. The main driving factors for edge computing are the various IoT applications145 and the Internet of Medical Things (IoMT).146 Gartner analyst Thomas Bittman has predicted that the market for processing at the edge will expand to levels similar to or larger than current cloud processing.147 Another market research study, by Grand View Research, Inc.,148 projected the edge computing segment for healthcare and life sciences to exceed US$ 326 million by 2025. Specifically, edge computing is seen as the key enabler for wearables to become reliable tools for long-term health monitoring.149,150
Figure 5.
Separation of computations into three different layers. (1) Edge layer – the computations done at the device level, which in active acquisition ocular imaging (top) require significant computational power, for example, in the form of an embedded GPU. With wearable intraocular measurement, the contact lens can house only a very low-power microcontroller (MCU) and needs to let the (2) fog layer handle most of the signal cleaning, whereas for ocular imaging, the fog device mainly relays the acquired image to the (3) cloud layer. The standardization of the data structure is ensured through the FHIR (Fast Healthcare Interoperability Resources) API (application programming interface)144 before storage on a secure cloud server. This imaging data, along with other clinical information, can then be accessed by healthcare professionals, patients, and the research community.
Fog computing
In many cases, an intermediate layer called the fog or mist computing layer (Figure 5) is introduced between the edge device and the cloud layer to distribute the computing load.31,151–153 At its simplest, this three-layer architecture could consist of a simple low-power IoT sensor (edge device) with some computing power.154 This IoT device could be, for example, an inertial measurement unit (IMU)-based actigraph that sends data in real time to the user’s smartphone (fog device), which contains more computing power than the edge device for gesture recognition.155 The gesture recognition model could be used to detect falls in the elderly or to send corrective feedback back to the edge device, which could also contain actuators or a display. An example of such an actuator could be a tactile buzzer for neurorehabilitation applications156 or a motorized stage for aligning a fundus camera relative to the patient’s eye.157 The smartphone subsequently sends the relevant data to the cloud for analyzing long-term patterns at both individual and population levels.16,158 Alternatively, the sensor itself could do some data cleaning and have the fog node handle the sensor fusion of typical clinical one-dimensional (1D) biosignals. An illustration of this concept is the fusion of depth and thermal cameras for hand hygiene monitoring,159 including indoor position tracking sensors to monitor healthcare processes at a hospital level.
Balancing edge and fog computations
For the hardware used in each node, multiple options exist, and very heterogeneous architectures are described in the literature for the whole system.31,160 For example, in the SocialEyes project,25 the diagnostic tests of MARVIN (mobile autonomous retinal evaluation) are implemented on a GPU-powered Android tablet (NVIDIA SHIELD). In their rural visual testing application, the device needs to be transportable and adapted to the limited infrastructure. In this scenario, most of the computations are already done at the tablet level, and the fog device could, for example, be a low-cost community smartphone/Wi-Fi link. The data can then be submitted to the cloud holding the centralized electronic health records (EHRs).161 If the required local computations are not very heavy, both the edge and fog functionalities could be combined into one low-cost Raspberry Pi board computer.162 In hospital settings with large patient volumes, it would be preferable to explore different task-specific data compression algorithms at the cloud level to reduce storage and bandwidth requirements. In a teleophthalmology setting, the compression could already be done at the edge level before cloud transmission.163
In the case of fundus imaging, most of the real-time optimization would happen at the device level, with multiple hardware acceleration options.164,165 One could rely on a low-cost computer such as a Raspberry Pi,166 which allows only limited computations.167 This can be extended if additional computation power is provided at the cloud level. In many embedded medical applications, GPU options such as NVIDIA’s Tegra/Jetson platform168 have been increasingly used. In practice, embedded GPU platforms offer a good compromise between the ease-of-use of a Raspberry Pi and the computational power of desktop GPUs.
In some cases, the general-purpose GPU (GPGPU) option might not provide the energy efficiency needed for the required computational performance. In this case, field-programmable gate arrays (FPGAs)169 may be used as an alternative to embedded GPUs, as demonstrated for retinal image analysis170 and real-time video restoration.171 FPGA implementations may, however, be problematic due to increased implementation complexity. Custom-designed accelerator chips172 and application-specific integrated circuits (ASICs)173 offer even higher performance, but at even higher implementation complexity.
In ophthalmology, there are only a limited number of wearable devices allowing continuous data acquisition. Although the continuous assessment of intraocular pressure (IOP) is difficult to achieve, or even controversial,174 commercial products by Triggerfish® (Sensimed AG, Lausanne, Switzerland) and EYEMATE® (Implandata Ophthalmic Products GmbH, Hannover, Germany) have been cleared by the Food and Drug Administration (FDA) for clinical use.
An interesting future direction for this monitoring platform is an integrated MEMS/microfluidics system175 that could simultaneously monitor the IOP and provide a passive artificial drainage system for the treatment of glaucoma.176 The continuous IOP measurement could be integrated with ‘point structure + function measures’ for individualized deep learning–driven management of glaucoma, as suggested for the management of age-related macular degeneration (AMD).11
In addition to pure computational constraints, the size and general acceptability of the device by patients can represent a limiting factor, requiring a more patient-friendly approach. For example, devices analyzing eye movements177,178 or pupillary light responses179 can be better accepted and implemented when using practical portable devices rather than bulky research lab systems. Zhu and colleagues,180 for instance, designed an embedded hardware accelerator for deep learning inference from the image sensors of augmented/mixed reality (AR/MR) glasses.
In the future, this could be integrated with the MEMS-based, camera-free eye tracker chip developed by the University of Waterloo spin-off company AdHawk Microsystems (Kitchener, ON, Canada)181 for functional diagnostics or for quantifying retinal motion. In this example of eye movement diagnostics, most of the computations might be performed at the device level (edge), but the patient could carry a smartphone or a dedicated Raspberry Pi for further postprocessing and transmission to cloud services.
Cloud computing
The cloud layer (Figure 5) is used for centralized data storage, allowing both healthcare professionals and patients to access the EHRs, for example, via the FHIR (Fast Healthcare Interoperability Resources) API (application programming interface).144 Research groups can analyze the records, as already demonstrated for deep learning–based retinopathy diagnosis.9,10 A detailed analysis of the different technical options at the cloud layer is beyond the scope of this article, and interested readers are referred to the following clinically relevant reviews.182,183
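As a rough illustration of how an edge or fog device might push an acquired fundus image to such a cloud store over the FHIR REST API, the sketch below posts a FHIR R4 Media resource; the base URL and patient reference are placeholders, and real deployments additionally require authentication, audit, and validated resource profiles.

```python
# Sketch: upload a fundus image as a FHIR R4 Media resource via the REST API.
import base64
import requests

def post_fundus_image(fhir_base, patient_id, jpeg_bytes):
    resource = {
        "resourceType": "Media",
        "status": "completed",
        "subject": {"reference": f"Patient/{patient_id}"},      # placeholder reference
        "content": {
            "contentType": "image/jpeg",
            "data": base64.b64encode(jpeg_bytes).decode("ascii"),
        },
    }
    r = requests.post(f"{fhir_base}/Media", json=resource,
                      headers={"Content-Type": "application/fhir+json"})
    r.raise_for_status()
    return r.json()                                              # server-assigned resource
```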
Discussion
Here, we have reviewed the possible applications of deep learning, introduced at the ophthalmic imaging device level. This extends well-known application of deep learning for clinical diagnostics.9,10,48 Such an ‘active acquisition’ aims for automatic optimization of imaging parameters, resulting in improved image quality and reduced variability.8 This active approach can be added to the existing hardware or can be combined with novel hardware designs.
The main aim of an embedded intelligent deep learning system is to favor the acquisition of a high-quality image or recording, without the intervention of a highly skilled operator, in various environments. There are various healthcare delivery models in which embedded deep learning could be used in future routine eye examinations: (1) patients could screen themselves, using a shared device located either in a community clinic or at the supermarket, requiring no human supervision; (2) patients could be imaged by a technician in a ‘virtual clinic’,184 in a hospital waiting room before an ophthalmologist appointment, or at the optician (https://www.aop.org.uk/ot/industry/high-street/2017/05/22/oct-rollout-in-every-specsavers-announced); (3) patients could be scanned in remote areas by a mobile general healthcare practitioner185; and (4) patients themselves could perform continuous home monitoring of disease progression.16,186 Most fundus cameras and OCT devices already come with some quality metrics prompting the operator to re-take the image, but so far no commercial device offers sufficient automatic reconstruction, for example, in the presence of ocular media opacities or poorly compliant patients.
Healthcare systems experiencing a shortage of manpower may benefit from modern automated imaging. Putting more intelligence at the device level will relieve healthcare professionals from clerical work in favor of actual patient care.187 With the increased use of artificial intelligence (AI), the role of the clinician will evolve from the medical paternalism of the 19th century and the evidence-based medicine of the 20th century to a (big) data-driven clinician working more closely with intelligent machines and the patients.188 Practical-level interaction with AI is not just near-future science fiction but very much a reality, as a recent paper on ‘augmented intelligence’ in radiology demonstrated:189 a synergy between clinicians and an AI system resulted in diagnostic accuracy better than that of the clinicians alone and better than the AI system’s own performance.
At the healthcare system level, intelligent data acquisition will provide additional automated data quality verification, resulting in improved management of data volumes. This is required because the size of medical data is reported to double every 12–14 months,190 and it addresses the ‘garbage in–garbage out’ problem.190,191 Improved data quality will also allow more efficient EHR mining,192 enabling healthcare systems to get closer to the long-term goal of learning healthcare systems193 that leverage prior clinical experience in a structured, evidence-based sense along with expert clinical knowledge.188,194
Despite the recent developments of deep learning in ophthalmology, very few prospective clinical trials have evaluated its performance in real, everyday-life situations. IDx-DR has recently been approved as the first fully autonomous AI-based FDA-approved diagnostic system for diabetic retinopathy,48 but the direct benefit to patients, in terms of visual outcome, is still unclear.195 Future innovations emerging from tech startups, academia, or established companies will hopefully improve the quality of the data, through cross-disciplinary collaboration of designers, engineers, and clinicians,196,197 resulting in improved outcomes for patients with ophthalmic conditions.
Acknowledgments
The authors would like to acknowledge Professor Stephen Burns (Indiana University) for providing images to illustrate the adaptive optics deep learning correction.
Footnotes
Funding: The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by National Health Innovation Centre Singapore Innovation to Develop (I2D) Grant (NHIC I2D) (NHIC-I2D-1708181).
Conflict of interest statement: The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
ORCID iD: Petteri Teikari
https://orcid.org/0000-0003-1095-4185
Contributor Information
Petteri Teikari, Visual Neurosciences Group, Singapore Eye Research Institute, Singapore; Advanced Ocular Imaging, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore.
Raymond P. Najjar, Visual Neurosciences Group, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, National University of Singapore, Singapore.
Leopold Schmetterer, Visual Neurosciences Group, Singapore Eye Research Institute, Singapore; Advanced Ocular Imaging, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore; Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria; Christian Doppler Laboratory for Ocular and Dermal Effects of Thiomers, Medical University of Vienna, Vienna, Austria.
Dan Milea, Visual Neurosciences Group, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, National University of Singapore, Singapore; Neuro-Ophthalmology Department, Singapore National Eye Centre, Singapore.
References
- 1. Litjens G, Kooi T, Bejnordi BE, et al. A survey on deep learning in medical image analysis. Med Image Anal 2017; 42: 60–88. [DOI] [PubMed] [Google Scholar]
- 2. Hinton G. Deep learning – a technology with the potential to transform health care. JAMA 2018; 320: 1101–1102. [DOI] [PubMed] [Google Scholar]
- 3. Ching T, Himmelstein DS, Beaulieu-Jones BK, et al. Opportunities and obstacles for deep learning in biology and medicine. J R Soc Interface 2018; 15: 142760. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4. Mandal S, Greenblatt AB, An J. Imaging intelligence: AI is transforming medical imaging across the imaging spectrum. IEEE Pulse 2018; 9: 16–24. [DOI] [PubMed] [Google Scholar]
- 5. Schmidt-Erfurth U, Sadeghipour A, Gerendas BS, et al. Artificial intelligence in retina. Prog Retin Eye Res 2018; 67: 1–29. [DOI] [PubMed] [Google Scholar]
- 6. Ting DSW, Liu Y, Burlina P, et al. AI for medical imaging goes deep. Nat Med 2018; 24: 539–540. [DOI] [PubMed] [Google Scholar]
- 7. Hogarty DT, Mackey DA, Hewitt AW. Current state and future prospects of artificial intelligence in ophthalmology: a review. Clin Exp Ophthalmol. Epub ahead of print 28 August 2018. DOI: 10.1111/ceo.13381. [DOI] [PubMed] [Google Scholar]
- 8. Lee A, Taylor P, Kalpathy-Cramer J, et al. Machine learning has arrived! Ophthalmology 2017; 124: 1726–1728. [DOI] [PubMed] [Google Scholar]
- 9. DeFauw J, Ledsam JR, Romera-Paredes B, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat Med 2018; 24: 1342–1350. [DOI] [PubMed] [Google Scholar]
- 10. Ting DSW, Cheung CYL, Lim G, et al. Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic populations with diabetes. JAMA 2017; 318: 2211–2223. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11. Schmidt-Erfurth U, Bogunovic H, Sadeghipour A, et al. Machine learning to analyze the prognostic value of current imaging biomarkers in neovascular age-related macular degeneration. Ophthalmology Retina 2018; 2: 24–30. [DOI] [PubMed] [Google Scholar]
- 12. Wen JC, Lee CS, Keane PA, et al. Forecasting future Humphrey visual fields using deep learning. Arxiv:180404543 [Cs, Stat], 2018, http://arxiv.org/abs/1804.04543 [DOI] [PMC free article] [PubMed]
- 13. Poplin R, Varadarajan AV, Blumer K, et al. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nature Biomed Eng 2018; 2: 158–164. [DOI] [PubMed] [Google Scholar]
- 14. Rani PK, Bhattarai Y, Sheeladevi S, et al. Analysis of yield of retinal imaging in a rural diabetes eye care model. Indian J Ophthalmol 2018; 66: 233–237. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15. Tewarie P, Balk L, Costello F, et al. The OSCAR-IB consensus criteria for retinal OCT quality assessment. PLoS ONE 2012; 7: e34823. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16. Roesch K, Swedish T, Raskar R. Automated retinal imaging and trend analysis – a tool for health monitoring. Clin Ophthalmol 2017; 11: 1015–1020. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17. Monroy GL, Won J, Spillman DR, et al. Clinical translation of handheld optical coherence tomography: practical considerations and recent advancements. J Biomed Opt 2017; 22: 1–30. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18. Chopra R, Mulholland PJ, Dubis AM, et al. Human factor and usability testing of a binocular optical coherence tomography system. Transl Vis Sci Technol 2017; 6: 16. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19. Kim TN, Myers F, Reber C, et al. A smartphone-based tool for rapid, portable, and automated wide-field retinal imaging. Transl Vis Sci Technol 2018; 7: 21–21. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20. Katuwal GJ, Kerekes JP, Ramchandran RS, et al. Automated fundus image field detection and quality assessment, 2018, https://patents.google.com/patent/US9905008B2/en
- 21. Brea V, Ginhac D, Berry F, et al. Special issue on advances on smart camera architectures for real-time image processing. J Real-Time Image Process 2018; 14: 635–636. [Google Scholar]
- 22. Zhang B, Tang K, Du J. Influence of intelligent unmanned system on the development of intelligent measuring. In: Proceedings of the global intelligence industry conference (GIIC 2018), vol. 10835, 2018, p. 108350Y International Society for Optics and Photonics, https://www.spiedigitallibrary.org/conference-proceedings-of-spie/10835/108350Y/Influence-of-intelligent-unmanned-system-on-the-development-of-intelligent/10.1117/12.2503984.short?SSO=1 [Google Scholar]
- 23. Gobl GR, Navab N, Hennersperger C. SUPRA: open source software defined ultrasound processing for real-time applications. Int J Comput Assist Radiol Surg 2018; 13: 759–767. [DOI] [PubMed] [Google Scholar]
- 24. Jarosik P, Lewandiwski M. WaveFlow-Towards Integration of Ultrasound Processing with Deep Learning. arXiv preprint arXiv 2018, https://arxiv.org/abs/1811.01566
- 25. Hansen T. Social eyes uses deep learning to save sight. NVIDIA Blog, 2016, https://blogs.nvidia.com/blog/2016/02/17/deep-learning
- 26. Shi W, Cao J, Zhang Q, et al. Edge computing: vision and challenges. IEEE Internet Things J 2016; 3: 637–646. [Google Scholar]
- 27. Cuff J. Getting to the heart of HPC and AI at the edge in healthcare, 2018, https://goo.gl/F8psgy
- 28. Harris S. The next frontier – medical imaging AI in the age of edge computing, 2018, https://goo.gl/E26sKs
- 29. Barik RK, Dubey AC, Tripathi A, et al. Mist data: leveraging mist computing for secure and scalable architecture for smart and connected health. Procedia Comput Sci 2018; 125: 647–653. [Google Scholar]
- 30. Xu J, Liu H, Shao W, et al. Quantitative 3-D shape features based tumor identification in the fog computing architecture. J Amb Intel Hum Comp 2018; 9: 1–11. [Google Scholar]
- 31. Farahani B, Firouzi F, Chang V, et al. Towards fog-driven IoT eHealth: promises and challenges of IoT in medicine and healthcare. Future Gener Comput Syst 2018; 78: 659–676. 10.1016/j.future.2017.04.036 [DOI] [Google Scholar]
- 32. Fawzi A, Moosavi-Dezfooli SM, Frossard P, et al. Classification regions of deep neural networks. Arxiv:170509552 [Cs], 2017, https://arxiv.org/abs/1705.09552
- 33. Lee CS, Su GL, Baughman DM, et al. Disparities in delivery of ophthalmic care: an exploration of public Medicare data. PLoS ONE 2017; 12: e0182598.
- 34. Sommer A, Taylor HR, Ravilla TD, et al. Challenges of ophthalmic care in the developing world. JAMA Ophthalmol 2014; 132: 640–644.
- 35. Davila JR, Sengupta SS, Niziol LM, et al. Predictors of photographic quality with a handheld nonmydriatic fundus camera used for screening of vision-threatening diabetic retinopathy. Ophthalmologica 2017; 238: 89–99.
- 36. Hassen GW, Chirurgi R, Menoscal JP, et al. All eye complaints are not created equal: the value of hand-held retina camera in the emergency department. Am J Emerg Med 2018; 36: 1518.
- 37. Martel JBA, Anders UM, Kravchuk V. Comparative study of teleophthalmology devices: smartphone adapted ophthalmoscope, robotic ophthalmoscope, and traditional fundus camera – the recent advancements in telemedicine. New Front Ophthalmol 2015; 1: 2–5.
- 38. Nexy robotic retinal imaging system cleared by the FDA for the US market, 2018, https://www.prweb.com/releases/2018/06/prweb15554831.htm
- 39. Barikian A, Haddock LJ. Smartphone assisted fundus fundoscopy/photography. Curr Ophthalmol Rep 2018; 6: 46–52.
- 40. Bifolck E, Fink A, Pedersen D, et al. Smartphone imaging for the ophthalmic examination in primary care. JAAPA 2018; 31: 34–38.
- 41. Kim S, Crose M, Eldridge WJ, et al. Design and implementation of a low-cost, portable OCT system. Biomed Opt Express 2018; 9: 1232–1243.
- 42. Lin L, Keeler E, Lin LY, et al. Progress of MEMS scanning micromirrors for optical bio-imaging. Micromachines 2015; 6: 1675–1689.
- 43. Teikari P, Najjar RP, Malkki H, et al. An inexpensive Arduino-based LED stimulator system for vision research. J Neurosci Methods 2012; 211: 227–236.
- 44. Altmann Y, McLaughlin S, Padgett MJ, et al. Quantum-inspired computational imaging. Science 2018; 361: eaat2298.
- 45. Liu YZ, South FA, Xu Y, et al. Computational optical coherence tomography. Biomed Opt Express 2017; 8: 1549–1574.
- 46. Tang H, Mulligan JA, Untracht GR, et al. GPU-based computational adaptive optics for volumetric optical coherence microscopy. In: Proceedings of the high-speed biomedical imaging and spectroscopy: toward big data instrumentation and management, vol. 9720. International Society for Optics and Photonics, https://spie.org/Publications/Proceedings/Volume/9720
- 47. Maloca PM, deCarvalho JER, Heeren T, et al. High-performance virtual reality volume rendering of original optical coherence tomography point-cloud data enhanced with real-time ray casting. Transl Vis Sci Technol 2018; 7: 2.
- 48. Abràmoff MD, Lavin PT, Birch M, et al. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. Npj Digital Medicine 2018; 1: 39.
- 49. Samaniego A, Boominathan V, Sabharwal A, et al. MobileVision: a face-mounted, voice-activated, non-mydriatic ‘lucky’ ophthalmoscope. In: Proceedings of the wireless health 2014 on National Institutes of Health (WH ’14), pp. 2:1–2:8. New York: ACM, https://www.ece.rice.edu/~av21/Documents/2014/mobileVision.pdf
- 50. Lawson ME, Raskar R. Methods and apparatus for retinal imaging, 2016, https://patents.google.com/patent/US9295388B2/en
- 51. Kinchesh P, Gilchrist S, Beech JS, et al. Prospective gating control for highly efficient cardiorespiratory synchronised short and constant TR MRI in the mouse. Magn Reson Imaging 2018; 53: 20–27.
- 52. Ghesu FC, Georgescu B, Grbic S, et al. Towards intelligent robust detection of anatomical structures in incomplete volumetric data. Med Image Anal 2018; 48: 203–213.
- 53. Skalic M, Varela-Rial A, Jiménez J, et al. LigVoxel: inpainting binding pockets using 3D-convolutional neural networks. Bioinformatics 2019; 35: 243–250.
- 54. Gal Y, Islam R, Ghahramani Z. Deep Bayesian active learning with image data. Arxiv:170302910 [Cs, Stat], 2017, http://arxiv.org/abs/1703.02910
- 55. Eslami SMA, Rezende DJ, Besse F, et al. Neural scene representation and rendering. Science 2018; 360: 1204–1210.
- 56. Baghaie A, Yu Z, D’Souza RM. Involuntary eye motion correction in retinal optical coherence tomography: hardware or software solution. Med Image Anal 2017; 37: 129–145.
- 57. Sheehy CK, Yang Q, Arathorn DW, et al. High-speed, image-based eye tracking with a scanning laser ophthalmoscope. Biomed Opt Express 2012; 3: 2611–2622.
- 58. Vienola KV, Damodaran M, Braaf B, et al. In vivo retinal imaging for fixational eye motion detection using a high-speed digital micromirror device (DMD)-based ophthalmoscope. Biomed Opt Express 2018; 9: 591–602.
- 59. Turpin A, Vishniakou I, Seelig JD. Light scattering control with neural networks in transmission and reflection. Arxiv:180505602 [Cs], 2018, https://arxiv.org/abs/1805.05602
- 60. Carrasco-Zevallos OM, Nankivil D, Viehland C, et al. Pupil tracking for real-time motion corrected anterior segment optical coherence tomography. PLoS ONE 2016; 11: e0162015.
- 61. Chen Y, Hong YJ, Makita S, et al. Eye-motion-corrected optical coherence tomography angiography using Lissajous scanning. Biomed Opt Express 2018; 9: 1111–1129.
- 62. Zhang K, Kang JU. Real-time 4D signal processing and visualization using graphics processing unit on a regular nonlinear-k Fourier-domain OCT system. Opt Express 2010; 18: 11772–11784.
- 63. Wieser W, Draxinger W, Klein T, et al. High definition live 3D-OCT in vivo: design and evaluation of a 4D OCT engine with 1 GVoxel/s. Biomed Opt Express 2014; 5: 2963–2977.
- 64. Eklund A, Dufort P, Forsberg D, et al. Medical image processing on the GPU – past, present and future. Med Image Anal 2013; 17: 1073–1094.
- 65. Klein T, Huber R. High-speed OCT light sources and systems. Biomed Opt Express 2017; 8: 828–859.
- 66. Li M, Idoughi R, Choudhury B, et al. Statistical model for OCT image denoising. Biomed Opt Express 2017; 8: 3903–3917.
- 67. Liu Y, Liang Y, Mu G, et al. Deconvolution methods for image deblurring in optical coherence tomography. J Opt Soc Am A Opt Image Sci Vis 2009; 26: 72–77.
- 68. Bian L, Suo J, Chen F, et al. Multi-frame denoising of high speed optical coherence tomography data using inter-frame and intra-frame priors. Arxiv:13121931, 2013, https://arxiv.org/abs/1312.1931
- 69. Devalla SK, Subramanian G, Pham TH, et al. A deep learning approach to denoise optical coherence tomography images of the optic nerve head. Arxiv:180910589 [Cs], 2018, http://arxiv.org/abs/1809.10589
- 70. Köhler T, Brost A, Mogalle K, et al. Multi-frame super-resolution with quality self-assessment for retinal fundus videos. In: Proceedings of the medical image computing and computer-assisted intervention – MICCAI 2014 (Lecture notes in computer science), Boston, MA, 14–18 September 2014, pp. 650–657. Cham: Springer.
- 71. Stankiewicz A, Marciniak T, Dabrowski A, et al. Matching 3D OCT retina images into super-resolution dataset. In: Proceedings of the 2016 signal processing: algorithms, architectures, arrangements, and applications (SPA), Poznan, 21–23 September 2016, pp. 130–137. New York: IEEE.
- 72. Xu L, Lu C, Xu Y, et al. Image smoothing via L0 gradient minimization. In: Proceedings of the 2011 SIGGRAPH Asia conference – SA ’11, Hong Kong, China, 12–15 December 2011, pp. 174:1–174:12. New York: ACM.
- 73. Innamorati C, Ritschel T, Weyrich T, et al. Decomposing single images for layered photo retouching. Comput Graph Forum 2017; 36: 15–25.
- 74. Balakrishnan G, Zhao A, Sabuncu MR, et al. An unsupervised learning model for deformable medical image registration. Arxiv:180202604 [Cs], 2018, http://arxiv.org/abs/1802.02604
- 75. Diamond S, Sitzmann V, Boyd S, et al. Dirty pixels: optimizing image classification architectures for raw sensor data. Arxiv:170106487 [Cs], 2017, http://arxiv.org/abs/1701.06487
- 76. Liu D, Wen B, Liu X, et al. When image denoising meets high-level vision tasks: a deep learning approach. Arxiv:170604284 [Cs], 2017, http://arxiv.org/abs/1706.04284
- 77. Schwartz E, Giryes R, Bronstein AM. DeepISP: learning end-to-end image processing pipeline. Arxiv:180106724 [Cs, Eess], 2018, http://arxiv.org/abs/1801.06724
- 78. Sitzmann V, Diamond S, Peng Y, et al. End-to-end optimization of optics and image processing for achromatic extended depth of field and super-resolution imaging. ACM Trans Graph 2018; 37: 114:1–114:13.
- 79. Plötz T, Roth S. Benchmarking denoising algorithms with real photographs. Arxiv:170701313 [Cs], 2017, http://arxiv.org/abs/1707.01313
- 80. Burger H, Schuler C, Harmeling S. Image denoising: can plain neural networks compete with BM3D? In: Proceedings of the 2012 IEEE conference on computer vision and pattern recognition (CVPR), Providence, RI, 16–21 June 2012, pp. 2392–2399. New York: IEEE.
- 81. Mayer MA, Borsdorf A, Wagner M, et al. Wavelet denoising of multiframe optical coherence tomography data. Biomed Opt Express 2012; 3: 572–589.
- 82. Köhler T, Batz M, Naderi F, et al. Bridging the simulated-to-real gap: benchmarking super-resolution on real data. Arxiv:180906420 [Cs], 2018, http://arxiv.org/abs/1809.06420
- 83. Tao X, Gao H, Liao R, et al. Detail-revealing deep video super-resolution. Arxiv:170402738 [Cs], 2017, http://arxiv.org/abs/1704.02738
- 84. Wang W, Ren C, He X, et al. Video super-resolution via residual learning. IEEE Access 2018; 6: 23767–23777.
- 85. Marrugo AG, Millán MS, Šorel M, et al. Improving the blind restoration of retinal images by means of point-spread-function estimation assessment. In: Proceedings of the 10th international symposium on medical information processing and analysis, vol. 9287. International Society for Optics and Photonics, https://spie.org/Publications/Proceedings/Paper/10.1117/12.2073820
- 86. Lian J, Zheng Y, Jiao W, et al. Deblurring sequential ocular images from multi-spectral imaging (MSI) via mutual information. Med Biol Eng Comput 2018; 56: 1107–1113.
- 87. Christaras D, Ginis H, Pennos A, et al. Intraocular scattering compensation in retinal imaging. Biomed Opt Express 2016; 7: 3996–4006.
- 88. Burns SA, Elsner AE, Sapoznik KA, et al. Adaptive optics imaging of the human retina. Prog Retin Eye Res. Epub ahead of print 27 August 2018. DOI: 10.1016/j.preteyeres.2018.08.002.
- 89. Jin KH, McCann MT, Froustey E, et al. Deep convolutional neural network for inverse problems in imaging. IEEE Trans Image Process 2017; 26: 4509–4522.
- 90. Lee B, Choi W, Liu JJ, et al. Cardiac-gated en face Doppler measurement of retinal blood flow using swept-source optical coherence tomography at 100,000 axial scans per second. Invest Ophthalmol Vis Sci 2015; 56: 2522–2530.
- 91. Lee CY, Xie S, Gallagher P, et al. Deeply-supervised nets. Arxiv:14095185 [Cs, Stat], 2014, http://arxiv.org/abs/1409.5185
- 92. Bollepalli SC, Challa SS, Jana S. Robust heartbeat detection from multimodal data via CNN-based generalizable information fusion. IEEE Trans Biomed Eng. Epub ahead of print 11 July 2018. DOI: 10.1109/TBME.2018.2854899.
- 93. Li S, Deng M, Lee J, et al. Imaging through glass diffusers using densely connected convolutional networks. Optica 2018; 5: 803–813.
- 94. Fei X, Zhao J, Zhao H, et al. Deblurring adaptive optics retinal images using deep convolutional neural networks. Biomed Opt Express 2017; 8: 5675–5687.
- 95. Jian Y, Lee S, Ju MJ, et al. Lens-based wavefront sensorless adaptive optics swept source OCT. Sci Rep 2016; 6: 27620.
- 96. Carpentras D, Moser C. See-through ophthalmoscope for retinal imaging. J Biomed Opt 2017; 22: 56006.
- 97. DuBose T, Nankivil D, LaRocca F, et al. Handheld adaptive optics scanning laser ophthalmoscope. Optica 2018; 5: 1027–1036.
- 98. Chou JC, Cousins CC, Miller JB, et al. Fundus densitometry findings suggest optic disc hemorrhages in primary open-angle glaucoma have an arterial origin. Am J Ophthalmol 2018; 187: 108–116.
- 99. Johnson CA, Nelson-Quigg JM, Morse LS. Wavelength dependent lens transmission properties in diabetics and non-diabetics. In: Proceedings of the basic and clinical applications of vision science, 1997, pp. 217–220. Dordrecht: Springer, https://www.springer.com/in/book/9780792343486
- 100. Zhang L, Deshpande A, Chen X. Denoising vs. deblurring: HDR imaging techniques using moving cameras. In: Proceedings of the 2010 IEEE computer society conference on computer vision and pattern recognition, San Francisco, CA, 13–18 June 2010, pp. 522–529. New York: IEEE.
- 101. Ling WA, Ellerbee AK. The effects of reduced bit depth on optical coherence tomography phase data. Opt Express 2012; 20: 15654–15668.
- 102. Ittarat M, Itthipanichpong R, Manassakorn A, et al. Capability of ophthalmology residents to detect glaucoma using high-dynamic-range concept versus color optic disc photography. J Ophthalmol 2017; 2017: 8209270.
- 103. Yamashita H, Sugimura D, Hamamoto T. RGB-NIR imaging with exposure bracketing for joint denoising and deblurring of low-light color images. In: Proceedings of the 2017 IEEE international conference on acoustics, speech and signal processing (ICASSP), New Orleans, LA, 5–9 March 2017, pp. 6055–6059. New York: IEEE.
- 104. Hernandez-Matas C, Zabulis X, Triantafyllou A, et al. FIRE: fundus image registration dataset. J Model Ophthalmol 2017; 1: 16–28.
- 105. Xia W, Tao L. Million-pixel computational imaging model. In: Proceedings of the 2018 25th IEEE international conference on image processing (ICIP), Athens, 7–10 October 2018, pp. 425–429. New York: IEEE.
- 106. Bartczak P, Fält P, Penttinen N, et al. Spectrally optimal illuminations for diabetic retinopathy detection in retinal imaging. Optical Rev 2017; 24: 105–116.
- 107. Li H, Liu W, Dong B, et al. Snapshot hyperspectral retinal imaging using compact spectral resolving detector array. J Biophotonics 2017; 10: 830–839.
- 108. Ruia S, Saxena S. Spectral domain optical coherence tomography-based imaging biomarkers and hyperspectral imaging. In: Meyer CH, Saxena S, Sadda SR. (eds) Spectral domain optical coherence tomography in macular diseases. New Delhi, India: Springer, 2017, pp. 109–114.
- 109. Kaluzny J, Li H, Liu W, et al. Bayer filter snapshot hyperspectral fundus camera for human retinal imaging. Curr Eye Res 2017; 42: 629–635.
- 110. Myers JS, Fudemberg SJ, Lee D. Evolution of optic nerve photography for glaucoma screening: a review. Clin Exp Ophthalmol 2018; 46: 169–176.
- 111. Palmer DW, Coppin T, Rana K, et al. Glare-free retinal imaging using a portable light field fundus camera. Biomed Opt Express 2018; 9: 3178–3192.
- 112. Rivenson Y, Göröcs Z, Günaydin H, et al. Deep learning microscopy. Optica 2017; 4: 1437–1443.
- 113. Cua M, Lee S, Miao D, et al. Retinal optical coherence tomography at 1 μm with dynamic focus control and axial motion tracking. J Biomed Opt 2016; 21: 026007.
- 114. Fang L, Li S, Cunefare D, et al. Segmentation based sparse reconstruction of optical coherence tomography images. IEEE Trans Med Imaging 2017; 36: 407–421.
- 115. Schlemper J, Caballero J, Hajnal JV, et al. A deep cascade of convolutional neural networks for dynamic MR image reconstruction. IEEE Trans Med Imaging 2018; 37: 491–503.
- 116. Ju MJ, Heisler M, Athwal A, et al. Effective bidirectional scanning pattern for optical coherence tomography angiography. Biomed Opt Express 2018; 9: 2336–2350.
- 117. Köhler T, Budai A, Kraus MF, et al. Automatic no-reference quality assessment for retinal fundus images using vessel segmentation. In: Proceedings of the 2013 IEEE 26th international symposium on computer-based medical systems (CBMS), Porto, 20–22 June 2013, pp. 95–100. New York: IEEE.
- 118. Bendaoudi H, Cheriet F, Manraj A, et al. Flexible architectures for retinal blood vessel segmentation in high-resolution fundus images. J Real-Time Image Process 2018; 15: 31–42.
- 119. Saha SK, Fernando B, Cuadros J, et al. Automated quality assessment of colour fundus images for diabetic retinopathy screening in telemedicine. J Digit Imaging 2018; 31: 869–878.
- 120. Zhao H, Gallo O, Frosio I, et al. Loss functions for image restoration with neural networks. IEEE Trans Comput Imaging 2017; 3: 47–57.
- 121. Wang Z, Simoncelli EP, Bovik AC. Multiscale structural similarity for image quality assessment. In: Proceedings of the 37th Asilomar conference on signals, systems and computers, vol. 2, Pacific Grove, CA, 9–12 November 2003, pp. 1398–1402. New York: IEEE.
- 122. Liba O, Lew MD, SoRelle ED, et al. Speckle-modulating optical coherence tomography in living mice and humans. Nat Commun 2017; 8: 16131.
- 123. Li Y, Xue Y, Tian L. Deep speckle correlation: a deep learning approach toward scalable imaging through scattering media. Optica 2018; 5: 1181–1190.
- 124. Chui TYP, Vannasdale DA, Burns SA. The use of forward scatter to improve retinal vascular imaging with an adaptive optics scanning laser ophthalmoscope. Biomed Opt Express 2012; 3: 2537–2549.
- 125. Zhang P, Manna SK, Miller EB, et al. Aperture phase modulation with adaptive optics: a novel approach for speckle reduction and structure extraction in optical coherence tomography. bioRxiv 2018: 406108.
- 126. Ling Y, Yao X, Hendon CP. Highly phase-stable 200 kHz swept-source optical coherence tomography based on KTN electro-optic deflector. Biomed Opt Express 2017; 8: 3687–3699.
- 127. Liu Z, Tam J, Saeedi O, et al. Trans-retinal cellular imaging with multimodal adaptive optics. Biomed Opt Express 2018; 9: 4246–4262.
- 128. Leitgeb RA, Werkmeister RM, Blatter C, et al. Doppler optical coherence tomography. Prog Retin Eye Res 2014; 41: 26–43.
- 129. Dadkhah A, Zhou J, Yeasmin N, et al. A multimodal imaging platform with integrated simultaneous photoacoustic microscopy, optical coherence tomography, optical Doppler tomography and fluorescence microscopy. In: Proceedings of the photons plus ultrasound: imaging and sensing 2018, vol. 10494. International Society for Optics and Photonics, https://www.spiedigitallibrary.org/conference-proceedings-of-spie/10494/104940Z/A-multimodal-imaging-platform-with-integrated-simultaneous-photoacoustic-microscopy-optical/10.1117/12.2289211.short?SSO=1
- 130. Emami H, Dong M, Nejad-Davarani SP, et al. Generating synthetic CTs from magnetic resonance images using generative adversarial networks. Med Phys 2018; 45: 3627–3636.
- 131. Gharbi M, Chen J, Barron JT, et al. Deep bilateral learning for real-time image enhancement. ACM Trans Graph 2017; 36: 118:1–118:12.
- 132. Kendall A, Gal Y. What uncertainties do we need in Bayesian deep learning for computer vision? In: Guyon I, Luxburg UV, Bengio S, et al. (eds) Advances in neural information processing systems 30. Red Hook, NY: Curran Associates, Inc, 2017, pp. 5574–5584.
- 133. Lundell J, Verdoja F, Kyrki V. Deep network uncertainty maps for indoor navigation. Arxiv:180904891 [Cs, Eess], 2018, http://arxiv.org/abs/1809.04891
- 134. Leibig C, Allken V, Ayhan MS, et al. Leveraging uncertainty information from deep neural networks for disease detection. Sci Rep 2017; 7: 17816.
- 135. Tanno R, Worrall DE, Ghosh A, et al. Bayesian image quality transfer with CNNs: exploring uncertainty in dMRI super-resolution. Arxiv:170500664 [Cs], 2017, http://arxiv.org/abs/1705.00664
- 136. Eaton-Rosen Z, Bragman F, Bisdas S, et al. Towards safe deep learning: accurately quantifying biomarker uncertainty in neural network predictions. Arxiv:180608640 [Cs], 2018, http://arxiv.org/abs/1806.08640
- 137. Cobb AD, Roberts SJ, Gal Y. Loss-calibrated approximate inference in Bayesian neural networks. Arxiv:180503901 [Cs, Stat], 2018, http://arxiv.org/abs/1805.03901
- 138. Yuan Y, Su W, Zhu M. Threshold-free measures for assessing the performance of medical screening tests. Front Public Health 2015; 3: 57.
- 139. Boodhna T, Crabb DP. More frequent, more costly? Health economic modelling aspects of monitoring glaucoma patients in England. BMC Health Serv Res 2016; 16: 611.
- 140. Li H, Zhang L. Multi-exposure fusion with CNN features. In: Proceedings of the 2018 25th IEEE international conference on image processing (ICIP), Athens, 7–10 October 2018, pp. 1723–1727. New York: IEEE.
- 141. Hepp B, Nießner M, Hilliges O. Plan3D: viewpoint and trajectory optimization for aerial multi-view stereo reconstruction. Arxiv:170509314 [Cs], 2017, http://arxiv.org/abs/1705.09314
- 142. Chen M, Li W, Hao Y, et al. Edge cognitive computing based smart healthcare system. Future Gener Comput Syst 2018; 86: 403–411.
- 143. Davoudi A, Malhotra KR, Shickel B, et al. The intelligent ICU pilot study: using artificial intelligence technology for autonomous patient monitoring. Arxiv:180410201 [Cs, Eess], 2018, http://arxiv.org/abs/1804.10201
- 144. Mandel JC, Kreda DA, Mandl KD, et al. SMART on FHIR: a standards-based, interoperable apps platform for electronic health records. J Am Med Inform Assoc 2016; 23: 899–908.
- 145. Li H, Ota K, Dong M. Learning IoT in edge: deep learning for the internet of things with edge computing. IEEE Network 2018; 32: 96–101.
- 146. Chang CK, Oyama K. Guest editorial: a roadmap for mobile and cloud services for digital health. IEEE Trans Serv Comput 2018; 11: 232–235.
- 147. Bittman T. The edge will eat the cloud, 2017, https://blogs.gartner.com/thomas_bittman/2017/03/06/the-edge-will-eat-the-cloud/
- 148. Grand View Research, Inc. Edge computing market size, share & trends analysis report by technology (mobile edge computing, fog computing), by vertical, by organization size, by region, and segment forecasts, 2018–2025, 2018, https://www.grandviewresearch.com/industry-analysis/edge-computing-market
- 149. Wang Z, Yang Z, Dong T. A review of wearable technologies for elderly care that can accurately track indoor position, recognize physical activities and monitor vital signs in real time. Sensors 2017; 17: 341. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 150. NIH. All of us research program, 2018, https://allofus.nih.gov/
- 151. Barik RK, Priyadarshini R, Dubey H, et al. Leveraging machine learning in mist computing telemonitoring system for diabetes prediction. In: Kolhe ML, Trivedi MC, Tiwari S, et al. (eds) Advances in data and information sciences (Lecture notes in networks and systems). Singapore: Springer, 2018, pp. 95–104.
- 152. Yousefpour A, Fung C, Nguyen T, et al. All one needs to know about fog computing and related edge computing paradigms: a complete survey. Arxiv:180805283 [Cs.NI], 2018, https://arxiv.org/abs/1808.05283
- 153. Chen Z, Lin W, Wang S, et al. Intermediate deep feature compression: the next battlefield of intelligent sensing. Arxiv:180906196 [Cs], 2018, http://arxiv.org/abs/1809.06196
- 154. Szydlo T, Sendorek J, Brzoza-Woch R. Enabling machine learning on resource constrained devices by source code generation of the learned models. In: Shi Y, Fu H, Tian Y, et al. (eds) Computational science – ICCS 2018 (Lecture notes in computer science). New York: Springer, 2018, pp. 682–694.
- 155. Nweke HF, Teh YW, Al-garadi MA, et al. Deep learning algorithms for human activity recognition using mobile and wearable sensor networks: state of the art and research challenges. Exp Syst Appl 2018; 105: 233–261.
- 156. Yang G, Deng J, Pang G, et al. An IoT-enabled stroke rehabilitation system based on smart wearable armband and machine learning. IEEE J Transl Eng Health Med 2018; 6: 2100510.
- 157. Sumi H, Takehara H, Miyazaki S, et al. Next-generation fundus camera with full color image acquisition in 0-lx visible light by 1.12-micron square pixel, 4K, 30-fps BSI CMOS image sensor with advanced NIR multi-spectral imaging system. In: Proceedings of the 2018 IEEE symposium on VLSI technology, Honolulu, HI, 2018, pp. 163–164. New York: IEEE. DOI: 10.1109/VLSIT.2018.8510698.
- 158. Aggarwal K, Joty S, Luque LF, et al. Co-morbidity exploration on wearables activity data using unsupervised pre-training and multi-task learning. Arxiv:171209527 [Cs], 2017, http://arxiv.org/abs/1712.09527
- 159. Yeung S, Downing NL, Fei-Fei L, et al. Bedside computer vision – moving artificial intelligence from driver assistance to patient safety. N Engl J Med 2018; 378: 1271–1273.
- 160. Dubey H, Monteiro A, Constant N, et al. Fog computing in medical internet-of-things: architecture, implementation, and applications. In: Khan SU, Zomaya AY, Abbas A. (eds) Handbook of large-scale distributed computing in smart healthcare: scalable computing and communications. Cham: Springer, 2017, pp. 281–321.
- 161. Raut A, Yarbrough C, Singh V, et al. Design and implementation of an affordable, public sector electronic medical record in rural Nepal. J Innov Health Inform 2017; 24: 862.
- 162. Sahu P, Yu D, Qin H. Apply lightweight deep learning on internet of things for low-cost and easy-to-access skin cancer detection. In: Proceedings of the medical imaging 2018: imaging informatics for healthcare, research, and applications, vol. 10579. International Society for Optics and Photonics, https://www.spiedigitallibrary.org/conference-proceedings-of-spie/10579/1057912/Apply-lightweight-deep-learning-on-internet-of-things-for-low/10.1117/12.2293350.short
- 163. Rippel O, Bourdev L. Real-time adaptive image compression. Arxiv:170505823 [Cs, Stat], 2017, http://arxiv.org/abs/1705.05823
- 164. HajiRassouliha A, Taberner AJ, Nash MP, et al. Suitability of recent hardware accelerators (DSPs, FPGAs, and GPUs) for computer vision and image processing algorithms. Signal Process Image Commun 2018; 68: 101–119.
- 165. Fey D, Hannig F. Special issue on heterogeneous real-time image processing. J Real-Time Image Process 2018; 14: 513–515.
- 166. Pagnutti MA, Ryan RE, Cazenavette GJ, et al. Laying the foundation to use Raspberry Pi 3 V2 camera module imagery for scientific and engineering purposes. J Electron Imaging 2017; 26: 013014.
- 167. Shen BY, Mukai S. A portable, inexpensive, nonmydriatic fundus camera based on the Raspberry Pi® computer. J Ophthalmol 2017; 2017: 4526243. DOI: 10.1155/2017/4526243.
- 168. Pérez J, Rodríguez A, Chico JF, et al. Energy-aware acceleration on GPUs: findings on a bioinformatics benchmark. Sustain Comput Inform Syst 2018; 20: 88–101.
- 169. Zhao R, Ng HC, Luk W, et al. Towards efficient convolutional neural network for domain-specific applications on FPGA. Arxiv:180903318 [Cs], 2018, http://arxiv.org/abs/1809.03318
- 170. Bendaoudi H. Flexible hardware architectures for retinal image analysis. PhD Thesis, École Polytechnique de Montréal, https://publications.polymtl.ca/2518/
- 171. Hung KW, Qiu C, Jiang J. Video restoration using convolution neural networks for low-level FPGAs. In: Liu W, Giunchiglia F, Yang B. (eds) Knowledge science, engineering and management (Lecture notes in computer science). New York: Springer, 2018, pp. 255–265.
- 172. Kulkarni A, Page A, Attaran N, et al. An energy-efficient programmable manycore accelerator for personalized biomedical applications. IEEE Trans VLSI Syst 2018; 26: 96–109.
- 173. Jouppi NP, Young C, Patil N, et al. In-datacenter performance analysis of a tensor processing unit. Arxiv:170404760 [Cs], 2017, http://arxiv.org/abs/1704.04760
- 174. Vitish-Sharma P, Acheson AG, Stead R, et al. Can the Sensimed Triggerfish lens data be used as an accurate measure of intraocular pressure? Acta Ophthalmologica 2018; 96: e242–e246.
- 175. Araci IE, Su B, Quake SR, et al. An implantable microfluidic device for self-monitoring of intraocular pressure. Nat Med 2014; 20: 1074–1078.
- 176. Molaei A, Karamzadeh V, Safi S, et al. Upcoming methods and specifications of continuous intraocular pressure monitoring systems for glaucoma. J Ophthalmic Vis Res 2018; 13: 66–71.
- 177. Najjar RP, Sharma S, Drouet M, et al. Disrupted eye movements in preperimetric primary open-angle glaucoma. Invest Ophthalmol Vis Sci 2017; 58: 2430–2437.
- 178. Asfaw DS, Jones PR, Monter VM, et al. Does glaucoma alter eye movements when viewing images of natural scenes? A between-eye study. Invest Ophthalmol Vis Sci 2018; 59: 3189–3198.
- 179. Najjar RP, Sharma S, Atalay E, et al. Pupillary responses to full-field chromatic stimuli are reduced in patients with early-stage primary open-angle glaucoma. Ophthalmology 2018; 125: 1362–1371.
- 180. Zhu Y, Zuo Y, Zhou T, et al. A multi-mode visual recognition hardware accelerator for AR/MR glasses. In: Proceedings of the 2018 IEEE international symposium on circuits and systems (ISCAS), Florence, 27–30 May 2018, pp. 1–5. New York: IEEE.
- 181. Sarkar N. System and method for resonant eye-tracking, 2018, https://patents.google.com/patent/US20180210547A1/en
- 182. Ping P, Hermjakob H, Polson JS, et al. Biomedical informatics on the cloud. Circ Res 2018; 122: 1290–1301.
- 183. Muhammed T, Mehmood R, Albeshri A, et al. UbeHealth: a personalized ubiquitous cloud and edge-enabled networked healthcare system for smart cities. IEEE Access 2018; 6: 32258–32285.
- 184. Kotecha A, Brookes J, Foster PJ. A technician-delivered ‘virtual clinic’ for triaging low-risk glaucoma referrals. Eye 2017; 31: 899–905.
- 185. Caffery LJ, Taylor M, Gole G, et al. Models of care in tele-ophthalmology: a scoping review. J Telemed Telecare. Epub ahead of print 1 January 2017. DOI: 10.1177/1357633X17742182.
- 186. Hong S, Xiao C, Ma T, et al. RDPD: rich data helps poor data via imitation. Arxiv:180901921 [Cs, Stat], 2018, http://arxiv.org/abs/1809.01921
- 187. Verghese A. How tech can turn doctors into clerical workers. The New York Times, 2018, https://goo.gl/6LBm27
- 188. Lerner I, Veil R, Nguyen DP, et al. Revolution in health care: how will data science impact doctor–patient relationships. Front Public Health 2018; 6: 99.
- 189. Rosenberg L, Willcox G, Halabi S, et al. Artificial swarm intelligence employed to amplify diagnostic accuracy in radiology. In: Proceedings of the 2018 IEEE 9th annual information technology, electronics and mobile communication conference (IEMCON), Vancouver, BC, Canada, 2018, p. 6. New York: IEEE.
- 190. Kilkenny MF, Robinson KM. Data quality: ‘garbage in – garbage out’. Health Inf Manag 2018; 47: 103–105.
- 191. Feldman M, Even A, Parmet Y. A methodology for quantifying the effect of missing data on decision quality in classification problems. Commun Stat Theory Methods 2018; 47: 2643–2663.
- 192. Shickel B, Tighe PJ, Bihorac A, et al. Deep EHR: a survey of recent advances in deep learning techniques for electronic health record (EHR) analysis. IEEE J Biomed Health Inform 2018; 22: 1589–1604.
- 193. Eisenberg RS. Shifting institutional roles in biomedical innovation in a learning healthcare system. J Inst Econ 2018: 1–24.
- 194. Thornton T. Tacit knowledge as the unifying factor in evidence based medicine and clinical judgement. Philos Ethics Humanit Med 2006; 1: E2.
- 195. Keane PA, Topol EJ. With an eye to AI and autonomous diagnosis. Npj Digital Medicine 2018; 1: 40.
- 196. DePasse JW, Carroll R, Ippolito A, et al. Less noise, more hacking: how to deploy principles from MIT’s hacking medicine to accelerate health care. Int J Technol Assess Health Care 2014; 30: 260–264.
- 197. Borsci S, Uchegbu I, Buckle P, et al. Designing medical technology for resilience: integrating health economics and human factors approaches. Expert Rev Med Devices 2018; 15: 15–26.