Biomedical Journal
. 2022 Jan 10;46(1):154–162. doi: 10.1016/j.bj.2022.01.001

Angular super-resolution in X-ray projection radiography using deep neural network: Implementation on rotational angiography

Tiing Yee Siow a,c,1, Cheng-Yu Ma b,1, Cheng Hong Toh a,c
PMCID: PMC10105049  PMID: 35026475

Abstract

Background

Rotational angiography acquires radiographs at multiple projection angles to demonstrate superimposed vasculature. However, this comes at the cost of an inherently increased exposure to ionizing radiation. In this paper, building upon a successful deep learning model, we developed a novel technique to super-resolve radiographs at intermediate projection angles, thereby reducing the number of actual projections needed for a diagnosable radiographic procedure.

Methods

Ten models were trained for different levels of angular super-resolution (ASR), denoted as ASRN, where for every N+2 frames, the first and the last frames were submitted as inputs to super-resolve the intermediate N frames.

Results

The results show that large arterial structures were well preserved at all ASR levels. Small arteries were adequately visualized at lower ASR levels but progressively blurred out at higher ASR levels. Noninferiority of image quality was demonstrated for ASR1–4 (99.75% confidence intervals: −0.16 to 0.03, −0.19 to 0.04, −0.17 to 0.01, and −0.15 to 0.05, respectively).

Conclusions

The ASR technique is capable of super-resolving rotational angiographic frames at intermediate projection angles.

Keywords: Angular super-resolution, Interpolation, Projection radiography, Rotational angiography, Deep learning


At a glance commentary

Scientific background on the subject

Standard radiography obtains two-dimensional projections of the three-dimensional anatomy in which structures are overlaid on each other. Conventional radiographic procedures require multiple projections at different angles to delineate superimposed three-dimensional structures, but this comes at the expense of increased ionizing radiation.

What this study adds to the field

We proposed a deep learning-based technique to increase the angular resolution of projection radiography, with implementation on rotational angiography. This technique can super-resolve rotational angiography frames at intermediate projection angles with non-inferior image quality. This method has clinical potential to lessen the number of radiographs required for a diagnosable rotational angiography.

Standard radiography obtains two-dimensional (2D) projections of the three-dimensional (3D) anatomy in which structures are overlaid on each other, without true 3D information available to the interpreter. A long-held rule in diagnostic radiology states that a minimum of 2 projections, preferably acquired at orthogonal angles, should be obtained for most radiographic studies to demonstrate superimposed anatomical structures. In cerebral angiography, posterior/anterior and lateral projections are routinely obtained to evaluate the contrast-filled blood vessels. Often, several projections at different angles between the posterior/anterior and lateral views are acquired to increase diagnostic confidence and facilitate intervention planning. However, even multiple projection angles do not always afford an adequate understanding of the complex cerebral vascular anatomy. To supplement standard biplane angiography, rotational angiography can provide a precise view of the vascular and lesion geometry. It involves rotation of the X-ray tube around the patient during continuous intra-arterial contrast medium infusion. By acquiring a series of radiographs (typically >100) during X-ray tube rotation, it allows visualization of the vessels of interest in different views, each differing by a small angular increment, thus providing 3D structural information.

Despite the clear clinical benefits of multi-projection-angle angiography, this advantage comes at the expense of increased exposure to ionizing radiation. As per the “as low as reasonably achievable” principle, reducing irradiation is an essential priority to minimize its potential harmful effects. A possible solution is to utilize state-of-the-art processing techniques to generate radiographic views at different angles, thereby reducing the actual number of projections needed for rotational angiography. This problem can be recast as an up-sampling task that increases the angular resolution of projection radiography.

Super-resolution [[1], [2], [3], [4]] is a class of image processing techniques that converts low-spatial-resolution images into their high-resolution counterparts. Super-resolution usually consists of 3 major processes: up-sampling (interpolation), deblurring, and denoising [5]. Recently, the deep learning revolution has ushered in transformative progress in super-resolution tasks and produced unprecedented performance [[1], [2], [3], [4]]. Nevertheless, to the best of our knowledge, although super-resolution has been shown to enhance the in-plane resolution of images, its effect on improving angular resolution has not been explored. We hypothesized that, by leveraging deep generative neural networks, it is feasible to super-resolve a 3D imaged volume at intermediate projection angle(s) (i.e., synthesis of intermediate frame(s) between 2 projection angles). In this study, we used rotational angiography as a model to develop a machine learning-based angular super-resolution (ASR) method. Our results show that the intermediate frames of rotational angiograms can be super-resolved with non-inferior image quality compared to the ground truth. A schematic illustration of the study design is shown in Fig. 1.

Fig. 1.

The schematic illustration of study design. The left side of the figure shows conventional rotational angiographic imaging, where more than 100 radiographs are acquired at different projection angles during C-arm rotation. The right side of the figure illustrates angular super-resolution technique, which is a deep learning-based algorithm capable of super-resolving the intermediate frame(s) between two input frames, thus lessening the radiographs required for a diagnosable rotational angiography. Objective and subjective assessments were performed on the generated images.

Materials and methods

This single-center retrospective study was approved by the institutional review board of Chang Gung Medical Foundation, Taiwan (approval number: 201900749B0). All research procedures adhered to the tenets of the Declaration of Helsinki. The requirement for written informed consent was waived.

Datasets

All carotid and vertebral rotational angiograms obtained at our hospital between March 2017 and December 2018 were collected for use in this study. There were no exclusion criteria. Projection radiographs of rotational angiograms stored in DICOM format were exported to a local console via a picture archiving and communication system (PACS; GE Centricity RA1000, GE Healthcare, Barrington, IL, USA). The entire dataset consisted of 107,164 projection radiographs from 1003 rotational angiograms in 723 patients, including 214 patients who had more than 1 examination targeting different blood vessels and/or obtained at different time points. For training and evaluation, the dataset was randomly split into training, validation, and test datasets (80% of all rotational angiograms were assigned to the training dataset); the remaining 20% were equally partitioned into validation and test datasets.
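The random examination-level split described above can be sketched as follows (a minimal illustration; the helper name, fixed seed, and exact partition assignment are our own, not from the study):

```python
import random

def split_dataset(angiogram_ids, seed=42):
    """Shuffle examinations, then split 80% / 10% / 10% into
    training, validation, and test sets."""
    ids = list(angiogram_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle for reproducibility
    n_train = int(0.8 * len(ids))
    n_val = (len(ids) - n_train) // 2  # remaining 20% partitioned equally
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

train, val, test = split_dataset(range(1003))
print(len(train), len(val), len(test))  # 802 100 101
```

Splitting at the examination level (rather than the frame level) prevents frames from the same rotational run leaking between the training and test sets.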

Imaging protocols

Rotational angiograms were performed on 2 different angiographic units (Infinix-I Biplane; Toshiba Medical Systems, Tochigi, Japan and Axiom Artis zee; Siemens Healthineers, Erlangen, Germany). Imaging protocols involved two 5-s rotational runs of the C-arm around the patient's head with 2 cone-beam computed tomography acquisitions (i.e.: mask and fill acquisitions). A total of 108 (Infinix-I Biplane; Toshiba Medical Systems) and 133 (Axiom Artis Zee; Siemens Healthineers) radiographs were acquired per rotation, with angular coverages of 202.2° and 198.0° respectively. The matrix size for each frame was 512 × 512 pixels. Twelve to 20 ml of 1:3 diluted iodinated nonionic contrast medium (Iohexol, 350 mg I/ml) was injected at a rate of 2–5 ml/s via catheters positioned in the examined arteries at the beginning of fill acquisition. Angiographic images were processed automatically in workstations and sent to PACS.
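From these figures, the angular spacing between consecutive frames can be estimated (assuming frames evenly distributed over the stated coverage with endpoints inclusive, hence division by n − 1; this is our reading, not an explicit protocol detail):

```python
def angular_increment(coverage_deg, n_frames):
    """Approximate angle between consecutive frames,
    assuming even spacing with endpoints inclusive."""
    return coverage_deg / (n_frames - 1)

print(round(angular_increment(202.2, 108), 2))  # Toshiba unit: ~1.89 deg/frame
print(round(angular_increment(198.0, 133), 2))  # Siemens unit: 1.5 deg/frame
```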

Model architecture and training

The current model was based on Super SloMo [6], which was originally developed for video frame interpolation. Detailed mathematical formulations and algorithms are available in the Appendix. The current model is referred to as Angular Super-resolution Network (ASRNet) to differentiate it from the original Super SloMo.

ASRNet was trained using the Adam optimizer for 15,000 epochs. The learning rate was initialized to 0.0001 and decreased by a factor of 10 at 100 and 150 epochs. The batch size was 20. The model was implemented in Python 3.7.0 using the PyTorch 1.4.0 package (https://pytorch.org/) and run on an NVIDIA Tesla V100 (32 GB memory) graphics processing unit with CUDA 10.1 and cuDNN 7.6.5.
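The stated step schedule (initial rate 0.0001, divided by 10 at epochs 100 and 150) can be expressed as a small helper, equivalent in effect to PyTorch's MultiStepLR scheduler; the function below is our own sketch, not the authors' training code:

```python
def learning_rate(epoch, base_lr=1e-4, milestones=(100, 150), gamma=0.1):
    """Step-decay schedule: multiply the base rate by `gamma` once
    for every milestone epoch that has been reached."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

print(learning_rate(50))   # ~1e-4 (before any decay)
print(learning_rate(120))  # ~1e-5 (after the first decay)
print(learning_rate(200))  # ~1e-6 (after both decays)
```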

Ten models were trained for different levels of ASR. The ASR level is denoted ASRN, where for every N+2 frames, the first and last frames were submitted as inputs to super-resolve the intermediate N frames, with N = 1, 2, 3, …, 10. The original intermediate images obtained by the angiographic machine were used as the ground truth for training the neural network. Both the objective image fidelity metrics and the subjective image quality assessment were performed using the ground truth as the reference image. The first and last frames used as inputs were denoted I0 and I1, respectively. The inferred intermediate images were designated I′. G0 and G1 were the frame gaps between the inferred image I′ and the input images I0 and I1, respectively.
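Concretely, ASRN partitions a run into windows of N + 2 consecutive frames whose endpoints are acquired and whose N interior frames are inferred. A minimal indexing sketch (our own helper, not the authors' code):

```python
def asr_windows(n_frames, level):
    """For ASR level N, yield (i0, [intermediates], i1) windows where
    frames i0 and i1 are acquired inputs and the N frames between
    them are to be super-resolved. Consecutive windows share endpoints."""
    step = level + 1
    windows = []
    for i0 in range(0, n_frames - step, step):
        i1 = i0 + step
        windows.append((i0, list(range(i0 + 1, i1)), i1))
    return windows

# ASR2 on a 7-frame run: frames 0, 3, 6 are acquired; 1, 2, 4, 5 are inferred
print(asr_windows(7, 2))  # [(0, [1, 2], 3), (3, [4, 5], 6)]
```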

Objective image fidelity evaluation

To evaluate the ASR image fidelity, we used 3 indices: interpolation error (IE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) [7]. IE measures the pixel-wise intensity deviation between the ASR and ground truth image. PSNR represents a ratio between the maximum possible power of an image and the power of corrupting noise that distorts the image. In contrast to IE and PSNR which quantify pixel-wise variations, SSIM is a perception-based index that gauges image fidelity based on degradation of structural information. Similar images have lower IE, higher PSNR, and higher SSIM values. The mathematical definitions of the indices are as follows:

$$\mathrm{IE}=\sqrt{\frac{1}{n}\sum_{x,y}\big[I(x,y)-I'(x,y)\big]^{2}} \tag{1}$$

$$\mathrm{MSE}=\frac{1}{n}\sum_{x,y}\big[I(x,y)-I'(x,y)\big]^{2} \tag{2}$$

$$\mathrm{PSNR}=10\log_{10}\!\left(\frac{\mathrm{MAX}_I^{2}}{\mathrm{MSE}}\right) \tag{3}$$

where I(x,y) is the ground-truth image, I′(x,y) is the inferred image, and n is the number of pixels in the image. MAX_I is the maximum possible pixel value of the ground-truth image I(x,y).

$$\mathrm{SSIM}(x,y)=\frac{(2\mu_x\mu_y+c_1)(2\sigma_{xy}+c_2)}{(\mu_x^{2}+\mu_y^{2}+c_1)(\sigma_x^{2}+\sigma_y^{2}+c_2)} \tag{4}$$

where the subscripts x and y designate the ground-truth and the predicted image respectively, μ is the image mean, σ² is the image variance, σ_xy is the covariance between the two images, c1 = (k1L)², c2 = (k2L)², L is the dynamic range of the pixel values, and k1 = 0.01, k2 = 0.03 by default.
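These metrics can be computed directly from the definitions; below is a NumPy sketch. Note that the SSIM here is the global (whole-image) form of Eq. (4), whereas practical implementations often use a windowed variant; the global form is an assumption on our part.

```python
import numpy as np

def interpolation_error(gt, pred):
    """Root-mean-square pixel-wise deviation (lower is better)."""
    return float(np.sqrt(np.mean((gt.astype(float) - pred.astype(float)) ** 2)))

def psnr(gt, pred, max_i=255.0):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    mse = np.mean((gt.astype(float) - pred.astype(float)) ** 2)
    return float(10 * np.log10(max_i ** 2 / mse))

def ssim_global(gt, pred, l=255.0, k1=0.01, k2=0.03):
    """Global structural similarity over the whole image (higher is better)."""
    x, y = gt.astype(float), pred.astype(float)
    c1, c2 = (k1 * l) ** 2, (k2 * l) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

For identical images, IE is 0 and SSIM is 1; shifting every pixel by a constant k gives IE = k.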

To investigate how the ground truth image influences ASR image fidelity, we calculated Gray Level Co-occurrence Matrix [8] texture features of the ground truth image, including entropy, contrast, correlation, energy, and homogeneity. The mathematical definitions of the texture features are included in the Appendix. The texture features were extracted using MATLAB version R2018b (MathWorks, Natick, MA, USA).
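As an illustration of how such features are derived, the sketch below builds a single-offset, symmetric, normalized GLCM from scratch in NumPy and computes four of the five features (correlation is omitted for brevity). The study itself used MATLAB's built-in routines, so the quantization and offset choices here are our assumptions:

```python
import numpy as np

def glcm_features(img, levels=8):
    """GLCM for the horizontal offset (0, 1), symmetric and normalized,
    with entropy, contrast, energy, and homogeneity features."""
    # Quantize intensities into `levels` gray bins
    q = (img.astype(float) / (img.max() + 1e-12) * (levels - 1)).round().astype(int)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
        glcm[b, a] += 1  # symmetric counting
    p = glcm / glcm.sum()  # normalize to a joint probability matrix
    i, j = np.indices(p.shape)
    nz = p[p > 0]
    return {
        "entropy": float(-(nz * np.log2(nz)).sum()),
        "contrast": float((p * (i - j) ** 2).sum()),
        "energy": float((p ** 2).sum()),
        "homogeneity": float((p / (1.0 + np.abs(i - j))).sum()),
    }
```

A perfectly uniform image has zero entropy and contrast, and energy and homogeneity of 1, the least "complex" texture possible.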

Subjective assessment

Two board-certified neuroradiologists with 3 and 15 years of experience in angiography interpretation independently assessed the quality of the rotational angiograms. To evaluate image quality, 100 ASR and ground truth image-pairs were randomly selected from each ASR level. All selected images were arranged in a random order and reviewed on an image-by-image basis. The readers were blinded to the method used to generate the images. Image quality was rated on a scale of 1–5 with the higher value corresponding to better quality: 1 (nondiagnostic: poor visualization of first-order arteries), 2 (poor: acceptable visualization of first-order arteries), 3 (average: acceptable visualization of the first and second-order arteries), 4 (good: acceptable visualization of the first, second, and third-order arteries), and 5 (excellent: acceptable visualization of the perforating arteries).

Statistical analysis

All data were presented as mean ± standard deviation or median, as appropriate. Multivariate linear regression models were used to analyze the relationships between the selected variables and ASR image fidelity. Noninferiority tests were used to compare image quality between the ASR images and the ground truth. The noninferiority margin was set at −0.20 [9]. The two-sided α-level was set at 0.025. Inter-reader agreement on image quality was assessed with the intraclass correlation coefficient; values of <0.50, 0.50–0.75, 0.75–0.90, and 0.90–1.00 indicated poor, moderate, good, and excellent agreement, respectively [10]. Paired continuous data were analyzed with a two-sided Student paired t-test. Statistics were computed using R (version 3.6.3; http://www.r-project.org/). Statistical plots were generated with the ggplot2 package (https://ggplot2.tidyverse.org/). A P value < 0.05 was considered statistically significant. Bonferroni correction was performed for multiple comparisons.
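The noninferiority decision can be sketched as follows, using a large-sample normal approximation for the paired-difference confidence interval (the study's exact procedure in R may differ); α = 0.0025 corresponds to the two-sided 99.75% CIs reported in the Results, i.e., 0.025 Bonferroni-corrected over 10 ASR levels:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def noninferiority(asr_scores, gt_scores, margin=-0.20, alpha=0.0025):
    """Two-sided (1 - alpha) CI for the mean paired score difference
    (ASR minus ground truth), via a large-sample normal approximation.
    Noninferior if the lower CI bound does not cross the margin."""
    diffs = [a - g for a, g in zip(asr_scores, gt_scores)]
    m, se = mean(diffs), stdev(diffs) / sqrt(len(diffs))
    z = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for 99.75% CI
    lo, hi = m - z * se, m + z * se
    return lo, hi, lo > margin
```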

Results

Study population characteristics

The characteristics of the training, validation, and test data sets are shown in Table 1.

Table 1.

Characteristics of patients with angiograms included in the study.

Characteristics | Training | Validation | Test
No. of examinations 802 101 100
No. of patients 618 100 99
 Male patients 287 (46.44) 47 (47.00) 49 (49.49)
 Female patients 331 (53.56) 53 (53.00) 50 (50.51)
Mean age (year)a
 Male patients 55.06 ± 16.00 58.56 ± 14.79 51.84 ± 16.20
 Female patients 56.17 ± 12.92 52.71 ± 15.29 54.03 ± 15.68
Imaged Vessel
 Common carotid arteries 31 (3.87) 7 (6.93) 2 (2.00)
 Internal carotid arteries 646 (80.55) 77 (76.24) 84 (84.00)
 External carotid arteries 5 (0.62) 1 (0.99) 2 (2.00)
 Vertebrobasilar arteries 120 (14.96) 16 (15.84) 12 (12.00)
Imaging platform
 Toshiba Medical System 679 (84.66) 88 (87.13) 82 (82.00)
 Siemens Healthcare 123 (15.34) 13 (12.87) 18 (18.00)

Note: Numbers in parentheses are percentages.

Training, validation, and test sets were split 80%, 10%, 10%, respectively.

a Data are mean ± standard deviation.

ASR characterization

A representative rotational angiogram at different ASR levels [Fig. 2] shows that large arterial structures (i.e., the internal carotid artery) are well preserved at all ASR levels with high local SSIM. Fine details such as the distal anterior cerebral artery [white arrows in Fig. 2, upper panel] as well as other small distal arterial branches are adequately visualized at lower ASR levels. Notably, as the ASR level increased, fine structures progressively blurred out with a concomitant decrease in local SSIM. Magnified images [Fig. 2, lower panel] focusing on the perforating arteries arising from the proximal anterior cerebral and middle cerebral arteries show visualization of the perforating arteries at lower ASR levels, with progressive blurring as the ASR level increases.

Fig. 2.

Structural visibility in angular super-resolution (ASR) images. Upper panel shows the oblique anterior-posterior projection of a normal internal carotid rotational angiogram. (A)–(F) Ground truth, ASR2, ASR4, ASR6, ASR8 and ASR10 images, respectively. (G)–(L) Corresponding structural similarity index measure (SSIM) maps. Large arterial structures (such as the internal carotid artery) are well preserved at all ASR levels with high local SSIM. Fine details such as the distal anterior cerebral artery (white arrows in upper panel) as well as other small distal arterial branches are adequately visualized at lower ASR levels. Note that as the ASR level increases, the fine structures are progressively blurred out with a concomitant decrease in local SSIM. (M)–(R) Corresponding magnified images of the red rectangle area in (A)–(F). Perforating arteries are clearly visible at lower ASR levels, with progressive blurring as the ASR level increases.

A carotid rotational angiogram from a patient with a posterior communicating artery aneurysm [Fig. 3 upper panel] shows that ASR images successfully reproduce the geometry of the aneurysm at different projection angles. Another carotid rotational angiogram from a patient with a segmental stenosis of the inferior division of the middle cerebral artery [Fig. 3 lower panel] shows that the ASR images faithfully visualize the stenotic artery at different projection angles.

Fig. 3.

Lesion conspicuity in angular super-resolution (ASR) images. Upper panel shows representative images of an internal carotid rotational angiogram at different projection angles from a patient with posterior communicating artery aneurysm (white arrow). (A)–(E) Ground truth images and (F)–(J) are corresponding ASR5 images. ASR images successfully reproduce the geometry of the aneurysm at different projection angles. Lower panel shows images of an internal carotid angiogram showing segmental stenosis of the middle cerebral artery inferior division. (K)–(O) Ground truth images and (P)–(T) are corresponding ASR5 images. ASR images faithfully show the stenotic artery at different projection angles.

Image fidelity metrics

The IE, PSNR, and SSIM of ASR images are shown in Table 2 and visualized as boxplots [Fig. 4A–C]. All 3 metrics revealed a consistent pattern of degrading image fidelity (higher IE, lower PSNR, and lower SSIM) and increasing dispersion of values with increasing ASR level. Heatmap plots of mean IE, PSNR, and SSIM for G0 and G1 are shown in [Fig. 4D–F]. Entries belonging to the same ASR level align in a “staircase” pattern in the heatmap matrix. Increasing G0 and G1 resulted in a decrease in image fidelity for all 3 metrics. Within the same ASR level, the lowest image fidelity was observed when the inferred image was roughly equidistant from I0 and I1.

Table 2.

Image fidelity metrics of ASR images in different angular super-resolution tasks.

ASR1 ASR2 ASR3 ASR4 ASR5 ASR6 ASR7 ASR8 ASR9 ASR10
IE 6.38 ± 1.47 6.87 ± 1.67 7.61 ± 1.98 8.08 ± 2.16 8.56 ± 2.35 9.05 ± 2.52 9.54 ± 2.73 9.72 ± 2.76 10.25 ± 2.95 10.53 ± 3.00
PSNR 32.24 ± 1.91 31.63 ± 2.00 30.77 ± 2.13 30.27 ± 2.18 29.78 ± 2.23 29.30 ± 2.26 28.86 ± 2.33 28.70 ± 2.31 28.25 ± 2.36 28.00 ± 2.34
SSIM 0.77 ± 0.06 0.75 ± 0.06 0.73 ± 0.07 0.71 ± 0.07 0.70 ± 0.08 0.68 ± 0.08 0.67 ± 0.08 0.67 ± 0.08 0.65 ± 0.09 0.64 ± 0.09

Abbreviations: PSNR: Peak Signal Noise Ratio; IE: Interpolation Error; SSIM: Structural Similarity Index Measure.

Data are mean ± standard deviation.

Fig. 4.

Effects of angular super-resolution (ASR) level and frame gap on image fidelity. (A)–(C) Respective boxplots for interpolation error (IE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) metrics versus ASR levels. There is a consistent trend of decreasing image fidelity (i.e., higher IE, lower PSNR, and lower SSIM) with increasing ASR levels in all 3 metrics. (D)–(F) Heatmap plots of IE, PSNR, and SSIM metrics for G0 and G1. Increasing G0 and G1 results in lower image fidelity. G0 = frame gap between inferred image I′ and reference image I0; G1 = frame gap between inferred image I′ and reference image I1.

Multivariate linear regression analysis [Table 3] included the following independent variables: frame gap (G0, G1, and the G0G1 interaction term; all p < 0.001 for IE, PSNR, and SSIM), texture statistics of the ground truth image (entropy, contrast, correlation, energy, and homogeneity; all p < 0.001 for IE, PSNR, and SSIM), type of angiographic equipment (Toshiba: p < 0.001, p = 0.193, and p < 0.001 for IE, PSNR, and SSIM, respectively), and imaged vessel (external carotid artery: all p < 0.001 for IE, PSNR, and SSIM; internal carotid artery: p = 0.006, p = 0.014, and p < 0.001, respectively; vertebral artery: p < 0.001, p = 0.007, and p < 0.001, respectively). All were statistically significant predictors of the 3 image fidelity metrics, except the type of angiographic equipment, which was not predictive of PSNR.

Table 3.

Multivariable linear regression analyses of the relationships between the image fidelity metrics and other variables.

Predictors | IE (Coefficient, SE, p-value) | PSNR (Coefficient, SE, p-value) | SSIM (Coefficient, SE, p-value)
Intercept −3.933 1.408 0.005∗∗ 51.045 1.189 <0.001∗∗∗ −1.650 0.049 <0.001∗∗∗
Frame Gap
 G0 0.079 0.003 <0.001∗∗∗ −0.136 0.003 <0.001∗∗∗ −0.004 <0.001 <0.001∗∗∗
 G1 0.095 0.003 <0.001∗∗∗ −0.148 0.003 <0.001∗∗∗ −0.005 <0.001 <0.001∗∗∗
 G0 G1a 0.156 0.001 <0.001∗∗∗ −0.132 0.001 <0.001∗∗∗ −0.004 <0.001 <0.001∗∗∗
Ground Truth Image
 Entropy 3.630 0.026 <0.001∗∗∗ −3.786 0.022 <0.001∗∗∗ −0.152 <0.001 <0.001∗∗∗
 Contrast 29.843 0.560 <0.001∗∗∗ −26.103 0.472 <0.001∗∗∗ 0.561 0.019 <0.001∗∗∗
 Correlation 15.129 0.193 <0.001∗∗∗ −10.709 0.163 <0.001∗∗∗ −0.022 0.007 <0.001∗∗∗
 Energy 13.962 0.084 <0.001∗∗∗ −12.007 0.071 <0.001∗∗∗ −0.522 0.003 <0.001∗∗∗
 Homogeneity −37.643 1.428 <0.001∗∗∗ 23.154 1.206 <0.001∗∗∗ 3.594 0.049 <0.001∗∗∗
Machine
 Toshiba 0.664 0.034 <0.001∗∗∗ −0.038 0.029 0.19 0.120 0.001 <0.001∗∗∗
 Siemens Reference Reference Reference
Imaged Vessel
 ECA 0.314 0.043 <0.001∗∗∗ −0.373 0.036 <0.001∗∗∗ −0.014 0.001 <0.001∗∗∗
 ICA −0.086 0.031 0.006∗∗ 0.065 0.027 0.01∗ 0.005 0.001 <0.001∗∗∗
 VA 0.115 0.034 <0.001∗∗∗ −0.077 0.028 0.007∗∗ −0.009 0.001 <0.001∗∗∗
 CCA Reference Reference Reference

Abbreviations: IE: interpolation error; PSNR: peak signal-to-noise ratio; SSIM: structural similarity index; SE: standard error; G0: frame gap between predicted image I′ and reference image I0; G1: frame gap between predicted image I′ and reference image I1; ECA: external carotid artery; ICA: internal carotid artery; VA: vertebral artery; CCA: common carotid artery.

∗p < 0.05, ∗∗p < 0.01, ∗∗∗p < 0.001.

Adjusted R2 for IE, PSNR and SSIM models = 0.797, 0.831 and 0.743 respectively.

a Interaction term.

Reader assessment

The assessment of image quality showed moderate agreement between the 2 readers (intraclass correlation coefficient, 0.63; 95% CI 0.53–0.70). Noninferiority of image quality was demonstrated for ASR1–4, since the lower margin of the 2-sided 99.75% CI did not cross the predefined noninferiority margin of −0.2 score points. The 99.75% CIs for ASR1–10 were: −0.16 to 0.03, −0.19 to 0.04, −0.17 to 0.01, −0.15 to 0.05, −0.28 to −0.01, −0.31 to −0.04, −0.33 to −0.04, −0.26 to −0.01, −0.28 to −0.03, and −0.37 to −0.05, respectively.

Discussion

Here we explored the utility of an ASR technique based on deep learning and validated the methodology in an independent hold-out test dataset. Through implementation of the Super SloMo [6] architecture, under-sampled rotational angiogram views were super-resolved at intermediate projection angles. In assessments of image quality by 2 neuroradiologists, images with low-level ASR (ASR1–4) maintained non-inferior quality relative to the ground truth. To our knowledge, no similar studies have been reported in the literature.

The ASR technique could have a potential impact in clinical settings. Currently, the absorbed brain dose associated with a typical rotational angiogram is about 5–37 mGy [[11], [12], [13], [14]]. As shown in our study, over two-thirds of the projections in a rotational angiogram can be synthesized retrospectively using ASR without jeopardizing their quality, which roughly translates into a two-thirds reduction in radiation dose.
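The arithmetic behind this estimate: under ASRN, consecutive windows share their endpoint frames, so roughly only 1 in every N + 1 frames must actually be acquired. A back-of-envelope sketch (ignoring boundary frames):

```python
def dose_reduction(level):
    """Approximate fraction of frames (and hence dose) saved at ASR
    level N: only 1 in every N + 1 frames is acquired, endpoints shared."""
    return level / (level + 1)

for n in (1, 2, 4):
    print(f"ASR{n}: ~{dose_reduction(n):.0%} fewer acquired frames")
# ASR2 alone already saves ~67%, in line with the 'two-thirds' estimate above
```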

Typical reported values for PSNR and SSIM in single-image super-resolution are 25–40 dB and 0.6–0.9, respectively [[15], [16], [17]]. However, these results should not be extrapolated and compared with the present technique, since ASR involves the more complex task of inferring a whole intermediate frame rather than intermediate pixels within a single image. In our study, several factors were shown to affect ASR image fidelity: the frame gaps between the inferred and input images, the texture characteristics of the ground truth image, the angiographic equipment used, and the blood vessel imaged. As for the frame gap, we hypothesize that, moving away from the input frame, the magnitude of the optical flow increases with a concomitant escalation of error. Ground truth image textural features also had a significant impact on ASR image fidelity: generally, increased textural complexity (higher entropy, higher contrast, higher energy, higher correlation, and lower homogeneity) has negative effects on ASR image fidelity. Angiographic equipment type is a significant predictor of ASR image fidelity; since the X-ray tube, sensor hardware, default image acquisition protocols, and post-processing algorithms differ among vendors, the resultant images have different characteristics that influence ASR image fidelity. Lastly, the training dataset suffered from an imbalanced number of imaged blood vessels. Our results reflect that imaged blood vessels represented by more samples in the training dataset yield better ASR image fidelity, consistent with the common observation in machine learning that larger training datasets result in improved performance.

A closely related body of work is that of multi-view generation, in which models are developed to predict multiple views of the same object given single or limited views as input. Much of this work uses state-of-the-art deep learning methods. Yan et al. [18] proposed Perspective Transformer Nets, which learn objects' volumetric shape. Zhou et al. [19] used a convolutional neural network to predict an appearance flow field, i.e., the coordinate vectors specifying how to reconstruct the target view from the pixels of the input image. Eslami et al. [20] proposed the generative query network, which constructs a latent representation to predict the scene at an arbitrary query viewpoint. Meanwhile, the generative adversarial network (GAN) [21] has also been a popular solution for the multi-view generation task [22,23]. Although the abovementioned works have achieved considerable success, it is worth noting that some have thus far been restricted to synthetic environments rather than real-world images. GAN-based models are generally capable of producing realistic output; nevertheless, they tend to fill in extraneous details and introduce hallucinated information [24]. To address this shortcoming of GAN-based methodologies, Mirza and Osindero proposed the conditional GAN (cGAN) [25], which simply supplies extra information to the generator and discriminator to constrain the generated output. Recently, Ying et al. [26] used the concept of cGAN to reconstruct a 3D CT volume from the posterior-anterior and lateral views of 2D X-ray images; their results show that the synthetic CT reproduces gross structures accurately, but small anatomies suffer from artifacts. In consideration of all these approaches, optical flow is probably by far the most suitable method for the ASR technique, which deals with information-rich medical images. Although new techniques offering superior performance in the frame interpolation task are emerging, the performance of Super SloMo remains comparable with these newer methods [27,28], while its architecture is simple and concise. Moreover, the interpolation deep learning algorithm in our proposed pipeline can be replaced by more advanced approaches in the future.

There are limitations of our present study that need to be addressed. First, this is a retrospective study using data from a single center; whether the methodology generalizes to various clinical settings in other institutions needs to be explored. Second, ASR5–10 were unable to demonstrate non-inferior image quality compared to the ground truth, as small vascular structures (i.e., perforating arteries) tended to blur out at higher ASR levels. Nevertheless, our work demonstrates that the ASR technique is feasible in a clinical setting, where lower-level ASR (ASR1–4) results in non-inferior image quality as perceived by expert observers. With rapid progress in neural network design as well as the increasing availability of data, continuous improvement in the ASR technique allowing for better quality of generated images can be anticipated.

In conclusion, we developed and validated an ASR technique capable of super-resolving frames at intermediate projection angles from under-sampled rotational angiograms. We anticipate that this ASR technique may have clinical potential in lessening the number of views required in rotational angiograms, thus reducing radiation doses. Integration of the ASR algorithm into the image post-processing pipelines of angiographic machines may have a significant impact on clinical practice in the future.

Funding

This work was supported by the Maintenance Project of the Center for Artificial Intelligence in Medicine (Grant CLRPG3H0012, CIRPG3H0012) at Chang Gung Memorial Hospital.

Conflicts of interest

The authors have filed U.S. (17210604) and Taiwan (109114132) patent applications related to this work.

Acknowledgements

We thank Dr. Mauricio Castillo for editing and reviewing this manuscript.

Footnotes

Peer review under responsibility of Chang Gung University.

Appendix A

Supplementary data to this article can be found online at https://doi.org/10.1016/j.bj.2022.01.001.

Appendix A. Supplementary data

The following is the Supplementary data to this article:

Multimedia component 1
mmc1.docx (44.5KB, docx)



Articles from Biomedical Journal are provided here courtesy of Chang Gung University
