Author manuscript; available in PMC: 2024 Apr 1.
Published in final edited form as: Acad Radiol. 2022 Jun 9;30(4):739–748. doi: 10.1016/j.acra.2022.05.005

Deep Learning–Based Digitally Reconstructed Tomography of the Chest in the Evaluation of Solitary Pulmonary Nodules: A Feasibility Study

Ayis Pyrros a,*,#, Andrew Chen b,#, Jorge Mario Rodríguez-Fernández c,#, Stephen M Borstelmann d, Patrick Cole b, Jeanne Horowitz e, Jonathan Chung f, Paul Nikolaidis e, Viveka Boddipalli a, Nasir Siddiqui a, Melinda Willis a, Adam Eugene Flanders g, Sanmi Koyejo b
PMCID: PMC9732145  NIHMSID: NIHMS1815331  PMID: 35690536

Abstract

Rationale and Objectives

Computed tomography (CT) is preferred for evaluating solitary pulmonary nodules (SPNs), but access to CT may be limited; in addition, overlapping anatomy can hinder detection of SPNs on chest radiographs. We developed and evaluated the clinical feasibility of a deep learning algorithm that generates digitally reconstructed tomography (DRT) images of the chest from digitally reconstructed frontal and lateral radiographs (DRRs) and used them to detect SPNs.

Methods

This single-institution retrospective study included 637 patients with noncontrast helical CT of the chest (mean age 68 years, median age 69 years, standard deviation 11.7 years; 355 women) between 11/2012 and 12/2020, with SPNs measuring 10–30 mm. A deep learning model was trained on 562 patients, validated on 60 patients, and tested on the remaining 15 patients. Diagnostic performance (SPN detection) from planar radiography (DRRs and CT scanograms, PR) alone or with DRT was evaluated by two radiologists in an independent, blinded fashion. The quality of the DRT SPN image in terms of nodule size and location, morphology, and opacity was also evaluated and compared with the ground-truth CT images.

Results

Diagnostic performance was higher from DRT plus PR than from PR alone (area under the receiver operating characteristic curve 0.95–0.98 versus 0.80–0.85; P < 0.05). DRT plus PR enabled diagnosis of SPNs in 11 more patients than PR alone. Interobserver agreement was 0.82 for DRT plus PR and 0.89 for PR alone; and interobserver agreement for size and location, morphology, and opacity of the DRT SPN was 0.94, 0.68, and 0.38, respectively.

Conclusion

For SPN detection, DRT plus PR showed better diagnostic performance than PR alone. Deep learning can be used to generate DRT images and improve detection of SPNs.

Keywords: Machine learning, synthetic imaging, chest radiographs, computed tomography, digital reconstruction, solitary pulmonary nodule

Summary:

Digitally reconstructed tomograms in combination with frontal and lateral radiographic projections showed higher diagnostic performance than frontal and lateral radiographic projections alone in identifying solitary pulmonary nodules.

Introduction

The two-view chest radiograph (CXR) remains one of the most common radiological examinations globally [1, 2], encoding complex three-dimensional thoracic anatomy in an overlapping two-dimensional representation. The overall reported incidence of solitary pulmonary nodules (SPNs) is 8–51% [3, 4]. In the general population, SPNs are found incidentally on 0.1–0.2% of CXRs and 13% of computed tomography (CT) scans [5]. Overall, lung cancer is the leading cause of cancer death worldwide, comprising 18.4% of all cancer deaths [6, 7].

Although SPNs are detected by conventional CXRs, CT imaging is preferred for further evaluation and clinical management. SPNs can be difficult to detect and characterize on routine CXRs because of overlapping anatomy, and there is significant interobserver variation [8]. Moreover, as many as 40% of SPNs are missed on CXRs [8–10]. However, CT scans come with additional expense and increased patient radiation exposure and may not be readily available in developing countries [6]. Two-thirds of the world population has no access to diagnostic imaging [11], and CT scanners require significant infrastructure and maintenance, which creates multiple logistic issues [6]. For instance, replacement parts in many developing countries are not readily available, with a large percentage of radiology equipment rendered nonfunctional [12]. In developing countries, CXRs remain a primary diagnostic tool, given their relatively lower cost [11].

Techniques that enhance CXRs by removing overlapping anatomy can potentially increase their sensitivity as a screening tool for detecting SPNs [13]. Computer-aided detection of lung nodules has a reported sensitivity of 63–74% [6, 13]. Deep learning–based image reconstruction has been increasingly used to improve the performance of CT and magnetic resonance imaging by reducing image noise, increasing signal-to-noise ratio, and decreasing image acquisition times [14, 15]. Applications of deep learning to conventional CXRs have included lowering dose and removing overlapping structures such as the ribs [13, 16]. Despite work on three-dimensional reconstruction from planar CXR images [17–19], its potential use in the study of SPNs remains unknown.

A CT scan begins with a rapid coronal localizing projection, the scanogram, typically acquired after a CXR. Consider, however, cases in which a CXR is the only modality available for follow-up after CT. Comparing the CT with the CXR then depends on the radiologist's experience, with the scanogram as the only planar aid, yet scanograms are not generally considered equivalent to CXRs [20]. How could this comparison be improved?

In this study, we considered how to generate something more useful than the scanogram to facilitate comparison between CT data and follow-up CXRs. We developed and assessed the diagnostic performance of a re-encoding deep learning algorithm that produces digitally reconstructed tomogram (DRT) images of the chest from digitally reconstructed frontal and lateral CXRs (DRRs) for the detection and characterization of SPNs. We present a novel approach for generating DRT images from DRR projections and demonstrate the feasibility and benefit of synthetic images in augmenting the interpretation and assessment of SPNs.

Methods

Patient Characteristics

This retrospective study was approved by the Institutional Review Board of our institution. The requirement for informed consent was waived because of the anonymous and retrospective nature of the study. Electronic health records were searched for all non-contrast CT chest radiology reports from 11/13/2012 to 10/12/2021 using the regular expression "nodule". Regular expressions were then used to identify reported nodules measuring between 10 and 30 mm. Of the 11,290 non-contrast studies with the stem keyword "nodule" in the radiology report, only 694 consecutive studies had reported nodules >10 mm, and 637 of those had complete reconstructed coronal and sagittal images (Figure 1). For exams without lung nodules, 40 consecutive patients were selected from the database, taking care not to overlap with other training, validation, and test images. Images were obtained from multislice CT scanners across multiple sites with the following models: GE Revolution EVO, GE LightSpeed VCT, GE Discovery CT750 HD, and GE Optima CT660 (GE Healthcare, Waukesha, Wis). No specific imaging protocol was used for this retrospective study, and all CT studies were acquired helically at 120 kVp with variable mAs. Studies were extracted and anonymized from the PACS using a scripted method (SikuliX) [21].

Figure 1:

Flowchart of patient inclusion process and image analysis.

Data Preprocessing

The dataset was first preprocessed to create image uniformity, because CTs typically have a variable number of slices, dependent on body habitus and CT slice thickness. We used three-dimensional spline interpolation on all CTs, with a lower limit of 128 slices, to avoid generating potential inaccuracies from intermediate slices. All input DRRs and output CT slices were resized to 256 × 256 pixels, using a Lanczos filter for downsampling from the Python Imaging Library (PIL). To allow radiologists to make direct comparisons and to mitigate malalignment between the planar radiographic projections and the DRT images, we generated frontal and lateral DRRs by taking the mean of the set of preprocessed coronal and sagittal CT slices. We did not use conventional CXRs in this feasibility study, because in most cases a recent corresponding radiograph was not available for direct comparison with the CT study. By generating this planar projection, we ensured that the DRRs were registered properly with the CT study; this projection was used to create the coronal and sagittal DRRs. All programs were run in Python (Python 3.6; Python Software Foundation, Wilmington, Del.).
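The preprocessing steps above can be sketched in a few lines of NumPy. This is an illustrative reduction: the study used three-dimensional spline interpolation and PIL's Lanczos filter, whereas this sketch uses plain linear interpolation along the slice axis; the fixed slice count and the mean-projection DRR follow the description.

```python
import numpy as np

def resample_slices(volume: np.ndarray, n_slices: int = 128) -> np.ndarray:
    """Interpolate a CT volume (slices, H, W) to a fixed slice count.
    Linear interpolation along the slice axis stands in for the paper's
    3-D spline interpolation."""
    s = volume.shape[0]
    positions = np.linspace(0, s - 1, n_slices)   # fractional source indices
    lo = np.floor(positions).astype(int)
    hi = np.minimum(lo + 1, s - 1)
    frac = (positions - lo)[:, None, None]
    return (1 - frac) * volume[lo] + frac * volume[hi]

def make_drr(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Mean projection along one axis yields a synthetic radiograph that is
    registered to the CT by construction (frontal from the coronal stack,
    lateral from the sagittal stack)."""
    return volume.mean(axis=axis)

ct = np.random.rand(97, 256, 256)    # variable native slice count
coronal = resample_slices(ct, 128)   # fixed (128, 256, 256) model target
frontal_drr = make_drr(coronal)      # (256, 256) input projection
```

Because the DRR is computed directly from the same resampled volume the model is trained to reproduce, projection and target stay aligned without any explicit registration step.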

Model

The model was inspired by the 3D-R2N2 [22] autoencoder architecture with long short-term memory (LSTM) cells between the encoder and decoder stages. Here we made a variation of U-Net [23], a popular image segmentation network for medical images, to generate images. The modified version of U-Net takes in two input DRRs (frontal and lateral views) and encodes them separately, as seen in Figure 2. The encoder extracts the features from each input DRR. The encoder consists of two convolutional blocks, which contain a convolutional layer, rectified linear unit (ReLU), batch normalization (BatchNorm), and a max pooling layer. Batch normalization is used to reduce the covariance shift of hidden layer distributions to improve training conditions of deep neural networks [24]. The max pooling layer is used to halve the size of the image. The convolutional LSTM layer in the bottleneck stage learns the extracted features from the frontal DRR at time 0 and the lateral DRR at time 1. The LSTM is designed to use features from the first radiograph passed in to perform generation and then take the second set of features to fine-tune the generated output. At time 0, the LSTM gates and memory cells are initialized and updated for the first time. Then, at time 1, the memory cells from the first hidden stage are updated according to the new features from the second input. After the convolutional LSTM layer, the features are passed into a dilated residual bottleneck convolutional network. That network takes advantage of the sparse nature of radiographs through the dilated convolutions. The dilations are of size 2, 4, 6, and 8. A sum of the results of the different dilations allows us to capture the various features in the space of the radiograph. Then, the output of the bottleneck is passed into the decoder stage, which creates the DRT image. We use a transposed convolution to learn each upsample until the image is the desired size. 
The output is passed into two convolutional blocks, which decrease the number of channels. Each stage in the decoder takes in the residual from the corresponding encoder stage to ensure the previously captured features are used to further improve the output.
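The dilated residual bottleneck described above can be illustrated with a single-channel NumPy toy. The real model operates on multi-channel feature maps inside the network; the identity (residual) path shown here is an assumption consistent with the "residual" naming, and the 3 × 3 kernel is illustrative.

```python
import numpy as np

def dilated_conv2d(x: np.ndarray, kernel: np.ndarray, dilation: int) -> np.ndarray:
    """'Same'-padded 3x3 convolution with holes: kernel taps are spaced
    `dilation` pixels apart, enlarging the receptive field at no extra cost."""
    k = kernel.shape[0]
    pad = dilation * (k // 2)
    xp = np.pad(x, pad)
    out = np.zeros_like(x)
    for i in range(k):
        for j in range(k):
            di, dj = i * dilation, j * dilation
            out += kernel[i, j] * xp[di:di + x.shape[0], dj:dj + x.shape[1]]
    return out

def dilated_residual_bottleneck(x, kernel, dilations=(2, 4, 6, 8)):
    """Sum the branches at dilations 2, 4, 6, and 8 (capturing features at
    several scales of the sparse radiograph) plus the identity path."""
    return x + sum(dilated_conv2d(x, kernel, d) for d in dilations)

feat = np.random.rand(32, 32)
kern = np.full((3, 3), 1 / 9)
out = dilated_residual_bottleneck(feat, kern)   # same spatial size as input
```

Summing branches with different dilation rates lets one block respond to both small, compact structures (a nodule) and broad, sparse ones (lung fields) without deepening the network.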

Figure 2:

Top, DLUNet architecture at time τ with an input chest radiograph. Middle, Downsample convolutional block. Bottom, Upsample convolutional block.

The model was trained on the acquired dataset, which contained the generated coronal and sagittal radiographic projections and the preprocessed coronal CT slices. No data transformations (e.g., rotation) were used, because the CXRs and CT studies needed to remain registered and maintain orientation. The input DRRs are 256 × 256 pixels, and the coronal DRT output is 128 × 256 × 256 pixels, where 128 is the number of slices. We used the Adam optimizer [25] and the loss functions described in the next section. The network was trained for 1,000 epochs on two NVIDIA Titan X GPUs (NVIDIA Corporation, Santa Clara, Calif.) with a training batch size of 8 for approximately 20 hours; each epoch took 30 seconds. A superlinear learning rate decay was started at epoch 100. Quantitative metrics were used to measure the pixel-wise error between each DRT and the ground-truth CT.
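The schedule is described only as a superlinear decay beginning at epoch 100; one hypothetical schedule of that shape is sketched below, where the base rate and the decay exponent are assumptions, not reported values.

```python
def lr_schedule(epoch: int, base_lr: float = 1e-3,
                decay_start: int = 100, power: float = 1.5) -> float:
    """Illustrative superlinear decay: constant until `decay_start`, then
    falling faster than 1/t (power > 1)."""
    if epoch < decay_start:
        return base_lr
    return base_lr / (epoch - decay_start + 1) ** power

# Learning rate over the 1,000 training epochs described above.
rates = [lr_schedule(e) for e in range(1000)]
```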

Deep Learning Loss Function

To capture the important features of SPNs, such as size, morphology and opacity, pixel-wise error was used. The error was calculated using two metrics: L1 loss and weighted mean squared error (MSE). L1 loss is defined as:

Loss_L1 = Σ_{i=1}^{n} |Y_i − Ŷ_i|    (Equation 1)

where Y_i is the ground-truth value and Ŷ_i is the predicted value. In this model, we use the L1 loss to compare the input DRR with the DRT. Weighted MSE was used to evaluate the DRT.

The MSE metric is defined as:

Loss_MSE = (1/mn) Σ_{i=1}^{n} Σ_{j=1}^{m} (Y_{i,j} − Ŷ_{i,j})²    (Equation 2)

where Y_{i,j} is the ground-truth pixel value and Ŷ_{i,j} is the predicted pixel value. Since CT studies contain more important information toward the center of the study, we decided to give more weight to the center slices. To assign the weights, we used the following formula:

w_k = exp(−|s/2 − k| / (1.5s)),  k = 1, …, s    (Equation 3)

where s is the number of CT slices and w_k ∈ ℝ is the weight assigned to slice k. This gives a higher penalty for incorrectly predicting the intermediate slices, which contain more information than the first and last few slices. The weighted MSE loss function, with weights w_k defined in Equation 3, is as follows:

Loss_weightedMSE = (1/(mns)) Σ_{k=1}^{s} Σ_{i=1}^{n} Σ_{j=1}^{m} w_k (Y_{k,i,j} − Ŷ_{k,i,j})²    (Equation 4)
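A minimal NumPy rendering of the pixel-wise losses, assuming (as the text describes) that the Equation 3 weights peak at the central slice:

```python
import numpy as np

def l1_loss(y: np.ndarray, y_hat: np.ndarray) -> float:
    """Equation 1: sum of absolute pixel-wise differences."""
    return float(np.abs(y - y_hat).sum())

def slice_weights(s: int) -> np.ndarray:
    """Equation 3: weights peak at the central slice, so errors on the
    information-rich intermediate slices are penalized most."""
    k = np.arange(s)
    return np.exp(-np.abs(s / 2 - k) / (1.5 * s))

def weighted_mse(y: np.ndarray, y_hat: np.ndarray) -> float:
    """Equation 4: slice-weighted mean squared error over a
    (slices, height, width) volume."""
    s, n, m = y.shape
    w = slice_weights(s)[:, None, None]
    return float((w * (y - y_hat) ** 2).sum() / (m * n * s))

w = slice_weights(128)            # maximal near slice 64, minimal at the ends
vol = np.random.rand(128, 8, 8)
err = weighted_mse(vol, vol)      # zero for a perfect prediction
```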

Reference Standards

Reference standards for the presence of SPNs were established by radiologic report and review of the images (AP with 13 years of experience). The time for DRT image generation was recorded.

Image Analysis: Diagnostic Performance

Images were evaluated independently by two radiologists (LZ and NS with 4 and 10 years of experience, respectively, both with board certification and body fellowship training), who were blinded to the imaging report and patient information. The readers underwent a training session with a radiologist (AP with 13 years of experience). During training, the two readers were asked to review 15 sets of images that first included frontal and lateral DRRs and CT scanograms (the combination hereafter referred to as planar radiography, PR), followed by DRT images from the test dataset. The readers received feedback on their performance. The 15 image sets used for reader training were not included in the final analysis.

The readers were then instructed to review the 60 validation sets of images from 60 unique patients (20 positive for SPNs), starting with the frontal and lateral DRR images, followed by the frontal and lateral CT scanogram images, and finally the DRT (Figure 3). Scanogram matrix sizes ranged from 512 × 512 to 864 × 679 pixels, with 512 × 512 the most frequent. The readers were asked to diagnose SPN on a three-point scale (0 = absent, 1 = indeterminate, 2 = present), first without and then with DRT images. Afterwards, in the 20 cases where SPNs were present, the readers evaluated nodule size and location, morphology, and opacity in the DRT on a five-point scale relative to the ground-truth CT images. Scoring criteria are shown in Table 3. Evaluations were performed for each patient based on a complete set of reconstructed coronal chest images. The monitors used for image analysis were identical among reviewers, with a resolution of 2560 × 3200 pixels. The computer used for image analysis was an Intel Xeon Silver 4214 at 2.20 GHz with 32 GB of RAM. Freely available MicroDicom (version 3.1.4, MicroDicom, Sofia, Bulgaria) was used for image review, and the readers were allowed to adjust image size, contrast, and opacity.

Figure 3:

Study readers were first asked to review frontal and lateral DRRs (A), which were derived from the ground-truth CT images, and then frontal and lateral CT scanograms (B), the combination of which we refer to as PR. The readers then scored the PRs for lung nodules (0 = absent, 1 = indeterminate, 2 = present). The readers were then asked to assess the DRT image (C) in conjunction with PR and again assign a score for lung nodules (0 = absent, 1 = indeterminate, 2 = present).

Table 2:

Diagnostic Performance of PR Alone and DRT Plus PR in Identifying SPNs

Parameter* AUC (CI) P value Sensitivity (CI) [ratio] Specificity (CI) [ratio] Positive predictive value (CI) [ratio] Negative predictive value (CI) [ratio]
Reader 1
PR 0.85 (0.75, 0.95) 70% (46, 88) [14/20] 100% (91, 100) [40/40] 100% (77, 100) [14/14] 87% (74, 95) [40/46]
PR+DRT 0.95 (0.88, 1) 0.02 90% (71, 99) [18/20] 100% (91, 100) [40/40] 100% (81, 100) [18/18] 95% (91, 100) [40/42]
Reader 2
PR 0.80 (0.69, 0.91) 60% (36, 81) [12/20] 100% (91, 100) [40/40] 100% (74, 100) [12/12] 83% (70, 93) [40/48]
PR+DRT 0.98 (0.93, 1) 0.001 95% (75, 100) [19/20] 100% (91, 100) [40/40] 100% (82, 100) [19/19] 98% (87, 100) [40/41]
*

Data are given as number (confidence interval) and [ratio] for each parameter.

PR = planar radiography (digitally reconstructed radiographs and CT scanogram), DRT = digitally reconstructed tomography, CI = Confidence Interval
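The sensitivity, specificity, PPV, and NPV in Table 2 follow directly from the bracketed confusion counts; a short helper reproduces, for example, the reader 1 PR-alone row:

```python
def diagnostic_metrics(tp: int, fn: int, tn: int, fp: int) -> dict:
    """Standard diagnostic-performance measures from confusion counts."""
    return {
        "sensitivity": tp / (tp + fn),   # detected SPNs / all SPNs
        "specificity": tn / (tn + fp),   # correct negatives / all negatives
        "ppv": tp / (tp + fp),           # true positives / positive calls
        "npv": tn / (tn + fn),           # true negatives / negative calls
    }

# Reader 1, PR alone: 14/20 SPNs detected, all 40 nodule-free exams correct.
m = diagnostic_metrics(tp=14, fn=6, tn=40, fp=0)
# sensitivity 14/20 = 70%, specificity 40/40 = 100%,
# PPV 14/14 = 100%, NPV 40/46 ≈ 87%
```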

Statistical Analysis

Demographic characteristics and clinical findings were compared across cohorts using two-sided χ2 tests and t-tests. Interobserver agreement for identifying SPNs was assessed using κ statistics, interpreted as follows: ≤0.2, slight agreement; 0.21–0.40, fair agreement; 0.41–0.60, moderate agreement; 0.61–0.80, substantial agreement; and 0.81–1.00, excellent agreement. To evaluate the diagnostic performance of PR alone and with DRT, the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, positive predictive value, and negative predictive value were calculated. The AUCs of DRT plus PR versus PR alone were compared using the method of DeLong [26]. In addition, the number of cases initially misdiagnosed or scored as indeterminate with PR alone and then corrected with DRT was calculated for both readers. R (version 4, R Core Team, Vienna, Austria) was used for statistical analysis, with statistical significance defined at P < 0.05.
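The agreement statistic can be sketched as Cohen's κ over the two readers' scores. The paper does not state which κ variant was used; unweighted Cohen's κ for two raters is shown here as one plausible choice, and the reader scores are hypothetical.

```python
import numpy as np

def cohens_kappa(r1, r2) -> float:
    """Unweighted Cohen's kappa for two raters: observed agreement (po)
    corrected for the chance agreement (pe) implied by each rater's
    marginal label frequencies."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = np.mean(r1 == r2)
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in np.union1d(r1, r2))
    if pe == 1.0:   # degenerate case: both raters always use the same label
        return 1.0
    return (po - pe) / (1 - pe)

# Hypothetical scores on the study's scale (0 = absent, 1 = indeterminate,
# 2 = present) for eight cases.
reader1 = [2, 2, 0, 1, 0, 2, 0, 0]
reader2 = [2, 2, 0, 2, 0, 2, 0, 1]
kappa = cohens_kappa(reader1, reader2)
```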

Results

Cohort Characteristics

Subject characteristics are shown in Table 1. The training cohort comprised 562 patients (age [mean ± standard deviation (SD)] 69 ± 12 years; 323 women [57%]), the validation cohort 60 patients (age 65.6 ± 6 years; 27 women [45%]), and the test cohort 15 patients (age 63 ± 16 years; 5 women [33%]). Participants in the training cohort were slightly older (68.7 years) than those in the validation and testing cohorts (65.6 and 63.3 years; P < 0.005). There were also more females in the training cohort than in the validation and testing cohorts (57% vs 45% vs 33%; P < 0.05). The average body mass index (BMI) in the validation cohort was higher, in the obese range, while average BMIs in the testing and training cohorts were slightly lower, in the overweight range (P < 0.01). In the validation cohort, 12 patients (20%) had chronic obstructive pulmonary disease (COPD) and 9 (15%) had lung cancer; in the training cohort, 80 (14%) had COPD and 100 (18%) had lung cancer; and in the test cohort, 1 (7%) had COPD and 2 (13%) had lung cancer. There were no statistically significant differences in the distribution of COPD or lung cancer across the three cohorts.

Table 1:

Clinical Characteristics of Patients.

Characteristic* Training cohort (N = 562) Validation cohort (N = 60) Testing cohort (N = 15) P value
Age, mean (SD) 68.7 (16.2) 65.6 (6) 63.3 (16.2) <0.005
Sex <0.05
 Male 239 (43%) 33 (55%) 10 (67%)
 Female 323 (57%) 27 (45%) 5 (33%)
BMI, mean (SD) 28.7 (6.6) 31.1 (5.9) 28.5 (5.2) <0.01
COPD 80 (14) 12 (20) 1 (6.7) 0.419
Lung cancer (primary or metastatic) 100 (18) 9 (15) 2 (13) 0.891
*

Data are given as number (percentage) for each group, unless specified.

Diagnostic Performance of PR alone and DRT Plus PR

The diagnostic performance of PR (DRRs and CT scanograms alone) and DRT plus PR is presented in Table 2. Representative paired DRR and DRT images are seen in Figure 4 and Figure 5. There was a statistically significant improvement (P < 0.05) in diagnostic performance using DRT plus PR (AUC, 0.95; 95% CI: 0.88, 1, for reader 1; AUC, 0.98; 95% CI: 0.93, 1 for reader 2) compared to PR alone (AUC, 0.85; 95% CI: 0.75, 0.95 for reader 1; AUC, 0.80; 95% CI: 0.69, 0.91 for reader 2). Sensitivity and negative predictive values were also improved with the use of DRT, without a difference in the specificity and positive predictive value. In the identification of SPN, imaging data sets from 4 of the 60 patients (7%) for reader 1 and from 7 of the 60 patients (12%) for reader 2 were misdiagnosed or indeterminate with PR alone but correctly classified with DRT plus PR.

Figure 4:

Input frontal DRR (left column) with comparison to select ground-truth CT (middle column) and DRT coronal slices (right column). A) A 58-year-old obese, female patient with a right lower lobe superior segment spiculated nodule. B) A 63-year-old male patient with a history of cardiomyopathy and COPD and a left upper lobe spiculated nodule.

Figure 5:

Input frontal DRR (left column) with comparison to ground-truth CT (middle column) and DRT coronal slices (right column). A 78-year-old male patient with right upper lobe nodule demonstrating similar morphology, but suboptimal opacity on DRT images relative to the ground-truth CT image.

The DRT generation time was 1.1 ± 0.1 seconds (SD) on an Intel Core i5-6267U CPU at 2.90 GHz for the 128 images.

Bar graphs of the image assessment scores are presented in Figure 6, with the scoring criteria shown in Table 3. For PR alone and DRT plus PR, the interobserver agreement by κ statistic between the two readers was excellent (0.82 and 0.89, respectively) for detecting SPNs. The interobserver agreement was excellent for size and position (0.94), with substantial agreement for morphology (0.68) and fair agreement for nodule opacity (0.38), relative to the ground-truth CT images.

Figure 6:

Image assessment scores, as percentages, for DRT reconstruction of SPNs, evaluated independently by two blinded readers in terms of nodule size, morphology, and opacity relative to the ground-truth CT image. Each colored bar represents the percentage of cases with the same score.

Table 3:

Scoring Criteria for Evaluation of DRT Relative to Ground-Truth CT Images

Size and location Morphology Opacity
1 Nondiagnostic Nodule size and location are not visualized. Nodule morphology is not similar on most images. Nodule opacity is not similar on most images.
2 Limited Nodule size and location mostly do not correlate with ground truth. Nodule morphology is the same on some images. Nodule opacity is the same on some images.
3 Diagnostic Nodule size and location are mostly identical to ground truth. Nodule morphology is the same on most images. Nodule opacity is the same on most images.
4 Good Nodule size and location are nearly identical to ground truth. Nodule morphology is the same on nearly all images. Nodule opacity is the same on nearly all images.
5 Excellent Nodule size and location are identical to ground truth. Nodule morphology is the same on all images. Nodule opacity is the same on all images.

Discussion

We developed a deep learning algorithm that generated DRT images from frontal and lateral DRRs for the evaluation of SPNs. Our study demonstrated that DRT in combination with PR improved the diagnostic performance of detecting SPNs compared with PR alone (AUC for DRT plus PR: 0.95 for reader 1 and 0.98 for reader 2; AUC for PR alone: 0.85 for reader 1 and 0.80 for reader 2; P = 0.02 for reader 1, P = 0.001 for reader 2). As in conventional CT imaging, DRT can increase the detection of SPNs by removing overlapping anatomy, facilitating comparison between CT and follow-up CXRs.

The deep learning algorithm was a modified autoencoder U-Net with an attention mechanism provided through LSTM at the bottleneck layer. Autoencoders are a dimensionality reduction technique, whereby high-dimensional data (such as an imaging study, which is by nature high dimensional) is passed through an algorithm that distills the representation of the image, ignoring unimportant or noisy features. An autoencoder allows for automatic feature selection and input-data encoding, distilled into a relevant lower-dimensional representation, then re-expanded and decoded into a resultant image: the DRT. Because of the known bottleneck problem with an autoencoder, we instead selected a U-Net, which adds skip connections and convolutional/deconvolutional layers to form the DRT. To enhance the U-Net's function, an LSTM attention layer was added at the bottleneck. Conceptually, we felt this could provide a 'better' scanogram, the DRT, to compare with a radiograph/CXR. One issue we encountered was the 'greediness' of the model, likely caused by the LSTM component, which required increasing the number of training examples for the model to converge. This constrained model training, as no additional cases were available at our institution to increase the study sample beyond 637.

The primary goal of our study was to demonstrate that DRT images are a useful intervention in the detection of SPNs measuring 10–30 mm. In the DRT images, interobserver agreement for size and location of SPNs was comparable to that in the ground-truth images. Nodule opacity demonstrated fair interobserver agreement, with some of the nodules showing decreased "ground-glass" attenuation on DRT images relative to the ground truth (Figure 5). We believe that the normalization techniques used in preprocessing the DRR images, together with differences in patient body habitus, contributed to the variable opacity of the pulmonary nodules, which could be addressed with a larger and more diverse data cohort.

In this feasibility study, our radiographic views of the chest were derived from CT images to mitigate misalignment and allow for side-by-side comparison of the resulting images. Other techniques have been described for coregistration of radiographic images and normalization and for suppression of osseous structures [27, 28]. Additionally, improvements in DRT image output resolution could be achieved with the ever-increasing memory capacity of graphical processing units, allowing for larger model inputs in training. Alternatively, dedicated low-cost two-view coregistered radiography devices could be developed in conjunction with deep learning reconstruction techniques, utilizing pre-existing equipment. Once trained, deep learning models can perform inference rapidly; in our study, it took 1.1 seconds to generate 128 images, on relatively inexpensive hardware.

This study was limited by its retrospective design and the small number of patients. The next step would be to validate this approach with conventionally acquired radiographs, with preprocessing coregistration and normalization techniques. Additional applications would include evaluating other pathologies, such as pneumonia. Our model generated a fixed number of slices per study at a limited resolution of 256 × 256 pixels. Additionally, the ambulatory nature of this study did not include patients with support devices such as endotracheal tubes or more complicated pathologies such as pleural effusions. Although our study used an algorithm with an aim to explore its clinical feasibility at a single institution, it did not include external validation from geographically different healthcare systems. A prospective study with a larger sample size is needed to validate its diagnostic value and the impact on outcomes.

Conclusions

In conclusion, we provide feasibility information on a novel technique in generating a potentially improved scanogram for the detection of SPNs, which we call the DRT. The sensitivity of DRT plus PR was higher than that of PR alone in the identification of SPN. The resultant images are of no less utility than the original scanograms, and the combination of the two images is better than one alone. DRT may be a useful adjunct for routine CXRs, although its diagnostic value and impact on outcomes must be validated in a prospective multicenter study with a larger cohort.

Highlights:

  1. Digitally reconstructed tomograms (DRTs) with frontal and lateral radiographic projections (planar radiography, PR) showed higher diagnostic performance than PR alone (area under the receiver operating characteristic curve 0.95–0.98 versus 0.80–0.85; P < 0.02) in the identification of solitary pulmonary nodules (SPNs).

  2. SPNs were diagnosed in 28% [11/40] more patients using DRTs than using PR alone.

Acknowledgments:

Special thanks to Lisansha Zahirsha, MD, Firas Bazerbashi, MD, and David C. Choe, MD for their feedback and time assisting in this project.

Funding:

The Medical Imaging Data Resource Center (MIDRC) is funded by the National Institute of Biomedical Imaging and Bioengineering (NIBIB) of the National Institutes of Health under contracts 75N92020C00008 and 75N92020C00021.

Abbreviations:

AUC: area under the curve
BMI: body mass index
CI: confidence interval
COPD: chronic obstructive pulmonary disease
CT: computed tomography
CXR: chest radiograph
DRR: digitally reconstructed radiograph
DRT: digitally reconstructed tomogram
LSTM: long short-term memory
MSE: mean squared error
PIL: Python Imaging Library
PR: planar radiography
SD: standard deviation
SPN: solitary pulmonary nodule

Footnotes

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

References

  1. National Health Service. Diagnostic Imaging Dataset Statistical Release. Published 19 December 2019. https://www.england.nhs.uk/statistics/wp-content/uploads/sites/2/2019/12/Provisional-Monthly-Diagnostic-Imaging-Dataset-Statistics-2019-12-19-1.pdf
  2. Schalekamp S, van Ginneken B, Koedam E, et al. Computer-aided detection improves detection of pulmonary nodules in chest radiographs beyond the support by bone-suppressed images. Radiology. 2014;272(1):252–261.
  3. Swensen SJ, Jett JR, Hartman TE, et al. Lung cancer screening with CT: Mayo Clinic experience. Radiology. 2003;226(3):756–761.
  4. Gohagan J, Marcus P, Fagerstrom R, et al. Baseline findings of a randomized feasibility trial of lung cancer screening with spiral CT scan vs chest radiograph: the Lung Screening Study of the National Cancer Institute. Chest. 2004;126(1):114–121.
  5. Wyker A, Henderson WW. Solitary Pulmonary Nodule. StatPearls. Updated 26 July 2021. https://www.ncbi.nlm.nih.gov/books/NBK556143/
  6. Shankar A, Saini D, Dubey A, et al. Feasibility of lung cancer screening in developing countries: challenges, opportunities and way forward. Transl Lung Cancer Res. 2019;8(Suppl 1):S106–S121. doi: 10.21037/tlcr.2019.03.03
  7. Lubuzo B, Ginindza T, Hlongwana K. The barriers to initiating lung cancer care in low- and middle-income countries. Pan Afr Med J. 2020;35:38. doi: 10.11604/pamj.2020.35.38.17333
  8. Shaw NJ, Hendry M, Eden OB. Inter-observer variation in interpretation of chest X-rays. Scott Med J. 1990;35(5):140–141. doi: 10.1177/003693309003500505
  9. Finigan JH, Kern JA. Lung cancer screening: past, present and future. Clin Chest Med. 2013;34:365–371.
  10. Quekel LG, Kessels AG, Goei R, et al. Miss rate of lung cancer on the chest radiograph in clinical practice. Chest. 1999;115:720–724.
  11. Maboreke T, Banhwa J, Pitcher RD. An audit of licensed Zimbabwean radiology equipment resources as a measure of healthcare access and equity. Pan Afr Med J. 2019;34:60. doi: 10.11604/pamj.2019.34.60.18935
  12. Silverstein J. Most of the world doesn't have access to x-rays. The Atlantic. 2016. https://www.theatlantic.com/health/archive/2016/09/radiology-gap/501803/
  13. Lee SM, Seo JB, Yun J, et al. Deep learning applications in chest radiography and computed tomography: Current state of the art. J Thorac Imaging. 2019;34(2):75–85. doi: 10.1097/RTI.0000000000000387
  14. Nakamura Y, Higaki T, Tatsugami F, et al. Deep learning–based CT image reconstruction: Initial evaluation targeting hypovascular hepatic metastases. Radiol Artif Intell. 2019;1:6.
  15. Brady SL, Trout AT, Somasundaram E, Anton CG, Li Y, Dillman JR. Improving image quality and reducing radiation dose for pediatric CT by using deep learning reconstruction. Radiology. 2021;298(1):180–188.
  16. Zarshenas A, Liu J, Forti P, Suzuki K. Separation of bones from soft tissue in chest radiographs: Anatomy-specific orientation-frequency-specific deep neural network convolution. Med Phys. 2019;46(5):2232–2242. doi: 10.1002/mp.13468
  17. Ying X, Guo H, Ma K, Wu J, Weng Z, Zheng Y. X2CT-GAN: reconstructing CT from biplanar X-rays with generative adversarial networks. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019. https://openaccess.thecvf.com/content_CVPR_2019/papers/Ying_X2CT-GAN_Reconstructing_CT_From_Biplanar_X-Rays_With_Generative_Adversarial_Networks_CVPR_2019_paper.pdf
  18. Lewis A, Mahmoodi E, Zhou Y, Coffee M, Sizikova E. Improving Tuberculosis (TB) Prediction using Synthetically Generated Computed Tomography (CT) Images. Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. https://openaccess.thecvf.com/content/ICCV2021W/CVAMD/papers/Lewis_Improving_Tuberculosis_TB_Prediction_Using_Synthetically_Generated_Computed_Tomography_CT_ICCVW_2021_paper.pdf
  • 19.Shibata H, Hanaoka S, Nomura Y, et al. X2CT-FLOW: Reconstruction of multiple volumetric chest computed tomography images with different likelihoods from a uni-or biplanar chest X-ray image using a flow-based generative model. arXiv:2104.04179. 2021. Oct 1.
  • 20.Lee MH, Lubner MG, Mellnick VM, Menias CO, Bhalla S, Pickhardt PJl. The CT scout view: complementary value added to abdominal CT interpretation. Abdom Radiol. 2021;46:5021–5036. [DOI] [PubMed] [Google Scholar]
  • 21.Pyrros A, Flanders AE, Rodríguez-Fernández JM, et al. Predicting prolonged hospitalization and supplemental oxygenation in patients with COVID-19 infection from ambulatory chest radiographs using deep learning. Acad Radiol. 2021;28(8):1151–1158. doi: 10.1016/j.acra.2021.05.002 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Choy CB, Xu D, Gwak J, Chen K, Savarese S. 3D-R2N2: A unified approach for single and multi-view 3D object reconstruction. in European Conference on Computer Vision. Springer, 2016, pp. 628–644. [Google Scholar]
  • 23.Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedi- cal image segmentation. in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015, pp. 234–241. [Google Scholar]
  • 24.Ioffe S, Szegedy C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv:1502.03167. 2015. Mar 2.
  • 25.Kingma DP, Ba J. Adam: A method for stochastic optimization. arXiv:1412.6980. 2017. Jan 30.
  • 26.DeLong ER, DeLong DM, Clarke-Pearson DL. Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. Biometrics. 1988;44(3):837–845. [PubMed] [Google Scholar]
  • 27.Mansilla L, Milone DH, Ferrante E. Learning deformable registration of medical images with anatomical constraints. arXiv:2001.07183. 2020. Jan 22. [DOI] [PubMed]
  • 28.Zarshenas A, Liu J, Forti P, Suzuki K. Separation of bones from soft tissue in chest radiographs: Anatomy-specific orientation-frequency-specific deep neural network convolution. Med Phys. 2019;46(5):2232–2242. doi: 10.1002/mp.13468 [DOI] [PMC free article] [PubMed] [Google Scholar]