Author manuscript; available in PMC: 2023 Nov 1.
Published in final edited form as: IEEE Trans Med Imaging. 2023 Oct 27;42(11):3129–3139. doi: 10.1109/TMI.2021.3139533

Machine learned texture prior from full-dose CT database via multi-modality feature selection for Bayesian reconstruction of low-dose CT

Yongfeng Gao 1, Jiaxing Tan 2, Yongyi Shi 3, Hao Zhang 4, Siming Lu 5, Amit Gupta 6, Haifang Li 7, Michael Reiter 8, Zhengrong Liang 9
PMCID: PMC9243192  NIHMSID: NIHMS1767587  PMID: 34968178

Abstract

In our earlier study, we proposed a regional Markov random field (MRF) type tissue-specific texture prior, extracted from a previous full-dose computed tomography (FdCT) scan, for current low-dose CT (LdCT) imaging, which showed clinical benefits through task-based evaluation. Nevertheless, two assumptions were made in that early study. One assumption is that the center pixel has a linear relationship with its nearby neighbors, and the other is that previous FdCT scans of the same subject are available. To eliminate these two assumptions, we propose a database-assisted end-to-end LdCT reconstruction framework which includes a deep learning texture prior model and a multi-modality feature based candidate selection model. A convolutional neural network-based texture prior is proposed to eliminate the linear relationship assumption. For scenarios in which the concerned subject has no previous FdCT scan, we propose to select one proper prior candidate from an FdCT database using multi-modality features. Features from three modalities are used, including the subjects' physiological factors, the CT scan protocol, and a novel feature named Lung Mark, which is deliberately designed to reflect the z-axial property of human anatomy. Moreover, a majority voting strategy is designed to overcome the noise effect from the LdCT scans. Experimental results showed the effectiveness of the Lung Mark. The selection model achieved an accuracy of 84% when tested on 1,470 images from 49 subjects. The texture prior learned from the FdCT database provided reconstructions comparable to those of subjects having their own corresponding FdCT scans. This study demonstrates the feasibility of bringing clinically relevant textures from an available FdCT database to the Bayesian reconstruction of any current LdCT scan.

Keywords: LdCT Imaging, FdCT Database, Bayesian Reconstruction, Texture Prior, Machine Learning, Convolutional Neural Network, Random Forest

I. Introduction

COMPUTED tomography (CT), as a non-invasive diagnostic modality, has been widely applied in various clinical applications. However, the potential radiation-related risk is a concern to the public, particularly in pediatric imaging and population-based screening [1]–[4]. It is desirable to reduce the CT dose to as low as achievable to relieve the safety concern and expand the clinical utility [5], [6]. However, low-dose CT (LdCT) usually results in noise and artifacts in the reconstructed images, which may degrade the image information and limit its clinical application. In photon-limited scenarios such as LdCT, statistical image reconstruction (SIR) provides an alternative way to produce high-quality images at a much lower dose compared to the analytical filtered back-projection (FBP) methods [7]–[9]. These SIR methods not only model the statistical distribution of the data but also introduce a prior as a penalty to constrain the reconstruction toward a better image.

In the past decades, numerous penalty models have been proposed for LdCT SIR [10]–[18], such as total variation (TV) [10], [11], dictionary learning [12]–[14], and Markov random field (MRF) priors [15]–[18]. In our earlier work [19]–[21], we proposed a clinically relevant tissue-specific texture prior for Bayesian LdCT reconstruction, extracted from previous full-dose CT (FdCT) scans of four normal tissue types of the chest, i.e., lung, bone, fat and muscle. Since tissue textures have been recognized as important clinical biomarkers, the proposed texture prior demonstrated advantages in the tasks of lung nodule detection and classification [22], [23]. However, two assumptions were made for the proposed prior in the earlier study. One assumption is that the center pixel has a linear relationship with its neighbors in terms of their gray levels, so that the tissue-specific texture MRF weights can be calculated by a linear regression strategy [20], [21]. The other assumption is that previous FdCT scans of the same subject are available. In some scenarios the FdCT scans are available, e.g., for lung biopsy [24], but in general, such as in lung screening [25], they may not be.

By addressing the two assumptions, this work aims to improve the proposed prior and expand its application by taking advantage of machine learning (ML) methods. Recently, ML methods have achieved great success in many fields [26]–[28]. Many learned priors have also been proposed for LdCT reconstruction [29]–[34]. For example, Wu et al. proposed a k-sparse autoencoder (KSAE) prior learning a nonlinear mapping from paired FdCT and LdCT images [29]. Chen et al. proposed a learned experts' assessment-based reconstruction network (LEARN) for sparse-data CT [30]. Chen et al. designed a network learning a regularization term that considers the denoising and deblurring effects [31]. Zheng et al. and Ye et al. proposed clustering- and learning-based approaches for post-log domain [32] and pre-log domain [33] reconstruction, respectively. He et al. proposed a parameterized plug-and-play ADMM (3pADMM) algorithm [34]. Different from handcrafted priors, a learnt prior can describe much more complex information with different network designs. However, these methods require pixel-level paired LdCT and FdCT images in the training stage, which are hard to obtain in clinical practice. In addition, the aforementioned machine-learnt prior models treated the reconstruction as a denoising or deblurring problem, which did not take the clinically important texture biomarker information into consideration. As introduced above, the previously proposed texture model has shown clinical benefits through task-based evaluation. Therefore, this work still focuses on the tissue-specific textures from FdCT scans but extends the previous linear texture prior model to a tissue-specific convolutional neural network texture (CNN-T) prior [35], [36]. Since we employ the CNN-T to learn the relationship between the center pixel and its neighbors, paired patches are unnecessary, which eases translation toward clinical utility.

Secondly, a machine learning model that uses multi-modality feature selection is proposed to find a proper prior candidate from the database, addressing the scenario in which a subject has no prior FdCT available. In our previous work, tissue texture similarities among different subjects were observed. By ranking the closeness of textures from different subjects in the database to the selected subject, we found that the prior extracted from some other subjects can be applied to the selected subject without bringing in noticeable image texture quality change by Haralick measures [37]. In other words, the feasibility of extracting the tissue texture from a previous high-quality FdCT database instead of a prior FdCT of the same patient has been demonstrated quantitatively. The task of this paper is to show how to select one proper candidate from the database in practice. We explored the features that could link the LdCT subject and the subjects in the FdCT database, and built a multi-modality feature selection model using random forest to fulfill this purpose. Features from three modalities are used, including the subjects' physiological factors (body mass index (BMI), gender, age, etc.) and the CT scan protocol (the subject's position on the CT table, the tissue texture extracted from the LdCT scans, etc.). Specifically, a novel feature named Lung Mark is proposed to reflect the z-axial property, improving the training efficiency and testing accuracy of the selection model. Moreover, a majority voting strategy is applied to overcome the noise effect on the texture features from the LdCT scan.

The main contribution of this work is a database-assisted end-to-end LdCT reconstruction framework. The framework includes building up a machine-learned texture database and selecting a candidate from the database by a multi-modality feature strategy, with particular effort devoted to the selection strategy. From a practical perspective, it demonstrates the feasibility of bringing tissue-specific textures from an available FdCT database to the Bayesian image reconstruction of any current LdCT scan, whether or not the subject has a previous FdCT scan.

The remainder of this paper is organized as follows. Section II overviews the proposed method to select the candidate. Section III describes the experiments and results. The discussion and conclusion are drawn in Sections IV and V.

II. Theory and Methods

In this work, we propose a database-assisted end-to-end low-dose CT reconstruction framework using a prior knowledge base, including a deep learning texture prior model and a multi-modality feature based candidate selection algorithm. In this section, we describe the prior model and the selection model in detail.

A. Tissue-specific CNN-T Prior from FdCT Database

As described in the introduction, under the Bayesian framework, SIR methods not only model the statistical distribution of the data but also introduce a prior as a penalty to constrain the reconstruction toward a better image. Tissue texture reflects the pixel gray level distribution across the field of view (FOV), or the contrast between one pixel and its neighboring pixels. In our previous work [20], we proposed an MRF type tissue-specific texture prior model for SIR. Specifically, the prior term in the cost function of SIR can be expressed as:

R(\mu) = \sum_{r} \sum_{m \in \mathrm{Region}(r)} \sum_{j \in \Omega_m} w_{r,j,m} \, (\mu_{r,m} - \mu_{r,j})^2,    (1)

assuming the m-th pixel has linear attenuation coefficient \mu_m and its neighbor pixels within the region \Omega_m have linear attenuation coefficients \mu_{\Omega_m}. Conventionally, the MRF weights w are globally the same across the FOV, and each element is inversely proportional to the Euclidean distance between the two pixels. Considering the importance of tissue-specific textures across the FOV, a tissue-specific texture MRF across the FOV is desired. Previously, we used a linear tissue-specific texture model to determine the texture weights, which can be expressed by Eq. (2) [20].

w_r = \arg\min_{w_r} \sum_{m \in \mathrm{Region}(r)} (\mu_m - w_r \mu_{\Omega_m})^2,    (2)

where w_r represents the texture prior weight vector of tissue r, and m and \Omega_m have the same meaning as in Eq. (1), i.e., the m-th pixel and its neighbor pixels within the MRF window, where all pixels belong to the same tissue type r. For chest CT, r = 1, ..., 4, representing the four tissue types of lung, bone, fat and muscle.
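The least-squares estimate of Eq. (2) can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the function name `linear_mrf_weights` and the toy data are hypothetical, and the 7×7 window with a row-major center index 24 follows the window size stated later in the paper.

```python
import numpy as np

def linear_mrf_weights(patches):
    """Estimate tissue-specific linear MRF texture weights (Eq. (2)).

    patches: (P, 7, 7) array of same-tissue patches; each center pixel is
    regressed on its 48 neighbors by ordinary least squares."""
    P = patches.shape[0]
    flat = patches.reshape(P, 49)
    center_idx = 24                                   # center of a 7x7 window
    centers = flat[:, center_idx]                     # targets: mu_m
    neighbors = np.delete(flat, center_idx, axis=1)   # regressors: mu_{Omega_m}
    w, *_ = np.linalg.lstsq(neighbors, centers, rcond=None)
    return w                                          # 48 neighbor weights

# Toy check: if every center equals the mean of its neighbors,
# the recovered weights reproduce the centers exactly.
rng = np.random.default_rng(0)
toy = rng.normal(size=(200, 49))
toy[:, 24] = np.delete(toy, 24, axis=1).mean(axis=1)
w = linear_mrf_weights(toy.reshape(200, 7, 7))
assert np.allclose(np.delete(toy, 24, axis=1) @ w, toy[:, 24], atol=1e-6)
```

The toy check also shows why the linear assumption is restrictive: one fixed weight vector must explain every patch of a tissue type.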

This work aims to learn such a texture prior without the linear assumption, taking advantage of machine learning technology to represent any form of relationship. The convolutional neural network (CNN) form is used due to its great success in representing high-order information. If we denote the trained CNN model as F(·), the prediction model should be able to predict the center pixel value given its neighboring pixels, in accordance with Eq. (1). The function can be expressed as:

\mu_m = F(\mu_{\Omega_m}).    (3)

From Eq. (3), the way to train the CNN texture prior is straightforward. The purpose is to train a CNN model which can predict the center pixel value given the neighboring pixel values. We can feed the CNN patches in which the neighboring pixel values are kept but the center pixel is set to zero, and train the network to predict the center pixel value. More details of the network design are described later. As our texture prior is tissue-specific, we build four CNN models for lung, bone, fat and muscle, respectively. In this work, the lungs are our primary interest; therefore, the lung tissue is used as an example to demonstrate the pipeline in Fig. 1.
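The training-pair construction described above can be sketched as follows. This is a minimal sketch under stated assumptions: the function name `training_pairs` is hypothetical, and the rule that a patch must lie entirely inside one tissue mask is our reading of the tissue-specific extraction, not a detail the paper spells out.

```python
import numpy as np

def training_pairs(image, mask, size=7):
    """Extract (input, target) pairs for CNN-T training.

    image: 2D slice; mask: boolean map of one tissue type (e.g. lung).
    Each size x size patch fully inside the tissue region yields an input
    patch with its center zeroed and the true center value as the target."""
    half = size // 2
    X, y = [], []
    H, W = image.shape
    for i in range(half, H - half):
        for j in range(half, W - half):
            win = mask[i - half:i + half + 1, j - half:j + half + 1]
            if not win.all():
                continue                          # stay within one tissue type
            patch = image[i - half:i + half + 1, j - half:j + half + 1].copy()
            y.append(patch[half, half])           # target: true center value
            patch[half, half] = 0.0               # input: center masked out
            X.append(patch)
    return np.array(X), np.array(y)

img = np.arange(100, dtype=float).reshape(10, 10)
X, y = training_pairs(img, np.ones((10, 10), bool))
assert X.shape == (16, 7, 7) and np.all(X[:, 3, 3] == 0)
```

With step size 1, as in the paper, the patches overlap heavily, which gives many training samples per slice.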

Fig. 1:

The pipeline of the proposed CNN-T from FdCT database.

For any subject in the database, we obtained the tissue masks by the segmentation method VQ [38]. Then, on each image slice, patches of size 7×7 pixels were extracted from each tissue region with step size 1. According to our previous work [21], the 7×7 MRF window size is sufficient, since the MRF weights beyond this window are close to zero and have nearly no impact. The extracted patches with center pixels set to zero were used as the training input; the output was the estimated value of the center pixel. Mean square error is used as the training loss. As shown in Fig. 1 and described in [35], [36], a 3-layer CNN structure is designed to learn this relationship from data. Our CNN model has one convolution layer with 16 kernels of size 3×3. After the convolution, we use the rectified linear unit (ReLU) as the activation function. In the last layer, one fully connected layer maps the 784 activations to 1 output neuron. Finally, we obtain a set of CNN-T models corresponding to each subject and each slice. The CNN-T model itself represents the tissue texture. The abstract features learnt by the CNN-T do not reflect the tissue texture directly, but they may show us which features the final output is more sensitive to.
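The forward pass of this small architecture can be written out explicitly in numpy; a 'same'-padded 3×3 convolution on a 7×7 patch with 16 kernels yields 7·7·16 = 784 activations, matching the size of the fully connected layer. This is an illustrative sketch with random weights, not the trained model; the function name `cnn_t_forward` and the zero-padding choice are our assumptions.

```python
import numpy as np

def cnn_t_forward(patch, kernels, fc_w, fc_b):
    """Forward pass of the 3-layer CNN-T sketch: one 'same'-padded 3x3
    convolution with 16 kernels, ReLU, then a 784 -> 1 dense layer."""
    padded = np.pad(patch, 1)                 # zero padding keeps the 7x7 size
    feat = np.empty((16, 7, 7))
    for k in range(16):
        for i in range(7):
            for j in range(7):
                feat[k, i, j] = np.sum(padded[i:i+3, j:j+3] * kernels[k])
    act = np.maximum(feat, 0.0)               # ReLU activation
    return float(act.reshape(-1) @ fc_w + fc_b)  # predicted center value

rng = np.random.default_rng(1)
mu_hat = cnn_t_forward(rng.normal(size=(7, 7)),
                       rng.normal(size=(16, 3, 3)),
                       rng.normal(size=784), 0.0)
assert np.isfinite(mu_hat)
```

The 784 ReLU activations are exactly the values visualized as 28×28 activation maps later in the paper.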

B. Multi-modality Feature Selection Model from Database

As described in the introduction, this paper aims to propose a practical method to select a proper candidate from the database for a to-be-reconstructed LdCT subject who has no previous FdCT available. A multi-modality feature selection machine learning method is proposed to fulfill this purpose. The hypothesis is that there are anatomical similarities among human subjects, so it is important to explore the features that could link the LdCT subject and the subjects in the FdCT database. Features from three modalities are explored in this study: physiological factors, the CT scan protocol, and a novel feature named "Lung Mark".

Physiological Factors:

The first modality consists of the subjects' physiological factors, including body mass index, gender, and age. These quantities reflect physical properties of the subject and may provide useful information. In most cases, they are routinely measured and recorded at admission for both the LdCT subject and the FdCT subjects in the database. This information is ready to use and requires no extra effort to obtain.

Lung Mark:

We propose a novel feature, named Lung Mark, to reflect the z-axial position within the human anatomy and thereby improve the training efficiency and testing accuracy of the selection model. The lung tissue texture varies from top to bottom due to the lung's anatomical structure and composition. For example, the respiratory passages of the lung branch like a tree, called the "tracheobronchial tree"; the branches are called primary bronchi, secondary bronchi and so on, as shown in Fig. 2 (left) [39]. Therefore, the Lung Mark is proposed to quantify the z-axial position of the CT image. Fig. 2 (right) shows a human anatomy model, in which the lung range is marked by the black lines. We divide the whole lung volume evenly into 30 sections from top to bottom; the positions of several Lung Marks are also displayed in the figure. It is observed that there are stronger anatomical similarities among human beings if the positions of their Lung Marks along the z-direction are similar. In other words, if two slices are from the same Lung Mark, they have a higher chance of having similar texture. Therefore, the probability of finding a proper candidate from the database is higher when using the same or adjacent Lung Marks.
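The even 30-way division described above amounts to binning a slice's position within the lung range; a minimal sketch, assuming physical z-coordinates of the lung top and bottom are known (the function name `lung_mark` and the clamping of the bottom edge are illustrative choices, not the paper's definition):

```python
def lung_mark(slice_z, lung_top_z, lung_bottom_z, n_marks=30):
    """Map an axial slice position to its Lung Mark (1..n_marks).

    The lung range [lung_top_z, lung_bottom_z] is divided evenly into
    n_marks bins; slices with the same mark tend to share anatomy."""
    frac = (slice_z - lung_top_z) / (lung_bottom_z - lung_top_z)
    return min(int(frac * n_marks) + 1, n_marks)   # clamp the bottom edge

assert lung_mark(0.0, 0.0, 300.0) == 1      # top of the lung
assert lung_mark(150.0, 0.0, 300.0) == 16   # mid-lung
assert lung_mark(300.0, 0.0, 300.0) == 30   # bottom of the lung
```

Matching only within the same or adjacent marks then restricts candidate search to anatomically comparable levels.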

Fig. 2:

Respiratory passage of lung (left) and definition of Lung Mark (right).

CT Scan Protocol:

The CT scan protocol features come from two sources. The first is the subject's position relative to the CT table. A quantity called "Body Angle" is used to represent this position. The Body Angle ranges from 0 to 360 degrees, with 0 degrees representing a patient in the supine position, and increases as the patient rotates in the anticlockwise direction.

Tissue texture is another novel feature group from the CT scan protocol. The tissue texture is obtained by using the analytical MRF model as expressed in Eq. (2). However, directly using the MRF texture as features to link the LdCT and FdCT may result in low efficiency. To make the features more representative and reduce the dimensionality, we use the principal components of the LdCT and FdCT textures obtained by the principal component analysis (PCA) method [40].

Principal Components of MRF retrieved by PCA:

PCA is a useful and widely exploited technique to retrieve the major features of signals or images. When applied to the MRF weights, PCA can provide the major features of different MRF patterns. Note that the MRF matrix is flattened into a 1-dimensional vector in a row-major manner. For simplicity, the vector-form MRF is denoted as the MRF spectrum.

Given the MRF spectra w_l, l = 1, 2, ..., N, the principal components can be retrieved using PCA by the following steps.

  1. Compute the average spectrum of all the spectra:
     \bar{w} = \frac{1}{N} \sum_{l=1}^{N} w_l,    (4)
     and obtain the centralized spectra \hat{w}_l = w_l - \bar{w}, l = 1, 2, \ldots, N.
  2. Calculate the covariance matrix C of the centralized MRF spectra \hat{w} and decompose it as
     C = \Phi \Lambda \Phi^T,    (5)
     where \Lambda = \mathrm{diag}\{\lambda_1, \lambda_2, \ldots, \lambda_M\} and \Phi = [\phi_1, \phi_2, \ldots, \phi_M]. The term \lambda_m (m = 1, 2, \ldots, M) is an eigenvalue of C, stored in descending order, and \phi_m (m = 1, 2, \ldots, M) is the corresponding eigenvector.
  3. Retrieve the principal components as the product of the transform matrix and the centralized MRF spectrum:
     \hat{q}_l = \Phi^T \hat{w}_l,    (6)
     where q_{l,m} is the m-th principal component of the l-th MRF spectrum, m = 1, 2, \ldots, M.

By applying PCA to all the MRF spectra in the FdCT database, we obtain the principal components of each candidate and the eigenvectors. The same eigenvectors are then used to obtain the principal components of the LdCT texture. By using the same eigenvectors, the LdCT and FdCT spectra are guaranteed to be projected onto the same directions, and the principal component of each dimension represents the strength along the same direction. Additionally, because the transform matrix in Eq. (6) is precomputed from the FdCT database, extracting the PC values of the LdCT MRF weights takes less than 1 second.
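Steps 1-3 and the shared-eigenvector projection can be sketched in numpy. This is a minimal illustration under stated assumptions: the function names `fit_pca` and `project`, the 48-dimensional toy spectra, and the choice of `np.linalg.eigh` for the symmetric covariance matrix are ours, not the paper's implementation.

```python
import numpy as np

def fit_pca(fdct_spectra):
    """Fit PCA (Eqs. (4)-(6)) on an N x M matrix of FdCT MRF spectra."""
    mean = fdct_spectra.mean(axis=0)               # average spectrum, Eq. (4)
    centered = fdct_spectra - mean
    cov = np.cov(centered, rowvar=False)           # covariance matrix C
    eigvals, eigvecs = np.linalg.eigh(cov)         # decomposition, Eq. (5)
    order = np.argsort(eigvals)[::-1]              # eigenvalues descending
    return mean, eigvecs[:, order]

def project(spectrum, mean, eigvecs, n_pc=2):
    """Project any spectrum (FdCT or LdCT) onto the SAME FdCT eigenvectors
    (Eq. (6)), so all subjects share the same principal directions."""
    return eigvecs[:, :n_pc].T @ (spectrum - mean)

rng = np.random.default_rng(2)
spectra = rng.normal(size=(100, 48))               # toy 48-dim MRF spectra
mean, Phi = fit_pca(spectra)
pcs = project(spectra[0], mean, Phi)
assert pcs.shape == (2,)
```

Reusing `mean` and `Phi` for an LdCT spectrum is precisely what makes the LdCT and FdCT principal components comparable.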

After the feature engineering, we build a tree-based random forest (RF) model to perform the selection task. RF is a popular and efficient algorithm for classification and regression problems, as described by Breiman [41]. We employ the R package "randomForest" to construct the forest and perform the classification. The forest is constructed based on the calculation of the Gini impurity. After building the RF model with the training dataset, we can generate a single score (or posterior probability) for each test point under its own feature set by voting across the trees of the forest.
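The Gini impurity mentioned above is the node-splitting criterion used when growing each tree; a minimal sketch (the function name `gini_impurity` is illustrative):

```python
def gini_impurity(labels):
    """Gini impurity of a tree node: 1 - sum_c p_c^2 over the class
    proportions p_c of the labels that reach the node; splits are chosen
    to maximize the impurity decrease."""
    n = len(labels)
    counts = {}
    for c in labels:
        counts[c] = counts.get(c, 0) + 1
    return 1.0 - sum((k / n) ** 2 for k in counts.values())

assert gini_impurity([1, 1, 1, 1]) == 0.0          # pure node
assert abs(gini_impurity([0, 1]) - 0.5) < 1e-12    # 50/50 binary split
```

A pure node (all candidates, or all non-candidates) has zero impurity, so the forest's splits separate proper from improper prior candidates.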

Inspired by this concept, we also extend the majority voting strategy to the prediction model to overcome random noise. Our goal is to find a proper candidate from the database for the to-be-reconstructed subject, so the result of the proposed candidate selection model is a subject-level matching. For each LdCT subject, the scan usually contains multiple slices covering the lung volume. For each slice, the RF model predicts the candidate subjects from the database with their probabilities in descending order, and we output the top 10 candidates with the highest probabilities. We then obtain the histogram of the candidates over all the slices of the LdCT subject, and the model suggests the final candidate with the highest count in the histogram. We expect this slice-based majority voting strategy to make the model robust.
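The slice-based voting step reduces to pooling the per-slice top-10 lists into one histogram and taking its mode; a minimal sketch (the function name `vote_candidate` and the toy subject IDs are illustrative):

```python
from collections import Counter

def vote_candidate(per_slice_top10):
    """Slice-based majority voting: pool the top-10 candidate IDs predicted
    for every slice of one LdCT subject and return the most frequent one."""
    histogram = Counter()
    for top10 in per_slice_top10:
        histogram.update(top10)                  # accumulate candidate counts
    return histogram.most_common(1)[0][0]        # subject with highest count

slices = [["s07", "s12", "s30"],                 # toy top candidates per slice
          ["s12", "s07", "s04"],
          ["s12", "s30", "s19"]]
assert vote_candidate(slices) == "s12"           # appears in all three slices
```

A noisy prediction on one slice rarely changes the winner, which is why voting makes the subject-level match robust.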

Fig. 3 is the flowchart of the multi-modality candidate selection model based on the FdCT database. The FdCT physiological factors are stored in the database. The Body Angle and Lung Mark are labeled based on the FdCT images, and PCA is applied to the FdCT MRF weights to retrieve the principal components. For any to-be-reconstructed LdCT subject, the corresponding physiological factors are input into the model. The sinogram data are first reconstructed by FBP and then segmented into the four tissue types, after which the LdCT MRF weights are obtained. Using the same transform matrix from the FdCT PCA, we extract the LdCT MRF principal components and input them into the RF model. The Body Angle and Lung Mark are labeled for the LdCT image. With all the inputs shown in Fig. 3, the model outputs a validated FdCT candidate from the database for each slice. Based on the slice-level majority voting, the final candidate is predicted.

Fig. 3:

Flowchart of the multi-modality candidate selection model.

C. Problem Modeling with CNN-T Prior and Optimization

The reconstruction problem can be formulated as the minimization of the following cost function:

\arg\min_{\mu} \; (y - A\mu)^T D (y - A\mu) + \beta R(\mu),    (7)

where \mu is the image to be reconstructed, y is the acquired projection data, and A is the system projection matrix whose element a_{ij} is calculated as the intersection length of projection ray i with pixel j. D is a diagonal matrix whose elements are the weights corresponding to the projection elements; the calculation of D is reported in [42]. R(\mu) is the prior term that constrains the reconstruction, and \beta is the parameter that balances the strength of the fidelity and prior terms. As described in Section II-A, the prior is represented by our trained CNN, which has no explicit form. To solve Eq. (7), an auxiliary variable Z is introduced, which satisfies the condition \mu = Z. The optimization problem can then be formulated as:

\arg\min_{\mu, Z} \; (y - A\mu)^T D (y - A\mu) + \beta R(Z), \quad \text{s.t. } \mu = Z.    (8)

By the Half Quadratic Splitting method [43], we have:

(\mu, Z) = \arg\min_{\mu, Z} \; (y - A\mu)^T D (y - A\mu) + \beta R(Z) + \frac{\lambda}{2} \| \mu - Z \|^2,    (9)

where a quadratic-type penalty is used and \lambda is the penalty parameter, which varies in a non-descending order. Eq. (9) can then be solved by the splitting iterative scheme [43],

\mu^{k+1} = \arg\min_{\mu} \; (y - A\mu)^T D (y - A\mu) + \frac{\lambda}{2} \| \mu - Z^k \|^2,    (10.1)

and

Z^{k+1} = \arg\min_{Z} \; \frac{\lambda}{2} \| \mu^{k+1} - Z \|^2 + \beta R(Z),    (10.2)

where k indicates the iteration number. Through Eqs. (10.1) and (10.2), the fidelity term and the regularization term are decoupled. The fidelity subproblem (Eq. (10.1)) is a quadratically regularized minimization problem, which can be solved using the previous PWLS method [21].

In the subproblem of Eq. (10.2), Z is constrained to satisfy the neighborhood relationship produced by the CNN texture prior defined in Eq. (3); that is, for any pixel Z_m, it should satisfy Z_m = F(\mu^{k+1}_{\Omega_m}), and R(Z) represents this constraint. We then use a similar parabolic penalty as in the well-established half-quadratic splitting method [43], namely \| Z - F(\mu^{k+1}_{\Omega}) \|^2. Substituting R(Z) with \| Z - F(\mu^{k+1}_{\Omega}) \|^2, we obtain:

Z^{k+1} = \arg\min_{Z} \; \frac{\lambda}{2} \| \mu^{k+1} - Z \|^2 + \beta \| Z - F(\mu^{k+1}_{\Omega}) \|^2.    (11)

The CNN-T prior F(·) is trained on the FdCT and applied to the current LdCT reconstruction. During the iterations, it takes the intermediate LdCT estimate and predicts each center pixel from its neighboring pixels using the relationship learnt from the FdCT. This is the key point of incorporating prior knowledge from the FdCT into the LdCT reconstruction. There are two parameters, \lambda and \beta, in Eq. (11). It can be simplified to Eq. (12) by introducing \alpha = 2\beta/\lambda:

Z^{k+1} = \arg\min_{Z} \; \| \mu^{k+1} - Z \|^2 + \alpha \| Z - F(\mu^{k+1}_{\Omega}) \|^2.    (12)

The closed-form solution of Eq. (12) is:

Z^{k+1} = \frac{\mu^{k+1} + \alpha F(\mu^{k+1}_{\Omega})}{1 + \alpha}.    (13)
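The Eq. (13) update can be verified numerically as the pixel-wise minimizer of the Eq. (12) objective; a minimal sketch with toy values (the function name `update_z` is illustrative):

```python
import numpy as np

def update_z(mu, f_pred, alpha):
    """Pixel-wise Z update of Eq. (13): the closed-form minimizer of
    ||mu - Z||^2 + alpha * ||Z - F(mu_Omega)||^2 from Eq. (12)."""
    return (mu + alpha * f_pred) / (1.0 + alpha)

# Sanity check against a brute-force scan of the Eq. (12) objective.
mu, f_pred, alpha = 0.8, 0.2, 1.0
z_grid = np.linspace(-1.0, 2.0, 30001)
obj = (mu - z_grid) ** 2 + alpha * (z_grid - f_pred) ** 2
assert abs(update_z(mu, f_pred, alpha) - z_grid[np.argmin(obj)]) < 1e-3
```

With \alpha = 1, as used in the experiments, Z is simply the average of the current estimate and the CNN-T prediction.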

As summarized in Table I, our proposed workflow contains two steps: candidate matching and image reconstruction. For the reconstruction, the attenuation map from FBP with the Hanning filter was used as the initialization. \beta was initially set to 10^4. Empirically, we varied \beta in an exponentially increasing way, meaning that \beta changes slowly at the beginning and faster in later iterations: \beta = 10^4 × 1.23^k, where k is the iteration number. During the reconstruction, \beta changed within roughly 10^4~10^6 before the convergence criterion was met. The convergence criterion is that the MSE between Z and \mu falls below 1 × 10^-6. As discussed above, \alpha represents the balance between the fidelity and prior terms. We chose \alpha = 1, which gives reasonably good results [36]. The time cost of each step is presented in Appendix I.

TABLE I.

Workflow for the proposed reconstruction Algorithm

CNN-T Learning for Database
 Learn the CNN-T for each image in the FdCT database.
CNN-T Candidate Selection
 Select one proper candidate from the FdCT database as the prior for the following reconstruction.
Image Reconstruction
 Initialize μ;
 Set parameters β, α;
 While the stop criterion is not met:
  Step 1: Update μ by Eq. (10.1);
  Step 2: Update Z by Eq. (13);
 End when the stop criterion is satisfied.
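The alternation in Table I can be illustrated with a deliberately simplified toy: here A = D = I (a denoising stand-in for the PWLS step of Eq. (10.1)), and a 3-point mean filter stands in for the learnt CNN-T F(·) of Eq. (3). Everything in this sketch — the function name, the identity forward model, the surrogate filter, and the toy signal — is an assumption for illustration, not the paper's reconstruction code.

```python
import numpy as np

def toy_reconstruction(y, alpha=1.0, lam=1.0, max_iter=200, tol=1e-6):
    """Toy run of the Table I alternation with the identity forward model."""
    mu = y.copy()                                   # FBP-like initialization
    Z = y.copy()
    for k in range(max_iter):
        # Eq. (10.1) with A = D = I: closed-form quadratic update.
        mu = (y + 0.5 * lam * Z) / (1.0 + 0.5 * lam)
        # Surrogate prior prediction F(mu): a simple 3-point mean filter.
        f_pred = np.convolve(mu, np.ones(3) / 3, mode="same")
        # Eq. (13): blend mu with the prior prediction.
        Z_new = (mu + alpha * f_pred) / (1.0 + alpha)
        if np.mean((Z_new - mu) ** 2) < tol:        # convergence criterion
            return Z_new, k
        Z = Z_new
    return Z, max_iter

rng = np.random.default_rng(3)
signal = np.ones(64)
Z, iters = toy_reconstruction(signal + 0.1 * rng.normal(size=64))
assert np.mean((Z - signal) ** 2) < 0.02            # noise is suppressed
```

The toy shows the mechanism only: in the actual algorithm, Eq. (10.1) is solved by PWLS with the real system matrix, and F(·) is the trained tissue-specific CNN-T.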

III. Experiments and results

A. Dataset

The experiments were carried out on real clinical data. The FdCT database consists of 8,220 image slices from 274 patients. All the subjects recruited to this study were scheduled for CT-guided lung nodule needle biopsy at Stony Brook University Hospital. All subjects provided written informed consent after approval by the Institutional Review Board. During biopsy, an FdCT scan was performed first, followed by LdCT scans until the needle reached the lung nodule. For both the full- and low-dose scans, the X-ray tube voltage was set to 120 kVp. The tube current-time product was set to 100 mAs for the full-dose scan and 20 mAs for the subsequent low-dose scans. The physiological information of all the subjects was also collected routinely and stored in the database. All data used in this study are real clinical data, and no data augmentation was performed.

The patients were scanned using a clinical CT scanner. The raw data were calibrated by the CT system and output as sinogram data, or line integrals. The line integrals include 672 detector elements with a width of 1.4 mm per element, and 1,160 projection views over a 360° range. The distance from the source to the system center was 570 mm, and the distance from the source to the detector was 1,040 mm. The reconstructed image has 512×512 pixels with 0.74 mm × 0.74 mm per pixel.

B. CNN-T Learning for FdCT Database

Following the description in Fig. 1, we learned the CNN-T for every image in the FdCT database. We first segmented each FdCT chest image into four tissue types, i.e., lung, bone, fat and muscle. For each tissue type, patches of 7×7 pixels were extracted and used as the training samples. To train the CNN models, we selected the RMSprop optimizer [44] with the learning rate set to 5×10^-5, and early stopping [45] was adopted to prevent overfitting. We initialized the weights of the convolution kernels with the method introduced in [46].

C. Training of Multi-modality Feature Selection Model

In this study, there are 8,220 images from 274 subjects. To ensure that there is no correlation or overlap between the training and testing datasets, we split the dataset into 80% training samples and 20% testing samples based on the subject labels; that is, all slices from the same subject are used only for training or only for testing. We trained the model on 6,750 images from 225 subjects.

As described in Fig. 3, we constructed the input for the random forest model based on the proposed features. One type of feature is the principal components of the MRF spectrum. Fig. 4 shows the energy percentage of each of the 48 components; the first and second components account for 83% of the energy. We therefore chose the top two PCs, i.e., PC1 and PC2, to construct the input features. Four input features were constructed: PC1 from the LdCT, PC2 from the LdCT, the difference of PC1 (PC1_diff) between the LdCT and FdCT, and the difference of PC2 (PC2_diff) between the LdCT and FdCT. For the other numerical features, we used the absolute difference as the input. For the categorical feature gender, we used 1 or 0 to represent whether the FdCT and LdCT subjects have the same or different gender.
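The feature assembly for one (LdCT subject, FdCT candidate) pair can be sketched as below. The PC features and gender encoding follow the text; the exact set and ordering of the remaining numeric features (BMI, age, Body Angle, Lung Mark) are our assumption, as are the function name `build_features` and the toy values.

```python
def build_features(ld, fd):
    """Assemble RF input features for one (LdCT subject, FdCT candidate)
    pair: LdCT PC1/PC2, PC differences, absolute differences of the numeric
    features, and a same-gender indicator (1 = same, 0 = different)."""
    return [
        ld["pc1"], ld["pc2"],
        ld["pc1"] - fd["pc1"],                  # PC1_diff
        ld["pc2"] - fd["pc2"],                  # PC2_diff
        abs(ld["bmi"] - fd["bmi"]),
        abs(ld["age"] - fd["age"]),
        abs(ld["body_angle"] - fd["body_angle"]),
        abs(ld["lung_mark"] - fd["lung_mark"]),
        1 if ld["gender"] == fd["gender"] else 0,
    ]

ld = dict(pc1=0.9, pc2=0.1, bmi=24.0, age=61, body_angle=0,
          lung_mark=12, gender="F")
fd = dict(pc1=0.8, pc2=0.3, bmi=22.5, age=58, body_angle=10,
          lung_mark=11, gender="F")
x = build_features(ld, fd)
assert len(x) == 9 and x[-1] == 1
```

One such vector is produced per (LdCT slice, database subject) pair and scored by the forest.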

Fig. 4:

Energy weight of each MRF principal component.

The label indicating whether an FdCT subject from the database is a proper candidate for the LdCT subject is constructed based on our previous study [47]. In [47], we found that if the closeness of the texture of a database subject to that of the selected subject is smaller than 0.2, the prior extracted from that other subject can be applied to the selected subject without bringing in noticeable image texture quality change by Haralick measures. Therefore, we used 0.2 as the criterion to construct the label; in other words, in the worst case of the proposed model, the texture closeness of the selected subject would be 0.2. Fig. 5 shows the flowchart of the label construction. For any subject A, its LdCT sinogram is the to-be-reconstructed data, and its previous FdCT scan provides the ground truth of its MRF texture weights. For any subject B from the FdCT database, if the texture difference between subject B and the ground truth is smaller than 0.2, subject B is a proper candidate for A and is labeled as 1. Otherwise, subject B is not a proper candidate for A and is labeled as 0.
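The labeling rule of Fig. 5 reduces to thresholding the texture closeness; a one-line sketch (the function name `candidate_label` is illustrative, and the strict `<` at the 0.2 boundary is our reading of the rule):

```python
def candidate_label(texture_closeness, threshold=0.2):
    """Fig. 5 label rule: an FdCT subject B is a proper prior candidate
    for LdCT subject A (label 1) iff the closeness between B's texture and
    A's ground-truth FdCT texture is below the threshold of 0.2 [47]."""
    return 1 if texture_closeness < threshold else 0

assert candidate_label(0.05) == 1   # close texture: proper candidate
assert candidate_label(0.35) == 0   # distant texture: not a candidate
```

These binary labels are what the random forest is trained to predict from the multi-modality features.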

Fig. 5:

Flowchart of label construction.

In the RF package [48], there are two parameters for optimizing the performance: ntree and mtry. ntree is the number of independent trees in the model, and mtry is the number of features randomly selected at each splitting node. Since only a few features were considered in our study, it was not necessary to tune mtry. For ntree, we started from 500 and observed the error convergence based on the out-of-bag (OOB) error. It was found that after 300 trees the error stabilized. Since the variable interactions stabilize at a slower rate than the error, we used ntree = 351 in the final model.

D. Testing of Candidate Selection Model

As shown in Fig. 3, the selection model mainly consists of two parts. One part is the multi-modality feature random forest model that predicts the candidates with probability scores; the other is the slice-based majority voting that gives the final identified candidate. We formulated the problem in a supervised manner, which brought clear benefits from the supervised training loss in the experimental results. We first evaluated the RF model at the slice level. The testing data included 1,470 images from 49 subjects and had no correlation or overlap with the training data.

In our case, the task is to select one valid candidate with high confidence rather than to find all the candidates in the database. Therefore, the precision-recall curve [49] is used to evaluate the performance. Fig. 6 presents the precision-recall (PR) curve: for a precision rate larger than 99.99%, the recall rate is around 5‰. Fig. 7 shows the precision-threshold (PT) curve. It agrees with common sense that the precision increases as the threshold increases; a threshold of 0.9 yields a precision of 99.99%. These encouraging results show that the trained RF model can predict a proper candidate for most subjects.
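Each point of the precision-threshold curve in Fig. 7 is a precision computed after thresholding the RF scores; a minimal sketch with toy scores (the function name `precision_at_threshold` and the convention of returning 1.0 when nothing is predicted positive are illustrative choices):

```python
def precision_at_threshold(scores, labels, threshold):
    """Precision of binary candidate predictions at a decision threshold:
    TP / (TP + FP) over samples whose score reaches the threshold."""
    predicted = [s >= threshold for s in scores]
    tp = sum(p and l for p, l in zip(predicted, labels))      # true positives
    fp = sum(p and not l for p, l in zip(predicted, labels))  # false positives
    return tp / (tp + fp) if tp + fp else 1.0

scores = [0.95, 0.91, 0.80, 0.40, 0.10]   # toy RF candidate scores
labels = [1, 1, 0, 0, 1]                   # toy ground-truth labels
assert precision_at_threshold(scores, labels, 0.9) == 1.0
assert precision_at_threshold(scores, labels, 0.5) == 2 / 3
```

Sweeping the threshold over the score range traces out the full PT curve; the PR curve is obtained analogously by also tracking recall.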

Fig. 6:

Precision and recall curve of the model.

Fig. 7:

Precision and threshold curve of the model.

To make the model more robust, we then used the majority voting strategy to find the final candidate. From Fig. 7, we can see that a threshold of 0.9 gives very high precision, but such a high threshold would lower the statistics. Therefore, we used the top ten candidates per slice for the voting instead of the threshold. Among the 49 testing subjects, 41 subjects found a proper candidate, giving an accuracy of 84%. Fig. 8 presents examples of predicted candidates from the FdCT database compared to the ground truth of the to-be-reconstructed LdCT subject. It is observed that there are differences (the images are not registered) between the selected candidate and the ground truth of the slice to be reconstructed. That is one advantage of the proposed tissue-specific texture model: registration is not required, and similar texture within each tissue region is the key point. By visual inspection, the appearance and textures of the to-be-reconstructed subject and the candidates from the database are highly similar.

Fig. 8:

Comparison of predicted candidates from the FdCT database with the ground truth of the to-be-reconstructed LdCT of the subject.

E. The Learnt CNN-T and Image Reconstruction Results

As mentioned in the method section, the CNN-T model itself represents a learnt tissue texture type; its abstract features do not represent the tissue texture directly. To provide a more intuitive view of the CNN-T, we present the activation maps immediately before the fully connected layer. Fig. 9 shows the maps for some typical 7×7 patches of the lung region. Note that each activation map has 784 elements and is reshaped into a 28×28 matrix for display. In Fig. 9, the activation maps vary most on the 4th and 5th rows across different input patches, implying that the output is more sensitive to features from the 60th to the 125th. One advantage of the CNN-T is that it can recognize substructures, which should be visible in the activation maps. Indeed, Fig. 9 shows clear differences between the activation maps of different patches, confirming that the model can recognize lung substructures and adjust the learned relationship accordingly, as expected.

Fig. 9.

A set of typical patches and their corresponding activation maps at the last layer. The activation map has 784 elements and is reshaped as a matrix of size 28×28 for display purpose.

Image reconstruction was performed to evaluate the model. One subject was chosen from the testing samples, and its 20 mAs sinogram was used for reconstruction. The CNN prior learnt from the subject's own FdCT scan is the ground-truth prior and is treated as the reference. Using the proposed selection model, one CNN-T from the database was found for the reconstruction evaluation.

Fig. 10 presents the reconstructed images using the ground-truth prior (c) and the candidate's prior (d). For comparison, reconstructions with FBP (a) and SIR with the Huber prior (b) [21] are also presented. The SIR-Huber method was likewise optimized in terms of its hyperparameters: β = 5×10⁴ and a pixel-difference threshold of 5×10⁻³. The zoomed ROIs are also shown in Fig. 10. Compared to FBP, all the SIR methods suppress the noise well. Both tissue-specific texture prior models outperform the Huber prior model in terms of texture preservation, and the texture prior from the database provides image quality comparable to the texture prior from the subject itself.
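The Huber prior used for comparison penalizes neighboring-pixel differences with the standard Huber potential, quadratic near zero and linear in the tails. A minimal sketch, using the paper's hyperparameter values but a generic implementation (not the authors' exact code):

```python
# Standard Huber potential: quadratic for |t| <= delta, linear beyond.
# delta matches the paper's pixel-difference threshold (5e-3); beta is the
# paper's regularization strength.
import numpy as np

def huber(t, delta=5e-3):
    t = np.abs(t)
    return np.where(t <= delta, 0.5 * t**2, delta * t - 0.5 * delta**2)

beta = 5e4                                    # value reported in the paper
diffs = np.array([0.0, 2e-3, 5e-3, 1e-2])     # example neighbor differences
penalty = beta * huber(diffs)                 # contribution to the SIR objective
```

Because the penalty grows only linearly for large differences, edges are penalized less harshly than under a pure quadratic prior, which is why Huber suppresses noise while keeping edges, though it preserves little fine texture.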

Fig. 10.

Reconstructed images using FBP (a), Huber prior (b), ground truth prior (c) and candidate’s prior (d). The display window is [0 0.035] mm−1.

To demonstrate the texture-preservation benefit of the proposed method, we plotted the normal vector flow (NVF) of ROI-2, which contains the pulmonary nodule (Fig. 11). The NVF of the FdCT scan is treated as the reference, labeled (R) in Fig. 11. Panels (a) to (d) in Fig. 11 are the LdCT reconstructions using the methods corresponding to Fig. 10. In (a), the texture is clearly swamped by noise in the FBP reconstruction. In (b), the Huber prior suppresses the noise well but barely preserves the texture. The two tissue-specific texture models both suppress the noise and preserve most of the texture, as shown in (c) and (d), and the performance with the candidate's prior is very similar to that with the ground-truth prior.

Fig. 11.

Normal vector flow images of ROI-2 shown in Fig. 10. Panels (a)-(d) correspond to (a)-(d) in Fig. 10: reconstructions using FBP, the Huber prior, the ground-truth prior, and the candidate's prior.
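One common way to form an NVF-style visualization is to normalize the image gradient to unit length, so texture orientation survives while intensity scale is discarded. Whether this matches the authors' exact NVF definition is an assumption; the sketch below is illustrative only.

```python
# NVF-style field for an ROI: the gradient normalized to unit length.
# eps guards against division by zero in flat regions.
import numpy as np

def normal_vector_flow(roi, eps=1e-8):
    gy, gx = np.gradient(roi.astype(float))   # np.gradient returns d/axis0 first
    mag = np.sqrt(gx**2 + gy**2) + eps
    return gx / mag, gy / mag                 # unit-length vector components

roi = np.outer(np.linspace(0, 1, 8), np.ones(8))  # toy ramp image
nx, ny = normal_vector_flow(roi)
```

For the toy ramp, which varies only along the row axis, the field points uniformly in that direction (ny ≈ 1, nx ≈ 0); on a real ROI the arrows trace the local texture orientation, which is what Fig. 11 compares across reconstructions.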

For quantitative evaluation, ROIs drawn at the same anatomical locations displayed in Fig. 10 are chosen for comparison; the results are summarized in Table II. The reconstruction using the ground-truth prior serves as the reference. According to Table II, the proposed model has the lowest RMSE and the highest PSNR. SSIM and FSIM [50] are also employed to measure the structural and feature similarity between the references and the reconstructed images; the proposed method again obtains the highest SSIM and FSIM values among the competing methods.

TABLE II.

Quantitative comparison of different reconstruction methods. The ROIs below are the regions marked in Fig. 10.

Region      Method        RMSE      PSNR    SSIM    FSIM
ROI 1       FBP           1.60E-03  26.641  0.9969  0.9927
            Huber prior   8.71E-04  30.168  0.9989  0.9946
            The Proposed  9.19E-05  50.309  1       0.9999
ROI 2       FBP           2.00E-03  24.276  0.9950  0.9919
            Huber prior   7.05E-04  31.997  0.9994  0.9988
            The Proposed  6.39E-05  53.249  1       1
ROI 3       FBP           1.60E-03  24.932  0.9965  0.9883
            Huber prior   7.29E-04  30.506  0.9992  0.9919
            The Proposed  1.64E-04  43.686  1       0.9998
Whole Body  FBP           1.50E-03  29.416  0.9973  0.9839
            Huber prior   7.44E-04  34.017  0.9993  0.9980
            The Proposed  4.34E-05  59.382  1       1
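The RMSE and PSNR columns of Table II follow their standard definitions, sketched below on a toy image; SSIM and FSIM would come from libraries (e.g. skimage, or the FSIM authors' code) and are omitted. Taking the dynamic range of the reference as the peak value in PSNR is an assumption, not a detail stated in the paper.

```python
# Standard RMSE and PSNR against a reference image, which plays the role of
# the ground-truth-prior reconstruction in Table II.
import numpy as np

def rmse(ref, img):
    return np.sqrt(np.mean((ref - img) ** 2))

def psnr(ref, img):
    peak = ref.max() - ref.min()   # dynamic range as peak value (assumption)
    return 20 * np.log10(peak / rmse(ref, img))

ref = np.linspace(0, 0.035, 64).reshape(8, 8)   # toy attenuation image, mm^-1
noisy = ref + 1.6e-3                            # constant-offset error
print(f"RMSE={rmse(ref, noisy):.2e}, PSNR={psnr(ref, noisy):.3f} dB")
```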

We also evaluated the proposed model on data with a small nodule adjacent to the chest wall, using the same experimental setup as above. The reconstructed images are shown in Fig. 12, displayed in the lung window. Panels (a)-(d) are the reconstructions using FBP, SIR with the Huber prior, SIR with the ground-truth prior, and SIR with the candidate's prior. The SIR-Huber parameters were again optimized: β = 5×10⁴ and a pixel-difference threshold of 6×10⁻³. Compared with FBP, the noise is suppressed well by the other three SIR models. As in Figs. 10 and 11, both tissue-specific texture prior models outperform the Huber prior model in terms of texture preservation, and no difference between the ground-truth prior and the candidate's prior is noticeable by visual judgement. Fig. 13 presents the profiles along two trajectories within the small nodule, marked by blue lines in Fig. 12. Because of noise, the FBP results fluctuate the most, whereas the results with the candidate's prior agree well with the ground truth.

Fig. 12.

Reconstructed images using FBP (a), the Huber prior (b), the ground-truth prior (c), and the candidate's prior (d). The display uses the lung window [0 0.022] mm−1. The yellow box shows the position of a small nodule adjacent to the chest wall.

Fig. 13.

Profiles of reconstructed images using FBP (a), Huber prior (b), ground truth prior (c) and candidate’s prior (d). The trajectories of two profiles are shown by blue lines in Fig. 12.

IV. Discussion

In this study, we learned a CNN-T prior for every subject in the FdCT database and used a multi-modality feature model to link the to-be-reconstructed LdCT to an FdCT in the database. This idea relies on two observations. First, there are similarities among human subjects, so one can be linked to another. Second, there are also differences among human subjects, which makes it hard to learn a single universal texture prior suitable for every LdCT subject. Properly balancing this similarity and individuality is one of our future interests in database-driven machine learning for LdCT reconstruction.

This work proposed a novel Lung Mark feature to quantitatively reflect the scan position along the z-axis, which improved training efficiency and model accuracy. As mentioned above, the differences among human subjects may dominate from the texture point of view; in other words, for any subject there are likely only a few proper candidates in the database. If we generated training samples from the entire database, most candidates would be improper, their labels would be 0, and this imbalance might make training less efficient. Therefore, when constructing the training samples, we used two criteria to narrow down the samples from the FdCT database: the BMI difference must be smaller than eight, and the Lung Mark difference must be smaller than six. Under these two criteria, only about one third of the FdCT data is used for a given to-be-reconstructed subject.
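The two screening criteria can be sketched as a simple filter over database records. The record fields below are illustrative; only the tolerance values (BMI difference < 8, Lung Mark difference < 6) come from the paper.

```python
# Narrow the FdCT database to plausible training candidates for one subject,
# using the paper's two screening criteria.
def narrow_candidates(subject, database, bmi_tol=8, lung_mark_tol=6):
    return [
        entry for entry in database
        if abs(entry["bmi"] - subject["bmi"]) < bmi_tol
        and abs(entry["lung_mark"] - subject["lung_mark"]) < lung_mark_tol
    ]

subject = {"bmi": 27.0, "lung_mark": 12}
database = [
    {"id": 1, "bmi": 25.0, "lung_mark": 10},   # kept: both differences small
    {"id": 2, "bmi": 40.0, "lung_mark": 11},   # dropped: BMI difference >= 8
    {"id": 3, "bmi": 28.0, "lung_mark": 20},   # dropped: Lung Mark difference >= 6
]
print([e["id"] for e in narrow_candidates(subject, database)])  # prints [1]
```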

Moreover, we exported the feature importance based on the mean decrease in Gini and summarize it in Table III, which shows which features play the more important roles in the selection model. Despite the LdCT noise, the influence of the physiological factors follows the order BMI > Lung Mark > Age > Body Angle > Gender, confirming the effectiveness of the proposed Lung Mark feature. Table III also shows that PC1_diff and PC2_diff have the greatest influence on the model, demonstrating that PCA can reduce the dimensionality of the MRF spectrum when analyzing tissue texture, which makes the model more robust and efficient. The current prediction accuracy is 84% with the proposed features; exploring more effective features to further improve performance is another of our future interests. Moreover, as the dose decreases, image noise increases and degrades image quality. Features such as the physiological factors, Lung Mark, patient position, and CT scan angle should not be affected, whereas features derived from the CT image are expected to be. In our previous study [51], we quantified the image texture change from 100 mAs to 5 mAs and observed a critical texture change around 6–8 mAs. We therefore expect the texture-related features, including the principal components used here, to remain effective down to about 10 mAs. Exploring and validating the effect of dose on this method is a meaningful task and another of our future research interests.

TABLE III.

Importance rank of the features. The first row is the rank; the second row is the corresponding feature.

Rank     1         2         3    4          5    6    7    8           9
Feature  PC1_diff  PC2_diff  BMI  Lung Mark  PC1  Age  PC2  Body Angle  Gender
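A mean-decrease-in-Gini ranking like Table III can be read directly out of a trained random forest. The feature names follow the paper; the data below is synthetic, so the resulting order is illustrative, not a reproduction of Table III.

```python
# Sketch: extracting a Gini-based feature importance ranking from a random
# forest, as reported in Table III. Synthetic data; the label here is driven
# mainly by the first feature, so it should dominate the ranking.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

features = ["PC1_diff", "PC2_diff", "BMI", "Lung Mark", "PC1",
            "Age", "PC2", "Body Angle", "Gender"]
rng = np.random.default_rng(0)
X = rng.normal(size=(300, len(features)))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # label driven by the *_diff features

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
ranking = sorted(zip(features, rf.feature_importances_),
                 key=lambda kv: -kv[1])          # highest Gini decrease first
```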

In addition to the Gini importance, we explored feature correlations for a more comprehensive view of each feature's influence. PC1 and PC2 are two representative values of the tissue textures (Fig. 4) and have the highest importance in the selection model, so they were used to represent the tissue texture. Fig. 14 plots the distributions of PC1 and PC2 against the different features; the better PC1 and PC2 can be distinguished along a feature, the higher that feature's influence. The mean value of PC1 decreased as Lung Mark increased, while PC2 showed no correlation with Lung Mark. Conversely, the mean value of PC2 decreased as BMI increased, while PC1 showed no correlation with BMI. The statistical fluctuation of the BMI correlation is somewhat high beyond a BMI of 42, owing to the low sampling density in that region. Comparing the boxplots of PC1 and PC2 by gender indicates that gender has a weak correlation with PC2 and none with PC1; age and body angle show no evident correlation with either. Features with stronger correlations better represent the tissue texture and should therefore carry higher importance in the trained RF model. This agrees very well with the model's importance output, supporting the effectiveness of the trained model.

Fig. 14:

Correlation analysis with FdCT PC1 and PC2.
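The kind of correlation check behind Fig. 14 can be sketched as binning one feature and examining the mean of a principal component within each bin. The synthetic data below merely mimics the reported trend (PC1 decreasing with Lung Mark); the slope and noise levels are invented for illustration.

```python
# Sketch: binned-mean and correlation analysis of PC1 vs. Lung Mark, in the
# spirit of Fig. 14. Synthetic data with a built-in decreasing trend.
import numpy as np

rng = np.random.default_rng(1)
lung_mark = rng.integers(0, 30, size=500)
pc1 = -0.2 * lung_mark + rng.normal(scale=0.5, size=500)  # decreasing trend

bins = np.arange(0, 31, 5)
idx = np.digitize(lung_mark, bins) - 1                    # bin index per sample
bin_means = [pc1[idx == i].mean() for i in range(len(bins) - 1)]

# Pearson correlation as a single-number summary of the trend
r = np.corrcoef(lung_mark, pc1)[0, 1]
```

A strongly negative `r` (and monotonically decreasing `bin_means`) corresponds to the "mean of PC1 decreases with Lung Mark" observation; a near-zero `r` corresponds to the "no correlation" cases such as age and body angle.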

V. CONCLUSION

In this study, we proposed a learned tissue-specific texture prior drawn from an available FdCT database for current LdCT reconstruction. A multi-modality feature selection model was built to select the proper candidate from the database. This work extends our previously proposed tissue-specific texture prior model in two aspects: (1) it eliminates the linear assumption between the center pixel and its neighboring pixels by using the learned CNN-T prior model, and (2) it obviates the need for a previous FdCT scan of the subject whose current LdCT is to be reconstructed. Experiments demonstrated the effectiveness of the proposed database-assisted LdCT reconstruction method. The texture features, BMI, and Lung Mark have the highest importance for selecting the proper candidate, and the novel Lung Mark feature effectively improves training efficiency and model accuracy. Expanding the current database and exploring more effective features are future work toward further improving the presented machine-learned, clinically relevant texture prior for Bayesian LdCT reconstruction.

TABLE IV.

Summary of the time cost for each step.

Step                                    Time Cost
Training
  Selection model training              30 min
  CNN-T training                        10 min for 15 epochs
  Candidate prediction                  < 1 min
Reconstruction (per iteration)
  Feature construction of LdCT subject  < 1 s
  Patch generation                      30 s
  Update μ                              < 1 s
  Update Z                              < 1 s

VI. ACKNOWLEDGEMENT

The authors appreciate Ms. Danielle Giulietti for collecting the physiological data to construct the database.

This work was partially supported by the NIH/NCI grant #CA206171.

APPENDIX I

Table IV summarizes the time cost of each step of the proposed model. Within each iteration, the most time-consuming step is patch generation; in future work, patch generation will be sped up using GPU techniques.

Contributor Information

Yongfeng Gao, Department of Radiology, State University of New York at Stony Brook, NY 11974 USA..

Jiaxing Tan, Department of Radiology, State University of New York at Stony Brook, NY 11974 USA..

Yongyi Shi, Department of Radiology, State University of New York at Stony Brook, NY 11974 USA..

Hao Zhang, Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA..

Siming Lu, Department of Radiology, State University of New York at Stony Brook, NY 11974 USA..

Amit Gupta, Department of Radiology, State University of New York at Stony Brook, NY 11974 USA..

Haifang Li, Department of Radiology, State University of New York at Stony Brook, NY 11974 USA..

Michael Reiter, Department of Radiology, State University of New York at Stony Brook, NY 11974 USA..

Zhengrong Liang, Departments of Radiology and Biomedical Engineering, State University of New York at Stony Brook, NY 11794, USA..

References

  • [1]. Brenner D and Hall E, "CT -- An Increasing Source of Radiation Exposure," New England Journal of Medicine, vol. 357, no. 22, pp. 2277–2284, 2007.
  • [2]. Huda W and Vance A, "Patient Radiation Doses from Adult and Pediatric CT," American Journal of Roentgenology, vol. 188, no. 2, pp. 540–546, 2007.
  • [3]. Kalender W, "Dose in X-ray CT," Physics in Medicine and Biology, vol. 59, no. 3, pp. 129–150, 2014.
  • [4]. Spanne P, "X-ray Energy Optimization in Computed Microtomography," Physics in Medicine and Biology, vol. 34, no. 6, pp. 679–690, 1989.
  • [5]. McCollough C, Primak A, Braun N, Kofler J, Yu L, and Christner J, "Strategies for Reducing Radiation Dose in CT," Radiologic Clinics of North America, vol. 47, no. 1, pp. 27–40, 2009.
  • [6]. Aberle D, Adams A, Berg C, Black W, Clapp J, Fagerstrom R, et al. (The National Lung Screening Trial Research Team), "Reduced Lung-Cancer Mortality with LdCT Screening," New England Journal of Medicine, vol. 365, pp. 395–409, 2011.
  • [7]. Singh S, Kalra M, Hsieh J, Licato P, Do M, Pien H, et al., "Abdominal CT: Comparison of Adaptive Statistical Iterative and Filtered Back Projection Reconstruction Techniques," Radiology, vol. 257, no. 2, pp. 373–383, 2010.
  • [8]. Wang J, Li T, Lu H, and Liang Z, "Penalized Weighted Least-squares Approach to Sinogram Noise Reduction and Image Reconstruction for Low-dose X-ray CT," IEEE Transactions on Medical Imaging, vol. 25, no. 10, pp. 1272–1283, 2006.
  • [9]. Liang Z, La Riviere P, El Fakhri G, Glick SJ, and Siewerdsen JH, "Low-Dose CT: What Has Been Done, and What Challenges Remain," guest editorial, IEEE Transactions on Medical Imaging, vol. 36, no. 12, pp. 2409–2416, 2017, and the entire special volume.
  • [10]. Xu Q, Yu H, Bennett J, He P, Zainon R, Doesburg R, et al., "Image reconstruction for hybrid true-color micro-CT," IEEE Transactions on Biomedical Engineering, vol. 59, no. 6, pp. 1711–1719, 2012.
  • [11]. Gong C, Han C, Gan G, Deng Z, Zhou Y, Yi J, et al., "Low-dose dynamic myocardial perfusion CT image reconstruction using pre-contrast normal-dose CT scan induced structure tensor total variation regularization," Physics in Medicine & Biology, vol. 62, no. 7, pp. 2612–2635, 2017.
  • [12]. Xu Q, Yu H, Mou X, Zhang L, Hsieh J, and Wang G, "Low-dose X-ray CT reconstruction via dictionary learning," IEEE Transactions on Medical Imaging, vol. 31, no. 9, pp. 1682–1697, 2012.
  • [13]. Zhao B, Ding H, Lu Y, Wang G, Zhao J, and Molloi S, "Dual-dictionary learning-based iterative image reconstruction for spectral computed tomography application," Physics in Medicine & Biology, vol. 57, no. 24, pp. 8217–8229, 2012.
  • [14]. Bai T, Yan H, Jia X, Jiang S, Wang G, and Mou X, "Z-index parameterization for volumetric CT image reconstruction via 3-D dictionary learning," IEEE Transactions on Medical Imaging, vol. 36, no. 12, pp. 2466–2478, 2017.
  • [15]. Zoran D and Weiss Y, "From Learning Models of Natural Image Patches to Whole Image Restoration," Proceedings of the International Conference on Computer Vision, pp. 479–486, 2011.
  • [16]. Yu G, Sapiro G, and Mallat S, "Solving Inverse Problems with Piecewise Linear Estimators: From Gaussian Mixture Models to Structured Sparsity," IEEE Transactions on Image Processing, vol. 21, no. 5, pp. 2481–2499, 2012.
  • [17]. Zhang R, Ye D, Pal D, Thibault JB, Sauer KD, and Bouman C, "A Gaussian Mixture MRF for Model-based Iterative Reconstruction with Applications to Low-dose X-ray CT," IEEE Transactions on Computational Imaging, vol. 2, no. 3, pp. 359–374, 2016.
  • [18]. Zhang H, Wang J, Zeng D, Tao X, and Ma J, "Regularization strategies in statistical image reconstruction of low-dose x-ray CT: A review," Medical Physics, vol. 45, no. 10, pp. 886–907, 2018.
  • [19]. Ma J, et al., "Low-dose CT Image Restoration Using Previous Normal Dose Scan," Medical Physics, vol. 38, pp. 5713–5731, 2011.
  • [20]. Zhang H, Han H, Wang J, Ma J, Liu Y, Moore W, et al., "Deriving Adaptive MRF Coefficients from Previous Normal-dose CT Scan for Low-dose Image Reconstruction via Penalized Weighted Least-squares Minimization," Medical Physics, vol. 41, no. 4, pp. 041916-1–041916-15, 2014.
  • [21]. Zhang H, Han H, Liang Z, Hu Y, Moore W, Ma J, et al., "Extracting Information from Previous Full-Dose CT Scan for Knowledge-Based Bayesian Reconstruction of Current Low-Dose CT Images," IEEE Transactions on Medical Imaging, vol. 35, no. 3, pp. 860–870, 2016.
  • [22]. Liang Z, Zhang H, Gao Y, Yang J, Ferretti J, Bilfinger T, et al., "Different Lung Nodule Detection Tasks at Different Dose Levels by Different Computed Tomography Image Reconstruction Strategies," IEEE Nuclear Science Symposium and Medical Imaging Conference, Australia, November 2018.
  • [23]. Gao Y, Liang Z, Zhang H, Yang J, Ferretti J, Bilfinger T, et al., "A Task-dependent Investigation on Dose and Texture in CT Image Reconstruction," IEEE Transactions on Radiation and Plasma Medical Sciences, vol. 4, no. 4, pp. 441–441, 2020.
  • [24]. Kothary N, Lock L, Sze DY, and Hofmann LV, "Computed tomography-guided percutaneous needle biopsy of pulmonary nodules: impact of nodule size on diagnostic accuracy," Clinical Lung Cancer, vol. 10, no. 5, pp. 360–363, 2009.
  • [25]. National Lung Screening Trial Research Team, "The national lung screening trial: overview and study design," Radiology, vol. 258, no. 1, pp. 243–253, 2011.
  • [26]. Krizhevsky A, Sutskever I, and Hinton GE, "ImageNet classification with deep convolutional neural networks," Communications of the ACM, vol. 60, no. 6, pp. 84–90, 2017.
  • [27]. Zhao H, Shi J, Qi X, Wang X, and Jia J, "Pyramid scene parsing network," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2881–2890, 2017.
  • [28]. Wang G, Ye JC, Mueller K, and Fessler JA, "Image reconstruction is a new frontier of machine learning," IEEE Transactions on Medical Imaging, vol. 37, no. 6, pp. 1289–1296, 2018.
  • [29]. Wu D, Kim K, Fakhri GE, and Li Q, "Iterative low-dose CT reconstruction with priors trained by artificial neural network," IEEE Transactions on Medical Imaging, vol. 36, no. 12, pp. 2479–2486, 2017.
  • [30]. Chen H, Zhang Y, Chen Y, Zhang J, Zhang W, Sun H, et al., "LEARN: Learned experts' assessment-based reconstruction network for sparse-data CT," IEEE Transactions on Medical Imaging, vol. 37, no. 6, pp. 1333–1347, 2018.
  • [31]. Chen B, Xiang K, Gong Z, Wang J, and Tan S, "Statistical iterative CBCT reconstruction based on neural network," IEEE Transactions on Medical Imaging, vol. 37, no. 6, pp. 1511–1521, 2018.
  • [32]. Zheng X, Ravishankar S, Long Y, and Fessler JA, "PWLS-ULTRA: An efficient clustering and learning-based approach for low-dose 3D CT image reconstruction," IEEE Transactions on Medical Imaging, vol. 37, no. 6, pp. 1498–1510, 2018.
  • [33]. Ye S, Ravishankar S, Long Y, and Fessler JA, "SPULTRA: Low-dose CT image reconstruction with joint statistical and learned image models," IEEE Transactions on Medical Imaging, vol. 39, no. 3, pp. 729–741, 2019.
  • [34]. He J, Yang Y, Wang Y, Zeng D, Bian Z, Zhang H, et al., "Optimizing a parameterized plug-and-play ADMM for iterative low-dose CT reconstruction," IEEE Transactions on Medical Imaging, vol. 38, no. 2, pp. 371–382, 2018.
  • [35]. Gao Y, Tan J, Shi Y, Lu S, and Liang Z, "A machine learning approach to construct a tissue-specific texture prior from previous full-dose CT for Bayesian reconstruction of current ultralow-dose CT images," 15th International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine, vol. 11072, p. 1107204, International Society for Optics and Photonics, 2019.
  • [36]. Gao Y, Tan J, Shi Y, Lu S, Gupta A, Li H, et al., "Constructing a tissue-specific texture prior by machine learning from previous full-dose scan for Bayesian reconstruction of current ultralow-dose CT images," Journal of Medical Imaging, vol. 7, no. 3, p. 032502, 2020.
  • [37]. Haralick R, Shanmugam K, and Dinstein I, "Textural features for image classification," IEEE Transactions on Systems, Man, and Cybernetics, vol. 6, pp. 610–621, 1973.
  • [38]. Han H, Zhang H, Moore W, and Liang Z, "Texture-preserved PWLS Reconstruction of Low-dose CT Image via Image Segmentation and High-order MRF Modeling," SPIE Medical Imaging, in CD-ROM, 2016.
  • [39]. https://www.physio-pedia.com/Lung_Anatomy, accessed Nov. 2nd, 2020.
  • [40]. Wold S, Esbensen K, and Geladi P, "Principal component analysis," Chemometrics and Intelligent Laboratory Systems, vol. 2, no. 1–3, pp. 37–52, 1987.
  • [41]. Breiman L, "Random Forests," Machine Learning, vol. 45, pp. 5–32, 2001. https://link.springer.com/article/10.1023/A:1010933404324
  • [42]. Ma J, Liang Z, Fan Y, Liu Y, Huang J, Chen W, et al., "Variance analysis of X-ray CT sinograms in the presence of electronic noise background," Medical Physics, vol. 39, pp. 4051–4065, 2012.
  • [43]. Zhang K, Zuo W, Gu S, and Zhang L, "Learning Deep CNN Denoiser Prior for Image Restoration," IEEE Conference on Computer Vision and Pattern Recognition, pp. 3929–3938, 2017.
  • [44]. Hinton G, Srivastava N, and Swersky K, Neural Networks for Machine Learning Lecture, Overview of Mini-Batch Gradient Descent, 14:8, 2012.
  • [45]. Girosi F, Jones M, and Poggio T, "Regularization Theory and Neural Networks Architectures," Neural Computation, vol. 7, no. 2, pp. 219–269, 1995.
  • [46]. He K, Zhang X, Ren S, and Sun J, "Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1904–1916, 2015.
  • [47]. Gao Y, Liang Z, Moore WH, Zhang H, Pomeroy MJ, Ferretti JA, et al., "A feasibility study of extracting tissue textures from a previous full-dose CT database as prior knowledge for Bayesian reconstruction of current low-dose CT images," IEEE Transactions on Medical Imaging, vol. 38, no. 8, pp. 1981–1992, 2019.
  • [48]. https://cran.r-project.org/web/packages/randomForest/randomForest.pdf, accessed October 1st, 2020.
  • [49]. Davis J and Goadrich M, "The relationship between Precision-Recall and ROC curves," Proceedings of the 23rd International Conference on Machine Learning, pp. 233–240, 2006.
  • [50]. Zhang L, Zhang L, Mou X, and Zhang D, "FSIM: A feature similarity index for image quality assessment," IEEE Transactions on Image Processing, vol. 20, no. 8, pp. 2378–2386, 2011.
  • [51]. Gao Y, Liang Z, Xing Y, Zhang H, Pomeroy M, Lu S, et al., "Characterization of tissue-specific pre-log Bayesian CT reconstruction by texture-dose relationship," Medical Physics, vol. 47, no. 10, pp. 5032–5047, 2020.
