. 2023 Feb 8;83:104637. doi: 10.1016/j.bspc.2023.104637

An optimized EBRSA-Bi LSTM model for highly undersampled rapid CT image reconstruction

AVP Sarvari 1,, K Sridevi 1
PMCID: PMC9904992  PMID: 36776947

Abstract

COVID-19 has spread all over the world, causing serious panic around the globe. Chest computed tomography (CT) images are integral in confirming COVID-positive patients. Several investigations have been conducted to improve or maintain image reconstruction quality under sampling. Deep learning (DL) methods have recently been proposed to achieve fast reconstruction, but many have focused on a single domain, such as the image domain or k-space. In this research, a highly under-sampled enhanced battle royale self-attention based bi-directional long short-term memory (EBRSA-bi LSTM) CT image reconstruction model is proposed to reconstruct the image from under-sampled data. The research comprises two phases, namely, pre-processing and reconstruction. The extended cascaded filter (ECF) is proposed for image pre-processing; it suppresses noise and enhances reconstruction accuracy. In the reconstruction model, battle royale optimization (BrO) is used to diminish the loss function of the reconstruction network and to update the weights. The proposed model is tested with two datasets, COVID-CT and SARS-CoV-2 CT. The reconstruction accuracy of the proposed model on the two datasets is 93.5 % and 97.7 %, respectively. In addition, image quality assessment parameters such as Peak Signal-to-Noise Ratio (PSNR), Root Mean Square Error (RMSE) and Structural Similarity Index Metric (SSIM) are evaluated, yielding (45 and 46 dB), (0.0026 and 0.0022) and (0.992 and 0.996) on the two datasets.

Keywords: Computed tomography (CT), Image reconstruction, Deep learning, Under-sampling, K-space data

1. Introduction

COVID-19 is an infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). This disease must be detected early to save a person’s life [1]. The most commonly used COVID-19 analysis approach is the reverse transcription-polymerase chain reaction (RT-PCR) test. However, it suffers from high false-negative rates and is highly time-consuming [2]. CT is a frequently used medical imaging method focussed on minimizing the X-ray dose to diminish patients’ risk. The CT scan approach possesses greater sensitivity for COVID-19 diagnosis and is less resource-intensive than traditional RT-PCR tests [3]. Scholars have suggested many approaches, including hardware-based scanning protocols and software-based image reconstruction algorithms, to improve the image quality and diminish the radiation dose of CT scans [4]. In clinical circumstances, the significant objective of CT imaging is to offer relevant data, especially the determination of crucial features [5].

In an image reconstruction strategy, CT images are inevitably degraded by noise, which may reduce quality. Many advanced methods have been evaluated for image reconstruction [6], [7]. However, radiological decision certainty relies upon image quality, including resolution, artefacts, noise, and contrast [8]. Image quality is influenced by patient constitution, positioning and acquisition parameters. Initially, low-complexity filtering techniques were presented to suppress noise. Still, key structural details are ignored by such filtering techniques due to the lack of noise modelling [9]. During CT image acquisition, noise effects, offset-field effects and motion effects occur in the CT images due to environmental influences, and patient movement during acquisition degrades image quality [10].

Reconstruction algorithms are broadly categorized into three stages: classical algorithms, iterative algorithms, and learning-based algorithms. Classical reconstruction algorithms provide fast and impressive outcomes, but they are not appropriate for extreme imaging due to the presence of artifacts [11]. The next stage is the iterative reconstruction method, which enhances image quality [12]. Statistical iterative reconstruction (SIR) algorithms exploit the data’s statistical characteristics to overcome noise and streak artifacts [13]. The objective functions of iterative methods are composed of two terms: a data fidelity term and a penalty term [13]. The data fidelity term emerges from the statistical measurement model, and the penalty term is designed by contemplating the properties of the CT image [14]. Penalized image reconstruction was suggested to overcome the reconstruction issues of under-sampled images. Reconstruction approaches are integrated with penalty functions to exploit certain characteristics of medical images, such as smoothness, edge sparsity, low rank, etc. [15], [16], [17].

Moreover, total variation (TV) image reconstruction methods are implemented to handle missing data and improve quality [18]. Although TV methods are effective in many cases, TV minimization has limitations: it only considers the sparsity of the gradient-transformed image, and hyperparameter tuning for TV-based reconstruction algorithms is difficult [19]. To attenuate this difficulty, researchers have developed optimization algorithms to identify a desirable set of optimal hyperparameters. Recently, ant colony optimization (ACO) with TV has been suggested for CT imaging to select the optimal parameters and produce high-quality reconstructed images [20].

Compared to classical and iterative reconstruction methodologies, learning-based algorithms are advisable for their effective outcomes. Several machine learning (ML) and DL approaches have been investigated to attain a promising outcome. ML approaches are effectively used for classification tasks in various domains, such as face recognition, image segmentation, tumour classification and hand gesture recognition, and they yield good recognition accuracy. ML algorithms learn a non-linear mapping from a training database composed of low- and high-resolution image pairs. The random forest ML strategy is currently considered for CT image reconstruction. These approaches are easily trapped by low-quality images, as they depend mainly on the selected features [21].

On the other hand, deep learning networks spontaneously learn their features during network training. The application of DL methods has been explored in many fields. For example, network-based sinogram synthesis has been suggested for sparse-view CT image reconstruction with an array size of 512×512 [22]. The convolutional neural network (CNN) model has also been fine-tuned for CT image reconstruction by integrating gradient descent to enforce measurement consistency [23].

1.1. Motivation

Medical modalities such as CT, MRI, X-ray, and ultrasound examine body anomalies, tissues, and other organs. Images captured from these modalities may suffer from a low signal-to-noise ratio (SNR) and low contrast-to-noise ratio (CNR). To overcome these issues, image reconstruction techniques are introduced for better understanding and prediction. Conventional film techniques emerged first; however, these techniques do not provide much information for a particular problem. So, the iterative reconstruction (IR) strategy is used with complicated mathematical models to gather details from objects and their neighbouring materials. An IR strategy named algebraic reconstruction was the first CT image reconstruction strategy.

Moreover, this strategy consumes more time due to the complex model; therefore, it is impractical for clinical implementation. Hence, this strategy was replaced with another strategy known as filtered back projection (FBP). The FBP strategy can quickly generate good diagnostic-quality images compared to IR strategies, but it also shows various limitations in CT image reconstruction. To overcome these issues, researchers started to use artificial intelligence (AI) in CT image reconstruction. Machine learning (ML) techniques are also widely used in this domain to achieve the best result, yet they too show various limitations. After that, deep learning (DL) techniques were used by researchers in various real-time applications. There are various DL networks; one of the most commonly used is the CNN. This network consists of different layers, which process the input data through stages such as feature extraction and feature aggregation, thus providing reasoning and feature interpretation before making a detection. Since CNNs are supervised learning networks, they are often trained with labelled data. Rather than choosing supervised learning networks, authors have been attracted to unsupervised learning networks, such as adversarial networks (AN) and auto-encoders (AE). The main aim of an AE is to reduce the reconstruction error for a particular parameter. Generative adversarial networks (GANs) are also used in CT image reconstruction; they are trained with two networks, a discriminator and a generator. The generator used to train the network needs to be improved to enhance the created data sets. Considering these issues, the proposed work uses an enhanced optimized attention-based DL COVID-19 CT image reconstruction model to maintain image quality and accuracy.

1.2. Objectives

The major contributions are given below:

  • This study suggests an improved CT image reconstruction model that enhances image quality and facilitates visual interpretation of the under-sampled image by varying the sampling rate.

  • The reconstruction model is named enhanced battle royale self-attention based bi-directional long short-term memory (EBRSA-bi LSTM). Using the Fourier transform, the generation of k-space is highlighted, and the centre frequency in the k-space domain determines the brightness and contrast of the image.

  • The extended cascaded filter (ECF) is emphasized to suppress artifacts in the under-sampled image and maintain reconstruction accuracy.

  • By capturing long-range relationships across image areas, the inclusion of the self-attention mechanism in the network model reveals its capacity to improve the quality of reconstruction outcomes. Enhanced battle royale optimization (EBRO) is introduced to update weights in each neural network layer and reduce loss function.

The organization of the proposed research work is as follows. Section 2 includes the related works for reconstruction and filtering approaches. Section 3 briefly explains the proposed methodology. Section 4 contains the results and discussion of the proposed model. Along with this, the potential applications, challenges, limitations and future directions are also discussed. Section 5 provides the overall conclusion for the proposed work.

2. Related work

Due to the development and originality of deep learning technology, it has been applied in many fields, such as image processing, video processing and data mining. This paper focuses on an optimized DL model to improve image quality and reconstruction from under-sampled images. Some recently developed methods related to CT image reconstruction from under-sampled images are reviewed below:

2.1. Noise and artifact reduction

The noise reduction in an image is generally termed image de-noising, and it is an essential task for image processing to improve image quality. In this filtering stage, noise components in the image will be eliminated without causing any degradation to the original image. The artifact reduction methodologies are widely used to eliminate motion artifacts on diagnostic images.

Chen et al. [24] developed a low-dose CT CNN model for noise reduction. The low-dose CT issues were resolved in three stages, namely patch encoding, non-linear filtering, and reconstruction. In the patch-encoding stage, patches were extracted from the training images with fixed slide sizes. In the second, non-linear filtering stage, the dimensional feature vector was procured from the extracted patch. The third and final stage was reconstruction, in which the overlapped patches are merged and their weights updated.

Du et al. [25] developed a stacked competitive network (SCN) model for noise reduction in low-dose CT. The network model was integrated with successive competitive blocks (CB). A multi-scale processing block was processed for each competitive block. A combination function was designed for each competitive block, and the feature map was generated as input to the next layer. The reduction of noise was estimated by minimizing the loss functions. The network model’s overfitting issues were handled by introducing a regularization term; this regularization or weight-decay term was integrated to diminish the magnitudes of the weights.

Kang et al. [26] developed a DCNN network model with directional wavelets for CT reconstruction. This approach effectively suppresses the noise in CT images. The contourlet transform was introduced for noise removal. The network model was trained by minimizing the loss function with an integrated regularization term. The cost function reduction was accomplished by conventional error backpropagation with mini-batch stochastic gradient descent (SGD) and gradient clipping. The kernel functions were initialized from random Gaussian distributions.

Kumar et al. [27] developed an optimization-based network filter model to remove Gaussian noise from CT images. The authors adopted an evolutionary non-linear adaptive filter approach by utilizing a cat swarm functional link artificial neural network (CS-FLANN) to remove noise. In future, the adapted model could be implemented on real-time biomedical images to eliminate a wide range of noises such as salt and pepper, Rician, and speckle noise.

Wang et al. [28] developed modified smooth patch ordering (MSPO) to remove noise in low-dose CT images. The non-local means (NLM) algorithm replaced the Leclerc robust function with the enhanced bi-square robust function in the patch ordering method. The replacement of the robust function estimates the weight function of each pixel value. The utilization of the total-variation (TV) filter also eradicates the residual noise of the image. The image processing scheme was validated with three stages. Initially, the prewhitening linear filter was applied, and the prewhitened image was attained. The de-noised CT image was emphasized in the second stage via NLM-based SPO. Finally, the TV was utilized to eliminate the residual noise and artifacts.

Kumar et al. [29] proposed a non-linear tensor diffusion-based filtering (NTDF) model for COVID-19 CT images. In the pre-processing stage, the major objective is to minimize noise and remove artifacts and aliasing effects. In an unsharp mask filter, the edge information is obtained from the difference between an image and its smoothed version. Here, the authors intended a novel filtering model for image noise removal.

Dakua et al. [30] proposed a stochastic resonance (SR) based de-noising technique in the maximal overlap discrete wavelet transform (MODWT) domain. The advantage of this de-noising technique is that it can handle samples of any size. The detailed coefficients and smoothness of the technical analysis are related to zero-phase filters. It is also said to be transform invariant and to generate more asymptotically efficient wavelet variance estimators than the traditional DWT technique. Dakua [31] also proposed an SR-based de-noising technique for left ventricle segmentation. Traditional filtering techniques remove the noise from the images, but using them may degrade image quality, for example by smoothing edges and blurring. So, the author constructively exploited the noise to enhance the image contrast. This method of pre-processing can only enhance image contrast; considering contrast alone does not provide an accurate solution and cannot satisfy the needs of further processing. Also, traditional single filtering can distort the features of the image during pre-processing. In the proposed work, two filters are combined to achieve the best pre-processed image for the subsequent processes. The adaptive WF not only removes the noise but also reduces the mean square error between the restored image and the original image, thus conserving the edges and high-frequency regions of the image. The DMF can replace a corrupted pixel with the median value of its k-nearest noise-free pixels, which is not achieved by the traditional WF technique. Therefore, combining these two effective filters provides better pre-processing results than other techniques.

The comparative analysis of filtering techniques and reconstruction methods are shown in Table 1 and Table 2 respectively.

Table 1.

Comparative analysis of filtering methods.

| Author | Filtering method | Purpose | Merits | Demerits | Performance |
| --- | --- | --- | --- | --- | --- |
| Chen et al. [24] | CNN | CT image noise reduction | Highly recommended technique for noise reduction in exterior CT images. | Lack of metal artefact reduction; high training cost. | Chest CT: PSNR 41.68, RMSE 0.008, SSIM 0.97; abdomen CT: PSNR 38.99, RMSE 0.01, SSIM 0.92 |
| Du et al. [25] | SCN-CB | CT image noise reduction | High robustness; flexible outcomes in suppressing noise from the image. | Fails to preserve structural features in the CT image. | PSNR 41.80, RMSE 0.008, SSIM 0.96 |
| Kang et al. [26] | DCNN | CT image noise reduction | Removes all kinds of noise from the image; performs well in removing Rician and speckle noise. | Fails to recover spatial features from the noise-suppressed image; consumes high computational time and space. | PSNR 43.80, RMSE 0.005, SSIM 0.91 |
| Kumar et al. [27] | CS-FLANN | CT image noise reduction | Removes salt and pepper, Rician and speckle noise; prevents gradient insufficiency. | Blurs the crucial edge information in the CT image. | — |
| Wang et al. [28] | MSPO-NLM | CT image noise reduction | Eliminates the need for a separate training set; low computation cost. | Fails to order the smooth patches of the pixel image. | PSNR 45.80, RMSE 0.007, SSIM 0.83 |
| Kumar et al. [29] | NTDF | CT image noise reduction | Does not require any pre-processing stage to filter the noise from the image. | Lack of edge preservation. | JND, DE and AMBE |

Table 2.

Comparative analysis of reconstruction methods.

| Author | Model | Purpose | Merits | Demerits | Performance |
| --- | --- | --- | --- | --- | --- |
| Kida et al. [32] | DCNN | Image reconstruction | Can handle larger training data; preserves the anatomical structure in the CBCT image. | Highly prone to false image prediction. | PSNR 50.4, SSIM 0.96 |
| Wu et al. [33] | ANN | Image reconstruction | Maintains a smoothness constraint and does not destroy complex image features. | Major differences occur between the reference image and the reconstructed image. | SSIM 0.50 |
| Wu et al. [34] | CNN | Image reconstruction | Time and cost are reduced; works well on small-sized images. | The texture of the image is reduced compared to the reference image. | RMSE 26.88, SSIM 0.902 |
| Ye et al. [35] | SUPER-FBP | Image reconstruction | Prevents overfitting problems; preserves the structural details of the image. | Requires larger training data; suffers from the fixed-point iteration problem. | RMSE 13.3, SSIM 0.748, SNR 35 |
| Qui et al. [36] | MWBRW | Image reconstruction | Lack of up-sampling operation to mitigate the overfitting issue. | Fails to detect the difference between high-resolution and low-resolution images. | PSNR 33.53, SSIM 0.96 |
| Tan et al. [37] | SGRAN and VGG-16 | Image reconstruction and classification | Low error; enhanced performance in confirming COVID-positive patients. | Highly affected by texture loss in the reconstructed image. | — |

2.2. Reconstructed CT image quality improvement

Kida et al. [32] developed a deep CNN network model for the quality improvement of CT images. For the defined model, the mean absolute error (MAE) was used as the error function, and the network was trained with the Adam optimizer. Wu et al. [33] developed an artificial neural network (ANN) model for CT image reconstruction, which could improve image reconstruction quality. For feature learning, a K-sparse autoencoder was developed.

Wu et al. [34] developed a deep neural network model for CT image reconstruction that aims to reduce the computational cost and training time of the reconstruction network. To overcome the local-minimum problem caused by greedy learning, a quadratic surrogate with ordered subsets is incorporated with the DCNN network for data fidelity and improved reconstructed image quality.

Ye et al. [35] developed a unified supervised–unsupervised (SUPER) learning model for CT image reconstruction. The developed model filtered back projection (FBP) was integrated with unsupervised learning-based priors and supervised network-based priors. The evaluation measures of SSIM and RMSE are estimated for a reconstructed image.

Qui et al. [36] introduced multi-window back-projection residual networks (MWBRW) for reconstructing COVID-19 CT images. Initially, the multi-window approach was introduced to enhance the feature maps by generating high- and low-frequency data, which are then fused and filtered. Then, a back-projection network with up-projection and down-projection modules was used to extract features from the image. Finally, the multi-window and back-projection networks were combined to process the input image for reconstruction.

Tan et al. [37] defined the SGRAN network model for reconstructing chest CT images and the VGG model for CT image classification. In the first stage, the super-resolution generative adversarial network (SGRAN) model was introduced to reconstruct a super-resolution image from the original CT image. Then, a DL-based VGG-16 network model was introduced to classify the presence or absence of COVID from the reconstructed image.

2.3. Problem formulation

With the growth of medical image processing in recent years, CT image reconstruction has become vital to progress. For efficient disease detection and surgical pre-planning, noise pre-processing is compulsory in signal and image acquisition. Various filtering methods, such as median and Gaussian filters, have been used in studies, but they neglect the grey values of non-noisy pixels, and their edge preservation is poor. Deep learning techniques are a significant method for reconstructing images from under-sampled data. Supervised learning techniques work with paired data to learn deep-neural-network mappings for image reconstruction, but the outcome is of lower quality. For effective backpropagation, intermediate outcomes must be stored during neural network training. However, in deep neural networks, the memory needed to store these intermediate outcomes can be 100 times the size of the input CT images. This process becomes too challenging for CT images due to their high resolution. In addition, the diagnostic accuracy of COVID-19 has greater significance in dealing with the problem of low-quality image data.

As a result, CNN-based supervised techniques normally face the problem of losing high-frequency information on larger-sized images.

Supervised training is implemented in deep learning methods using pairs of high-grade reference images and under-sampled images. In general, images are reconstructed by training the network with the under-sampled pattern. However, deepening the network layers gradually increases the training cost. At the same time, increases in the channel count, filter size, step size and other parameters make the design process complex. Both supervised and unsupervised learning methods can produce high-grade image reconstruction. In unsupervised learning, outcomes depend on the similarity between the training and testing samples. In supervised learning, a huge amount of training data is required, which is unrealistic in medical imaging.

Moreover, overfitting issues arise in the case of a limited training dataset, and it is not easy to reconstruct the CT image efficiently. An enhanced optimization self-attention-based network model is implemented in this paper for image reconstruction to overcome such drawbacks. The proposed self-attention module can upgrade the reconstruction outcome with a good image by modelling long-range dependencies.

3. Proposed methodology

Medical image reconstruction aims to acquire high-quality medical images for clinical usage at minimal cost. Deep learning models play a major role in medical imaging, particularly image reconstruction. Images from different modalities may suffer from low SNR and CNR along with image artifacts. The optimized CT image reconstruction model is proposed in this research to improve image quality with better visual interpretation. The proposed reconstruction model is named EBRSA-bi LSTM. Before reconstructing CT images, the extended cascaded filter (ECF) is introduced to suppress noise and artifacts. In CT, image reconstruction is processed using data obtained in the frequency domain, also termed k-space. The generation of k-space is emphasized by applying the Fourier transform. The low-frequency signals are coordinated at the centre of k-space and carry contrast information, while the high-frequency signals are spaced outside the centre and carry information on spatial resolution or sharpness. In the k-space domain, the centre frequency defines the image contrast and brightness. Then, with the aid of the inverse Fourier transform, the under-sampled k-space data are brought back to the image domain and fed as input to the network model.
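The k-space pipeline described above can be sketched with NumPy. The Cartesian mask pattern, the fraction of retained centre lines, and the image size below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def to_kspace(image):
    """Fourier transform the image and shift the low frequencies to the centre."""
    return np.fft.fftshift(np.fft.fft2(image))

def undersample(kspace, keep_centre=8, keep_every=4):
    """Cartesian under-sampling: keep a band of centre rows (contrast)
    plus every `keep_every`-th outer row (sparse high frequencies)."""
    rows = kspace.shape[0]
    mask = np.zeros(rows, dtype=bool)
    mask[::keep_every] = True
    c = rows // 2
    mask[c - keep_centre // 2 : c + keep_centre // 2] = True  # centre band
    return kspace * mask[:, None], mask

def to_image(kspace):
    """Return to the image domain via the inverse Fourier transform."""
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))

image = np.random.rand(64, 64)    # stand-in for a CT slice
k = to_kspace(image)
k_under, mask = undersample(k)
recon = to_image(k_under)         # zero-filled reconstruction (network input)
```

The zero-filled image `recon` is what would be fed to the reconstruction network; the retained centre band preserves the brightness/contrast information the text attributes to the k-space centre.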

A novel EBRSA-bi LSTM model is proposed for CT image reconstruction. LSTM is an advanced model of recurrent neural networks (RNN); Bi-LSTM combines both forward and backward directions. Then the normalization layer is introduced to estimate the mean and variance of the input image. This layer improves network convergence, reduces training time and eradicates overfitting issues. The integration of the self-attention mechanism in the network model demonstrates its ability to enhance the quality of the reconstruction outcome by capturing long-range dependencies across image regions. To reinforce data consistency in k-space, a data consistency layer is incorporated at the end of each module. The battle royale optimization (BrO) is introduced for loss function reduction in the network model and for the weight updation of each layer in the neural network. In our work, the major objective is to combine the optimization method and deep learning to increase accuracy and improve the balance between exploration and exploitation in optimization. Finally, in the experimental scenario, image quality metrics such as PSNR, SSIM and RMSE are examined with two datasets.
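For reference, two of the three evaluation metrics can be computed with a minimal NumPy sketch; SSIM is omitted here because a faithful implementation needs windowed statistics (e.g. `skimage.metrics.structural_similarity`). The toy images and the `data_range` value are illustrative:

```python
import numpy as np

def rmse(reference, reconstructed):
    """Root mean square error between reference and reconstruction."""
    return np.sqrt(np.mean((reference - reconstructed) ** 2))

def psnr(reference, reconstructed, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, data_range]."""
    return 20 * np.log10(data_range / rmse(reference, reconstructed))

ref = np.zeros((8, 8))
rec = ref + 0.1                    # uniform error of 0.1
print(round(rmse(ref, rec), 3))    # 0.1
print(round(psnr(ref, rec), 1))    # 20.0
```

A uniform error of 0.1 on a unit-range image gives RMSE 0.1 and PSNR 20 dB, which makes the paper's reported values (RMSE ≈ 0.002, PSNR ≈ 45–46 dB) easy to interpret.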

The global architecture of the proposed model is shown in Fig. 1.

Fig. 1. Global architecture of the proposed model.

3.1. Pre-processing

The initial stage of the proposed image reconstruction model is pre-processing. Here, the under-sampled CT images are acquired and pre-processed by the cascaded filtering technique. The new extended cascaded filter (ECF) is proposed to eradicate noise in the under-sampled CT image. The proposed filter is formed by cascading decision-based median filtering (DMF) with the adaptive Wiener filter (WF). This filtering technique eliminates salt and pepper noise in the images while preserving edges. Median filters are highly effective in pre-processing under-sampled images by removing salt and pepper noise. DMF and adaptive WF are cascaded to enhance the visual appearance and quality of the CT image.

Initially, the images are fed to the DMF to eliminate the salt and pepper noise. The DMF uses a sliding window that moves through the image, pixel by pixel, to detect the salt and pepper noise. Once noise is identified at a pixel location, the median value of the window is evaluated and substituted. After this, the image from the DMF is fed to the adaptive WF. The steps followed in the ECF for de-noising the input images are as follows:

  • Step 1: Initially, a two-dimensional 3×3 sliding window is chosen to obtain the image’s pixel values.

  • Step 2: When contemplating all the pixel values of an image, one pixel value (xy) is selected. The selected pixel is compared against the intensity limits to determine the noisy pixels (i.e. if xy equals 0 or 255, the pixel value is considered a noisy pixel). Then the pixel values are sorted and placed in a one-dimensional array.

  • Step 3: If neither condition is satisfied by a pixel value, that value is referred to as noiseless and is left unprocessed.

  • Step 4: The median value of the sliding window is evaluated for the noisy pixel values, and the particular value is used to replace those pixel values.

  • Step 5: The sliding window is moved to the succeeding pixels in the image.

  • Step 6: Steps 2 to 5 are iterated until all the image pixels are processed.

  • Step 7: After eradicating salt and pepper noise, the images are fed to the adaptive WF to remove the blur of an image and preserve the edge information. The expression of WF can be represented as follows:

$$I_{p,q} = \mu + \frac{\sigma^2 - \sigma_g^2}{\sigma^2}\,\left(n_{p,q} - \mu\right) \quad (1)$$

where $(p,q)$ indexes the pixels of the image, $n_{p,q}$ is the original noisy image, $I_{p,q}$ is the de-noised image, $\mu$ is the local mean around each pixel, $\sigma^2$ is the local variance and $\sigma_g^2$ is the global variance.
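The seven steps above can be sketched as follows. The window size and the salt-and-pepper convention (corrupted pixels take the values 0 or 255) follow the text, while the Wiener stage implements Eq. (1) with the global variance estimated as the mean local variance — an illustrative assumption, not the paper's exact filter:

```python
import numpy as np

def decision_median(img, win=3):
    """Decision-based median filter: only pixels equal to 0 or 255
    (salt-and-pepper candidates) are replaced by the window median."""
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.astype(float).copy()
    noisy = (img == 0) | (img == 255)
    for i, j in zip(*np.nonzero(noisy)):
        out[i, j] = np.median(padded[i:i + win, j:j + win])
    return out

def adaptive_wiener(img, win=3):
    """Pixel-wise Wiener filter per Eq. (1): I = mu + (s2 - sg2)/s2 * (n - mu)."""
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    mu = np.zeros_like(img, dtype=float)
    s2 = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + win, j:j + win]
            mu[i, j], s2[i, j] = w.mean(), w.var()
    sg2 = np.mean(s2)                      # global variance estimate (assumed)
    gain = np.clip((s2 - sg2) / np.maximum(s2, 1e-12), 0, 1)
    return mu + gain * (img - mu)

def ecf(img):
    """Extended cascaded filter: DMF stage followed by adaptive WF stage."""
    return adaptive_wiener(decision_median(img))
```

On a constant image with two corrupted pixels, the DMF stage restores both extremes and the Wiener stage leaves the flat region untouched, which matches the cascade's intent of removing impulses without blurring structure.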

3.2. Reconstruction network model

3.2.1. k-space Generation

After pre-processing, the EBRSA-bi LSTM network model is proposed for under-sampled CT image reconstruction. Initially, the k-space is generated by performing the Fourier transform (FT) on the CT image. The major intention of this research is to reconstruct the CT image from the under-sampled data. The CT image is estimated from a discrete set of k-space measurements, which can be expressed as equation (2):

$$d = F_s U + N \quad (2)$$

Here, $d$ specifies the measured k-space data, $U$ represents the image to be reconstructed, $F_s = ME$ specifies the under-sampled Fourier encoding matrix and $N$ denotes additive white Gaussian noise. $M$ resembles the Fourier transform applied to each frame in a sequence, and $E$ specifies the under-sampling mask selecting the lines in k-space to be sampled for each frame. In k-space, high-frequency coefficients are located in the peripheral region, while low-frequency coefficients are assembled in the central region. The central region provides information on the image contrast, and the peripheral region gives structural details such as the image contour; an illustration of k-space generation is shown in Fig. 5. If the change in intensity is small, the region corresponds to low spatial frequency; if the change in intensity is large, the region corresponds to high spatial frequency.
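The measurement model of Eq. (2) can be simulated directly; the sequence size, mask density and noise level below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n_side, frames = 16, 4                          # illustrative sizes

U = rng.random((frames, n_side, n_side))        # image sequence to reconstruct
E = rng.random((frames, n_side, n_side)) < 0.3  # under-sampling mask per frame
sigma = 0.01                                    # assumed noise level

# Eq. (2): d = Fs U + N, with Fs = M E (per-frame Fourier transform M, mask E)
F_U = np.fft.fft2(U, axes=(-2, -1))             # M: FT of each frame
N = sigma * (rng.standard_normal(F_U.shape) + 1j * rng.standard_normal(F_U.shape))
d = E * (F_U + N)                               # keep only the sampled lines
```

Un-sampled entries of `d` are exactly zero; a reconstruction algorithm must fill them in from the prior (PS, sparsity, or a learned model).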

Fig. 5. k-space generation.

The partial separability (PS) and sparsity are used to reconstruct CT images from the under-sampled (k, t)-space data. The combination of PS and sparsity constraints are written using below,

ρ̂ = argmin_{ρ ∈ C^(NM×1)} ‖d − φρ‖₂² + R_r(ρ) + R_s(ρ)   (3)

In this PS-sparse formulation, R_r(·) and R_s(·) denote the penalty functions used to impose the PS and sparsity constraints, respectively, as explained below.

PS constraint: The PS constraint assumes that ρ(x,t) is spatiotemporally partially separable:

ρ(x,t) = Σ_{l=1}^{L} u_l(x) v_l(t)   (4)

Here, L indicates the order of the PS constraint, and {u_l(x)}_{l=1}^{L} and {v_l(t)}_{l=1}^{L} denote the sets of spatial and temporal basis functions, respectively. The PS model is introduced to capture the spatiotemporal correlation often found in dynamic image sequences. If L is too low, the PS model cannot capture the temporal features, although the model-fitting problem remains well-conditioned. If L is too high, the fitting problem becomes ill-conditioned, which can amplify modelling error and measurement noise.
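The low-rank structure implied by Eq. (4) can be illustrated with a truncated SVD of the spatiotemporal (Casorati) matrix C, which directly yields the subspace decomposition of Eq. (6). A toy numpy sketch on synthetic rank-L data:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, L = 50, 30, 3                 # voxels, frames, PS model order

# Build an exactly rank-L spatiotemporal matrix C = sum_l u_l(x) v_l(t),
# mimicking the partial-separability model of Eq. (4).
U_true = rng.standard_normal((N, L))
V_true = rng.standard_normal((L, M))
C = U_true @ V_true

# Recover the spatial/temporal subspaces of Eq. (6) by truncated SVD.
U, s, Vh = np.linalg.svd(C, full_matrices=False)
Us = U[:, :L] * s[:L]               # spatial subspace U_s (N x L)
Vt = Vh[:L, :]                      # temporal subspace V_t (L x M)
C_hat = Us @ Vt                     # rank-L reconstruction of C
```

For an exactly rank-L matrix the singular values beyond index L vanish and the truncated product reproduces C; with noisy or only approximately separable data, the choice of L trades temporal fidelity against conditioning, as discussed above.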

The PS constraint can be enforced in multiple ways. One way is to enforce it “explicitly” through the rank bound L:

R_r(ρ) = 0 if rank(C) ≤ L, ∞ otherwise   (5)

Based on the above equation, C has the following decomposition:

C = U_s V_t   (6)

Here, U_s ∈ C^(N×L) indicates the spatial subspace of C and V_t ∈ C^(L×M) depicts the temporal subspace of C.

Sparsity constraint: The spatiotemporal features in a dynamic image sequence admit an approximately sparse representation. Assume ψ(ρ) is sparse under a particular sparsifying transform ψ, where ψ is application dependent; this research uses the temporal Fourier transform. The l₁ norm is used to enforce the sparsity constraint, with penalty function R_s(ρ) = λ_s ‖ψ(ρ)‖₁.

Joint PS and sparse constraint reconstruction: Based on the above assumptions, the solution to equation (3) can be written as Ĉ = Û_s V_t, where Û_s is formulated as:

Û_s = argmin_{U_s ∈ C^(N×L)} ‖d − Ω(F_s U_s V_t)‖₂² + λ ‖vec(U_s V_f)‖₁   (7)

Here, Ω: C^(N×M) → C^(D×1) indicates the (k,t)-space sampling operator, F_s ∈ C^(N×N) signifies the spatial Fourier matrix, V_f = V_t F_t, and F_t ∈ C^(M×M) denotes the orthogonal temporal Fourier matrix. For λ = 0, equation (7) reduces to the fundamental PS-constrained reconstruction:

Û_s = argmin_{U_s ∈ C^(N×L)} ‖d − Ω(F_s U_s V_t)‖₂²   (8)

The basic (x,f)-domain sparsity-constrained reconstruction (often called basic sparse) is obtained when L = M and rank(V_t) = M:

Ĉ = argmin_{C ∈ C^(N×M)} ‖d − Ω(F_s C)‖₂² + λ ‖vec(C F_t)‖₁   (9)

The PS and sparsity constraints are thus combined in a single formulation. The proposed scheme compensates for the limitation of the sparsity constraint, which serves as an effective regularizer. When (k,t)-space is under-sampled, artefacts such as spatiotemporal blurring may occur, because multiple sparse solutions are consistent with the measured (k,t)-space data. By integrating the PS constraint, the developed scheme can exploit the spatiotemporal correlation in the data. Using this method, an enhanced reconstruction outcome is attained by eliminating the blur effect of purely sparsity-constrained reconstruction. The under-sampled k-space data is then given as input to the self-attention-based network model.
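As a minimal illustration of the sparsity-constrained part of this formulation, the sketch below runs a plain ISTA iteration (gradient step plus ℓ₁ soft-thresholding, the proximal operator of the ℓ₁ penalty) on a 1-D toy problem with random Fourier under-sampling. This is a generic compressed-sensing sketch under arbitrary toy parameters, not the paper's reconstruction pipeline:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1 (also works for complex coefficients)."""
    mag = np.abs(x)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * x, 0.0)

rng = np.random.default_rng(2)
n = 128
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)  # 5-sparse signal

F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary DFT matrix (spatial Fourier)
keep = rng.random(n) < 0.5               # under-sampling operator (~50 % of lines)
A = F[keep]                              # composed sampling + encoding operator
d = A @ x_true                           # measured k-space samples

x = np.zeros(n, dtype=complex)
step, lam = 1.0, 0.01
for _ in range(200):                     # ISTA: gradient step, then shrinkage
    x = soft_threshold(x - step * (A.conj().T @ (A @ x - d)), step * lam)
```

Because the encoding matrix is unitary, its under-sampled rows have spectral norm at most one, so a unit step size is safe; the soft-thresholding step enforces sparsity exactly as the ℓ₁ penalty in Eq. (9) does.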

3.2.2. Reconstruction using optimized self-attention-based network model

The SA-bi LSTM is proposed for CT image reconstruction. In the proposed network model, a data consistency (DC) layer is added after each module to reinforce data consistency in k-space. Finally, the sequence is reconstructed by applying the inverse FT to each frame. The proposed network model comprises a bi-LSTM layer, a normalization layer, a self-attention layer, a data consistency layer and a softmax layer. The proposed reconstruction model is illustrated in Fig. 2.

Fig. 2.

Fig. 2

Proposed reconstruction network model.

3.2.2.1. Bi-LSTM layer

LSTM is an extension of the RNN that solves the gradient vanishing and exploding problems of the RNN model. The LSTM model employs three gates and a cell memory state; the basic components are the input gate, the forget gate and the output gate. The basic LSTM structure is shown in Fig. 3.

Fig. 3.

Fig. 3

Basic structure of LSTM.

The expression of these gates and cell memory state is given below:

F_g = σ(W_F · h_{t−1} + R_F K_d + B_F)   (10)
I_g = σ(W_I · h_{t−1} + R_I K_d + B_I)   (11)
O_g = σ(W_O · h_{t−1} + R_O K_d + B_O)   (12)
C_g = F_g * C_{g−1} + I_g * tanh(W_C · h_{t−1} + R_C K_d + B_C)   (13)

Here, W_F, W_I, W_O, W_C and B_F, B_I, B_O, B_C specify the weight matrices and biases of the forget gate, input gate, output gate and cell memory state, respectively. σ denotes the sigmoid function, and the symbol * defines element-wise multiplication. C_g denotes the memory cell state and h_{t−1} the previous hidden vector. R_F, R_I, R_O and R_C define the correlation coefficient (input weight) matrices, and K_d specifies the k-space data fed to the input of the LSTM layer.

In our proposed model, a Bi-LSTM is employed, which combines the information of forward and backward layers; the information gathered from the two layers is merged to yield a single Bi-LSTM output. At each time step, the forward LSTM layer computes the hidden vector Fh_t from the previous hidden vector Fh_{t−1} and the input k-space data K_d, while the backward LSTM layer computes the hidden vector Bh_t from the opposite-direction previous hidden vector Bh_{t−1} and K_d. Finally, the forward hidden vector Fh_t and backward hidden vector Bh_t are fused into the final hidden vector of the proposed Bi-LSTM model. The backward and forward hidden-layer vectors are denoted Bh_1, Bh_2, …, Bh_n and Fh_1, Fh_2, …, Fh_n, respectively. The final hidden vector of the Bi-LSTM model is given in equation (14).

h_t = [Fh_t, Bh_t]   (14)
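The gate equations (10)-(13) and the bidirectional fusion of Eq. (14) can be sketched directly in numpy; the sizes and the small random weights below are illustrative, not the trained parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
H, D, T = 8, 4, 6                   # hidden size, input size, sequence length

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def make_params():
    """Recurrent weights W, input weights R and biases B for gates F, I, O, C."""
    p = {}
    for g in 'FIOC':
        p['W' + g] = rng.standard_normal((H, H)) * 0.1
        p['R' + g] = rng.standard_normal((H, D)) * 0.1
        p['B' + g] = np.zeros(H)
    return p

def lstm_step(kd, h_prev, c_prev, p):
    """One LSTM step following Eqs. (10)-(13)."""
    Fg = sigmoid(p['WF'] @ h_prev + p['RF'] @ kd + p['BF'])   # forget gate
    Ig = sigmoid(p['WI'] @ h_prev + p['RI'] @ kd + p['BI'])   # input gate
    Og = sigmoid(p['WO'] @ h_prev + p['RO'] @ kd + p['BO'])   # output gate
    Cg = Fg * c_prev + Ig * np.tanh(p['WC'] @ h_prev + p['RC'] @ kd + p['BC'])
    return Og * np.tanh(Cg), Cg

fw, bw = make_params(), make_params()
seq = [rng.standard_normal(D) for _ in range(T)]

# Forward pass over the sequence, backward pass over the reversed sequence.
h, c = np.zeros(H), np.zeros(H)
for kd in seq:
    h, c = lstm_step(kd, h, c, fw)
hb, cb = np.zeros(H), np.zeros(H)
for kd in reversed(seq):
    hb, cb = lstm_step(kd, hb, cb, bw)

ht = np.concatenate([h, hb])        # fused hidden vector of Eq. (14)
```

The fused vector has twice the hidden size, and each entry is bounded in magnitude by one, since it is an output-gate value in (0, 1) multiplied by a tanh of the cell state.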
3.2.2.2. Normalization layer

The Bi-LSTM model uses a normalization layer to speed up network convergence and avoid overfitting. The purpose of the normalization layer is to evaluate the mean and variance over the neurons of the hidden layer. With the batch normalization layer, the training time and the sensitivity of the network decrease. The mean and variance are computed with the equations below:

M = (1/H) Σ_{i=1}^{H} h_t(i)   (15)
V = (1/H) Σ_{i=1}^{H} (h_t(i) − M)²   (16)
3.2.2.3. Self-attention (SA) layer

The attention mechanism was introduced to focus on salient feature information during the training stage. The SA mechanism is an extension of the attention mechanism: it captures the internal correlation of the data and suppresses external information. The self-attention weight of the Bi-LSTM network is represented in equation (17), where the softmax function acts as a normalization operation and the hidden state of the Bi-LSTM is weighted by the SA weight.

W_att = softmax(L₃ tanh(L₂ tanh(L₁ a_{wt})))   (17)

Here, L₁, L₂ and L₃ specify the weight vectors. The weighted attention vector is represented as:

O_v = W_att h_t   (18)

The weighted attention vectors are merged as:

δ = [O_v1, O_v2, O_v3]   (19)
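The scoring-and-weighting pattern of Eqs. (17)-(18) can be sketched as follows; the projection sizes and random weights are illustrative stand-ins for the learned L₁, L₂, L₃:

```python
import numpy as np

rng = np.random.default_rng(4)
T, H = 5, 6                          # time steps, Bi-LSTM hidden size

def softmax(z):
    e = np.exp(z - z.max())          # shift for numerical stability
    return e / e.sum()

ht = rng.standard_normal((T, H))     # hidden states from the Bi-LSTM

# Two tanh projections followed by a scoring vector, mirroring Eq. (17),
# then softmax normalisation to obtain one attention weight per time step.
L1 = rng.standard_normal((H, H))
L2 = rng.standard_normal((H, H))
L3 = rng.standard_normal(H)
scores = np.tanh(np.tanh(ht @ L1.T) @ L2.T) @ L3
W_att = softmax(scores)

# Weighted attention vector of Eq. (18): hidden states weighted by W_att.
Ov = W_att @ ht
```

The softmax guarantees the weights are positive and sum to one, so Ov is a convex combination of the hidden states, emphasising the time steps with the highest internal-correlation scores.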
3.2.2.4. Data consistency layer (DC) and softmax layer

The DC layer is embedded to reconstruct CT images: it is applied at the end of each module to correct the k-space data. The final reconstruction is obtained in the image domain by applying the inverse Fourier transform. The data consistency enforcement is expressed in equation (20).

DC_i = IFT(d(k_i, k_l))   (20)

Here, k_i describes the measured k-space data and k_l the estimated k-space data obtained from the network predictions at the current iteration. In the equation below, I₀ specifies the indicator function. The data consistency enforcement function d is represented in equation (21).

d(k_i, k_l) = I₀(k_i) * k_l + (1 − I₀(k_i)) * k_i   (21)

The final output of the DC layer is given below:

U = IFT( (V_int(k) + λ V(k)) / (1 + λ) )   (22)

Here, λ denotes a positive constant, V_int represents the intermediate prediction of each block and k specifies the k-space data. The softmax layer is the final layer, with the mathematical expression given in equation (23).

Z = softmax(W δ_att + B)   (23)
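The data consistency step of Eqs. (21)-(22) amounts to blending the predicted k-space values with the measured ones at the sampled locations (hard replacement is the large-λ limit), then applying the inverse FT. A minimal numpy sketch with arbitrary toy data:

```python
import numpy as np

rng = np.random.default_rng(5)
shape = (32, 32)
# Toy complex k-space arrays standing in for the measurement and prediction.
measured = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)   # k_i
predicted = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)  # k_l
sampled = rng.random(shape) < 0.3   # indicator I0: True where k-space was measured

lam = 5.0  # positive weight; larger lam trusts the measured samples more

# Eq. (22)-style soft data consistency: at sampled locations blend the
# network prediction with the measurement, elsewhere keep the prediction.
k_dc = np.where(sampled, (predicted + lam * measured) / (1 + lam), predicted)

# Final image-domain output: inverse Fourier transform of the corrected k-space.
recon = np.fft.ifft2(k_dc)
```

At unsampled locations the prediction passes through untouched; at sampled locations the result is pulled a factor λ/(1+λ) of the way toward the measurement, which is what "reinforcing data consistency in k-space" means operationally.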
3.2.2.5. Loss function

A large loss in the proposed network model degrades the reconstruction accuracy; minimizing the loss function therefore improves it. The loss of the proposed network model is defined below:

LF = −Σ_{i=1}^{T} Σ_{k=1}^{L} y_{ik} log(ρ_{ik}) + λ‖θ‖²   (24)

Here, T denotes the size of the training dataset, L specifies the number of data labels, ρ resembles the predicted data, y denotes the actual data, λ‖θ‖² is the regularization term, λ represents the regularization hyper-parameter and θ is the set of parameters of the Bi-LSTM network model. In this research, the battle royale (BR) meta-heuristic strategy is introduced to update the weight parameters and minimize the error (loss) function.
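A direct numerical reading of Eq. (24), taking the conventional negative sign on the cross-entropy term and ‖θ‖² as the squared L2 norm of the parameters (toy values throughout):

```python
import numpy as np

def loss_fn(y, rho, theta, lam=1e-3):
    """Cross-entropy with L2 regularisation in the spirit of Eq. (24):
    LF = -sum_i sum_k y_ik * log(rho_ik) + lam * ||theta||^2."""
    eps = 1e-12                               # numerical guard against log(0)
    ce = -np.sum(y * np.log(rho + eps))
    return ce + lam * np.sum(theta ** 2)

# Toy check: two samples, three labels.
y = np.array([[1, 0, 0],
              [0, 1, 0]], dtype=float)        # one-hot targets
rho = np.array([[0.7, 0.2, 0.1],
                [0.1, 0.8, 0.1]])             # predicted probabilities
theta = np.array([0.5, -0.5])                 # stand-in network parameters
lf = loss_fn(y, rho, theta)
```

Perfect predictions drive the cross-entropy term to zero, leaving only the regularization term, which is the behaviour an optimizer such as BrO exploits when updating the weights.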

3.2.3. Loss function reduction using battle royale optimization (BrO)

The BrO method is inspired by the nature of battle-royale digital games, and this optimization model is adapted to the reconstruction network model to regulate the weight updates and diminish the error function. Minimizing the error function serves as the fitness evaluation (objective function) of the BrO method. While many optimization methods are based on the social behaviour of species, BrO is based on digital games and is integrated into the proposed network model. In this approach, each soldier (player) acts as an individual and migrates towards the best place to survive. BrO maintains the exploitation and exploration stages concurrently. In the exploitation stage, an injured soldier migrates towards the best position to pinpoint the elite players. The mathematical representation is given below:

x_{damage,d} = x_{damage,d} + r (x_{best,d} − x_{damage,d})   (25)

Here, r is a random number in the range 0 to 1. In the exploration stage, when a soldier’s damage level exceeds the threshold value, the soldier enters the death stage and is moved back into the feasible problem space. The exploration stage is expressed as:

x_{damage,d} = r (UB_d − LB_d) + LB_d   (26)

Here, UB_d and LB_d signify the upper and lower bounds of the d-dimensional search space. With optimal weight parameters, the learning process of the neural network is enhanced and the error function is diminished. The pseudocode for the BrO algorithm is shown below.

Pseudo code for BrO algorithm
Start
Initialize the population x_n and initialize all the parameters;
If x_d,dam < threshold
Update the position of the injured soldier by equation (25);
Else
The loser re-spawns in the current safe area using equation (26);
Re-evaluate the fitness function of x_d;
x_d,dam = 0;
End if
End
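The BrO update rules of equations (25) and (26) can be turned into a runnable sketch. The loop below applies the exploitation move to injured soldiers and the re-spawn rule once the damage counter reaches the threshold, here minimizing a toy sphere function rather than the network loss; population size, bounds and iteration count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(6)

def fitness(x):
    """Stand-in objective (sphere function); the paper minimizes the network loss."""
    return float(np.sum(x ** 2))

dim, n_pop = 5, 20
lb, ub = -5.0, 5.0
threshold = 3                                  # damage level that triggers "death"

pop = rng.uniform(lb, ub, (n_pop, dim))
damage = np.zeros(n_pop, dtype=int)
init_best = min(fitness(p) for p in pop)

for _ in range(200):
    fit = np.array([fitness(p) for p in pop])
    best = pop[int(np.argmin(fit))].copy()     # current elite soldier
    for i in range(n_pop):
        if fit[i] > fit.min():                 # this soldier lost the comparison
            damage[i] += 1
            if damage[i] < threshold:          # injured: move toward best, Eq. (25)
                r = rng.random(dim)
                pop[i] = pop[i] + r * (best - pop[i])
            else:                              # dead: re-spawn in space, Eq. (26)
                pop[i] = rng.random(dim) * (ub - lb) + lb
                damage[i] = 0
        else:                                  # winner: damage counter is reset
            damage[i] = 0

best_fit = min(fitness(p) for p in pop)
```

The interpolation step keeps soldiers inside the bounds and concentrates the population around the elite (exploitation), while the re-spawns reinject diversity (exploration); the best fitness is non-increasing because the elite soldier is never moved.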

4. Results and discussions

The performance evaluation and the results obtained for reconstructing the CT image are interpreted in detail in this section. The proposed work is implemented on the Matlab Simulink platform to estimate the performance of the reconstruction model. The under-sampled CT images are collected from the COVID-CT dataset [38], which comprises 349 under-sampled COVID CT images from 216 patients. These samples alone are not sufficient to train the model well, so the larger SARS-CoV-2 CT-scan dataset [39], with 1252 samples, is used to deal with overfitting. The developed model is evaluated in two phases: pre-processing and reconstruction. Reconstruction proceeds from the generated k-space data, to which the Fourier and inverse Fourier transforms are applied to reconstruct the under-sampled CT image. For implementation, the dataset is split into 80 % for training, 10 % for testing and 10 % for validation. Table 3, Table 4 display the proposed work’s hyper-parameter settings and system configuration.

Table 3.

Hyper-parameter setting for the proposed framework.

Sl. No Hyper-parameters Bi-ENN
1. Learning algorithm Battle royale (BR)
2. Initial learning rate 0.01 %
3. Mini batch size 50
4. Max epochs 100
5. No of input units 256
6. Input size 40
7. No of neurons in a hidden layer 125,100
8. No of hidden layers 2
9 No of the output layer 1
10 Dropout 0.5
11 Activation function Sigmoid

Table 4.

System configuration for the proposed work.

Sl. No Parameters Configuration
1. Full device name ssm606.smg.local
2. Processor Intel(R) Core(TM) i5-4670S CPU @ 3.10 GHz
3. Installed RAM 16.00 GB
4. Device ID EDC4E5AD-886D-4C2C-89BF-E55798928F69
5. System type 64-bit operating system
6. Pen and touch No pen or touch input is available for the display

4.1. Image quality metrics

The quality of the reconstructed image is evaluated with image quality metrics, including PSNR, RMSE, SSIM, EPI, SI, JND, DE and AMBE. A performance comparison is carried out with existing state-of-the-art image reconstruction techniques to prove the efficacy of the proposed methodology.

  • a.

    RMSE

MSE is the average squared difference between an image’s original and de-noised pixel values, and RMSE is its square root, as expressed in equation (27).

RMSE = √( (1/N) Σ_{n=1}^{N} (u_n − u_original)² )   (27)

Here, u_n and u_original denote the pixel values of the filtered and original under-sampled CT images, and N denotes the total number of pixels in the reconstructed image.

  • b.

    PSNR

From the MSE/RMSE, the PSNR value is evaluated and expressed in decibels. The mathematical expression of PSNR is defined in equation (28):

PSNR = 10 log₁₀( max(u_n, u_original)² / ((1/N) Σ_{n=1}^{N} (u_n − u_original)²) )   (28)
  • c.

    SSIM

The SSIM incorporates the luminance term (l), the contrast term (c) and the structural term (s) to compute the SSIM value. The SSIM is determined by the equation below:

SSIM(x,y) = L(x,y)^α · C(x,y)^β · S(x,y)^γ   (29)

where,

L(x,y) = (2μ_x μ_y + C₁) / (μ_x² + μ_y² + C₁)   (29.1)
C(x,y) = (2σ_x σ_y + C₂) / (σ_x² + σ_y² + C₂)   (29.2)
S(x,y) = (σ_xy + C₃) / (σ_x σ_y + C₃)   (29.3)

Here, μ_x and μ_y denote the mean intensities of the two images, σ_x and σ_y their standard deviations, and σ_xy their cross-covariance. The constants C₁, C₂ and C₃ are added to avoid instability. The simplified SSIM is then given below:

SSIM(x,y) = ((2μ_x μ_y + C₁)(2σ_xy + C₂)) / ((μ_x² + μ_y² + C₁)(σ_x² + σ_y² + C₂))   (30)
  • d.

    EPI

The EPI quantifies the amount of edge information preserved in the test image. In medical image processing, the edges are essential and carry significant information. The EPI between the reference and test images is given by the equation below:

EPI(μ_n, μ_original) = Σ (Δμ_n − mean(Δμ_n)) (Δμ_original − mean(Δμ_original)) / √( Σ (Δμ_n − mean(Δμ_n))² · Σ (Δμ_original − mean(Δμ_original))² )   (31)
where Δ denotes a high-pass (Laplacian) operator applied to each image.
  • e.

    JND

This metric evaluates a visibility threshold based on the local average brightness and local spatial frequency, and is computed from the difference between the brightness (B) and the threshold (T). The mathematical expression of JND is defined in equation (32):

JND = H(D(ΔB − T)) / D   (32)

Here, D denotes the decision map of the image structure, estimated with a Canny detector, and H represents the Heaviside function.

  • f.

    DE

The discrete entropy depends on overall image contents and delineates the overall quality of an image. The mathematical representation of DE is defined in equation (33):

Entropy = −Σ_k p_k log₂(p_k)   (33)
  • g.

    AMBE

The mathematical expression of AMBE is defined in equation (34):

AMBE = |E(p) − E(q)|   (34)

Here, E(p) and E(q) signify the mean brightness values of the input and processed images:

E(p) = (1/(I_w I_h)) Σ_{i=0}^{I_w−1} Σ_{j=0}^{I_h−1} p(i,j)   (34.1)
E(q) = (1/(I_w I_h)) Σ_{i=0}^{I_w−1} Σ_{j=0}^{I_h−1} q(i,j)   (34.2)
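Several of these metrics can be computed directly. The sketch below is a toy numpy implementation: the SSIM here uses a single global window (rather than the usual local sliding window), AMBE is taken as the absolute difference of mean brightness, and the constants and test images are arbitrary:

```python
import numpy as np

def rmse(u, ref):
    """Root mean square error, Eq. (27)."""
    return np.sqrt(np.mean((u - ref) ** 2))

def psnr(u, ref):
    """Peak signal-to-noise ratio in dB, Eq. (28)."""
    peak = max(u.max(), ref.max())
    return 10.0 * np.log10(peak ** 2 / np.mean((u - ref) ** 2))

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Simplified SSIM of Eq. (30), computed over the whole image at once."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()          # cross-covariance sigma_xy
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2))

def discrete_entropy(img, levels=256):
    """Discrete entropy, Eq. (33), over the grey-level histogram."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                                # convention: 0 * log 0 = 0
    return -np.sum(p * np.log2(p))

def ambe(inp, out):
    """Absolute mean brightness error, Eq. (34)."""
    return abs(inp.mean() - out.mean())

rng = np.random.default_rng(7)
ref = rng.random((64, 64))
noisy = np.clip(ref + 0.05 * rng.standard_normal(ref.shape), 0.0, 1.0)
```

Identical images give zero RMSE and unit SSIM, a lightly corrupted image sits around 26 dB PSNR, and an image with two equally frequent grey levels has exactly one bit of discrete entropy; these limiting cases are useful sanity checks when comparing reconstruction models.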

4.2. Performance evaluation

The performance evaluation of the proposed reconstruction model is presented in this section. The proposed model is compared against existing techniques, namely the artificial neural network (ANN), convolutional recurrent neural network (CRNN), residual encoder-decoder CNN (RED-CNN), self-attention CNN (SA-CNN) and Adaptive VGG, to validate its effectiveness. The simulation of the proposed model involves two phases: pre-processing and image reconstruction.

4.2.1. Pre-processing and k-space data generation

The CT images are collected from a real-time environment and are affected by several types of noise. A new cascaded filtering model is proposed in this research to enhance the quality and visual appearance of the images. In the pre-processing stage, the images are converted to grey-scale for better reliability, and noise and distortions are removed. The resolution of the images may degrade during CT acquisition in a real-time environment, so the reconstruction accuracy declines. Therefore, the ECF filtering model is proposed to eradicate the noise and boost image quality, which helps the proposed model attain better results. The input and filtered images for the two datasets are shown in Fig. 4. The reconstruction network model reconstructs the image from the highly under-sampled k-space data. The experimental evaluation is validated by varying the sampling rate over 20 %, 30 %, 40 %, 50 %, 60 %, 70 %, 80 %, 90 % and 100 %. The k-space generation is shown in Fig. 5.

Fig. 4.

Fig. 4

Input and Filtered image for both datasets.

4.2.2. Image reconstruction

Image reconstruction is an essential stage of the proposed model. In this stage, the under-sampled CT images are reconstructed with the EBRSA-bi LSTM network model: the k-space data generated by applying the Fourier transform is fed to the network model. This section discusses the training, testing and validation loss and accuracy.

  • A

    Training, testing and validation measures for loss and accuracy under dataset I

Our proposed model is analyzed with training, testing and validation data. Under dataset I, the accuracy and validation curves are examined. From the dataset, 80 % of the data is used to train, 10 % to test and 10 % to validate the model. The training, testing and validation accuracy of the proposed reconstruction model for dataset I is illustrated in Fig. 6(a). The accuracies are evaluated for varying epoch counts and are similar in all three cases; only slight variations are observed, all at high accuracy. Accuracy increases with the number of epochs: at epoch 50, the proposed model attains training, testing and validation accuracy in the range of 80 to 85 %; at epoch 250, the model retains accuracy in the 80 to 90 % range; and at epoch 450, the network model reaches 90 to 100 %.

Fig. 6.

Fig. 6

(a) Analysis of training, testing and validation accuracy, (b) Analysis of training, testing and validation loss.

Fig. 6(b) illustrates the training, testing and validation loss curves with varying epoch counts. The Bi-LSTM network model comprises forward and backward layers, so the model is trained in both directions. In addition, the self-attention layer is embedded to enhance the model’s accuracy and reconstruction quality. The training, testing and validation losses are obtained for the proposed network, which is trained for 450 epochs; the loss decreases as the number of epochs increases. At epoch 50, the model has a training, testing and validation loss of 0.4 to 0.5; at epoch 150, 0.3 to 0.4; at epoch 250, 0.2 to 0.5; and at epoch 450, 0.1 to 0.3.

  • B

    Training, testing and validation measures for loss and accuracy under dataset 2

The training, validation and testing accuracy and loss curves for dataset II are illustrated in Fig. 7. Using only 349 samples is not sufficient to push the model to its best performance, so the SARS-CoV-2 CT-scan dataset with 1252 samples is evaluated to deal with overfitting. If the training and validation loss (or accuracy) decreases (or increases) and then stabilizes at a specific point, the model is optimally fit (i.e., it neither overfits nor underfits). Both accuracy and loss are examined for varying epoch counts. Fig. 7a shows that the training and validation accuracy increases and then remains stable over a prolonged interval, indicating an optimal fit. Fig. 7b illustrates the loss curves, which show only slight variation between training and validation loss. From Fig. 7 it is clear that the proposed model achieves maximum accuracy with minimal loss and remains stable over prolonged iterations.

  • C

    Analysis of image quality measures (PSNR, SSIM, RMSE, EPI and SI) by varying sampling rate

Fig. 7.

Fig. 7

(a) Analysis of training, testing and validation accuracy, (b) Analysis of training, testing and validation loss.

The image quality metrics of PSNR, SSIM, RMSE, EPI and SI are evaluated and compared with existing techniques of ANN, CRNN, RED-CNN and SA-CRNN. The image quality metrics highlight the effectiveness of the CT image reconstruction model.

Fig. 8 illustrates the PSNR and SSIM quality metrics for varying sampling rates. PSNR is defined as the ratio between the maximum possible signal value and the noise; it measures the quality between the under-sampled and reconstructed images, and a high PSNR value indicates better quality of the reconstructed CT image. The PSNR of the proposed model is compared with the existing reconstruction techniques ANN, CRNN, RED-CNN and SA-CNN. Fig. 8 shows that the proposed model establishes a higher PSNR than the other methods. ANN is a basic CT reconstruction model, but it degrades reconstruction quality because of its reliance on the k-space data and is not widely used for image analysis; it procures a PSNR of 20 dB and 25 dB for 60 % and 100 % sampling rates. CRNN suffers from low convergence speed and high computation time, reaching a PSNR of 27 dB and 28 dB for 60 % and 100 % sampling rates, respectively. In RED-CNN, the receptive field of the filters in the initial layers is small, so it cannot effectively recover the fully sampled image from the under-sampled k-space data; it procures a PSNR of 30 dB and 32 dB for 60 % and 100 % sampling rates. The SA-CNN model yields a PSNR of 34 dB and 38 dB for 60 % and 100 % sampling rates, respectively.

Fig. 8.

Fig. 8

Analysis of PSNR and SSIM metrics with various sampling rates.

The proposed model achieves PSNR values of 37 dB, 38 dB and 41 dB for 40 %, 60 % and 100 % sampling rates, respectively.

The SSIM is an essential metric of image reconstruction, quantifying the degradation of image quality during reconstruction. The value of SSIM ranges from 0 to 1, with 1 representing the best outcome. The figure shows that the proposed model attains SSIM values of 0.95, 0.96 and 0.97 for sampling rates of 40 %, 80 % and 100 %. In contrast, the existing ANN, CRNN, RED-CNN and SA-CRNN models procure SSIM values of (0.63, 0.75, 0.79), (0.65, 0.79, 0.87), (0.76, 0.87, 0.90) and (0.82, 0.92, 0.95), respectively, for the same sampling rates.

Fig. 9 illustrates the RMSE and EPI metrics against the sampling rate. RMSE measures the error of the proposed reconstruction model; a low RMSE signifies a better reconstruction. RMSE provides statistical information about the short-term performance of the model and is computed from the difference between the estimated and measured values. The figure shows that the proposed model attains RMSE values of 0.03, 0.02 and 0.021 for 40 %, 80 % and 100 % sampling rates. The existing ANN establishes an RMSE of 0.12, 0.05 and 0.04 for the same rates; CRNN yields 0.09, 0.05 and 0.04; RED-CNN yields 0.07, 0.05 and 0.04; and SA-CRNN yields 0.05, 0.04 and 0.03, respectively. The EPI computes the amount of edge information preserved after the pre-processing (noise removal) stage and thus reflects the edge preservation of the under-sampled image, which tends to enhance the quality of the reconstructed image. The EPI of the proposed model is 0.41 at 40 %, 0.69 at 80 % and 0.76 at 100 %. Fig. 10 illustrates the SI image quality metric for varying sampling rates. The sharpness index characterizes the intensity distribution of an image and belongs to the family of image sharpness functions. The existing methods suffer from computational complexity and weight-update issues, whereas the proposed reconstruction model attains better results owing to the self-attention mechanism and the data consistency enforcement. The proposed model yields an SI of 2644 at 40 %, 3431 at 80 % and 3609 at 100 %.

Fig. 9.

Fig. 9

Analysis of RMSE and EPI metrics with various sampling rate.

Fig. 10.

Fig. 10

Analysis of SI with various sampling rates.

Fig. 11 illustrates the input image, the generated k-space data and the reconstructed image at various sampling rates for both datasets. The k-space data is generated from the input under-sampled CT image at each sampling rate. After k-space generation, the reconstruction process is executed with the network model, and the reconstructed images at the various sampling rates are shown in the same figure.

  • D

    Analysis of overall image quality metrics (PSNR, SSIM and RMSE)

Fig. 11.

Fig. 11

Input image, k-space data and reconstructed image with varying sampling rate (a) Dataset 1, (b) Dataset 2.

The overall image quality metrics, namely PSNR, SSIM and RMSE, are computed and compared with the existing techniques. ANN, CRNN, RED-CNN, SA-CNN, RED-CNN-VGG and Adaptive VGG are compared with the proposed reconstruction model to evaluate its effectiveness. The existing methods attain much lower PSNR and SSIM owing to their poorer resolution and higher reconstruction error. Fig. 12, Fig. 13, Fig. 14 illustrate the proposed model’s PSNR, SSIM and RMSE against the other methods. The proposed model reaches PSNR values of 45 and 46 dB on the two datasets, whereas the existing methods procure lower values and longer computation times. The proposed reconstruction model establishes SSIM values of about 0.99 and RMSE values of about 0.002 on the two datasets, making it highly successful for CT image reconstruction. The image quality metrics of the proposed and existing approaches are shown in Table 5, and the averages over a 10-fold cross-validation run are shown in Table 6.

  • E

    Training and validation measures of accuracy and loss by varying patch size

Fig. 12.

Fig. 12

Performance analysis of PSNR.

Fig. 13.

Fig. 13

Performance analysis of SSIM.

Fig. 14.

Fig. 14

Performance analysis of RMSE.

Table 5.

Image quality metrics (PSNR, SSIM and RMSE).

PSNR (Peak-Signal to Noise Ratio)
ANN CRNN RED-CNN SA-CNN RED-CNN - VGG ADAPTIVE - VGG Proposed
Dataset 1 33.8278 39.5959 42.7613 41.7831 43.9774 43.1503 45.152
Dataset 2 34.6358 39.9868 43.589 40.8975 45.879 44.897 46.123



SSIM (Structural Similarity Index metric)
ANN CRNN RED-CNN SA-CNN RED-CNN - VGG ADAPTIVE - VGG Proposed
Dataset 1 0.7415 0.9261 0.9626 0.9639 0.9685 0.968 0.9928
Dataset 2 0.7632 0.9358 0.9789 0.9869 0.9798 0.9832 0.996



RMSE (Root Mean Square Error)
ANN CRNN RED-CNN SA-CNN RED-CNN - VGG ADAPTIVE - VGG Proposed
Dataset 1 0.0209 0.0107 0.0074 0.0083 0.0065 0.0071 0.0026
Dataset 2 0.0232 0.0115 0.0081 0.0092 0.0069 0.0078 0.0022
Table 6.

Ablation study.

Dataset I (mean±std)
Modules PSNR SSIM RMSE
Bi-LSTM 32.542 (± 0.89) 0.7915 (± 0.0236) 0.0205 (± 0.0197)
SA-Bi-LSTM 39.968 (± 1.59) 0.8529 (± 0.0249) 0.0105 (± 0.0185)
SA-Bi-LSTM-DC 43.938 (± 1.65) 0.9721 (± 0.0419) 0.0051 (± 0.0160)
EBRSA-Bi-LSTM-DC 45.152 (± 1.76) 0.9928 (± 0.0548) 0.0026 (± 0.0105)
Dataset II (mean±std)
Bi-LSTM 34.396 (± 0.95) 0.7985 (± 0.0242) 0.0201 (± 0.0192)
SA-Bi-LSTM 40.924 (± 0.99) 0.8796 (± 0.0255) 0.0095 (± 0.0155)
SA-Bi-LSTM-DC 44.528 (± 1.85) 0.9862 (± 0.0431) 0.0035 (± 0.0116)
EBRSA-Bi-LSTM-DC 46.123 (± 1.89) 0.996 (± 0.0553) 0.0022 (± 0.0101)

Fig. 15 illustrates the training and validation accuracy and loss values for different patch sizes. Larger patch sizes require more hardware resources, while smaller patches rapidly increase the number of samples. The accuracy and loss are evaluated for patch sizes of 48 × 48 × 16, 64 × 64 × 32 and 80 × 80 × 48.

Fig. 15.

Fig. 15

Accuracy and loss values for training and validation data with varying patch sizes. (a) Training accuracy, (b) Training loss, (c) Validation accuracy, (d) Validation loss.

Fig. 15a shows the training accuracy with varying patch sizes, Fig. 15b the training loss, and Fig. 15c and 15d the validation accuracy and loss. The training and validation accuracy remains stable over prolonged iterations, reaching 80 to 90 %, while the training and validation loss decreases from 0.6 to 0.2 for all three patch sizes. The proposed reconstruction model therefore procures high accuracy with decreasing loss.

Table 7 summarises the execution time of the proposed model against the existing approaches. The execution time is the time taken to run the CT image reconstruction model: the k-space generation takes place first, and the reconstruction model is combined with the BrO bionic optimization strategy, chosen mainly to lessen the time complexity of the learning process. Compared with the existing models, the execution time of the proposed model is very low, as listed in the table below. Image quality metrics for varying pixel sizes are listed in Table 8, and the input image, filtered image, k-space data and reconstructed image are shown in Fig. 16.

Table 7.

Execution time analysis.

Reconstruction models Execution time (sec)
Proposed 0.085
ANN 1.03
CRNN 1.21
RED-CRNN 1.32
SA-CRNN 1.65
RED-CNN-VGG 1.72
ADAPTIVE VGG 1.80
Table 8.

Image quality analysis by varying resolution size.

Quality metrics 512 X 512 256 X 256 128 X 128 64 X 64 32 X 32
PSNR 49.831 48.693 46.551 48.584 47.927
RMSE 0.003 0.0031 0.0031 0.00315 0.003
SSIM 0.999 0.999 0.9997 0.9997 0.999
EPI 0.772 0.7680 0.7368 0.723 0.745
SI 3965.12 3923.076 3896.073 3756.035 3589.026
Fig. 16.

Fig. 16

Reconstructed results. Input image, filtered image, k-space data and reconstructed image for two datasets (Row1: Dataset1, Row 2: Dataset 2).

4.3. Potential applications

Image reconstruction in CT produces tomographic images from X-ray projection data collected from various angles around the patient, and it fundamentally affects radiation dose and image quality. CT has recently experienced impressive expansion in the medical industry owing to technological advances and new therapeutic applications. Various therapy options currently exist for liver cancer, depending on location, size, shape, and liver function. For extrahepatic disease, clinicians typically adopt feasible alternative treatments such as radiation therapy, chemotherapy, and ablation for hepatocellular carcinoma (HCC) patients. However, it has recently been shown that radiofrequency ablation (RFA) can be suggested as the recommended standard treatment for HCC. It should be mentioned that there are several situations in which RFA is not appropriate, haemorrhage being one of them, and the technical challenges are: (i) a significant increase in lesion recurrence; (ii) an increased risk of tumour seeding and of thermal injury to perihepatic structures; and (iii) complex tumour location, portal hypertension, and patient obesity, which can significantly change the outcomes. Image fusion is necessary to diagnose, treat, and monitor some HCCs, and computer vision and image processing are required to enhance the visualization of lesions. Computer vision is therefore beneficial in three major areas: (1) speed, helping to accomplish work more quickly; (2) accuracy, helping to achieve results with greater accuracy; and (3) urgency, whenever something urgent needs to be addressed. Mentally comparing the ultrasound (US) image with the computed tomography (CT) image is currently the only method for diagnosing liver abnormalities; however, image fusion (combining two distinct imaging modalities) may offer improved visualization of liver lesions [40].
Multimodal registration is crucial in remote sensing, cross-modal learning, and medical imaging. In particular, treatment planning, computer-aided diagnosis, multimodal diagnostics, surgery simulation, radiation therapy, image-guided interventions, assisted/guided surgery, and illness follow-up are some of the clinically significant applications of deformable registration of CT and MRI images [41]. Evaluating treatment success is a vital stage in the curative treatment of liver neoplasms. Pre-ablation planning that includes volumetric assessment using CT/MR imaging can enhance treatment outcomes, and feasible, effective fusion imaging systems provide a quick evaluation of the therapeutic response to thermal ablation in liver neoplasms [42].

With the advancement of medical imaging technology in recent years, the clinical application of CT reconstruction imaging has become increasingly significant in diagnosing tracheal stenosis before anaesthesia. It can serve as a crucial resource for diagnosing and treating disease by offering an intuitive, clear view of the physical relationship between the airway and the surrounding tissues. The airway status should be assessed before surgery to ensure that the operation and the anaesthetic have the desired effects. Owing to its high density resolution and fast imaging speed, CT reconstruction imaging is beneficial in determining the degree of airway stenosis, and reconstruction technology can lessen the effects of overlapping projections and external tissue structures. Ground-glass opacities and soft-tissue mediastinal nodules are typically visible on a chest CT scan. The deep learning-based reconstruction model proposed in this paper achieves successful reconstruction with fast reconstruction times and high PSNR; in particular, soft-tissue mediastinal nodules and ground-glass opacities are clearly evident after reconstruction. As a result, radiologists will find it very useful in making a diagnosis.

4.4. Challenges, limitations and future directions

The proposed network model is highly effective for CT image reconstruction; however, it has some minor limitations. The proposed work already relies on a DL-based reconstruction technique, and computer scientists are still devising strategies to make such reconstruction algorithms interpretable to humans. Even though the proposed DL model can generate a high-quality image, it may do so through flawed reasoning: a particular region in the image may be blurred or removed because it was underrepresented in the training data and was therefore treated as noise. The reconstruction algorithm may also hallucinate non-existent structures in the reconstructed images. This unreliability problem is very difficult to solve in the proposed work, so effective algorithms must be devised in the future to address it. Future imaging systems must embed intelligence into every stage of the imaging pipeline. Such smart imaging systems must learn from big datasets available locally in hospitals and in the cloud, as well as from real-time patient records; these inputs should be optimized to obtain low-dose images and to tune the network models for efficient reconstruction and other analytical applications. In today's clinics, medical imaging is essential for guiding disease diagnosis and therapy. Medical image reconstruction is one of its most essential elements, and its main goal is to obtain high-quality medical images for clinical use at the lowest possible cost and patient risk. Although deep learning-based models now dominate medical imaging, deep modelling still faces several challenges that prevent its widespread adoption in clinical practice.

Most deep learning techniques for image reconstruction were created using supervised learning; however, label acquisition remains a significant challenge, since few labelled datasets are available for building new deep models in medical imaging. Annotating medical images is time-consuming and requires specialized medical expertise. It is therefore important to create efficient learning models that exploit both the (relatively plentiful) unlabelled data and the (very scarce) labelled data. Appropriate inductive biases or domain-specific priors/constraints, such as a learned dictionary, a low-dimensional manifold, or a deep prior, are required to enable successful weakly supervised learning and can be used to enhance image reconstruction. Radiologists do not rely on images alone to make clinical decisions; a doctor's knowledge from years of medical training and additional information from patients are both essential. Deep modelling is therefore crucial for combining data from several sources and enhancing system performance, since collections of patterns from several modalities offer more varied features than individual variables and can improve the learning of DL models. More precise quantification, time-of-flight (TOF) imaging, system modelling, motion correction, and dynamic reconstruction are current difficulties in CT image reconstruction, and advances in these areas may improve the application of CT imaging in patient care, clinical research on pathophysiology, and therapeutic treatment. The proposed network can be further developed for direct 3D CT reconstruction and other imaging modalities and translated into several clinical applications. Several challenges remain, including generalizability, stability, unreliability, and training data; among them, unreliability is the foremost challenge to consider when introducing a novel image reconstruction algorithm in the future.
Validating the proposed method for real-time clinical use is reserved for future work. The method can also be extended to other reconstruction problems, including magnetic resonance imaging and positron emission tomography.

5. Conclusion

Image reconstruction plays a major role in transforming acquired k-space data into images. It is an inverse problem that recovers the original input image under various degradations, such as noise, blurring due to atmospheric turbulence, and other damage. In this research, a novel EBRSA-bi LSTM reconstruction network model is proposed and implemented to attain the best possible image reconstruction quality. The CT image is reconstructed from generated under-sampled k-space data. The network model jointly incorporates self-attention, normalization, and data consistency layers to enhance the reconstructed image quality, and a filtering step is applied before reconstruction to eradicate unwanted noise. The simulation results show that the presented reconstruction model with data consistency attains a feasible solution with better reconstructed images, and that the network can reconstruct the image even at an aggressive under-sampling rate. The suggested model achieves better PSNR, RMSE, SSIM, EPI and SI than existing approaches. Among the challenges of generalizability, stability, reliability, and training data, the proposed work addresses all but reliability, which should be resolved in the near future. Owing to technological advances and novel clinical applications, CT has recently shown remarkable growth in the medical domain, and accurate, fast image reconstruction has found applications in robotics, medicine, city planning, the film industry, gaming, earth observation, reverse engineering, virtual environments, human-computer interaction, and animation.
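The under-sampling and data-consistency ideas summarized above can be illustrated with a minimal NumPy sketch. This is not the proposed EBRSA-bi LSTM network; it is a naive zero-filled baseline with an illustrative data-consistency step (overwrite estimated k-space with measured samples) and a PSNR helper, and all function names are illustrative:

```python
import numpy as np

def undersample(image, keep=0.3, seed=0):
    """Simulate Cartesian k-space under-sampling: randomly drop
    phase-encode rows, always keeping the low-frequency centre,
    which carries most of the image energy."""
    rng = np.random.default_rng(seed)
    k_full = np.fft.fftshift(np.fft.fft2(image))
    rows = image.shape[0]
    mask = rng.random(rows) < keep
    mask[rows // 2 - rows // 16 : rows // 2 + rows // 16] = True
    return k_full * mask[:, None], mask

def zero_filled_recon(k_under):
    """Naive baseline: inverse FFT of the zero-filled k-space."""
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k_under)))

def data_consistency(estimate, k_under, mask):
    """Overwrite the estimate's k-space at sampled rows with the
    measured data, mirroring the role of a data-consistency layer."""
    k_est = np.fft.fftshift(np.fft.fft2(estimate))
    k_est[mask, :] = k_under[mask, :]
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k_est)))

def psnr(ref, img):
    """Peak signal-to-noise ratio in dB against a reference image."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)
```

In a learned pipeline, a network would replace the zero-filled estimate with a refined prediction, and the data-consistency step would be applied after each refinement so the output never contradicts the measured samples.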

Funding

No funding was provided for the preparation of the manuscript.

Data availability statement

Data sharing does not apply to this article.

CRediT authorship contribution statement

A.V.P. Sarvari: Writing – original draft. K. Sridevi: Writing – review & editing.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

Data will be made available on request.

References

1. Alazab M., Awajan A., Mesleh A., Abraham A., Jatana V., Alhyari S. COVID-19 prediction and detection using deep learning. Int. J. Comput. Inf. Syst. Ind. Managem. Appl. 2020;12:168–181.
2. Waller J.V., Allen I.E., Lin K.K., Diaz M.J., Henry T.S., Hope M.D. The limited sensitivity of chest computed tomography relative to reverse transcription polymerase chain reaction for severe acute respiratory syndrome coronavirus-2 infection: a systematic review on COVID-19 diagnostics. Invest. Radiol. 2020;55(12):754–761. doi: 10.1097/RLI.0000000000000700.
3. Mossa-Basha M., Medverd J., Linnau K.F., Lynch J.B., Wener M.H., Kicska G., Staiger T., Sahani D.V. Policies and guidelines for COVID-19 preparedness: experiences from the University of Washington. Radiology. 2020;296(2):E26–E31. doi: 10.1148/radiol.2019201326.
4. Hu Z., Gao J., Zhang N., Yang Y., Liu X., Zheng H., Liang D. An improved statistical iterative algorithm for sparse-view and limited-angle CT image reconstruction. Sci. Rep. 2017;7(1):1–9. doi: 10.1038/s41598-017-11222-z.
5. Wang L., Lin Z.Q., Wong A. Covid-net: a tailored deep convolutional neural network design for detection of covid-19 cases from chest x-ray images. Sci. Rep. 2020;10(1):1–12. doi: 10.1038/s41598-020-76550-z.
6. Zhang H., Zeng D., Zhang H., Wang J., Liang Z., Ma J. Applications of non-local means algorithm in low-dose X-ray CT image processing and reconstruction: a review. Med. Phys. 2017;44(3):1168–1185. doi: 10.1002/mp.12097.
7. Gu P., Jiang C., Ji M., Zhang Q., Ge Y., Liang D., Liu X., Yang Y., Zheng H., Hu Z. Low-dose computed tomography image super-resolution reconstruction via random forests. Sensors (Basel). 2019;19:207. doi: 10.3390/s19010207.
8. Yang Q., Yan P., Zhang Y., Yu H., Shi Y., Mou X., Kalra M.K., Zhang Y., Sun L., Wang G. Low-dose CT image de-noising using a generative adversarial network with Wasserstein distance and perceptual loss. IEEE Trans. Med. Imaging. 2018;37(6):1348–1357. doi: 10.1109/TMI.2018.2827462.
9. Anitha S., Kola L., Sushma P., Archana S. Analysis of filtering and novel technique for noise removal in MRI and CT images. In: 2017 International Conference on Electrical, Electronics, Communication, Computer, and Optimization Techniques (ICEECCOT). IEEE; 2017:1–3.
10. Higaki T., Nakamura Y., Tatsugami F., Nakaura T., Awai K. Improvement of image quality at CT and MRI using deep learning. Jpn. J. Radiol. 2019;37(1):73–80. doi: 10.1007/s11604-018-0796-2.
11. Milickovic N., Baltas D., Giannouli S., Lahanas M., Zamboglou N. CT imaging based digitally reconstructed radiographs and its application in brachytherapy.
12. Kim I., Kang H., Yoon H.J., Chung B.M., Shin N.-Y. Deep learning-based image reconstruction for brain CT: improved image quality compared with adaptive statistical iterative reconstruction-Veo (ASIR-V). Neuroradiology. 2021;63(6):905–912. doi: 10.1007/s00234-020-02574-x.
13. Hahn D., Thibault P., Fehringer A., Bech M., Koehler T., Pfeiffer F., Noël P.B. Statistical iterative reconstruction algorithm for X-ray phase-contrast CT. Sci. Rep. 2015;5(1):1–8. doi: 10.1038/srep10452.
14. Stiller W. Basics of iterative reconstruction methods in computed tomography: a vendor independent overview. Eur. J. Radiol. 2018;109:147–154. doi: 10.1016/j.ejrad.2018.10.025.
15. Tilley S., Jacobson M., Cao Q., Brehler M., Sisniega A., Zbijewski W., Stayman J.W. Penalized-likelihood reconstruction with high-fidelity measurement models for high-resolution cone-beam imaging. IEEE Trans. Med. Imag. 2017;37(4):988–999. doi: 10.1109/TMI.2017.2779406.
16. Kim K., Ye J.C., Worstell W., Ouyang J., Rakvongthai Y., El Fakhri G., Li Q. Sparse-view spectral CT reconstruction using spectral patch-based low-rank penalty. IEEE Trans. Med. Imag. 2015;34:748–760. doi: 10.1109/TMI.2014.2380993.
17. Kim K., El Fakhri G., Li Q. Low-dose CT reconstruction using spatially encoded non-local penalty. Med. Phys. 2017;44. doi: 10.1002/mp.12523.
18. Zibetti M.V.W., Lin C., Herman G.T. Total variation superiorized conjugate gradient method for image reconstruction. Inverse Prob. 2018;34(3).
19. Lohvithee M., Biguri A., Soleimani M. Parameter selection in limited data cone-beam CT reconstruction using edge-preserving total variation algorithms. Phys. Med. Biol. 2017;62:9295–9321. doi: 10.1088/1361-6560/aa93d3.
20. Lohvithee M., Sun W., Chretien S., Soleimani M. Ant colony-based hyperparameter optimization in total variation reconstruction in X-ray computed tomography. Sensors. 2021;21(2):591. doi: 10.3390/s21020591.
21. Ding Q., Nan Y., Gao H., Ji H. Deep learning with adaptive hyper-parameters for low-dose CT image reconstruction. IEEE Trans. Comput. Imaging. 2021;7:648–660.
22. Lee H., Lee J., Kim H., Cho B., Cho S. Deep-neural-network-based sinogram synthesis for sparse-view CT image reconstruction. IEEE Trans. Radiat. Plasma Med. Sci. 2018;3(2):109–119.
23. Gupta H., Jin K.H., Nguyen H.Q., McCann M.T., Unser M. CNN-based projected gradient descent for consistent CT image reconstruction. IEEE Trans. Med. Imag. 2018;37(6):1440–1453. doi: 10.1109/TMI.2018.2832656.
24. Chen H., Zhang Y., Zhang W., Liao P., Li K., Zhou J., Wang G. Low-dose CT via convolutional neural network. Biomed. Opt. Express. 2017;8(2):679–694. doi: 10.1364/BOE.8.000679.
25. Du W., Chen H., Wu Z., Sun H., Liao P., Zhang Y. Stacked competitive networks for noise reduction in low-dose CT. PLoS One. 2017;12(12):e0190069. doi: 10.1371/journal.pone.0190069.
26. Kang E., Min J., Ye J.C. A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction. Med. Phys. 2017;44(10):e360–e375. doi: 10.1002/mp.12344.
27. Kumar M., Mishra S.K., Sahu S.S. Cat swarm optimization based functional link artificial neural network filter for Gaussian noise removal from computed tomography images. Appl. Comput. Intell. Soft Comput. 2016;2016.
28. Wang Y., Shao Y., Zhang Q., Liu Y., Chen Y., Chen W., Gui Z. Noise removal of low-dose CT images using modified smooth patch ordering. IEEE Access. 2017;5:26092–26103.
29. Kumar S.N., Fred A.L., Miriam L.R.J., Padmanabhan P., Gulyas B., Kumar H.A. Non linear tensor diffusion based unsharp masking for filtering of COVID-19 CT images. In: Computational Intelligence Methods in COVID-19: Surveillance, Prevention, Prediction and Diagnosis. Springer; Singapore: 2021:415–436.
30. Dakua S.P., Abinahed J., Zakaria A., Balakrishnan S., Younes G., Navkar N., Al-Ansari A., Zhai X., Bensaali F., Amira A. Moving object tracking in clinical scenarios: application to cardiac surgery and cerebral aneurysm clipping. Int. J. Comput. Assist. Radiol. Surg. 2019;14(12):2165–2176. doi: 10.1007/s11548-019-02030-z.
31. Dakua S.P. LV segmentation using stochastic resonance and evolutionary cellular automata. Int. J. Pattern Recognit. Artif. Intell. 2015;29(03):1557002.
32. Kida S., Nakamoto T., Nakano M., Nawa K., Haga A., Kotoku J., Yamashita H., Nakagawa K. Cone beam computed tomography image quality improvement using a deep convolutional neural network. Cureus. 2018;10(4). doi: 10.7759/cureus.2548.
33. Wu D., Kim K., Fakhri G.E., Li Q. Iterative low-dose CT reconstruction with priors trained by artificial neural network. IEEE Trans. Med. Imag. 2017;36(12):2479–2486. doi: 10.1109/TMI.2017.2753138.
34. Wu D., Kim K., Li Q. Computationally efficient deep neural network for computed tomography image reconstruction. Med. Phys. 2019;46(11):4763–4776. doi: 10.1002/mp.13627.
35. Ye S., Li Z., McCann M.T., Long Y., Ravishankar S. Unified supervised-unsupervised (super) learning for X-ray CT image reconstruction. IEEE Trans. Med. Imag. 2021. doi: 10.1109/TMI.2021.3095310.
36. Qiu D., Cheng Y., Wang X., Zhang X. Multi-window back-projection residual networks for reconstructing COVID-19 CT super-resolution images. Comput. Methods Programs Biomed. 2021;200. doi: 10.1016/j.cmpb.2021.105934.
37. Tan W., Liu P., Li X., Liu Y., Zhou Q., Chen C., Gong Z., Yin X., Zhang Y. Classification of COVID-19 pneumonia from chest CT images based on reconstructed super-resolution images and VGG neural network. Health Inf. Sci. Syst. 2021;9(1):1–12. doi: 10.1007/s13755-021-00140-0.
38. Yang X., He X., Zhao J., Zhang Y., Zhang S., Xie P. COVID-CT-dataset: a CT scan dataset about COVID-19. arXiv preprint arXiv:2003.13865. 2020.
39. Soares E., Angelov P., Biaso S., Froes M.H., Abe D.K. SARS-CoV-2 CT-scan dataset: a large dataset of real patients CT scans for SARS-CoV-2 identification. MedRxiv. 2020.
40. Dakua S.P., Nayak A. A review on treatments of hepatocellular carcinoma: role of radio wave ablation and possible improvements. Egyptian Liver Journal. 2022;12(1):1–10.
41. Mohanty S., Dakua S.P. Toward computing cross-modality symmetric non-rigid medical image registration. IEEE Access. 2022;10:24528–24539.
42. Rai P., Dakua S., Abinahed J., Balakrishnan S. Feasibility and efficacy of fusion imaging systems for immediate post ablation assessment of liver neoplasms: protocol for a rapid systematic review. Int. J. Surg. Protocols. 2021;25(1):209. doi: 10.29337/ijsp.162.



Articles from Biomedical Signal Processing and Control are provided here courtesy of Elsevier
