Abstract
It is well known that the Tseng algorithm and its modifications have been successfully employed in approximating zeros of the sum of monotone operators. In this study, we restored computerized tomography (CT) images of various thoracic diseases, degraded with a known blur function and additive noise, using a modified Tseng algorithm. The test images used in the study depict Calcification of the Aorta, Subcutaneous Emphysema, Tortuous Aorta, Pneumomediastinum, and Pneumoperitoneum. Additionally, we employed well-known image restoration tools to enhance image quality and compared the quality of the restored images with the originals. Finally, the study demonstrates the potential to advance the solution of monotone inclusion problems, particularly in the field of medical image recovery.
Introduction
Medical imaging is essential in the diagnosis and treatment of diseases, as it provides direct guidance to medical personnel in curing diseases, and over the past decade, advancements in technology have brought about faster, more accurate, and less invasive medical devices [1]. Mathematical models in medical image restoration are a fundamental component of medical imaging, aiming to acquire high-quality images for clinical use while minimizing costs and risks to patients [2]. Data-driven models are highly flexible for extracting valuable information from massive data sets, yet they often lack theoretical foundations [3]. Biomedical computing relies on mathematical models, and image data are fundamental in experimental, clinical, biomedical, and behavioural research [4]. Medical imaging problems, such as Magnetic Resonance Imaging (MRI) reconstruction, can be modelled as inverse problems. A modern and widely applied methodological approach is based on the assumption that most real-life images have a low-dimensional nature; this method is highly effective and has proven successful [5]. The analysis of medical datasets through image processing techniques is a critical aspect of modern medical research, and the development of algorithms for partial or fully automatic analysis is essential in this context [6]. Image enhancement is a vital and complex method within image processing technology; its fundamental goal is to improve the visual quality of an image or to present a more refined representation of it.
Image restoration remains a critical area within medical image processing. It focuses on removing or reducing degradations that may occur during the acquisition process, and the ability to restore a medical image is essential for facilitating more accurate diagnosis and treatment [7]. Various types of medical images, including Computerized Tomography (CT) scans, Magnetic Resonance Imaging (MRI) scans, X-ray images, microscopic images, and ultrasound images, are prone to additive noise and blurring during acquisition [8–10]. Sources of image blurring may include optical distortions, motion during imaging, or atmospheric turbulence. Degradation of medical images can occur during transmission and acquisition, significantly impacting the analysis and processing of these images. Medical images with low spatial resolution and additive noise can lead to the misclassification of tumours or foreign objects, thus potentially compromising diagnosis and treatment outcomes [11]. Therefore, there is a need to explore additional evolutionary algorithms for addressing a wide range of medical imaging challenges [12]. The modified Tseng algorithm considered in this study focuses on removing image degradation that might occur during acquisition. Established methods, such as deep learning and total variation (TV)-based regularization, effectively use gradient information to solve sparsity-constrained problems and tend to improve the image quality of low-dose reconstructions [13]. The key limitation of TV-based models lies in their tendency to recover images with sparse gradients; this can be beneficial for certain types of images but often produces an undesirable visual effect known as the “staircase” artefact. Acquired CT images are affected by ionizing radiation, which produces mottle noise that degrades the images [14]; such images can be restored using recent algorithms such as the Tseng algorithm. Multi-modality medical image fusion is another crucial technique in medical image processing; it is utilized extensively for diagnostic purposes with the aid of a co-occurrence filter and local extrema in a non-subsampled shearlet transform domain [15]. This approach integrates features from various imaging modalities, such as CT and MRI, to create a new, composite medical image and enhance the information that clinicians can derive from the images [16].
Thoracic diseases (TD) pose significant health challenges, impacting a considerable number of individuals. Chest X-rays and computed tomography scans, widely utilized as diagnostic methods, play a crucial role in healthcare [17]. TD encompass a range of serious illnesses and health conditions, many of which exhibit a high prevalence. One illustrative example is pneumonia, which annually afflicts millions of individuals globally; in the United States alone, approximately 50,000 people succumb to pneumonia each year [18]. The chest X-ray (CXR) stands out as a widely used and cost-effective diagnostic instrument for identifying chest and thoracic diseases. Deciphering chest X-rays demands substantial expertise and careful visual scrutiny, and despite radiologists undergoing extensive clinical training and professional guidance, errors can still occur due to the intricate nature of diverse lung lesions and the subtle textural differences present in the images [19]. Precise CXR interpretation necessitates expert knowledge and medical experience. There have been extensive endeavours to automatically detect thoracic diseases using CXR data; however, increasing image volumes and subtle texture changes can lead to errors, even by experienced radiologists [20]. Statistical learning methods, such as support-vector networks [21, 22], Bayesian classifiers [23, 24], and k-nearest neighbour algorithms [19], are not adept at directly handling high-dimensional pixel-level features in medical images. The patterns of diseases found in chest X-rays are numerous, and their occurrence follows a long-tailed (LT) distribution [25]. Therefore, it is essential to have an efficient and reliable mathematical algorithm that can restore the images so that different types of diseases can be detected more easily.
Image restoration can take various forms, such as image denoising [21], deblurring [26], inpainting [27, 28], dehazing [29], and de-raining [30]. Numerical simulation is often the best option for solving problems related to medical image analysis, since factors such as speckle noise can affect the interpretation of the acquired image; complex mathematical models based on suitable algorithms are therefore needed to study medical images. In this study, computerized tomography (CT) images of various thoracic diseases, degraded with a known blur function and additive noise, were restored using a modified Tseng algorithm. Furthermore, we utilized well-known image restoration tools to enhance image quality and compared the quality of the restored images with the original images.
Methodology
The CT scan images of thoracic disease patients were collected from the Cancer Imaging Archive (https://www.cancerimagingarchive.net); they represent various disease conditions, and ethical approval is not applicable [26].
Mathematical models used for image restoration problems are often formulated using Eq (1), and illustrated in Fig 1:
y = Dx + η,    (1)
where y is the observed image, D is the degradation function, x is the original image and η is noise.
Fig 1. Image degradation.
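For illustration, the degradation model (1) can be simulated in MATLAB along the following lines; the image file name, the blur kernel, and the noise level below are illustrative choices rather than the exact settings of this study.

```matlab
% Minimal sketch of the degradation model (1): y = Dx + eta.
% Requires the Image Processing Toolbox; 'ct_slice.png' is a placeholder file name.
x   = im2double(imread('ct_slice.png'));      % original image x
P   = fspecial('gaussian', 9, 2);             % illustrative point-spread function defining D
Dx  = imfilter(x, P, 'conv', 'circular');     % blurred image D*x
eta = sqrt(0.001) * randn(size(x));           % zero-mean additive Gaussian noise
y   = Dx + eta;                               % observed degraded image y
```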
The objective of this study is to restore a degraded image as illustrated in Fig 1 using mathematical algorithms. Since the solution to (1) need not be unique, the problem is ill-posed. To restore well-posedness, regularization techniques are employed. The l1 regularization method is known to be a powerful technique for image denoising and deblurring problems. The formulation is given by
min_{x ∈ H} { (1/2)‖Dx − y‖₂² + μ‖x‖₁ },    (2)
where μ > 0 is the regularization parameter. By a standard reformulation, solutions of the minimization problem (2) coincide with solutions of the inclusion problem:
find x ∈ H such that 0 ∈ ∇f(x) + ∂g(x),    (3)
where H is a real Hilbert space, ∇f is the gradient of f and ∂g is the subdifferential of g, with
f(x) = (1/2)‖Dx − y‖₂²  and  g(x) = μ‖x‖₁.
Then ∇f(x) = Dᵀ(Dx − y), which is Lipschitz continuous, and ∂g is maximal monotone.
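For intuition, the two ingredients of inclusion (3) can be written down directly. The sketch below is an assumption-laden MATLAB illustration in which D is circular convolution with a point-spread function P (so its adjoint Dᵀ is correlation with P) and the resolvent (I + λ∂g)⁻¹ reduces to elementwise soft-thresholding.

```matlab
% Ingredients of inclusion (3), assuming D is circular convolution with a PSF P
% and g(x) = mu*||x||_1, so (I + t*dg)^{-1} is soft-thresholding at level t = lambda*mu.
gradf = @(x, P, y) imfilter(imfilter(x, P, 'conv', 'circular') - y, ...
                            P, 'corr', 'circular');   % grad f(x) = D'(Dx - y)
proxg = @(v, t) sign(v) .* max(abs(v) - t, 0);        % (I + t*dg)^{-1}(v), elementwise soft-thresholding
```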
In the literature, several algorithms for approximating zeros of the sum of two monotone operators have been used to solve the inclusion problem (3) (see, e.g., [31–40] and the references therein). We propose a modification of the popular Tseng algorithm for approximating solutions of problem (3). Our proposed method is the following:
Algorithm 1
Step 1. Given x_0 = Dx + η, λ = 0.001, μ = 0.3, set k = 1.
Step 2. Compute w_k and x_{k+1}:
w_k = (I + λ∂g)⁻¹(x_k − λ∇f(x_k)),
x_{k+1} = w_k − λ(∇f(w_k) − ∇f(x_k)),    (4)
where I is the identity mapping.
Step 3. Set k ← k + 1 and go to Step 2.
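The following MATLAB sketch shows one way to read iteration (4) under the stated choices λ = 0.001 and μ = 0.3, reusing gradf and proxg from the previous sketch; the fixed iteration budget is an assumption, since no stopping rule is given in the text.

```matlab
% Sketch of Algorithm 1 (a Tseng-type forward-backward-forward iteration).
% gradf, proxg, the PSF P and the degraded image y are as in the sketches above;
% maxit is an assumed iteration budget.
lambda = 0.001;  mu = 0.3;  maxit = 500;
xk = y;                                           % Step 1: x_0 is the observed image Dx + eta
for k = 1:maxit                                   % Steps 2-3
    gk = gradf(xk, P, y);                         % forward step at x_k
    wk = proxg(xk - lambda * gk, lambda * mu);    % backward (resolvent) step
    xk = wk - lambda * (gradf(wk, P, y) - gk);    % Tseng correction step
end
restored = xk;
```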
Convergence analysis
Theorem 2. The sequence {x_k} generated by our proposed method (Algorithm 1) converges to a solution of problem (2).
Proof. Since ∇f is monotone and Lipschitz continuous and ∂g is maximal monotone, the proposed method can be viewed as a special case of the famous Tseng algorithm [36]. Hence, the convergence analysis follows by an argument similar to that given in [36].
Experimental results and discussion
In this section, we use the proposed method (Algorithm 1) in the restoration process of CT scan images obtained from thoracic disease patients [26]. We label the images as follows: Images 1, 2, 3, 4 and 5 represent Calcification of the Aorta, Subcutaneous Emphysema, Tortuous Aorta, Pneumomediastinum and Pneumoperitoneum, respectively. We study the behaviour and properties of the restored images when they are degraded using MATLAB's built-in motion blur function (P = fspecial('motion', 30, 60)), with added Gaussian noise (GN) and Poisson noise (PN) with scaling factors σ = 0.001 and 0.05, respectively. The results of the simulations are presented in Figs 2–6 below.
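Before turning to the figures, the degradation step just described can be sketched roughly as follows; how the Poisson scaling factor of 0.05 enters is not specified in the text, so the conversion used below is an assumption.

```matlab
% Sketch of the experimental degradation: motion blur plus Gaussian or Poisson noise.
P  = fspecial('motion', 30, 60);                  % motion blur of length 30 at angle 60
b  = imfilter(x, P, 'conv', 'circular');          % blurred test image
yG = imnoise(b, 'gaussian', 0, 0.001);            % Gaussian noise with variance 0.001
yP = im2double(imnoise(im2uint8(b), 'poisson'));  % Poisson noise; the 0.05 scaling is handled
                                                  % here only implicitly via the uint8 conversion
```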
Fig 2. Restoration process via Algorithm 1.
(a) Analysis using Image 1 with Gaussian noise, (b) analysis using Image 1 with Poisson noise.
Fig 3. Restoration process via Algorithm 1.
(a) Analysis using Image 2 with Gaussian noise, (b) analysis using Image 2 with Poisson noise.
Fig 4. Restoration process via Algorithm 1.
(a) Analysis using Image 3 with Gaussian noise, (b) analysis using Image 3 with Poisson noise.
Fig 5. Restoration process via Algorithm 1.
(a) Analysis using Image 4 with Gaussian noise, (b) analysis using Image 4 with Poisson noise.
Fig 6. Restoration process via Algorithm 1.
(a) Analysis using Image 5 with Gaussian noise, (b) analysis using Image 5 with Poisson noise.
Discussion
Our algorithm was applied to restore computerized tomography (CT) images depicting various thoracic diseases, which had been intentionally degraded with known blur and additive noise. The implementation of our algorithm led to enhanced restoration performance for the degraded images. The proposed model is effective specifically for deblurring and denoising operations, ultimately yielding accurate restoration results.
Looking at the images in Figs 2–6, one can see that the proposed Algorithm 1 restored the test images effectively. To validate this claim quantitatively, we use three metrics for measuring the quality of the restored images: the structural similarity index measure (SSIM), the improvement in signal-to-noise ratio (ISNR), and the signal-to-noise ratio (SNR). These metrics are expressed, respectively, as follows:
SSIM(x, y) = ((2μ_x μ_y + c₁)(2σ_xy + c₂)) / ((μ_x² + μ_y² + c₁)(σ_x² + σ_y² + c₂)),    (5)
where x and y represent the original and restored images, μx and μy denote the mean values of x and y, σx and σy are the standard deviations of x and y, σxy is the covariance between x and y, and c1 and c2 are small constants introduced to prevent division by zero.
ISNR_n = 10 log₁₀(‖x − y‖² / ‖x − x_n‖²),    SNR_n = 10 log₁₀(‖x‖² / ‖x − x_n‖²),    (6)
where x, y, and xn denote the original, observed, and estimated images at iteration n, respectively.
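As a sketch of how these metrics might be computed in MATLAB: the built-in ssim function implements a structural-similarity index in the spirit of (5) (with its own default constants), while ISNR and SNR follow the expressions in (6); the variable names below are assumptions.

```matlab
% Quality metrics for a restored image (Image Processing Toolbox assumed).
% x = original image, y = observed degraded image, xr = restored estimate x_n.
ssimVal = ssim(xr, x);                                              % SSIM, cf. (5)
isnrVal = 10 * log10(norm(x(:) - y(:))^2 / norm(x(:) - xr(:))^2);   % ISNR, cf. (6)
snrVal  = 10 * log10(norm(x(:))^2 / norm(x(:) - xr(:))^2);          % SNR, cf. (6)
```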
The SSIM value ranges from 0 to 1, with 1 denoting perfect recovery, while higher values of SNR and ISNR indicate superior restoration. The performance of our proposed Algorithm 1 under these metrics is detailed in Table 1 below.
Table 1. SSIM, ISNR and SNR for the test images.
| Test image | SSIM (GN) | ISNR (GN) | SNR (GN) | Time (GN) | SSIM (PN) | ISNR (PN) | SNR (PN) | Time (PN) |
|---|---|---|---|---|---|---|---|---|
| Image 1 | 0.89 | 7.67 | 61.19 | 24.36 | 0.89 | 4.30 | 53.40 | 26.31 |
| Image 2 | 0.92 | 4.72 | 59.96 | 26.21 | 0.92 | 2.36 | 52.93 | 24.45 |
| Image 3 | 0.94 | 7.05 | 66.48 | 25.36 | 0.95 | 2.92 | 55.73 | 24.02 |
| Image 4 | 0.93 | 7.26 | 68.58 | 27.89 | 0.94 | 2.72 | 56.32 | 25.97 |
| Image 5 | 0.91 | 6.92 | 62.01 | 26.28 | 0.91 | 3.66 | 53.94 | 25.70 |
Remark 1. The metrics of the restored images presented in Table 1 show that the proposed Algorithm 1 restored the test images with high quality.
Conclusion
Overall, this study successfully restored computerized tomography (CT) images of various thoracic diseases that had been degraded with known blur and additive noise. The restoration was achieved through the implementation of a mathematical algorithm, specifically a modified Tseng algorithm. Furthermore, we employed established image restoration tools to enhance the quality of the images and conducted a comprehensive comparison between the restored images and the original ones. This approach not only showcases the efficacy of the applied algorithm but also underscores the importance of combining mathematical models with established tools in the field of medical image restoration.
Data Availability
All data is included in the manuscript.
Funding Statement
The author(s) received no specific funding for this work.
References
- 1. Wang Y, Liu T. Quantitative susceptibility mapping (QSM): decoding MRI data for a tissue magnetic biomarker. Magn Reson Med. 2015;73(1):82–101. doi: 10.1002/mrm.25358
- 2. Alvarez L, Guichard F, Lions PL, Morel J-M. Axiomes et équations fondamentales du traitement d'images. C R Acad Sci Paris. 1992;315:135–138. MR1197224 (94d:47066).
- 3. Chabat F, Hansell DM, Yang G-Z. Computerized decision support in medical imaging. IEEE Engineering in Medicine and Biology Magazine. 2000;19(5):89–96. doi: 10.1109/51.870235
- 4. Yan K, Wang X, Lu L, Summers RM. DeepLesion: automated mining of large-scale lesion annotations and universal lesion detection with deep learning. Journal of Medical Imaging. 2018. doi: 10.1117/1.JMI.5.3.036501
- 5. Lustig M, Donoho D, Pauly JM. Sparse MRI: the application of compressed sensing for rapid MR imaging. Magn Reson Med. 2007;58:1182–1195. doi: 10.1002/mrm.21391
- 6. Chambolle A, Lions P. Image recovery via total variation minimization and related problems. Numer Math. 1997;76:167–188. doi: 10.1007/s002110050258
- 7. Sheta AF. Restoration of medical images using genetic algorithms. In: 2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), Washington, DC, USA; 2017. pp. 1–8.
- 8. Ahn S, Park J, Chong J. Blurring image quality assessment method based on histogram of gradient. In: Proceedings of the 19th Brazilian Symposium on Multimedia and the Web; 2013. pp. 181–184.
- 9. Kavaz D, Abubakar AL, Rizaner N, Umar H. Biosynthesized ZnO Nanoparticles Using Albizia lebbeck Extract Induced Biochemical and Morphological Alterations in Wistar Rats. Molecules. 2021;26:3864. doi: 10.3390/molecules26133864
- 10. Yuasa T, Takeda T, Zeniya T, Hasegawa Y, Hyodo K, Hiranaka Y, et al. Improvement of image quality in transmission computed tomography using synchrotron monochromatic X-ray sheet beam. In: 2001 Conference Proceedings of the 23rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 3; 2001. pp. 2367–2370.
- 11. Umar H. Morphological Changes Caused by Synthesized Zinc Oxide Nanoparticles in MDA-MB 231 Cells and Prediction with Multi-Linear Regression. Tropical Journal of Natural Product Research. 2023;7(12):5616–5622.
- 12. Dong G, Bayford R, Liu H, Zhou Y, Yan W. EIT images with improved spatial resolution using a realistic head model. In: 2006 International Conference of the IEEE Engineering in Medicine and Biology Society; 2006. pp. 1134–1137.
- 13. Chen H, Li Q, Zhou L, Li F. Deep learning-based algorithms for low-dose CT imaging: A review. European Journal of Radiology. 2024;111355.
- 14. Diwakar M, Singh P, Karetla GR, Narooka P, Yadav A, Maurya RK, et al. Low-dose COVID-19 CT image denoising using batch normalization and convolution neural network. Electronics. 2022;11(20):3375. doi: 10.3390/electronics11203375
- 15. Diwakar M, Singh P, Shankar A. Multi-modal medical image fusion framework using co-occurrence filter and local extrema in NSST domain. Biomedical Signal Processing and Control. 2021;68:102788. doi: 10.1016/j.bspc.2021.102788
- 16. Wang L, Li B, Tian LF. Multi-modal medical image fusion using the inter-scale and intra-scale dependencies between image shift-invariant shearlet coefficients. Information Fusion. 2014;19:20–28. doi: 10.1016/j.inffus.2013.04.005
- 17. Mao C, Pan Y, Zeng Z, Yao L, Luo Y. Deep Generative Classifiers for Thoracic Disease Diagnosis with Chest X-ray Images. Proceedings (IEEE Int Conf Bioinformatics Biomed). 2018;2018:1209–1214.
- 18. Murphy K, van Ginneken B, Schilham AM, De Hoop B, Gietema H, Prokop M. A large-scale evaluation of automatic pulmonary nodule detection in chest CT using local image features and k-nearest-neighbour classification. Medical Image Analysis. 2009;13(5):757–770. doi: 10.1016/j.media.2009.07.001
- 19. Bar Y, Diamant I, Wolf L, Lieberman S, Konen E, Greenspan H. Chest pathology detection using deep learning with non-medical training. In: ISBI; 2015. pp. 294–297.
- 20. Cortes C, Vapnik V. Support-vector networks. Machine Learning. 1995;20(3):273–297. doi: 10.1007/BF00994018
- 21. Chang C-C, Lin C-J. LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST). 2011;2(3):27.
- 22. Domingos P, Pazzani M. Beyond independence: conditions for the optimality of the simple Bayesian classifier. In: Proceedings of the 13th International Conference on Machine Learning; 1996. pp. 105–112.
- 23. Wang X-Z, He Y-L, Wang DD. Non-naive Bayesian classifiers for classification problems with continuous attributes. IEEE Transactions on Cybernetics. 2014;44(1):21–39. doi: 10.1109/TCYB.2013.2245891
- 24. Larose DT, Larose CD. k-nearest neighbor algorithm. In: Discovering Knowledge in Data: An Introduction to Data Mining, 2nd ed.; 2006. pp. 149–164.
- 25. Prasath VBS. Quantum noise removal in X-ray images with adaptive total variation regularization. Informatica. 2017;28(3):505–515. doi: 10.15388/Informatica.2017.141
- 26. Zhang R, E H, Yuan L, He J, Zhang H, Zhang S, et al. MBNM: multi-branch network based on memory features for long-tailed medical image recognition. Comput Methods Programs Biomed. 2021;212:106448.
- 27. Mamaev NV, Yurin DV, Krylov AS. Finding the parameters of a nonlinear diffusion denoising method by ridge analysis. Comput Math Model. 2018;29:334–343. doi: 10.1007/s10598-018-9413-6
- 28. Pang Z-F, Zhang H-L, Luo S, Zeng T. Image denoising based on the adaptive weighted TVp regularization. Signal Process. 2020;167:107325. doi: 10.1016/j.sigpro.2019.107325
- 29. Abbass M, Kim H, Abdelwahab S, Haggag S, El-Rabaie E, Dessouky M, et al. Image deconvolution using homomorphic technique. Signal Image Video Process. 2019;13(4):703–709. doi: 10.1007/s11760-018-1399-1
- 30. Liu L, Pang Z-F, Duan Y. Retinex based on exponent-type total variation scheme. Inverse Probl Imaging. 2018;12(5):1199–1217. doi: 10.3934/ipi.2018050
- 31. Grigoras R, Ciocoiu IB. Comparative analysis of deraining algorithms. In: International Symposium on Signals, Circuits and Systems, Romania; 2017.
- 32. Rudin L, Osher S, Fatemi E. Nonlinear total variation based noise removal algorithms. Physica D. 1992;60:259–268. doi: 10.1016/0167-2789(92)90242-F
- 33. Adamu A, Kitkuan D, Kumam P, Padcharoen A, Seangwattana T. Approximation method for monotone inclusion problems in real Banach spaces with applications. J Inequal Appl. 2022;2022(1):70, 1–20. doi: 10.1186/s13660-022-02805-0
- 34. Chidume CE, Adamu A, Kumam P, Kitkuan D. Generalized hybrid viscosity-type forward-backward splitting method with application to convex minimization and image restoration problems. Numer Funct Anal Optim. 2021;42:1586–1607. doi: 10.1080/01630563.2021.1933525
- 35. Lions PL, Mercier B. Splitting algorithms for the sum of two nonlinear operators. SIAM J Numer Anal. 1979;16(6):964–979. doi: 10.1137/0716071
- 36. Tseng P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J Control Optim. 2000;38(2):431–446. doi: 10.1137/S0363012998338806
- 37. Muangchoo K, Adamu A, Ibrahim AH, Abubakar AB. An inertial Halpern-type algorithm involving monotone operators on real Banach spaces with application to image recovery problems. Computational and Applied Mathematics. 2022;41(8):364. doi: 10.1007/s40314-022-02064-1
- 38. Dechboon P, Adamu A, Kumam P. A generalized Halpern-type forward-backward splitting algorithm for solving variational inclusion problems. AIMS Mathematics. 2023;8(5):11037–11056. doi: 10.3934/math.2023559
- 39. Adamu A, Kumam P, Kitkuan D, Padcharoen A. Relaxed modified Tseng algorithm for solving variational inclusion problems in real Banach spaces with applications. Carpathian Journal of Mathematics. 2023;39(1):1–26.
- 40. Wang ZB, Sunthrayuth P, Adamu A, Cholamjiak P. Modified accelerated Bregman projection methods for solving quasi-monotone variational inequalities. Optimization. 2023;1–35. doi: 10.1080/02331934.2023.2230994