Abstract
Background:
Image enhancement, including image de-noising, super-resolution, registration, reconstruction, in-painting, and so on, is an important issue in different research areas. Most methods exploited for image analysis have been based on matrix or low order analysis. However, recent research shows the superior power of tensor-based methods for image enhancement.
Method:
In this article, a new method for image super-resolution using Tensor Ring decomposition is proposed. The proposed technique has been derived for the super-resolution of low resolution, noisy images. The new approach is based on a modification and extension of previous tensor-based super-resolution approaches. In this method, a weighted combination of the original image and the resulting image of the previous stage is computed and used as a new input to the algorithm.
Result:
This enables the method to perform super-resolution and de-noising simultaneously.
Conclusion:
Simulation results show the effectiveness of the proposed approach, especially in highly noisy situations.
Keywords: Image enhancement, super-resolution, rank increment, tensor ring decomposition
Introduction
Image enhancement is an important issue in different image processing areas.[1,2,3] Images acquired from different devices usually do not have the desired quality, and enhancement is an essential step for better exploiting them. This becomes more important when working with biomedical images, where accurate diagnosis must be made from the images.[4,5,6,7]
Super-resolution of low resolution images for deriving high resolution ones is also an important issue in the image processing area. It has been studied for different types of images, such as natural, hyperspectral, or biomedical images. Among these, super-resolution of biomedical images has received particular attention due to its special applications and has been studied in several papers.[8,9,10,11,12] Deep learning methods have been widely used for image super-resolution.[12,13,14,15,16] However, these methods usually need large volumes of training data to achieve the desired performance, which are not always available. Statistical modeling has also been exploited for image super-resolution.[17,18] Recently, tensor-based approaches have also been proposed for the super-resolution of biomedical, hyperspectral, and natural images and have shown high performance.[8,19,20,21,22,23,24]
In general, tensors are higher order arrays used for representing higher order datasets, such as RGB images, hyperspectral datasets, biomedical datasets, and so on.[25,26] Data recorded in a tensor can be analyzed in different ways. A common approach is to decompose a tensor into smaller matrices or lower order core tensors, called tensor decomposition.[25,26] There are several tensor decomposition approaches. The well-known ones are the CANDECOMP/PARAFAC (CP)[27,28,29,30] and Tucker[31,32,33] decompositions; an illustration of each is shown in Figure 1.
Figure 1.

CANDECOMP/PARAFAC (first row) and Tucker (second row) decompositions of a third order tensor
CP and Tucker decompositions have been widely used for different applications; however, algorithms for CP decomposition are usually unstable, and Tucker decomposition suffers from the curse of dimensionality.[25] To overcome these problems, other tensor decompositions, called tensor networks, have been proposed.[25] In tensor networks, the number of elements resulting from the tensor decomposition increases linearly with the tensor order.[25] This property is highly important when working with higher order datasets.
Tensor Train (TT) and Tensor Ring (TR) decompositions are two members of tensor networks. In TT, an I1 × I2 × … × IN tensor decomposes into a series of third order core tensors interconnected to each other. The core tensors are of size Rn-1 × In × Rn, where Rn is the n-th TT rank and R0 = RN = 1, so the first and the last core tensors are two matrices.[34] TR decomposition can be considered as a generalized version of TT decomposition in which an I1 × I2 × … × IN tensor decomposes into a series of third order core tensors interconnected in a loop. The n-th core tensor, i.e., G(n), is of size Rn-1 × In × Rn, where Rn is the n-th TR rank, but with R0 = RN possibly greater than 1.[35] TT and TR decompositions are denoted element-wise as

X(i1, i2, …, iN) = tr(G(1)(:, i1, :) G(2)(:, i2, :) … G(N)(:, iN, :)),

where X is an N-th order tensor to be decomposed and G(n)(:, in, :) is the in-th lateral slice of G(n); TT is recovered when R0 = RN = 1. An example of TR (TT) decomposition is illustrated in Figure 2.
Figure 2.

Tensor ring decomposition of a 4-th order tensor. For tensor train decomposition, R0 = R4 = 1
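To make the TR format concrete, the following NumPy sketch evaluates a single entry of a tensor from a set of TR cores using the trace formula above. It only illustrates the format (it is not a decomposition algorithm), the core shapes are arbitrary example values, and setting R0 = RN = 1 recovers the TT case.

```python
import numpy as np

def tr_entry(cores, index):
    """Evaluate one tensor entry from TR cores.

    cores[n] has shape (R_{n-1}, I_n, R_n); the rank chain closes into a
    loop, i.e., R_0 == R_N (R_0 = R_N = 1 gives the TT special case).
    """
    prod = np.eye(cores[0].shape[0])        # R_0 x R_0 identity
    for n, i_n in enumerate(index):
        prod = prod @ cores[n][:, i_n, :]   # multiply by the i_n-th lateral slice
    return np.trace(prod)                   # close the ring with a trace

# Example: random TR cores of a 4th order tensor with ranks (3, 2, 4, 2, 3)
shapes = [(3, 5, 2), (2, 6, 4), (4, 5, 2), (2, 6, 3)]
cores = [np.random.randn(*s) for s in shapes]
print(tr_entry(cores, (0, 1, 2, 3)))
```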
TT and TR decompositions are more effective for analyzing higher order datasets. For this reason, and also to better exploit the correlations of image pixels, the original low order image is usually reformatted into a higher order tensor using different methods. A common approach is Hankelization, in which a raw dataset is reformatted into a matrix with Hankel structure. Different Hankelization methods are the multi-way delay-embedding transform (MDT),[24] patch Hankelization,[36] and overlapped patch Hankelization.[37] Accordingly, several TT or TR based methods Hankelize the input datasets before applying TT or TR decompositions.[36,37] By applying Hankelization with overlapped patches, an I1 × I2 image is transformed into a 6th order tensor of size P × P × T1 × D1 × T2 × D2, where P is the patch size, the Ti's are window sizes, the Di's are determined by the image size, patch size, and overlap, and O is the overlap between patches.[37]
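As a rough illustration of overlapped patch Hankelization, the sketch below cuts an image into overlapping P × P patches with stride P − O and then folds the patch grid into two additional pairs of modes, giving a 6th order tensor of size P × P × T1 × D1 × T2 × D2. The way the patch grid is factorized into (Ti, Di) pairs here, and the index ordering, are assumptions made for the example and may differ from the exact construction of.[37]

```python
import numpy as np

def patch_hankelize(img, P, O, T1, T2):
    """Assumed variant of overlapped patch Hankelization (illustration only)."""
    stride = P - O
    I1, I2 = img.shape
    N1 = (I1 - P) // stride + 1          # number of patches along the first dimension
    N2 = (I2 - P) // stride + 1          # number of patches along the second dimension
    assert N1 % T1 == 0 and N2 % T2 == 0, "window sizes must divide the patch grid"
    D1, D2 = N1 // T1, N2 // T2

    patches = np.empty((P, P, N1, N2))
    for a in range(N1):
        for b in range(N2):
            patches[:, :, a, b] = img[a * stride:a * stride + P,
                                      b * stride:b * stride + P]
    return patches.reshape(P, P, T1, D1, T2, D2)

# Example: an image size chosen so that the patch grid factors evenly
X = patch_hankelize(np.random.rand(145, 181), P=2, O=1, T1=12, T2=12)
print(X.shape)   # (2, 2, 12, 12, 12, 15)
```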
Determining proper ranks (i.e., the Ri's) is an important issue when working with TT or TR decompositions. The ranks determine the size of the core tensors and can highly affect the performance of the decomposition. Several papers use predetermined fixed ranks for TT/TR decomposition. Others use incremental approaches in which the ranks are increased during the iterations. However, selecting very low ranks results in losing small details, while increasing the ranks too much results in a degraded output, especially in noisy cases. Therefore, when the original image is noisy, the TR ranks are usually limited to a maximum value to prevent the noise from appearing in the final image.
In this paper, we have proposed a new method for super-resolution of noisy images which allows the ranks to be increased to higher values without highly degrading the final image. To allow the ranks to be increased further, in this new method, for the available noisy pixels, a weighted combination of the original input and the output of the previous stage is used as the input for the next stage, while the ranks are also increased. The approach can be considered as a generalized version of the method in,[38] which was previously proposed for super-resolution of noisy images. This results in a more accurate super-resolution method compared to the other existing methods when the input is noisy (details will be discussed in Section III).
The remainder of this paper is organized as follows: Notations and preliminaries are reviewed in Section II. The proposed super-resolution algorithm is presented in Section III. Simulation results are presented in Section IV, and finally Section V concludes the paper.
Notations and Preliminaries
Notations in this paper are basically the same as in.[25] Tensors and matrices are denoted by underlined bold capital (X) and bold capital letters (X), respectively. Unfolding of a tensor of size I1 × I2 × … × IN into a matrix of size In × I1 I2 … In-1 In+1 … IN is called mode-n matricization and is denoted by X(n). Mode-{n} canonical unfolding of an I1 × I2 × … × IN tensor, denoted by X[n], results in an I1 I2 … In × In+1 … IN matrix. The Hadamard or element-wise product of two tensors or two matrices of the same size is denoted by ⊛. The Frobenius norm and trace of a matrix are denoted by ‖.‖F and tr(.), respectively.
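The following NumPy sketch illustrates the two unfoldings and the element-wise product on a random 4th order tensor. The function names are ours, and the exact column ordering of the unfoldings can differ between conventions.

```python
import numpy as np

def mode_n_unfold(X, n):
    """Mode-n matricization: mode n indexes the rows, the remaining modes the columns."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def canonical_unfold(X, n):
    """Mode-{n} canonical unfolding: the first n modes index the rows."""
    rows = int(np.prod(X.shape[:n]))
    return X.reshape(rows, -1)

X = np.random.rand(3, 4, 5, 6)
print(mode_n_unfold(X, 2).shape)     # (5, 72)
print(canonical_unfold(X, 2).shape)  # (12, 30)
print((X * X).shape)                 # Hadamard product is plain element-wise multiplication
```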
Proposed Super-resolution Approach
In this section, a new approach for super-resolution of noisy low resolution images using TR decomposition has been proposed.
Classic TR (Tucker)-based completion algorithms (super-resolution can be considered as a special case of tensor completion) can be summarized as follows:[24,38]
(Patch) Hankelize the incomplete low resolution input image which results in X
Compute the TR (Tucker) decomposition of input X with rank vector r = [R0, R1,…, RN] which results in X̂
Update the input as X ← Ω ⊛ X + (1 − Ω) ⊛ X̂
Increase the rank vector as r = r + inc, where inc can be any integer value
Repeat the procedure until the desired accuracy or the maximum rank value (Rmax) is achieved
De-Hankelize the output.
Note that Ω is a binary mask tensor with the same size as X whose entries are 1 for the observed and 0 for the missing elements of the input incomplete image.
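As a minimal sketch, assuming the tensors are stored as NumPy arrays, the update step of the classic procedure is a single masked blend of the input and the current estimate:

```python
import numpy as np

def completion_update(X_obs, X_hat, Omega):
    """Keep the observed entries, fill the missing ones from the current estimate.

    X_obs : Hankelized incomplete input (values where Omega == 0 are ignored)
    X_hat : reconstruction obtained from the current TR (Tucker) decomposition
    Omega : binary mask tensor, 1 for observed and 0 for missing entries
    """
    return Omega * X_obs + (1 - Omega) * X_hat
```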
In many cases, the input low resolution image is noisy. This can affect the quality of the final result, especially in highly noisy cases. Pre- or post-de-noising can increase the simulation time and also results in removing small details. In this paper, we propose to modify the previous de-noising and super-resolution methods and derive a new algorithm for super-resolution of noisy images as follows:
Patch Hankelize the incomplete low resolution noisy image with overlapped patches which results in a 6-th order tensor X
Compute the TR decomposition of the input X with rank vector r, which results in X̂
Update the input as X ← Ω ⊛ (α X0 + (1 − α) X̂) + (1 − Ω) ⊛ X̂, where X0 denotes the Hankelized original noisy input
Increase the rank vector as r = r + inc
Repeat the procedure until the desired accuracy or the maximum rank value (Rmax) is achieved
De-Hankelize the output.
In the above procedure, α is a nonnegative constant less than or equal to 1. Clearly, setting α = 1 reduces the update rule to the previous format.
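A sketch of the whole proposed iteration is given below. The TR fitting routine is passed in as a placeholder (tr_decompose), since its implementation is not spelled out here; patch Hankelization and de-Hankelization are assumed to happen outside this function, and all variable names are ours.

```python
import numpy as np

def proposed_iteration(Y, Omega, tr_decompose, alpha=0.5, r0=2, inc=1, R_max=8):
    """Sketch of the proposed rank-incremental update (tr_decompose is a placeholder).

    Y     : patch-Hankelized low resolution noisy input (6th order tensor)
    Omega : binary mask, 1 for observed (noisy) pixels, 0 for missing ones
    alpha : weight of the original noisy observations (alpha = 1 gives the classic rule)
    """
    X = Y.copy()
    X_hat = X
    r = r0
    while r <= R_max:
        X_hat = tr_decompose(X, rank=r)   # fit a TR model of rank r to the current input
        # observed pixels: blend the original noisy data with the current estimate;
        # missing pixels: take the estimate directly
        X = Omega * (alpha * Y + (1 - alpha) * X_hat) + (1 - Omega) * X_hat
        r += inc                          # increase the TR ranks for the next pass
    return X_hat                          # de-Hankelize this output to obtain the image
```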
The proposed algorithm allows the TR ranks to be increased to higher values without decreasing the quality of the final result (see the simulations). This is a very important point, since in classic approaches, when the input image is noisy, the ranks (TR or Tucker ranks) cannot be increased much. This is due to the fact that increasing the ranks allows the noise to appear in the final image. To prevent this, the maximum rank has to be limited, which in turn results in losing small details in the output. However, the proposed approach enables us to increase the ranks to larger values without highly decreasing the output quality.
The value of α can affect the quality of the output: selecting a smaller α allows the ranks to be increased to higher values but can also increase the simulation time, while larger values of α can result in a noisy output.
The performance of the algorithm will be evaluated in the next section.
Simulation Results
In this section, the validity of the proposed approach is investigated. For testing the algorithm, fundus fluorescein angiogram photographs of diabetic patients (https://misp.mui.ac.ir/en/fundus-fluorescein-angiogram-photographs-diabetic-patients-0)[39] have been used. The dataset contains 70 images of size 576 × 720, with 30 normal and 40 abnormal cases.
For evaluating the quality of the proposed super-resolution algorithm, the proposed approach has been compared with several approaches, namely MDT,[24] High Accuracy Low Rank Tensor Completion (HaLRTC),[40] Fast Super-Resolution Reconstruction Algorithm (FSRRA),[41] Tensor Train Weighted Optimization (TT-WOPT),[42] and Depth Super-Resolution (DSR).[43] For the tensor-based approaches, i.e., HaLRTC and TT-WOPT, the inputs were first patch Hankelized with overlapped patches, similar to the proposed algorithm, and then given to the algorithms (except for MDT, which has its own Hankelization technique). Recall that super-resolution can be considered as a special case of the tensor completion problem. For the proposed approach, the patch size has been set to P = 2 with overlap O = 1 and α = 0.5. For MDT, the window size has been set to [2,2]. Low resolution noisy images have been derived by down-sampling the original images to the size 144 × 180 and adding Gaussian noise with variance σ. The Peak Signal to Noise Ratio (PSNR) and Structural Similarity (SSIM) of each image have also been reported. The results of super-resolution of the inputs with rate 2 and for different noise variances are illustrated in Tables 1-3. The PSNR and SSIM results show that the proposed algorithm has the ability to preserve the details as much as possible while decreasing the noise level of the final image.
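For reference, a possible way to generate the degraded inputs and compute the reported metrics with scikit-image is sketched below. The interpolation used for down-sampling, the clipping of noisy values, and the treatment of σ as a variance are assumptions, since only the target size and the noise variance are stated above.

```python
import numpy as np
from skimage.transform import resize
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def degrade(img, size=(144, 180), sigma=0.06, seed=0):
    """Down-sample and add Gaussian noise with variance sigma (image values in [0, 1])."""
    rng = np.random.default_rng(seed)
    low = resize(img, size, anti_aliasing=True)
    return np.clip(low + rng.normal(0.0, np.sqrt(sigma), size), 0.0, 1.0)

def report(reference, estimate):
    """PSNR / SSIM of a super-resolved image against the full resolution reference."""
    psnr = peak_signal_noise_ratio(reference, estimate, data_range=1.0)
    ssim = structural_similarity(reference, estimate, data_range=1.0)
    return psnr, ssim
```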
Table 1.
Results of the algorithms for the super-resolution of low resolution noisy image with σ=0.02
| | Original image | Low resolution noisy image | HaLRTC[40] | FSRRA[41] |
|---|---|---|---|---|
| (PSNR, SSIM) | | | (16.5307, 0.1519) | (31.2794, 0.7767) |

| | TT-WOPT[42] | DSR[43] | MDT[24] | Proposed |
|---|---|---|---|---|
| (PSNR, SSIM) | (29.0375, 0.7055) | (31.3445, 0.7280) | (31.1476, 0.7830) | (31.5719, 0.8305) |
The super-resolution rate is 2
Table 3.
Results of the algorithms for super-resolution of low resolution noisy image with σ=0.06
| | Original image | Low resolution noisy image | HaLRTC[40] | FSRRA[41] |
|---|---|---|---|---|
| (PSNR, SSIM) | | | (16.2845, 0.1366) | (25.6078, 0.4010) |

| | TT-WOPT[42] | DSR[43] | MDT[24] | Proposed |
|---|---|---|---|---|
| (PSNR, SSIM) | (25.7094, 0.4358) | (23.8312, 0.3023) | (25.9578, 0.4181) | (29.0146, 0.6905) |
The super-resolution rate is 2
Table 2.
Results of the algorithms for the super-resolution of low resolution noisy image with σ=0.04
| | Original image | Low resolution noisy image | HaLRTC[40] | FSRRA[41] |
|---|---|---|---|---|
| (PSNR, SSIM) | | | (16.4356, 0.1463) | (28.3062, 0.5697) |

| | TT-WOPT[42] | DSR[43] | MDT[24] | Proposed |
|---|---|---|---|---|
| (PSNR, SSIM) | (27.4350, 0.5691) | (27.0173, 0.4706) | (29.1530, 0.6522) | (30.2326, 0.7546) |
The super-resolution rate is 2
The six algorithms have been applied for the super-resolution of the first 20 cases of the dataset for σ = 0.06, and the averaged PSNR and SSIM for each algorithm are reported in Table 4. As the results show, the proposed approach has higher performance in comparison to the other algorithms. The computed P values were less than 0.05 (P < 0.05), which shows that there is a statistically significant difference between the proposed approach and the other methods.
Table 4.
Averaged PSNRs and SSIMs of the algorithms for the super-resolution of the first 20 cases of the dataset with σ=0.06 and rate 2
| | HaLRTC[40] | FSRRA[41] | TT-WOPT[42] | DSR[43] | MDT[24] | Proposed |
|---|---|---|---|---|---|---|
| PSNR | 16.3586±1.0410 | 25.1877±0.4279 | 25.1763±1.1106 | 23.6635±0.1801 | 25.5649±2.0908 | 27.7732±1.3619 |
| SSIM | 0.1643±0.0132 | 0.4376±0.0239 | 0.4181±0.0217 | 0.3624±0.0387 | 0.4733±0.0746 | 0.6175±0.044 |
The P value between the proposed approach and each of the other methods is <0.05 (P<0.05), indicating a statistically significant difference between the proposed approach and the other methods
For a better understanding of the effect of α on the performance of the algorithm, the results of the previous simulations (with α = 0.5) have been compared with the situation when α = 1. Setting α = 1 changes the super-resolution procedure to the approach of.[37] The comparison has been done for two levels of noise variance, and the results are shown in Table 5. For the second and fourth columns, the ranks and the results with the highest PSNRs have been reported. The proposed approach has been compared with the situation when α = 1 and the maximum rank is set to the value giving the best result for α = 1, and also with the situation when α = 1 and the maximum rank is set the same as the rank for α = 0.5. The results show that the proposed algorithm achieved its best performance at higher ranks in comparison to the situation when α = 1. In addition, setting α = 0.5 improves the quality of the output. This higher performance is more pronounced for higher noise levels. It can also be inferred that increasing the ranks to higher values for α = 1 decreases the performance of the algorithm and the output image becomes noisy, while the proposed approach is more robust to the rank increments.
Table 5.
Comparison of the proposed algorithm with α=0.5 and α=1
| | Low resolution noisy image with σ=0.04 | Proposed approach with α=1 (Rmax=7) | Proposed approach with α=1 (Rmax=10) | Proposed approach with α=0.5 (Rmax=10) |
|---|---|---|---|---|
| (PSNR, SSIM) | | (30.1066, 0.7390) | (29.9515, 0.7110) | (30.2326, 0.7546) |

| | Low resolution noisy image with σ=0.06 | Proposed approach with α=1 (Rmax=5) | Proposed approach with α=1 (Rmax=8) | Proposed approach with α=0.5 (Rmax=8) |
|---|---|---|---|---|
| (PSNR, SSIM) | | (28.7621, 0.6789) | (28.3049, 0.6081) | (29.0146, 0.6905) |
The PSNRs and SSIMs of the algorithm during the rank increments for α = 1 and α = 0.5 and σ = 0.06 are illustrated in Figures 3 and 4. As expected, for both cases, the PSNR increases up to some rank value and then decreases for higher ranks. This is due to the presence of noise, which degrades the output for higher ranks. However, the maximum PSNR for α = 1 is achieved at smaller ranks in comparison to α = 0.5. In addition, the maximum value of PSNR for α = 1 is less than the maximum value of PSNR for α = 0.5. This shows the effectiveness of the proposed algorithm, which allows the ranks to be increased to higher values without decreasing the quality of the output. The resulting PSNRs and SSIMs of the algorithm for the super-resolution of the noisy image with σ = 0.06 and for more values of α are shown in Table 6. The results show that, by decreasing α to some extent, the performance of the algorithm increases. However, further reduction of α can highly increase the computational burden without much performance gain. This shows that, depending on the situation, α should be selected neither too small nor too large.
Figure 3.

PSNRs of the outputs during the rank increments for a low resolution noisy input with σ = 0.06 and for α = 1 and α = 0.5
Figure 4.

SSIMs of the outputs during the rank increments for a low resolution noisy input with σ = 0.06 and for α = 1 and α = 0.5
Table 6.
PSNRs and SSIMs of the super-resolution of the noisy image with σ=0.06 for different values of α
| α=1 (Rmax=5) | α=0.75 (Rmax=6) | α=0.5 (Rmax=8) | α=0.25 (Rmax=13) |
|---|---|---|---|
| (28.7621, 0.6789) | (28.9470, 0.6875) | (29.0146, 0.6905) | (28.8881, 0.6880) |
For a more accurate comparison among the algorithms, a vessel segmentation algorithm (https://github.com/farkoo/Retinal-Vessel-Segmentation/tree/master) has been applied to the output of each algorithm. For the initial low resolution noisy image, σ was set equal to 0.06 and the super-resolution rate was 2. The results are presented in Table 7, in which the image resulting from each algorithm is shown together with its corresponding extracted vessels. As the results show, retinal vessels can be extracted with high performance from the image resulting from the proposed approach, while the vessels extracted from the other images do not have sufficient quality.
Table 7.
Applying vessel segmentation algorithm to the resulting output of each algorithm
Super-resolution rate is 2
Finally, the algorithm has been tested for super-resolution of images with rate 3. The results, in addition to PSNRs and SSIMs, are presented in Table 8. For the proposed approach, α has been set equal to 0.5 and the patch size was 2 with overlap 1. For MDT, the window size was set to [4,4]. As the results show, the proposed approach is also effective for super-resolution with higher rates.
Table 8.
Super-resolution of low resolution noisy images with rate 3 and σ=0.06
Conclusion
In this article, a new approach for super-resolution of low resolution noisy images has been proposed. The proposed approach has been derived by introducing a new update rule for the input image at each iteration, based on a binary mask tensor and a combination weight. This allows the TR ranks to be increased to higher values without decreasing the quality of the final image. Simulation results and comparisons with existing algorithms confirmed the performance of the proposed approach.
Financial support and sponsorship
This work has been supported by Isfahan University of Medical Sciences (Grant number: 2400134).
Conflicts of interest
There are no conflicts of interest.
References
- 1.Singh G, Mittal A. Various image enhancement techniques-a critical review. Int J Innov Sci Res. 2014;10:267–74. [Google Scholar]
- 2.Qi Y, Yang Z, Sun W, Lou M, Lian J, Zhao W, et al. A comprehensive overview of image enhancement techniques. Arch Comput Methods Eng. 2022;29:583–607. [Google Scholar]
- 3.Janani P, Premaladha J, Ravichandran KS. Image enhancement techniques: A study. Indian J Sci Technol. 2015;8:1–2. [Google Scholar]
- 4.Sternberg SR. Biomedical image processing. Computer. 1983;16:22–34. [Google Scholar]
- 5.Dhawan AP. A review on biomedical image processing and future trends. Comput Methods Programs Biomed. 1990;31:141–83. doi: 10.1016/0169-2607(90)90001-p. [DOI] [PubMed] [Google Scholar]
- 6.Haque IR, Neubert J. Deep learning approaches to biomedical image segmentation. Inform Med Unlocked. 2020;18:100297. [Google Scholar]
- 7.Seo H, Badiei Khuzani M, Vasudevan V, Huang C, Ren H, Xiao R, et al. Machine learning techniques for biomedical image segmentation: An overview of technical aspects and introduction to state-of-art applications. Med Phys. 2020;47:e148–67. doi: 10.1002/mp.13649. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Daneshmand PG, Mehridehnavi A, Rabbani H. Reconstruction of optical coherence tomography images using mixed low rank approximation and second order tensor based total variation method. IEEE Trans Med Imaging. 2021;40:865–78. doi: 10.1109/TMI.2020.3040270. [DOI] [PubMed] [Google Scholar]
- 9.Li Y, Sixou B, Peyrin F. A review of the deep learning methods for medical images super resolution problems. IRBM. 2021;42:120–33. [Google Scholar]
- 10.Sun N, Jia Y, Bai S, Li Q, Dai L, Li J. The power of super-resolution microscopy in modern biomedical science. Adv Colloid Interface Sci. 2023;314:102880. doi: 10.1016/j.cis.2023.102880. [DOI] [PubMed] [Google Scholar]
- 11.Wang L, Zhu H, He Z, Jia Y, Du J. Adjacent slices feature transformer network for single anisotropic 3D brain MRI image super-resolution. Biomed Signal Process Control. 2022;72:103339. [Google Scholar]
- 12.Shi F, Cheng J, Wang L, Yap PT, Shen D. LRTV: MR image super-resolution with low-rank and total variation regularizations. IEEE Trans Med Imaging. 2015;34:2459–66. doi: 10.1109/TMI.2015.2437894. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13.Zhang X, Kelkar VA, Granstedt J, Li H, Anastasio MA. Impact of deep learning-based image super-resolution on binary signal detection. J Med Imaging (Bellingham) 2021;8:065501–20. doi: 10.1117/1.JMI.8.6.065501. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Chen Y, Xie Y, Zhou Z, Shi F, Christodoulou AG, Li D. 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018) Washington, D. C.: IEEE; 2018. Brain MRI super resolution using 3D deep densely connected neural networks; pp. 739–42. [Google Scholar]
- 15.Park J, Hwang D, Kim KY, Kang SK, Kim YK, Lee JS. Computed tomography super-resolution using deep convolutional neural network. Phys Med Biol. 2018;63:145011. doi: 10.1088/1361-6560/aacdd4. [DOI] [PubMed] [Google Scholar]
- 16.Fang L, Li S, McNabb RP, Nie Q, Kuo AN, Toth CA, et al. Fast acquisition and reconstruction of optical coherence tomography images via sparse representation. IEEE Trans Med Imaging. 2013;32:2034–49. doi: 10.1109/TMI.2013.2271904. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17.Christensen-Jeffries K, Brown J, Harput S, Zhang G, Zhu J, Tang MX, et al. Poisson statistical model of ultrasound super-resolution imaging acquisition time. IEEE Trans Ultrason Ferroelectr Freq Control. 2019;66:1246–54. doi: 10.1109/TUFFC.2019.2916603. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18.Daneshmand PG, Rabbani H, Mehridehnavi A. Super-resolution of optical coherence tomography images by scale mixture models. IEEE Trans Image Process. 2020;29:5662–76. doi: 10.1109/TIP.2020.2984896. [doi:10.1109/TIP.2020.2984896] [DOI] [PubMed] [Google Scholar]
- 19.Gao H, Zhang G, Huang M. Hyperspectral image superresolution via structure-tensor-based image matting. IEEE J Sel Top Appl Earth Obs Remote Sens. 2021;14:7994–8007. [Google Scholar]
- 20.Xu Y, Wu Z, Chanussot J, Wei Z. Hyperspectral images super-resolution via learning high-order coupled tensor ring representation. IEEE Trans Neural Netw Learn Syst. 2020;31:4747–60. doi: 10.1109/TNNLS.2019.2957527. [DOI] [PubMed] [Google Scholar]
- 21.Zhang M, Sun X, Zhu Q, Zheng G. In:2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS. Brussels, Belgium: IEEE; 2021. A survey of hyperspectral image super-resolution technology; pp. 4476–9. [Google Scholar]
- 22.Dian R, Fang L, Li S. Hyperspectral Image Super-Resolution Via Non-Local Sparse Tensor Factorization. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017:p. 5344–53. [Google Scholar]
- 23.Dian R, Li S. Hyperspectral image super-resolution via subspace-based low tensor multi-rank regularization. IEEE Trans Image Process. 2019;28:5135–46. doi: 10.1109/TIP.2019.2916734. [doi:10.1109/TIP.2019.2916734] [DOI] [PubMed] [Google Scholar]
- 24.Yokota T, Erem B, Guler S, Warfield SK, Hontani H. Missing Slice Recovery for Tensors Using A Low-Rank Model In Embedded Space. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018:p. 8251–9. [Google Scholar]
- 25.Cichocki A, Lee N, Oseledets IV, Phan AH, Zhao Q, Mandic D. Low-Rank Tensor Networks for Dimensionality Reduction and Large-Scale Optimization Problems: Perspectives and Challenges Part 1. arXiv Preprint. 2016 [Google Scholar]
- 26.Cichocki A, Phan AH, Zhao Q, Lee N, Oseledets I, Sugiyama M, et al. Tensor networks for dimensionality reduction and large-scale optimization: Part 2 applications and future perspectives. Found Trends Mach Learn. 2017;9:431–673. [Google Scholar]
- 27.Goulart JH, Boizard M, Boyer R, Favier G, Comon P. Tensor CP decomposition with structured factor matrices: Algorithms and performance. IEEE J Sel Top Signal Process. 2015;10:757–69. [Google Scholar]
- 28.Battaglino C, Ballard G, Kolda TG. A practical randomized CP tensor decomposition. SIAM J Matrix Anal Appl. 2018;39:876–901. [Google Scholar]
- 29.Veganzones MA, Cohen JE, Farias RC, Chanussot J, Comon P. Nonnegative tensor CP decomposition of hyperspectral data. IEEE Trans Geosci Remote Sens. 2015;54:2577–88. [Google Scholar]
- 30.Yokota T, Zhao Q, Cichocki A. Smooth PARAFAC decomposition for tensor completion. IEEE Trans Signal Process. 2016;64:5423–36. [Google Scholar]
- 31.Kim YD, Choi S. 2007 IEEE Conference on Computer Vision and Pattern Recognition. Minneapolis, MN, USA: IEEE; 2007. Nonnegative tucker decomposition; pp. 1–8. [Google Scholar]
- 32.Malik OA, Becker S. Advances in Neural Information Processing Systems. Montreal, Canada: NeurIPS; 2018. Low-Rank tucker decomposition of large tensors using tensorsketch; p. 31. [Google Scholar]
- 33.Mørup M, Hansen LK, Arnfred SM. Algorithms for sparse nonnegative Tucker decompositions. Neural Comput. 2008;20:2112–31. doi: 10.1162/neco.2008.11-06-407. [DOI] [PubMed] [Google Scholar]
- 34.Oseledets IV. Tensor-train decomposition. SIAM J Sci Comput. 2011;33:2295–317. [Google Scholar]
- 35.Zhao Q, Zhou G, Xie S, Zhang L, Cichocki A. Tensor Ring Decomposition. arXiv Preprint. 2016 [Google Scholar]
- 36.Sedighin F, Cichocki A, Yokota T, Shi Q. Matrix and tensor completion in multiway delay embedded space using tensor train, with application to signal reconstruction. IEEE Signal Process Lett. 2020;27:810–4. [Google Scholar]
- 37.Sedighin F, Cichocki A, Rabbani H. Optical Coherence Tomography Image Enhancement via Block Hankelization and Low Rank Tensor Network Approximation. arXiv Preprint. 2023 [Google Scholar]
- 38.Sedighin F, Cichocki A. Image completion in embedded space using multistage tensor ring decomposition. Front Artif Intell. 2021;4:687176. doi: 10.3389/frai.2021.687176. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 39.Hajeb Mohammad Alipour S, Rabbani H, Akhlaghi M. A new combined method based on curvelet transform and morphological operators for automatic detection of foveal avascular zone. Signal Image Video Process. 2014;8:205–22. [Google Scholar]
- 40.Liu J, Musialski P, Wonka P, Ye J. Tensor completion for estimating missing values in visual data. IEEE Trans Pattern Anal Mach Intell. 2013;35:208–20. doi: 10.1109/TPAMI.2012.39. [DOI] [PubMed] [Google Scholar]
- 41.Elad M, Hel-Or Y. A fast super-resolution reconstruction algorithm for pure translational motion and common space-invariant blur. IEEE Trans Image Process. 2001;10:1187–93. doi: 10.1109/83.935034. [DOI] [PubMed] [Google Scholar]
- 42.Yuan L, Zhao Q, Cao J. In: International Conference on Neural Information Processing. Cham: Springer International Publishing; 2017. Completion of high order tensor data with missing entries via tensor-train decomposition; pp. 222–9. [Google Scholar]
- 43.Peng S, Haefner B, Quéau Y, Cremers D. Depth Super-Resolution Meets Uncalibrated Photometric Stereo. Proceedings of the IEEE International Conference on Computer Vision Workshops. 2017:2961–8. [Google Scholar]
