Abstract
Background:
Image fusion is the process of combining the information of several input images into one image. Projection images obtained from three-dimensional (3D) optical coherence tomography (OCT) can show intraretinal pathology and abnormalities that are not visible in conventional fundus images. In recent years, the projection image has most often been made by averaging over the entire retina, which loses many intraretinal details.
Methods:
In this study, we focus on the formation of optimum projection images from retinal layers using Curvelet-based image fusion, which consists of three main steps. First, following our earlier studies, macular spectral 3D OCT data were segmented using a diffusion-map-based method into 12 different boundaries identifying 11 retinal layers in three dimensions. In the second step, projection images are obtained by applying statistical operators to the space between each pair of boundaries. In the third step, the retinal layer images are merged using the Curvelet transform to make the final projection images.
Results:
These images contain integrated retinal depth information and provide an ideal opportunity to better extract retinal features such as vessels and the macular region. Qualitative and quantitative evaluations show the superiority of this method over average-based and wavelet-based fusion; overall, our method obtains the best results, e.g., entropy of 6.7744 and average gradient (AG) of 9.5491.
Conclusion:
The Curvelet-based image fusion creates an image with more detailed information and significantly higher contrast. Many thin veins are visible in the Curvelet-based fused image that are absent from the average-based and wavelet-based fused images.
Keywords: Curvelet transform, image fusion, optical coherence tomography, projection image, retina
Introduction
Spectral-domain optical coherence tomography (SD-OCT) is a noninvasive imaging method that represents the details of the different depths of the retina at micrometer resolution.[1] In recent years, these data have played a worthwhile role in the analysis, processing, and diagnosis of retinal diseases, because diseases such as glaucoma, diabetic retinopathy, age-related macular degeneration (AMD), and central retinal artery (or vein) occlusion cause particular structural changes in retinal components during the different stages of the disease.[2,3,4] These variations in OCT images can be processed automatically. In addition, SD-OCT data centered on the macula and optic disc have opened a vast area of evaluations for extracting desirable characteristics of the retina. In recent investigations, two-dimensional (2D) and three-dimensional (3D) segmentation of retinal layers and retinal vessels, as well as of important regions of the retina such as the macula and optic disc, has gained increasing popularity.[5,6] In this regard, projection images formed in the X-Y plane also render a main tool for the extraction of important characteristics of the retina and add to the information content of the other axes.[7,8,9] Projection images show intraretinal pathology and abnormalities that are not visible in common fundus images.[9]
The projection image is also widely used to localize lesions within the intraretinal layers.[10] Sayanagi et al. showed that projection images can identify and localize polypoidal lesions in choroidal vasculopathy.[11] Gorczynska et al. demonstrated that projection OCT images can enhance the visualization of outer retinal pathology in nonexudative AMD.[9] Stopa et al. compared OCT projection images with fundus photography, autofluorescence, and angiography in AMD patients and demonstrated that they enable researchers to link the intraretinal pathological information of 3D OCT with the other modalities.[12] The benefits of OCT projection images have also been demonstrated for diagnosing and monitoring cystoid macular edema.[13,14]
The projection image can also show retinal vessels in deep structures such as the choroid, which may not be visualized in fundus imaging because they lie below pathologic lesions.[15] Fard et al. demonstrated the utility of OCT angiography projection images for extracting peripapillary capillary density in patients with optic disc swelling, papilledema, and pseudopapilledema.[16,17]
In other studies, researchers used projection images to segment the retinal vessels in three dimensions.[18] For this purpose, the researchers generated a 3D segmentation of the retinal layers using a supervised, pixel-classification-based vessel segmentation approach. Then, a 2D projection image of the vessels is generated based on the information of each layer. Afterward, the 3D vascular structure is extracted using the projection images.[19]
Existing studies on the extraction of important retinal properties such as vessels from OCT projection images rely on simple statistical operators, such as the mean and variance, applied to the entire 3D OCT volume or to specific layers.[7,8,9,10,11,19] This means that the projection image is obtained by averaging throughout, or over a specific section of, the retinal layers. Although this method is simple and time efficient, it is associated with some inevitable loss of retinal depth information.[20,21] For example, when averaging over the entire volume, the resolution of the thin veins is generally reduced or they are eliminated altogether.
Image fusion is one of the most important methods for matching the information attained from different images into one image, such that the resulting image contains the desirable details of all input images.[22] The input images may be obtained from a fixed section of human tissue using different imaging modalities, or from imaging a fixed scene in which different objects come into focus, as in multifocus imaging. Many methods have been proposed for image fusion, such as multiresolution transforms, principal component analysis (PCA), color models, and intensity-hue-saturation (IHS); each of these methods has many applications as well as limitations.[23,24] In medical image fusion, multiresolution transforms reveal better results with more details than other transforms.[25] In this regard, an earlier effort to form an optimum projection image over the retinal layers using the wavelet transform produced satisfactory results.[20] By describing the precise detail of the image along the horizontal, vertical, and diagonal directions, this transform helps to provide more detail of the vessels, particularly those that are only observed in vertical layers, and other useful amplified information displayed in the final projection image.[26] Nonetheless, the wavelet transform is limited to representing image details in four subbands along three directions, which prevents it from creating a fully optimal image, since there are always vessels and details in the image that lie outside the three principal directions.[27] As described later, the Curvelet transform can represent the information of subbands along many more directions (i.e., more angles).[28,29] Indeed, the Curvelet transform emerged for the optimal representation of 2D discontinuities, and it has more distinguished properties than the conventional wavelet. Accordingly, the Curvelet transform shows more capability, and its applications in image fusion are progressively increasing. That is why the Curvelet transform is more powerful both in amplifying the details of each retinal layer and in fusing the layers to create a more optimal projection image.[30,31] It should be pointed out that this approach is conducted for the first time on 3D OCT data. Figure 1 illustrates how averaging misses some information for fusion.[20]
Figure 1.

The obtained image from averaging within boundaries
In the present study, retinal layers involving important information are fused through image processing methods such as the Curvelet method to obtain an optimum image with maximum depth information. The resulting image displays important and highlighted information of each retinal layer, including the distribution of vessels; the edge and center of the macula; the edge and center of the optic disc and its surrounding rim area; and possible disorders such as deep retinal cysts. It thus contains depth information of the retina that none of the other retinal imaging modalities, such as fundus imaging, can provide.[9]
Our strategy for fusing retinal layers using the Curvelet transform is very similar to existing fusion strategies for multifocus images.[32,33] The intraretinal layer images containing important information were very similar in terms of the general distribution of vessels and the presence of details, but different in their subtle details. Thus, we chose the weighted averaging method for fusing the Curvelet coefficients of the images in the low-frequency subbands, and the maximum absolute method, combined with a new approach (used concurrently with weighted averaging), for fusing the Curvelet coefficients in the high-frequency subbands to obtain desirable results, as described in the Results section.
Materials and Methods
The proposed methods were applied to 13 3D macular SD-OCT volumes obtained from eyes without pathologies using a Topcon 3D OCT-1000 imaging system in the Department of Ophthalmology, Feiz Hospital, Isfahan, Iran. The size of the obtained volumes was 650 × 512 × 128 voxels with a voxel resolution of 3.125 µm × 3.125 µm × 7 µm. We then used a diffusion-map-based segmentation to localize 12 different boundaries in the 3D retinal data.[34,35]
Forming projection images from each layer of the retina
In the first step, a retinal layer image is acquired by fusing the levels and voxels between each pair of sequential boundaries. There are multiple methods for projecting the levels between each pair of boundaries, including the averaging, highest-value, and lowest-value methods, which assign the average, highest, or lowest value of each column of the 3D space between the pair of boundaries to the corresponding pixel of the output 2D image. Figure 1 illustrates the general concept of forming the projection image of a retinal layer by averaging in the space between a pair of boundaries; a minimal sketch of this projection step follows.
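As a concrete illustration, the following is a minimal sketch of this per-layer projection, assuming the OCT volume is indexed as (depth, row, column) and that the segmented boundaries are given as per-pixel depth maps; the function and variable names are hypothetical, not the authors' code.

```python
import numpy as np

def layer_projection(volume, top, bottom, stat="mean"):
    """Project the voxels between two boundary surfaces onto a 2D image.

    volume : (Z, Y, X) OCT data; top, bottom : (Y, X) integer depth maps
    of the pair of segmented boundaries; stat : statistical operator
    applied along each A-scan segment between the boundaries.
    """
    ops = {"mean": np.mean, "median": np.median,
           "max": np.max, "min": np.min, "var": np.var}
    op = ops[stat]
    _, Y, X = volume.shape
    proj = np.zeros((Y, X))
    for y in range(Y):
        for x in range(X):
            z0, z1 = int(top[y, x]), int(bottom[y, x])
            if z1 > z0:            # layer depth varies from A-scan to A-scan
                proj[y, x] = op(volume[z0:z1, y, x])
    return proj
```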
Curvelet transform-based image fusion
Fusion based on Curvelet transform can be described in three steps as follows:
The Curvelet transform is performed on each input image individually, and the Curvelet coefficients of each image are obtained
Given a fusion rule, the Curvelet coefficients associated with the images in the different subbands are fused, yielding the fused Curvelet coefficients. The fusion rule, and how it is executed for the fusion of the Curvelet coefficients in the different subbands, is the most important aspect of image fusion for creating an optimum image[36]
By applying the inverse Curvelet transform to the fused Curvelet coefficients, the final image is obtained.
The general steps of image fusion using the Curvelet transform are represented in Figure 2 and sketched in code below.
Figure 2.

Curvelet-based image fusion for two input projection images
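The following is a minimal sketch of this three-step pipeline for two input images. Since standard Python bindings for the Curvelet transform vary, it uses PyWavelets' 2D wavelet transform as a stand-in multiresolution transform; the structure (forward transform, per-subband fusion rule, inverse transform) is the same, and the equal low-frequency weights are an assumption.

```python
import numpy as np
import pywt  # PyWavelets, used here as a stand-in for a Curvelet library

def fuse_multiresolution(img_a, img_b, wavelet="db4", levels=3):
    # Step 1: forward transform of each input image.
    ca = pywt.wavedec2(img_a, wavelet, level=levels)
    cb = pywt.wavedec2(img_b, wavelet, level=levels)

    # Step 2: fusion rule -- weighted average (here 0.5/0.5) for the
    # low-frequency band, maximum absolute value for the detail bands.
    fused = [0.5 * ca[0] + 0.5 * cb[0]]
    for details_a, details_b in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(details_a, details_b)))

    # Step 3: inverse transform of the fused coefficients.
    return pywt.waverec2(fused, wavelet)
```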
Fusion strategy
The key step in Curvelet-based image fusion is the coefficient combination, namely, merging the coefficients in a proper way to gain the best quality in the fused image. The projection images of the retinal layers with more retinal information are similar in their general view but differ in subtle details. For example, the last layers of the retina, although very similar in the structure of the large vessels, differ in the presentation of thin veins. Thus, the low-frequency coefficients of the layers differ only slightly, whereas the high-frequency coefficients show apparent differences. High-frequency coefficients usually fluctuate around 0, and larger absolute values of the Curvelet coefficients indicate more dramatic changes in the gray scale of the image, including the edges and details of the vessels. Therefore, considering the features of the Curvelet transform and the characteristics of the layer images, the proposed method puts forward an image fusion strategy in which the low-frequency coefficients are integrated using a weighted average and the high-frequency coefficients using a maximum-absolute rule.[37,38] The Curvelet transform is applied to images A and B to obtain their corresponding low-frequency and high-frequency coefficients. Then, the fused low- and high-frequency coefficients are obtained as follows:
The fusion of the low-frequency coefficients is based on the weighted average method, which is formalized as follows:

$$C_F^{L}(i,j) = w_A\,C_A^{L}(i,j) + w_B\,C_B^{L}(i,j), \qquad w_A + w_B = 1,$$

where $C_A^{L}$ and $C_B^{L}$ are the low-frequency coefficients of images A and B and $w_A$ and $w_B$ are their weights.
The fusion of the high-frequency coefficients is based on the maximum absolute method. Its formula can be stated as:

$$C_F^{H}(i,j)=\begin{cases}C_A^{H}(i,j), & \left|C_A^{H}(i,j)\right|\ge\left|C_B^{H}(i,j)\right|\\[2pt] C_B^{H}(i,j), & \text{otherwise.}\end{cases}$$
It is worth mentioning that we fuse the five or six layers with the most information concurrently. Because OCT images are inherently noisy, using the plain maximum-absolute rule for the high frequencies increases the noise in the merged image, although all important details of each layer are appropriately displayed in the final image. To overcome this problem, we use a new hybrid rule combining the weighted average and the maximum-absolute methods for the high-frequency coefficients: half of each fused Curvelet coefficient is taken from the maximum-absolute value and the other half from the average of the coefficients of the other layers.
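A minimal sketch of this hybrid high-frequency rule, assuming the corresponding subband coefficients of all N layer images are available as equally shaped arrays (the function name is hypothetical):

```python
import numpy as np

def fuse_high_freq_hybrid(coeffs):
    """Hybrid rule for one high-frequency subband across N >= 2 layers:
    half of the fused coefficient comes from the maximum-absolute value
    across layers, half from the average of the remaining layers, which
    damps the noise amplified by a pure maximum-absolute rule."""
    stack = np.stack(coeffs)                    # shape (N, ...)
    idx = np.argmax(np.abs(stack), axis=0)      # layer index of max |coeff|
    max_part = np.take_along_axis(stack, idx[None], axis=0)[0]
    mean_others = (stack.sum(axis=0) - max_part) / (len(coeffs) - 1)
    return 0.5 * max_part + 0.5 * mean_others
```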
Two important considerations in the proposed approach
Anatomical features of the retina in optical coherence tomography
Each retinal 3D OCT volume consists of cross-sectional scans called B-scans or transverse scans. Such datasets cover a sizable slice of the retina, demonstrating its internal structures in detail. Each B-scan is in turn composed of sequential one-dimensional scans in the z-direction, called A-scans or axial scans. The existence of a blood vessel in the retinal structure leaves different signatures in the intersecting B-scans, and thickening occurs in the retinal nerve fiber layer (RNFL). This RNFL thickening is unfolded in the projection images: blood vessels appear as bright pixels in the first layers (e.g., the second and third layers), whereas in the last layers, which contain the shadows of the blood vessels, they appear as dark pixels.[39,40] This is one of the most important issues arising from the anatomical properties of the retina, and it should be taken into consideration when fusing layers: the complement of the projection images of the primary layers should be used in the fusion process, so that the vessel intensities in the different layers do not neutralize each other. This anatomical view was first investigated by Hood et al.[40]
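As a minimal sketch of this consideration, the bright-vessel projections of the primary layers can simply be inverted before fusion (a hypothetical helper, assuming floating-point projection images):

```python
import numpy as np

def complement(img):
    """Invert a projection image so that the bright vessels of the inner
    layers align with the dark vessel shadows of the outer layers."""
    img = np.asarray(img, dtype=np.float64)
    return img.max() - img
```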
Image fusion based on primary images with more information
Looking closely at the images obtained from a specific layer using different statistical indicators, these images differ in subtle details.[21] Although the differences may be trivial in certain layers, applying different statistical approaches enhances the information existing in the images. If the difference among the resulting images of these approaches is large, then each method captures specific details of the layer, and their concurrent fusion will display the integrated information of that layer. If the disparity among the resulting images is minute, then at the very least this approach denoises the original images in the fused image without reducing the contrast of the details.
Evaluation of image fusion
To verify the performance of image fusion, an evaluation approach is needed; such approaches can usually be divided into two categories: subjective evaluation and objective assessment. Subjective evaluation is a visual analysis of the fused image. It is simple and remarkably effective in the primary assessment of fused images, and in a qualitative evaluation many questions can be answered by direct visual observation. As the subjective assessment in this article, the final fused results were evaluated by two ophthalmologists and compared with the corresponding fundus images. Subjective assessment methods are not comprehensive, however: the results may differ under different observation conditions, and observations may be biased by personal preference. Moreover, the results of different fusion methods (such as wavelet, Curvelet, or the same transform in different states) are often visually very similar and cannot be distinguished without quantitative evaluation. Hence, investigators have devised several quantitative methods known as objective assessments. Some quantitative evaluation measures of fused images are described in the following section and used to evaluate the proposed method.
Standard deviation (SD) is an important numerical measure of the information capability of an image; it reflects how widely the gray levels are dispersed around the image's mean value. SD can be formalized as follows:[41]

$$\mu=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}F(i,j), \qquad \mathrm{SD}=\sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(F(i,j)-\mu\bigr)^{2}}$$

In these equations, M and N are the length and the width of the image, respectively, and F(i, j) is the gray-level intensity of the pixel in the ith row and jth column. A larger SD indicates a more dispersed gray-level distribution and a better-quality fused image, namely one that comprises more information.
Information entropy is an important indicator for assessing the richness of image information and thus how much information the fusion has combined into the image. The entropy of an image is expressed as:[42]

$$H=-\sum_{i=0}^{L-1}p_{i}\log_{2}p_{i}$$

where H is the entropy, L is the number of gray levels of the image, and $p_i$ is the probability of the ith gray level.
Average gradient (AG) represents the contrast between fine variations of the pattern in the image, so it is frequently used to assess the clarity of the image; in general, a greater AG indicates a clearer image.[43]

$$\mathrm{AG}=\frac{1}{(M-1)(N-1)}\sum_{i=1}^{M-1}\sum_{j=1}^{N-1}\sqrt{\frac{\bigl(F(i+1,j)-F(i,j)\bigr)^{2}+\bigl(F(i,j+1)-F(i,j)\bigr)^{2}}{2}}$$
In this effort, we follow a comprehensive assessment that combines subjective visual evaluation with objective evaluation to assess image quality more effectively and comprehensively; a sketch of the objective measures is given below.
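A minimal sketch of these three measures for a gray-scale image stored as a NumPy array (an illustrative implementation of the formulas above, not the authors' code):

```python
import numpy as np

def sd(img):
    # Standard deviation of the gray levels around the image mean.
    return float(np.sqrt(np.mean((img - img.mean()) ** 2)))

def entropy(img, levels=256):
    # Shannon entropy of the gray-level histogram (assumes 8-bit range).
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def average_gradient(img):
    # Mean magnitude of the local horizontal/vertical gray-level gradients.
    gx = np.diff(img.astype(np.float64), axis=1)[:-1, :]
    gy = np.diff(img.astype(np.float64), axis=0)[:, :-1]
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2)))
```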
Results
Forming projection images from 11 intraretinal layers
By applying different statistical indicators to the pixels between each pair of boundaries of the image corresponding to each retinal layer, an image corresponding to that statistical indicator is obtained. Figure 3 shows projection fundus images of each of the 11 intraretinal layers for the mean (top left), median (top right), maximum (bottom left), and variance (bottom right) statistical indicators.[21] The results for some indicators, such as the mean, median, and maximum, are significant, whereas certain indicators, such as the minimum and variance, do not yield significant results.
Figure 3.
Projection fundus images of each of the 11 intraretinal layers for the mean (top left), median (top right), maximum (bottom left), and variance (bottom right) statistical indicators
Simple evaluation of these images reveals some general points:
Each method is better and more effective at attaining certain layers (e.g., the averaging method obtains better results for the 2nd and 6th layers than the maximum method, while the maximum method provides better results for the 3rd layer than the averaging method)
Some intraretinal layers contain more information (such as the 2nd, 6th, and 11th)
Certain intraretinal layers do not contain appropriate and exclusive information for image fusion (including the 4th, 5th, and 7th layers).
Projection of intraretinal images from the Curvelet-based image fusion
First state
Based on the retinal anatomy as well as the images represented in Figure 3, the 2nd, 3rd, 6th, and last three layers contain the most distinct retinal information. Thus, in the first Curvelet-based fusion, these images are combined with the described methods. The resulting image is shown as the first fused image (FI1) in Figure 4. It should be noted that the veins have bright pixels in the first layers and dark pixels in the later layers; therefore, to accumulate information in the combined image, the complement of the images of the first retinal layers is computed before fusion.
Figure 4.

The obtained image from the fusion of the 2nd, 3rd, 6th, and last three layers of retina with the Curvelet-based image fusion (FI1)
Second state
With respect to the anatomy of the retina, we know that most of the information related to the retinal vessels exists in the first layers (particularly the 2nd layer), while the shadows of the vessels formed in the last layers of the retina show high-resolution properties, and this information must be integrated to produce a complete image. Thus, from this point of view, the primary layers containing suitable retinal information, namely the 2nd and 3rd layers, as well as the last retinal layers, are first combined using the Curvelet transform. Once the images containing the important information of these two areas of the retina are formed, the resulting images are combined again. This approach helps to bring out the details of each layer effectively. Figure 5 represents the fusion of the retinal layer images (obtained from the averaging method). The Curvelet-based fusion of the 2nd and 3rd layers of the retina (CF23) is shown in Figure 5a, and the Curvelet-based fusion of the last six layers of the retina (CF612) is shown in Figure 5b. Next, CF612 and CF23 are fused together, resulting in the second fused image (FI2) in Figure 5c. Then, according to the anatomical considerations, CF612 and the complement of the 2nd layer of the retina are fused together, resulting in the third fused image (FI3), as shown in Figure 5d.
Figure 5.

The obtained images with the Curvelet transform. (a) The fusion of the 2nd and 3rd layers of retina together (CF23). (b) The fusion of the last six layers of retina together (CF612). (c) The fusion of CF612 and CF23 (FI2) together. (d) The fusion of CF612 and the complement of the 2nd layer of retina together (FI3)
Third state
The images of an intraretinal layer obtained by different statistical methods are fused together using the Curvelet-based image fusion [Figure 6]. In other words, the image of each intraretinal layer contains the essential information of the different statistical operators. The results show that the Curvelet transform amplifies the high-frequency information in the new fused image of the second layer. Then, the new intraretinal layer images are fused to compose the final (fourth) fused image (FI4), as shown in Figure 7. Therefore, the final projection image carries all the highlighted information of the input images of each intraretinal layer; a sketch of this two-stage composition is given below.
Figure 6.

The obtained images for an intralayer of retina with adopting different statistical methods: (a) the image of the 2nd layer obtained from the averaging method. (b) The image of the 2nd layer obtained from the median method. (c) The image for the 2nd layer obtained from the maximum method. (d) The fusion of the three images together with the Curvelet transform
Figure 7.

The final (4th) fused image (FI4)
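Reusing the hypothetical helpers sketched earlier (`layer_projection` and `fuse_multiresolution`), this third state amounts to nesting the fusion calls, first across statistical operators within each layer and then across layers; `volume` and `informative_boundary_pairs` are assumed inputs:

```python
def fuse_layer_variants(volume, top, bottom):
    # Stage 1: fuse the mean/median/max projections of a single layer.
    imgs = [layer_projection(volume, top, bottom, s)
            for s in ("mean", "median", "max")]
    fused = imgs[0]
    for img in imgs[1:]:
        fused = fuse_multiresolution(fused, img)
    return fused

# Stage 2: fuse the per-layer results into the final projection image (FI4).
intra_layers = [fuse_layer_variants(volume, top, bottom)
                for top, bottom in informative_boundary_pairs]
fi4 = intra_layers[0]
for img in intra_layers[1:]:
    fi4 = fuse_multiresolution(fi4, img)
```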
Subjective and objective evaluations
There are two important comparisons of results for the subjective evaluation:
The comparison of the projection image resulting from the fusion of the target layers across the entire set of intraretinal layers with the projection image obtained from the last six layers is shown in Figure 8. This is an important evaluation because it not only shows that a major part of the vessel structure lies in the last six layers of the retina but also highlights the importance of the first target layers of the retina
A comparison of the images obtained from the fusion of the intraretinal layers using the Curvelet-based, wavelet-based,[20] and averaging-based[21] methods is shown in Figure 9. The image obtained from the Curvelet transform contains amplified details and better contrast compared with the other two methods. There are also many thin veins in the Curvelet-based fused image that are absent from the averaging-based fused image (AFI), some of which are also missing from the wavelet-based fused image (WFI).
Figure 8.

(a) The projection image obtained from the fusion of the target layers across all intraretinal layers. (b) The projection image obtained from the fusion of the last six target layers. More details of image (a) are shown in the specified areas
Figure 9.
(a) The obtained image from the Curvelet-based method (FI4). (b) The obtained image from the wavelet-based method. (c) The obtained image from the averaging-based method
Quantitative evaluation
The images obtained from the target layers (selected based on our knowledge of retinal anatomy) using Curvelet-based image fusion in the different states shown in the Results section are very close in terms of detail and information content; thus, they cannot be compared via qualitative evaluation alone. Therefore, the amount of information in the images is assessed by applying the formulas presented in the quantitative evaluation section. Hence, in this section, we compare the results of our new method with those of previously published methods, namely the averaging-based method[21] and the wavelet-based method.[20] The steps of combining images for the wavelet-based method are exactly the same as those applied to obtain the FI3 image. According to the numerical results in Table 1, AFI has the lowest quality, FI1 and FI2 possess higher fusion quality, and FI4 represents the best results overall.
Table 1.
The quantitative evaluation of fused images from different methods in the result section
| Fused image | SD | H | AG |
|---|---|---|---|
| AFI (21) | 4.0327 | 5.7939 | 6.3252 |
| WFI (20) | 4.8072 | 6.3201 | 8.6096 |
| FI1 | 5.4237 | 6.7611 | 9.6352 |
| FI2 | 5.0010 | 6.5602 | 9.2256 |
| FI3 | 5.2581 | 6.7230 | 9.9192 |
| FI4 | 5.4529 | 6.7744 | 9.5491 |
SD – Standard deviation; AG – Average gradient; FI1 – First fused image; FI2 – Second fused image; FI3 – Third fused image; FI4 – Fourth fused image; AFI – Averaging-based FI; WFI – Wavelet-based FI; H – Entropy
Conclusion
The objective of the proposed method is to form projection images from each retinal layer and then use the Curvelet transform to fuse the resulting images. The projection image from the depth of OCT enhances the contrast and shows intraretinal pathology and abnormalities not visible in common fundus images. To form the projection image of each layer, the simplest and most common data fusion methods are used: statistical operators are applied to the data between each pair of retinal boundaries to form each layer image. This approach is proposed here for the first time by the authors. In an earlier study by the same authors, the retinal layer images were formed using only the mean statistical indicator; in the present work, to exploit the capability of statistical indicators, other operators are applied as well. Owing to the basic differences in the definitions of certain operators, such as the averaging and maximum methods, these approaches manifest different details and information of each layer. Depending on the data and the presence of deficiencies in the retinal depth caused by disease or other causes, the methods yield different interpretations, and these differences provide a suitable opportunity to combine the information of these methods. Therefore, the Curvelet transform and the weighted averaging fusion methods are used. Different strategies for information or data fusion have been developed; however, one of the main limitations in selecting other methods is the variability of the depth of the retinal layers, as the space between each pair of boundaries varies with position: in certain parts, a layer's depth may be only ten pixels, whereas in other parts it may extend to twenty pixels. Thus, statistical indicators are the simplest and most reliable choice for overcoming the limitations of the first step. Another point worth mentioning is that the averaging and median methods produced very close results in the present study. These close results seem to be owing to the very similar intensity values within a retinal layer (because of anatomical similarity) and to the shallow depth of the major layers, particularly the last layers of the retina. In the fusion step, the information of both methods was nevertheless considered because, first, the differences in their results should be taken into account and, second, with respect to the anatomical position of the retina (shallow layers with close intensities within each layer), the information of both methods together is more reliable. Taking both methods into account in the final merging highlights their joint contribution to the output information.
In the next step, the present study used various strategies to merge the layer images using the Curvelet transform. We previously reported a similar study on 3D OCT data; however, that study was based on the wavelet transform. Compared with the results obtained from the wavelet transform, the results of the present study are more significant because the resulting images contain more information and details. With the same combination process, FI3 provides better quantitative evaluation results than WFI. This stems from the difference between the wavelet and the Curvelet transforms in presenting details. The wavelet transform represents image detail in only four subbands along three directions: horizontal, vertical, and diagonal. Hence, the details and input information of the images can only be merged and amplified along these three directions. In other words, the wavelet transform misses details associated with vessel edges or other small protuberances lying at angles outside the three main directions, whereas the Curvelet transform, owing to its capacity to represent details in more high-frequency subbands and at all angles, easily covers this weakness and produces better images.
Overall, FI4 obtains the best results for image fusion (e.g., entropy of 6.7744 and AG of 9.5491). This can be explained by the creation of new intraretinal images from the images of the different statistical operators, as shown in Figure 6. The FI4 image contains more detailed information and has significantly higher contrast. There are many arguments for selecting the Curvelet transform as the fusion method. A list of newly proposed ideas for future work is presented as follows:
Rather than fusing the layers together directly, it is suggested that the desirable characteristics of the layers first be extracted and that these characteristics then be fused together
Using newer generations of multiresolution transforms such as second-generation Curvelet and Contourlet transforms for the fusion of images
Comparison of extracted vessels from formed projection images using the Curvelet method with extracted vessels from fundus images
Using local selection coefficients such as maximum local energy instead of using pixel-based fusion for the selection of Curvelet coefficients
Conducting the proposed method on the OCT data in the optic disc area
Extraction of the macular and optic disc areas in the layers in which they are apparent, using modern image processing approaches.
Financial support and sponsorship
None.
Conflicts of interest
There are no conflicts of interest.
BIOGRAPHIES

Jalil Jalili received his BSc, MSc, and PhD degrees, all in Biomedical Engineering (bioelectrics), from the Science & Research Branch of Islamic Azad University (2009) and Tehran University of Medical Sciences (2018, highest honors). His main research interests are medical image analysis/processing, multiresolution transforms, and biomedical optics, including the implementation/construction of retinal imaging systems.
Email: jalil_jalili_am@yahoo.com

Hossein Rabbani received his BSc degree in Electrical Engineering (Communications) from Isfahan University of Technology in 2000 with the highest honors, and his MSc and PhD degrees in Bioelectrical Engineering in 2002 and 2008, respectively, from Amirkabir University of Technology. In 2007, he was a Visiting Researcher at Queen's University; in 2011, a Postdoctoral Research Scholar at the University of Iowa; and in 2013-2014, a Postdoctoral Fellow at Duke University. He is now a Professor in the Biomedical Engineering Department and the Medical Image & Signal Processing Research Center (MISP), Isfahan University of Medical Sciences, Isfahan, Iran, and Editor-in-Chief of the Journal of Medical Signals and Sensors (JMSS). His main research interests are medical image analysis and modeling, statistical (m-D) signal processing, sparse transforms, and image restoration.
Email: h_rabbani@med.mui.ac.ir

Alireza Mehri Dehnavi received his BSc in Electrical Engineering from Isfahan University of Technology in 1988, his MSc of Engineering in Measurement and Instrumentation from the Indian Institute of Technology Roorkee in 1992, and his PhD in Medical Engineering from Liverpool University in 1996. He is a Professor of Biomedical Engineering in the School of Advanced Technologies in Medicine of Isfahan University of Medical Sciences. His research interests are medical optics, devices, and signal processing.
Email: mehri@med.mui.ac.ir

Rahele Kafieh received her BSc in Bioelectrical Engineering from Sahand University of Technology (2004) and completed her MSc and PhD in Bioelectrical Engineering at Isfahan University of Medical Sciences (2008 and 2014). She is an Assistant Professor at the School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran, and a guest researcher at the NeuroCure Clinical Research Center, Charité University, Berlin, Germany. Her research is concentrated on biomedical image analysis, graph-based image analysis, time-frequency methods, deep learning, and image segmentation.
Email: r_kafieh@yahoo.com

Mohammad Reza Akhlaghi is an Associate Professor in Department of Ophthalmology, Isfahan University of Medical Sciences, Isfahan, Iran. He received his VitreoRetinal Fellowship in 2006 from the Tehran University of Medical Sciences.
Email: akhlaghi@med.mui.ac.ir
References
- 1. Brezinski ME. Optical Coherence Tomography: Principles and Applications. Elsevier; 2006.
- 2. Regatieri CV, Branchini L, Carmody J, Fujimoto JG, Duker JS. Choroidal thickness in patients with diabetic retinopathy analyzed by spectral-domain optical coherence tomography. Retina. 2012;32:563–8. doi: 10.1097/IAE.0b013e31822f5678.
- 3. Wilde C, Patel M, Lakshmanan A, Amankwah R, Dhar-Munshi S, Amoaku W, et al. The diagnostic accuracy of spectral-domain optical coherence tomography for neovascular age-related macular degeneration: A comparison with fundus fluorescein angiography. Eye (Lond) 2015;29:602–9. doi: 10.1038/eye.2015.44.
- 4. Fu D, Tong H, Zheng S, Luo L, Gao F, Minar J. Retinal status analysis method based on feature extraction and quantitative grading in OCT images. Biomed Eng Online. 2016;15:87. doi: 10.1186/s12938-016-0206-x.
- 5. Novosel J, Vermeer KA, de Jong JH, Wang Z, van Vliet LJ. Joint segmentation of retinal layers and focal lesions in 3-D OCT data of topologically disrupted retinas. IEEE Trans Med Imaging. 2017;36:1276–86. doi: 10.1109/TMI.2017.2666045.
- 6. Kafieh R, Rabbani H, Kermani S. A review of algorithms for segmentation of optical coherence tomography from retina. J Med Signals Sens. 2013;3:45–60.
- 7. Garvin MK, Abràmoff MD, Lee K, Niemeijer M, Sonka M, Kwon YH. 2-D pattern of nerve fiber bundles in glaucoma emerging from spectral-domain optical coherence tomography. Invest Ophthalmol Vis Sci. 2012;53:483–9. doi: 10.1167/iovs.11-8349.
- 8. Golabbakhsh M, Rabbani H. Vessel-based registration of fundus and optical coherence tomography projection images of retina using a quadratic registration model. IET Image Process. 2013;7:768–76.
- 9. Gorczynska I, Srinivasan VJ, Vuong LN, Chen RW, Liu JJ, Reichel E, et al. Projection OCT fundus imaging for visualising outer retinal pathology in non-exudative age-related macular degeneration. Br J Ophthalmol. 2009;93:603–9. doi: 10.1136/bjo.2007.136101.
- 10. Heiferman M, Simonett J, Fawzi A. En face OCT imaging in retinal disorders. Retin Physician. 2015;12:45.
- 11. Sayanagi K, Gomi F, Akiba M, Sawa M, Hara C, Nishida K. En-face high-penetration optical coherence tomography imaging in polypoidal choroidal vasculopathy. Br J Ophthalmol. 2015;99:29–35. doi: 10.1136/bjophthalmol-2013-304658.
- 12. Stopa M, Bower BA, Davies E, Izatt JA, Toth CA. Correlation of pathologic features in spectral domain optical coherence tomography with conventional retinal studies. Retina. 2008;28:298–308. doi: 10.1097/IAE.0b013e3181567798.
- 13. Wanek J, Zelkha R, Lim JI, Shahidi M. Feasibility of a method for en face imaging of photoreceptor cell integrity. Am J Ophthalmol. 2011;152:807–14. doi: 10.1016/j.ajo.2011.04.027.
- 14. Murakami T, Nishijima K, Akagi T, Uji A, Horii T, Ueda-Arakawa N, et al. Optical coherence tomographic reflectivity of photoreceptors beneath cystoid spaces in diabetic macular edema. Invest Ophthalmol Vis Sci. 2012;53:1506–11. doi: 10.1167/iovs.11-9231.
- 15. Alasil T, Ferrara D, Adhi M, Brewer E, Kraus MF, Baumal CR, et al. En face imaging of the choroid in polypoidal choroidal vasculopathy using swept-source optical coherence tomography. Am J Ophthalmol. 2015;159:634–43. doi: 10.1016/j.ajo.2014.12.012.
- 16. Fard MA, Jalili J, Sahraiyan A, Khojasteh H, Hejazi M, Ritch R, et al. Optical coherence tomography angiography in optic disc swelling. Am J Ophthalmol. 2018;191:116–23. doi: 10.1016/j.ajo.2018.04.017.
- 17. Fard MA, Sahraiyan A, Jalili J, Hejazi M, Suwan Y, Ritch R, et al. Optical coherence tomography angiography in papilledema compared with pseudopapilledema. Invest Ophthalmol Vis Sci. 2019;60:168–75. doi: 10.1167/iovs.18-25453.
- 18. Hong Y, Makita S, Yamanari M, Miura M, Kim S, Yatagai T, et al. Three-dimensional visualization of choroidal vessels by using standard and ultra-high resolution scattering optical coherence angiography. Opt Express. 2007;15:7538–50. doi: 10.1364/oe.15.007538.
- 19. Niemeijer M, Garvin MK, van Ginneken B, Sonka M, Abramoff MD. Vessel segmentation in 3D spectral OCT scans of the retina. In: Medical Imaging 2008: Image Processing. San Diego, CA: International Society for Optics and Photonics; 2008.
- 20. Jalili J, Rabbani H, Akhlaghi M, Kafieh R, Mehridehnavi A. Forming projection images from each layer of retina using diffusion map based OCT segmentation. In: 2012 11th International Conference on Information Science, Signal Processing and their Applications. Montreal, Canada; 2012.
- 21. Jalili J, Rabbani H, Mehri-Dehnavi A, Akhlaghi M. Formation and fusion of projection images from 11 layers of retina using statistical indicators to obtain an image with appropriate contrast from the retinal depth. J Isfahan Med Sch. 2013;31:255.
- 22. Zhang Y. Understanding image fusion. Photogramm Eng Remote Sens. 2004;70:657–61.
- 23. Sahu DK, Parsai M. Different image fusion techniques – a critical review. Int J Mod Eng Res. 2012;2:4298–301.
- 24. Thakare VV, Katiyar P. Review on various image fusion techniques. J Multimed Technol Recent Adv. 2018;5:14–7.
- 25. Ali F, El-Dokany I, Saad A, Abd El-Samie FE. Curvelet fusion of MR and CT images. Prog Electromagn Res. 2008;3:215–24.
- 26. Yelampalli PK, Nayak J, Gaidhane VH. Daubechies wavelet-based local feature descriptor for multimodal medical image registration. IET Image Process. 2018;12:1692–702.
- 27. Miri MS, Mahloojifar A. Retinal image analysis using curvelet transform and multistructure elements morphology by reconstruction. IEEE Trans Biomed Eng. 2011;58:1183–92. doi: 10.1109/TBME.2010.2097599.
- 28. Starck JL, Murtagh F, Candès EJ, Donoho DL. Gray and color image contrast enhancement by the curvelet transform. IEEE Trans Image Process. 2003;12:706–17. doi: 10.1109/TIP.2003.813140.
- 29. Ma J, Plonka G. The curvelet transform. IEEE Signal Process Mag. 2010;27:118–33.
- 30. Arif M, Wang G. Fast curvelet transform through genetic algorithm for multimodal medical image fusion. Soft Comput. 2019:1–22.
- 31. Nencini F, Garzelli A, Baronti S, Alparone L. Remote sensing image fusion using the curvelet transform. Inf Fusion. 2007;8:143–56.
- 32. Yang J, Zhao ZM. Multi-focus image fusion method based on curvelet transform. Opto Electron Eng. 2007;6:67–71.
- 33. Yang Y, Tong S, Huang S, Lin P, Fang Y. A hybrid method for multi-focus image fusion based on fast discrete curvelet transform. IEEE Access. 2017;5:14898–913.
- 34. Kafieh R, Rabbani H, Abramoff MD, Sonka M. Intra-retinal layer segmentation of 3D optical coherence tomography using coarse grained diffusion map. Med Image Anal. 2013;17:907–28. doi: 10.1016/j.media.2013.05.006.
- 35. Kafieh R, Rabbani H, Hajizadeh F, Abramoff MD, Sonka M. Thickness mapping of eleven retinal layers segmented using the diffusion maps method in normal eyes. J Ophthalmol. 2015;2015:259123. doi: 10.1155/2015/259123.
- 36. Bhateja V, Krishn A, Sahu A. Medical image fusion in curvelet domain employing PCA and maximum selection rule. In: Proceedings of the Second International Conference on Computer and Communication Technologies. Udaipur, India: Springer; 2016.
- 37. Yang G, Li M, Chen L, Yu J. The nonsubsampled contourlet transform based statistical medical image fusion using generalized Gaussian density. Comput Math Methods Med. 2015;2015:262819. doi: 10.1155/2015/262819.
- 38. Lahmiri S. Wavelet low- and high-frequency components as features for predicting stock prices with backpropagation neural networks. J King Saud Univ Comput Inf Sci. 2014;26:218–27.
- 39. Kafieh R, Danesh H, Rabbani H, Abramoff M, Sonka M. Vessel segmentation in images of optical coherence tomography using shadow information and thickening of retinal nerve fiber layer. In: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Vancouver, Canada; 2013. p. 1075–91.
- 40. Hood DC, Fortune B, Arthur SN, Xing D, Salant JA, Ritch R, et al. Blood vessel contributions to retinal nerve fiber layer thickness profiles measured with optical coherence tomography. J Glaucoma. 2008;17:519–28. doi: 10.1097/IJG.0b013e3181629a02.
- 41. Yang S, Wang M, Jiao L, Wu R, Wang Z. Image fusion based on a new contourlet packet. Inf Fusion. 2010;11:78–84.
- 42. Alipour SH, Houshyari M, Mostaar A. A novel algorithm for PET and MRI fusion based on digital curvelet transform via extracting lesions on both images. Electron Physician. 2017;9:4872–9. doi: 10.19082/4872.
- 43. Singh R, Khare A. Multiscale medical image fusion in wavelet domain. ScientificWorldJournal. 2013;2013:521034. doi: 10.1155/2013/521034.


