Electronic Physician. 2017 Jul 25;9(7):4872–4879. doi: 10.19082/4872

A novel algorithm for PET and MRI fusion based on digital curvelet transform via extracting lesions on both images

Shirin Hajeb Mohammad Alipour 1, Mohammad Houshyari 2, Ahmad Mostaar 3
PMCID: PMC5587006  PMID: 28894548

Abstract

Background and aim

Merging multimodal images is a useful tool for accurate and efficient diagnosis and analysis in medical applications. The result is a single high-quality fused image that contains more information than either individual image. In this paper, we focus on the fusion of MRI gray-scale images and PET color images.

Methods

For the fusion of MRI gray-scale images and PET color images, we used lesion-region extraction based on the digital curvelet transform (DCT). Because the curvelet transform performs well in detecting edges, the regions in each image are segmented accurately. The curvelet transform decomposes each image into several low- and high-frequency sub-bands, and the entropy of each sub-band is then calculated. By comparing the entropies and the coefficients of the extracted regions, the best coefficients for the fused image are chosen, and the fused image is obtained via the inverse curvelet transform. To assess its performance, the proposed method was compared with different fusion algorithms, both visually and statistically.

Result

The analysis of the results showed that the proposed algorithm has high spectral and spatial resolution. According to the quantitative fusion metrics, the method achieves an entropy of 6.23, a mutual information (MI) of 1.88, and an SSIM of 0.6779. Comparison with four other common fusion algorithms showed that our method is effective.

Conclusion

The fusion of MRI and PET images gathers the useful information of both source images into a single image, called the fused image. This study introduces a new fusion algorithm based on the digital curvelet transform; experiments show that it achieves a high fusion effect.

Keywords: Medical Image Fusion, MRI, PET, Digital Curvelet Transform

1. Introduction

Image fusion is the combination of two or more images of the same organ into a single image that provides more information than any one of them. It is a very useful tool in a variety of fields, such as satellite imaging, military surveys, remote sensing, computer vision, and medical imaging. In addition, image fusion reduces the memory required for image storage, because one fused image is kept instead of multiple source images (1). The fused image is also more suitable for human and machine perception and for further image-processing tasks (1). Different approaches have been proposed for the fusion of MR and PET images, with the major objective of improving both spatial resolution and functional information. MRI is a non-invasive technique that produces highly detailed images of the anatomy of an organ, whereas PET is a color-coded functional imaging system (2). It therefore seems necessary to develop an automatic method that produces a fused image containing both the structural and the functional information of the same scene. To date, several fusion algorithms have been reported in the spatial domain, such as simple averaging (3), Bayesian methods (4, 5), intensity-hue-saturation (IHS) (6), principal component analysis (PCA) (2, 7, 8), independent component analysis (ICA) (9), and empirical mode decomposition (EMD) (10), and in the transform domain, such as the wavelet transform (11), the Laplacian pyramid (12), and the curvelet transform (13, 14). In spatial-domain methods, the fused image is produced by applying operations directly to the pixels of each source image, so spatial and color distortion may occur. Transform-domain methods overcome this problem: they transform the images into the frequency domain, and all operations are applied to the fast Fourier transform (FFT) of the images (15, 16). The current study was conducted to design a fusion system for registered MRI and PET images. For this purpose, we used the digital curvelet transform (DCT), a multi-scale directional transform with high performance in detecting curved edges; by modifying the DCT's sparse coefficients, we can extract the necessary objects from each source image and then apply a region-based fusion algorithm. The main steps of the proposed system are as follows. In Section 2.1, preprocessing and histogram equalization of both MRI and PET images are described. Section 2.2 explains the DCT. Sections 2.3 and 2.4 describe region segmentation of MRI and PET images, respectively. In Section 2.5, a new fusion method is described based on the DCT coefficients and tissue features, such as entropy. Section 3 provides the results, and our conclusions are presented in Section 4.

2. Background and Proposed Algorithm

In this study, we worked on a database of registered PET and MRI images obtained from the Harvard University website. The complete block diagram of the proposed algorithm is shown in Figure 1, and each step is described in detail in this section.

Figure 1. Block diagram of the proposed technique

2.1. Preprocessing

Since PET images are red-green-blue (RGB) color images, they must first be converted into intensity (brightness in a spectrum), hue (property of the spectral wavelength), and saturation (purity of the spectrum): the IHS color space (6). In the proposed algorithm, a triangular IHS transform was used, which can be written as (17):

$$I = \frac{R + G + B}{3}$$

$$\text{if } B < R \text{ and } B < G: \quad H = \frac{G - B}{3(I - B)}, \quad S = \frac{I - B}{I}$$

$$\text{if } R < B \text{ and } R < G: \quad H = 1 + \frac{B - R}{3(I - R)}, \quad S = \frac{I - R}{I}$$

$$\text{if } G < R \text{ and } G < B: \quad H = 2 + \frac{R - G}{3(I - G)}, \quad S = \frac{I - G}{I}$$

And the inverse IHS transform is:

$$\text{if } B < R \text{ and } B < G: \quad R = I(1 + 2S - 3SH), \quad G = I(1 - S + 3SH), \quad B = I(1 - S)$$

$$\text{if } R < G \text{ and } R < B: \quad R = I(1 - S), \quad G = I(1 + 5S - 3SH), \quad B = I(1 - 4S + 3SH)$$

$$\text{if } G < R \text{ and } G < B: \quad R = I(1 - 7S + 3SH), \quad G = I(1 - S), \quad B = I(1 + 8S - 3SH)$$
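To make the case analysis concrete, the following is a minimal NumPy sketch of the forward triangular transform; the function name, the eps guard against division by zero on achromatic pixels, and the argmin tie handling are our additions, and the inverse follows the same three cases:

```python
import numpy as np

def rgb_to_ihs(rgb: np.ndarray) -> np.ndarray:
    """Triangular RGB-to-IHS transform (the piecewise equations above).

    rgb: float array of shape (..., 3) with values in [0, 1]. Ties between
    channels are resolved by argmin; eps is our guard against division by
    zero on achromatic pixels (R == G == B), not part of the paper.
    """
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    I = (R + G + B) / 3.0
    eps = 1e-12
    H = np.zeros_like(I)
    S = np.zeros_like(I)
    lo = np.argmin(rgb, axis=-1)  # index of the minimum channel per pixel

    b_min = lo == 2               # case B < R and B < G
    H[b_min] = (G - B)[b_min] / (3.0 * (I - B)[b_min] + eps)
    S[b_min] = (I - B)[b_min] / (I[b_min] + eps)

    r_min = lo == 0               # case R < B and R < G
    H[r_min] = 1.0 + (B - R)[r_min] / (3.0 * (I - R)[r_min] + eps)
    S[r_min] = (I - R)[r_min] / (I[r_min] + eps)

    g_min = lo == 1               # case G < R and G < B
    H[g_min] = 2.0 + (R - G)[g_min] / (3.0 * (I - G)[g_min] + eps)
    S[g_min] = (I - G)[g_min] / (I[g_min] + eps)

    return np.stack([I, H, S], axis=-1)
```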

Figures 2(a) and 2(c) show the original MRI and PET images of the same position. After performing the IHS transform on the PET image (Figure 2(d)), contrast-limited adaptive histogram equalization (CLAHE) (18) was applied separately to the grey-level MRI and to the I component of the PET image to enhance contrast and attain a uniform background, as shown in Figures 2(b) and 2(f). Then, histogram matching was used to match the histogram of the MRI image to the PET intensity component, as shown in Figure 2(g). The PET intensity image and the new MRI panchromatic (PAN) image were then fed into the curvelet transform.
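This preprocessing chain can be sketched with scikit-image, whose equalize_adapthist and match_histograms functions implement CLAHE and histogram matching; the variable names below are illustrative, and the inputs are assumed to be 2-D float arrays scaled to [0, 1]:

```python
from skimage import exposure

# Hypothetical inputs: `mri` is the grey-level MRI and `pet_i` the I component
# of the IHS-transformed PET image, both floats in [0, 1].
mri_clahe = exposure.equalize_adapthist(mri)       # CLAHE on the MRI
pet_i_clahe = exposure.equalize_adapthist(pet_i)   # CLAHE on the PET intensity
# Match the MRI histogram to the enhanced PET intensity to form the PAN image.
mri_pan = exposure.match_histograms(mri_clahe, pet_i_clahe)
```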

Figure 2. (a) MRI original image; (b) after CLAHE on the MRI; (c) PET original image; (d) after IHS transform on the PET; (e) intensity component of the PET image; (f) after CLAHE on Figure 2(e); (g) after histogram matching.

2.2. Digital Curvelet Transform (DCT)

The curvelet transform, proposed by Donoho (19), is a multi-scale directional transform. The DCT performs best at representing edges and other singularities along curves, and it has better directional selectivity than other multi-scale transforms, such as the wavelet transform. Because the wavelet basis is isotropic while a curve has direction, the wavelet requires many more coefficients to account for edges. By applying the curvelet transform and modifying its coefficients, objects and features can be made more distinguishable (20). This property makes the DCT an efficient tool for analyzing medical images, which contain many curved objects. In fact, the DCT applies the ridgelet transform to the various sub-bands of an image (21). The ridgelet transform can only handle straight-line singularities (22); to detect curved singularities, it must be applied separately to each frequency sub-band of the image. The mechanism of the DCT is composed of four major steps: (1) sub-band decomposition, (2) smooth partitioning, (3) renormalization, and (4) ridgelet analysis, which are discussed in detail in (21). After preprocessing, the curvelet transform is applied to the PET and MRI images separately. Then, by modifying its few non-zero coefficients, exploiting the sparsity of the curvelet representation, the important regions of each image can be extracted. These extracted regions are used later in the fusion process.
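The extraction step can be illustrated with the following sketch. Since no particular curvelet library is assumed here, PyWavelets stands in for the DCT; the decompose, threshold-small-coefficients, reconstruct pattern is the same, and keep_pct is a hypothetical parameter:

```python
import numpy as np
import pywt  # PyWavelets, used here only as a stand-in for a curvelet toolbox

def keep_salient_coefficients(img, wavelet="db4", level=3, keep_pct=2.0):
    """Decompose into sub-bands, keep only the largest detail coefficients,
    and reconstruct, so that strong edges and bright objects stand out.

    The paper does this with the DCT; a wavelet decomposition is substituted
    here because no specific curvelet library is assumed to be installed.
    `keep_pct` (percentage of detail coefficients retained) is an assumed
    tuning parameter, not a value from the paper.
    """
    coeffs = pywt.wavedec2(np.asarray(img, dtype=float), wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    # Pool all detail-coefficient magnitudes to pick a global threshold.
    magnitudes = np.concatenate([np.abs(d).ravel() for lvl in details for d in lvl])
    thresh = np.percentile(magnitudes, 100.0 - keep_pct)
    # Zero everything below the threshold; keep the sparse, salient responses.
    kept = [tuple(np.where(np.abs(d) >= thresh, d, 0.0) for d in lvl) for lvl in details]
    return pywt.waverec2([approx] + kept, wavelet)
```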

2.3. MRI Region Segmentation

Note that white in the MR image corresponds to high-activity regions of the brain, so segmentation of these bright, hyper-intense lesions plays an important role in the fusion of PET and MR images. In this step, the curvelet transform is applied to the MR grey-level image in order to detect tumors: the image is first decomposed into low- and high-frequency components. As discussed above, the DCT is an efficient transform for detecting 2-D singularities, such as curved edges, which most other transforms handle less well, so it is the natural choice here. The original MR image and the detected lesions are shown in Figure 3.
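The final thresholding (Figure 3(d)) can be as simple as keeping pixels that are markedly brighter than the mean of the DCT-modified image; a minimal sketch, where the factor k is an assumed tuning parameter, not a value from the paper:

```python
import numpy as np

# `modified` is the reconstructed MRI after coefficient modification
# (cf. the sketch in Section 2.2). k is a hypothetical tuning parameter.
k = 2.0
lesion_mask = modified > modified.mean() + k * modified.std()
```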

Figure 3. (a) MRI original image; (b) enhanced image after CLAHE; (c) image after modifying bright objects by DCT; (d) image after applying a threshold.

2.4. PET Region Segmentation

In this step, we want to detect the important regions of the PET image; the regions shown in white are related to the disease in the brain. We applied the DCT to the color PET image according to the method used in (23) for extracting bright lesions. To make the bright lesions more distinguishable, the intensity of the gray levels g_1(i,j) in the green channel of RGB is changed as follows (24):

$$g_2(i,j) = 9 \times g_1(i,j) - 9 \times \bar{g} + 90,$$

where $\bar{g}$ is the average intensity of $g_1$ in a 3 × 3 neighborhood. The original and resulting images are shown in Figure 4. This improved image is then converted to a gray-level image and fed into the DCT, and the bright lesions are extracted by modifying the DCT coefficients and choosing an appropriate threshold (Figure 4(e)).
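A minimal sketch of this enhancement, using a uniform filter to compute the 3 × 3 neighborhood mean (the function name is illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def enhance_green(g1: np.ndarray) -> np.ndarray:
    """Local contrast stretch of the green channel per the equation above."""
    g1 = np.asarray(g1, dtype=float)
    g_bar = uniform_filter(g1, size=3)  # 3 x 3 neighborhood mean
    return 9.0 * g1 - 9.0 * g_bar + 90.0
```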

Figure 4. (a) PET original image; (b) image after green-component enhancement by CLAHE; (c) conversion from RGB to gray level; (d) image after applying DCT and modifying its coefficients; (e) image after applying a threshold.

2.5. Fusion Implementation

The result of this step is a new intensity image that includes the spatial details of the original MRI and the functional information of the original PET. At this point, we have four input images: (1) the MRI PAN image, (2) the enhanced PET image, (3) the extracted lesions of the MRI, and (4) the extracted lesions of the PET.

First, the proposed algorithm decomposes each input image into low- and high-frequency components using the DCT. The frequency sub-bands are then fused separately according to our fusion rule: we use entropy, a texture feature, together with the pixel values of the segmented images, to construct the fused image. Entropy represents the average information of an image, so an image with higher entropy contains more texture information than one with lower entropy. After applying the DCT to the PET and MR PAN images, the regional entropy of each sub-band is calculated as follows (25):

$$p_{ij} = \frac{\mathrm{Gray\_value}(i,j)}{\sum_{i=1}^{M}\sum_{j=1}^{N}\mathrm{Gray\_value}(i,j)}, \qquad H = -\sum_{i=1}^{M}\sum_{j=1}^{N} p_{ij}\,\log_2 p_{ij},$$

where the size of the region is M × N, and the gray value of the image at point (i,j) is Gray_value(i,j). Thus, p_ij is the gray-value probability at point (i,j), and H is the entropy of the regional image.
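A direct implementation of this regional entropy might look as follows; note that it normalizes the gray values themselves into a probability map, exactly as the equation states, rather than histogramming them:

```python
import numpy as np

def regional_entropy(region: np.ndarray) -> float:
    """Entropy H of an M x N region, per the definition above."""
    g = np.asarray(region, dtype=float)
    total = g.sum()
    if total <= 0:
        return 0.0
    p = (g / total).ravel()
    p = p[p > 0]                      # 0 * log2(0) is taken as 0
    return float(-(p * np.log2(p)).sum())
```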

The proposed procedure is as follows:

  1. Apply the DCT to (i) the enhanced PET image, (ii) the MRI PAN image, (iii) the lesions extracted from the PET image, and (iv) the lesions extracted from the MRI image.

  2. Calculate the entropy of each sub-band of the PET and MRI PAN images.

  3. If both the regional information entropy and the related lesion contrast of image A are larger than or equal to those of image B, select the curvelet coefficients of image A as the coefficients of the fused image.

  4. If both the regional information entropy and the related lesion contrast of image A are less than those of image B, select the curvelet coefficients of image B as the coefficients of the fused image.

  5. Otherwise, set the coefficients of the fused image to the average of the coefficients of images A and B (a sketch of this selection rule is given after this list).

  6. Construct the intensity (I) of the fused image by the inverse curvelet transform.

  7. Apply the inverse IHS-to-RGB transform (adding the color information to the intensity image) to produce an RGB fused image that contains both structural and functional information.
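A sketch of the selection rule in steps 3-5 for a single sub-band follows; all names are illustrative, and the lesion-contrast values are assumed to be computed from the extracted lesion images:

```python
import numpy as np

def select_coefficients(coef_a, coef_b, ent_a, ent_b, contrast_a, contrast_b):
    """Steps 3-5 of the procedure above, applied to one sub-band.

    coef_a / coef_b: sub-band coefficient arrays of images A and B;
    ent_*: regional entropies; contrast_*: related lesion contrasts.
    """
    if ent_a >= ent_b and contrast_a >= contrast_b:
        return coef_a                      # A dominates on both criteria
    if ent_a < ent_b and contrast_a < contrast_b:
        return coef_b                      # B dominates on both criteria
    return 0.5 * (coef_a + coef_b)         # otherwise, average A and B
```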

3. Results

We applied our method to registered color PET and high-resolution MRI images obtained from the Harvard University website (http://www.med.harvard.edu/AANLIB/home.html). All images were resized to 256×256 pixels. Our database consists of brain images of astrocytoma cases. The performance of the proposed scheme was evaluated both visually and quantitatively. Visual comparison showed that our proposed method preserves more MRI detail than the other methods. For a quantitative analysis of the algorithm, measurements such as entropy, mutual information, and structural similarity are needed; these are discussed in Sections 3.1, 3.2, and 3.3, and the statistical results are shown in Table 1.

Table 1.

Performance comparison of fusion methods based on entropy (EN), mutual information (MI), and structural similarity (SSIM)

Method           EN      MI      SSIM
DWT              5.4140  1.7487  0.6775
GIHS             5.7868  1.7084  0.6207
GFF              5.6628  1.7883  0.6819
IAWP             5.6831  1.8298  0.6718
Proposed method  6.2300  1.8878  0.6779

3.1. Entropy

As discussed in Section 2.5, entropy carries useful information about texture: images with higher entropy present more anatomical information. Entropy is therefore a good choice for evaluating the different fusion algorithms.

3.2. Mutual Information

The mutual information measures the information that two random variables share. Let A and B be two discrete random variables. Then, mutual information can be defined as (26):

$$I(A,B) = \sum_{b \in B} \sum_{a \in A} p(a,b)\,\log_2\!\left(\frac{p(a,b)}{p(a)\,p(b)}\right),$$

where p(a,b) is the joint probability distribution function of A and B, and p(a) and p(b) are the marginal probability distributions of A and B, respectively. Since the log base is 2, the unit of mutual information is the bit.

From the above equation, we can see that if A and B are independent, then knowing A gives no information about B, so their mutual information is zero.

MI is related to the entropies of A and B as follows:

$$I(A,B) = H(A) - H(A|B) = H(B) - H(B|A) = H(A) + H(B) - H(A,B) = H(A,B) - H(A|B) - H(B|A),$$

where H(A) and H(B) are the marginal entropies of A and B, respectively, H(A,B) is the joint entropy, and H(A|B) and H(B|A) are the conditional entropies.
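For two images, MI is typically estimated from a joint gray-level histogram; a minimal sketch, where the bin count is an assumed choice:

```python
import numpy as np

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 64) -> float:
    """MI in bits between two images, estimated from a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)   # marginal of a
    p_b = p_ab.sum(axis=0, keepdims=True)   # marginal of b
    outer = p_a @ p_b                       # p(a) * p(b) on the joint grid
    nz = p_ab > 0                           # skip zero cells (0 * log 0 = 0)
    return float((p_ab[nz] * np.log2(p_ab[nz] / outer[nz])).sum())
```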

3.3. Structural Similarity

Structural similarity (SSIM) (1) measures the similarity between the resulting image and the original reference image, and thus provides useful information about the quality of the fused image.
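In practice, SSIM can be computed with scikit-image's structural_similarity function; a minimal usage sketch with hypothetical `reference` and `fused` grey-level float arrays:

```python
from skimage.metrics import structural_similarity as ssim

# `reference` and `fused` are assumed 2-D float arrays of the same shape.
score = ssim(reference, fused, data_range=fused.max() - fused.min())
```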

4. Discussion

The performance of our method is discussed in this section. For visual comparison, Figure 5 shows the results of our new fusion approach alongside different fusion schemes; in this case, the original source images are related to grade IV astrocytoma. Figures 5(a) and 5(b) show the original MR and PET images, respectively. The fused image from the proposed method is shown in Figure 5(g) and is compared with the techniques based on the discrete wavelet transform (DWT) (27) in Figure 5(c), generalized IHS (GIHS) (28) in Figure 5(d), guided filtering (GFF) (29) in Figure 5(e), and the improved additive wavelet (IAWP) (30) in Figure 5(f). Visual comparison shows that Figure 5(g) presents greater detail, and the common features of the MRI and PET images are well represented. It is therefore easy to conclude that the proposed scheme provides better visual quality than the existing schemes.

Figure 5. (a) MRI original image (astrocytoma); (b) PET original image (astrocytoma); fused images by (c) DWT (27); (d) GIHS (28); (e) GFF (29); (f) IAWP (30); (g) the proposed method.

Apart from the visual performance, quantitative analysis was performed on the fused images using entropy (25), MI (26), and SSIM (1). These metrics for the images obtained by the different fusion algorithms are shown in Table 1; larger values indicate better quality. The table demonstrates that the proposed algorithm preserves high spatial resolution. From the high resulting entropy, MI, and SSIM values, we can conclude that our fusion procedure yields a greater amount of information in the fused image, so the quantitative analysis agrees with the visual assessments.

5. Conclusions

Since the PET and MRI modalities provide functional and anatomical information, respectively, combining the two images gives more comprehensive information about the subject; this meaningful information, together with the low distortion in the fused image, allows accurate and effective diagnosis, analysis, and treatment. In this paper, a curvelet-based fusion of PET and MRI brain images was introduced. First, the two source images are preprocessed; then the DCT is applied to both images and decomposes them into different sub-bands. By modifying the DCT coefficients, lesion detection is performed on astrocytoma cases. The coefficients of the fused image are chosen by comparing regional entropies and according to the segmented lesions. Since the DCT is a multi-scale, highly directional transform, it represents curved objects and edges better than other decomposition methods, and more of the source images' detail is visible in the fused image. Visual and quantitative comparisons showed that the proposed scheme produced better results than the four other methods discussed. Future work should apply other multi-scale directional transforms, such as the contourlet and shearlet transforms.

Acknowledgments

This article was derived from the thesis of Shirin Hajeb Mohammad Alipour in the Biomedical and Medical Physics Department at Shahid Beheshti University of Medical Sciences.

Footnotes

iThenticate screening: January 13, 2016, English editing: February 23, 2016, Quality control: May 18, 2017

Conflict of Interest:

There is no conflict of interest to be declared.

Authors’ contributions:

All authors contributed to this project and article equally. All authors read and approved the final manuscript.

References

1. Javed U, Riaz MM, Ghafoor A, Ali SS, Cheema TA. MRI and PET image fusion using fuzzy logic and image local features. Scientific World Journal. 2014;2014:708075. doi: 10.1155/2014/708075.
2. He C, Liu Q, Li H, Wang H. Multimodal medical image fusion based on IHS and PCA. Procedia Engineering. 2010;7:280–5. doi: 10.1016/j.proeng.2010.11.045.
3. Malviya A, Bhirud S. Image fusion of digital images. Entropy. 2009;7(7.4735):7.4955.
4. Ge Z, Wang B, Zhang L. Remote sensing image fusion based on Bayesian linear estimation. Sci China Inform Sci. 2007;50(2):227–40. doi: 10.1007/s11432-007-008-7.
5. Zhou H, Cheng Q, Zargham M, editors. Fast fusion of medical images based on Bayesian risk minimization and pixon map. Computational Science and Engineering (CSE'09), International Conference; IEEE; 2009. pp. 1086–91.
6. Tu TM, Su SC, Shyu HC, Huang PS. A new look at IHS-like image fusion methods. Information Fusion. 2001;2(3):177–86. doi: 10.1016/s1566-2535(01)00036-7.
7. Hao-quan W, Hao X, editors. Multi-mode medical image fusion algorithm based on principal component analysis. Computer Network and Multimedia Technology (CNMT 2009), International Symposium; IEEE; 2009. pp. 1–4.
8. Al-Azzawi NA, Sakim HAM, Wan Abdullah A, editors. An efficient medical image fusion method using contourlet transform based on PCM. Industrial Electronics & Applications (ISIEA 2009), IEEE Symposium; IEEE; 2009. pp. 11–14.
9. Cui Z, Zhang G, Wu J, editors. Medical image fusion based on wavelet transform and independent component analysis. Artificial Intelligence (JCAI'09), International Joint Conference; IEEE; 2009. pp. 480–3.
10. Zheng Y, Qin Z. Medical image fusion algorithm based on bidimensional empirical mode decomposition. Journal of Software. 2009;20(5):1096–105. doi: 10.3724/SP.J.1001.2009.03542.
11. Pajares G, Manuel de la Cruz J. A wavelet-based image fusion tutorial. Pattern Recognition. 2004;37(9):1855–72. doi: 10.1016/j.patcog.2004.03.010.
12. Pradeep M, editor. Implementation of image fusion algorithm using MATLAB (Laplacian pyramid). Automation, Computing, Communication, Control and Compressed Sensing (iMac4s), International Multi-Conference; IEEE; 2013. pp. 165–8.
13. Kumar YK, editor. Three-band MRI image fusion: a curvelet transform approach. World Congress on Medical Physics and Biomedical Engineering; September 7–12, 2009; Munich, Germany. Springer; 2010. pp. 105–6.
14. Ali F, El-Dokany I, Saad A, Abd El-Samie FES. Curvelet fusion of MR and CT images. Progress In Electromagnetics Research C. 2008;3:215–24. doi: 10.2528/PIERC08041305.
15. Sahu DK, Parsai M. Different image fusion techniques - a critical review. International Journal of Modern Engineering Research (IJMER). 2012;2(5):4298–301.
16. Stathaki T. Image Fusion: Algorithms and Applications. Academic Press; 2011.
17. Al-Wassai FA, Kalyankar NV, Al-Zuky AA. The IHS transformations based image fusion. arXiv preprint arXiv:1107.4396. 2011.
18. Sasi NM, Jayasree V. Contrast limited adaptive histogram equalization for qualitative enhancement of myocardial perfusion images. Engineering. 2013;5:326–31. doi: 10.4236/eng.2013.510B066.
19. Donoho DL, Duncan MR. Digital curvelet transform: strategy, implementation, and experiments. Proc. SPIE AeroSense; International Society for Optics and Photonics; 2000.
20. Hajeb Mohammad Alipour S, Rabbani H, Akhlaghi M. A new combined method based on curvelet transform and morphological operators for automatic detection of foveal avascular zone. Signal, Image and Video Processing. 2014;8(2):205–22. doi: 10.1007/s11760-013-0530-6.
21. Candes E, Demanet L, Donoho D, Ying L. Fast discrete curvelet transforms. Multiscale Modeling & Simulation. 2006;5(3):861–99. doi: 10.1137/05064182X.
22. Do MN, Vetterli M. The finite ridgelet transform for image representation. IEEE Trans Image Process. 2003;12(1):16–28. doi: 10.1109/TIP.2002.806252.
23. Esmaeili M, Rabbani H, Dehnavi A, Dehghani A. Automatic detection of exudates and optic disk in retinal images using curvelet transform. IET Image Processing. 2012;6(7):1005–13. doi: 10.1049/iet-ipr.2001.0333.
24. Hajeb Mohammad Alipour S, Rabbani H, Akhlaghi MR. Diabetic retinopathy grading by digital curvelet transform. Comput Math Methods Med. 2012;2012:761901. doi: 10.1155/2012/761901.
25. Teng J, Wang X, Zhang J, Wang S, editors. Wavelet-based texture fusion of CT/MRI images. Image and Signal Processing (CISP), 3rd International Congress; IEEE; 2010. pp. 2709–13.
26. Piella G. A general framework for multiresolution image fusion: from pixels to regions. Information Fusion. 2003;4(4):259–80. doi: 10.1016/S1566-2535(03)00046-0.
27. Pajares G, De La Cruz JM. A wavelet-based image fusion tutorial. Pattern Recognition. 2004;37(9):1855–72. doi: 10.1016/j.patcog.2004.03.010.
28. Li T, Wang Y. Biological image fusion using a NSCT based variable-weight method. Information Fusion. 2011;12(2):85–92. doi: 10.1016/j.inffus.2010.03.007.
29. Savitha M, Jeyaseeli VS, Sindumathi S. Image fusion with guided filtering. IEEE Trans Image Process. 2013;22(7):2864–75. doi: 10.1109/TIP.2013.2244222.
30. Kim Y, Lee C, Han D, Kim Y, Kim Y. Improved additive-wavelet image fusion. IEEE Geoscience and Remote Sensing Letters. 2011;8(2):263–7. doi: 10.1109/LGRS.2010.2067192.

