The Scientific World Journal. 2014 Jan 19;2014:708075. doi: 10.1155/2014/708075

MRI and PET Image Fusion Using Fuzzy Logic and Image Local Features

Umer Javed 1,2, Muhammad Mohsin Riaz 3, Abdul Ghafoor 3,*, Syed Sohaib Ali 3, Tanveer Ahmed Cheema 2
PMCID: PMC3916105  PMID: 24574912

Abstract

An image fusion technique for magnetic resonance imaging (MRI) and positron emission tomography (PET) using local features and fuzzy logic is presented. The aim of the proposed technique is to maximally combine the useful information present in MRI and PET images. Image local features are extracted and combined with fuzzy logic to compute a weight for each pixel. Simulation results show that the proposed scheme produces significantly better results than state-of-the-art schemes.

1. Introduction

Fusion of images obtained from different imaging systems such as computed tomography (CT), MRI, and PET plays an important role in medical diagnosis and other clinical applications. Each imaging technique provides a different level of information. For instance, CT (based on the X-ray principle) is commonly used for visualizing dense structures and is not suitable for soft tissues or physiological analysis. MRI, on the other hand, provides better visualization of soft tissues and is commonly used for the detection of tumors and other tissue abnormalities. Likewise, PET (a nuclear imaging technique) provides information about blood flow in the body but suffers from low resolution compared to CT and MRI. Hence, fusing images obtained from different modalities is desirable to extract sufficient information for clinical diagnosis and treatment.

Image fusion integrates (complementary as well as redundant) information from multimodality images to create a fused image [1–6]. It not only provides an accurate description of the same object but also reduces the required memory by storing fused images instead of multiple source images. Various techniques have been developed for medical image fusion, which can be generally grouped into pixel, feature, and decision level fusion [7]. Compared to feature and decision level methods, pixel level methods [1, 2] are better suited to medical imaging as they can preserve spatial details in the fused images [1, 8].

Conventional pixel level methods (including addition, subtraction, multiplication, and weighted average) are simple but less accurate. Intensity hue saturation (IHS) based methods fuse the images by replacing the intensity component [1, 5, 9]. These methods generally produce high-resolution fused images but cause spectral distortion (due to inaccurate estimation of the spectral information) [10]. Similarly, principal component analysis based methods fuse images by replacing certain principal components [11].

Multiresolution techniques, including pyramids, the discrete wavelet transform (DWT), and the contourlet, curvelet, shearlet, and framelet transforms, decompose an image into different bands for fusion (a comprehensive comparison is presented in [12]). DWT-based schemes decompose the input images into horizontal, vertical, and diagonal subbands, which are then fused using additive or substitutive methods. Earlier DWT-based fusion schemes cannot preserve the salient features of the source images efficiently, producing block artifacts and inconsistencies in the fused results [2, 3]. In [4], a human visual system model is combined with the DWT to fuse the low frequency bands using visibility and variance features, and a local window approach is used (to adjust the coefficients adaptively) for noise reduction and for maintaining homogeneity in the fused image. However, the method often produces block artifacts and reduced contrast [3, 5]. Consistency verification and activity measures combined with the DWT can capture only limited directional information and hence are not suitable for sharp image transitions [13].

Texture features and a visibility measure are used with the framelet transform [5] to fuse the high and low frequency components, respectively. Contourlet transform based methods use different and flexible directions to detect intrinsic geometrical structures [13]. Common methods include a variable-weight scheme using the nonsubsampled contourlet transform [14] and a bio-inspired activity measure using pulse-coupled neural networks [15]. However, the down- and up-sampling in the contourlet transform lacks shift invariance and causes ringing artifacts [14]. The curvelet transform uses various directions and positions at different length scales [16]; however, it does not provide a multiresolution representation of geometry [17]. The shearlet transform offers useful properties (such as directionality, localization, and a multiscale framework) and can decompose an image at any scale and direction to fuse the required information [17].

A prespecified transform matrix and learning techniques are used with kernel singular value decomposition to fuse images in the sparse domain [18]. In [19], image fusion is performed using the redundancy DWT and the contourlet transform. A pixel level neuro-fuzzy logic based fusion scheme adjusts its membership functions (MFs) using backpropagation and least mean square algorithms [20]. A spiking cortical model has been proposed to fuse different types of medical images [21]. However, these schemes are complex or work only under certain assumptions/constraints.

A fusion technique for MRI and PET images using local features and fuzzy logic is presented. The proposed technique maximally combines the useful information present in MRI and PET images. Image local features are extracted and combined with fuzzy logic to compute weights for each pixel. Simulation results based on visual and quantitative analysis show the significance of the proposed scheme.

2. À-Trous-Based Image Fusion: An Overview

In contrast to conventional multiresolution schemes (where the output is downsampled after each level), the à-trous or undecimated wavelet transform provides shift invariance and is hence better suited for image fusion.

Let successive approximations $I_{\text{MRI},k}$ of the MRI image $I_{\text{MRI}}$ (having dimensions $M \times N$) be obtained by successive convolutions with a filter $f$; that is,

$I_{\text{MRI},k+1} = I_{\text{MRI},k} * f, \quad (1)$

where $I_{\text{MRI},0} = I_{\text{MRI}}$ and $f$ is a bicubic B-spline filter. The $k$th wavelet plane $W_{\text{MRI},k}$ of $I_{\text{MRI}}$ is

$W_{\text{MRI},k} = I_{\text{MRI},k} - I_{\text{MRI},k+1}. \quad (2)$

The image $I_{\text{MRI}}$ is decomposed into low frequency $I_{\text{MRI},L}$ and high frequency $I_{\text{MRI},H}$ components as

$I_{\text{MRI}} = I_{\text{MRI},L} + I_{\text{MRI},H} = I_{\text{MRI},L} + \sum_{k=0}^{K} W_{\text{MRI},k}, \quad (3)$

where $K$ is the total number of decomposition levels. Similarly, the PET image $I_{\text{PET}}$ in terms of its low frequency $I_{\text{PET},L}$ and high frequency $I_{\text{PET},H}$ components is

$I_{\text{PET}}(\beta) = I_{\text{PET},L}(\beta) + \sum_{k=0}^{K} W_{\text{PET},k}(\beta), \quad (4)$

where $\beta \in \{R, G, B\}$, as PET images are assumed to be in pseudocolor [9].
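For concreteness, a minimal NumPy/SciPy sketch of the decomposition (1)–(3) is given below; this is not the authors' code, and the B3-spline taps, mirror boundary handling, and level count are conventional assumptions.

```python
# A-trous (undecimated) wavelet decomposition sketch.
import numpy as np
from scipy.ndimage import convolve

def atrous_decompose(img, levels):
    """Return (low, planes): coarsest approximation and wavelet planes W_k."""
    b3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # bicubic B-spline taps
    approx = img.astype(np.float64)
    planes = []
    for k in range(levels):
        # Insert 2**k - 1 zeros between taps ("holes") instead of downsampling.
        kernel_1d = np.zeros(4 * 2**k + 1)
        kernel_1d[::2**k] = b3
        kernel_2d = np.outer(kernel_1d, kernel_1d)
        smoothed = convolve(approx, kernel_2d, mode='mirror')  # I_{k+1} = I_k * f
        planes.append(approx - smoothed)                       # W_k, as in (2)
        approx = smoothed
    return approx, planes  # img == approx + sum(planes), as in (3)
```

The same routine applies band-wise to the pseudocolor PET image in (4).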

Different methods exist in the literature to fuse the low and high frequency components; these are generally grouped into substitute wavelet (SW) and additive wavelet (AW) methods. The fused image $I_{\text{SW}}$ using SW is

$I_{\text{SW}}(\beta) = I_{\text{PET},L}(\beta) + \sum_{k=0}^{K} W_{\text{MRI},k}. \quad (5)$

Note that the SW method fuses the images by completely replacing the high frequency components of PET with the high frequency components of the MRI image, which can cause geometric and spectral distortion. SW and IHS (SWI) are combined to overcome this limitation in the fused image $I_{\text{SWI}}$; that is,

$I_{\text{SWI}}(\beta) = I_{\text{PET},L}(\beta) - \sum_{k=0}^{K} W_{\text{INT},k} + \sum_{k=0}^{K} W_{\text{MRI},k}, \quad (6)$

where the intensity image $I_{\text{INT}}$ is

$I_{\text{INT}} = \frac{1}{B} \sum_{\beta} I_{\text{PET}}(\beta). \quad (7)$

The substitution process in the SWI method sometimes results in loss of information, as the intensity component is obtained by simple averaging/weighting.

In the AW method, the fused image $I_{\text{AW}}$ is obtained by injecting the high frequency components of $I_{\text{MRI}}$ into $I_{\text{PET}}$:

$I_{\text{AW}}(\beta) = I_{\text{PET}}(\beta) + \sum_{k=0}^{K} W_{\text{MRI},k}. \quad (8)$

The AW method adds the same amount of high frequencies into the low-resolution bands, which causes redundancy of high frequency components (and hence spectral distortion).

To address this limitation, the AW luminance proportional (AWLP) method injects the high frequencies in proportion to the intensity values [22]. Consider

$I_{\text{AWLP}}(\beta) = I_{\text{PET}}(\beta) + \frac{I_{\text{PET}}(\beta)}{(1/B)\sum_{\beta} I_{\text{PET}}(\beta)} \sum_{k=1}^{K} W_{\text{MRI},k}, \quad (9)$

where $B$ is the total number of bands. The fused image $I_{\text{AWLP}}$ of AWLP preserves the relative spectral information among the different bands. The fused image using the improved additive wavelet proportional (IAWP) method [23] is

$I_{\text{IAWP}}(\beta) = I_{\text{PET}}(\beta) + \frac{I_{\text{PET}}(\beta)}{(1/B)\sum_{\beta} I_{\text{PET}}(\beta)} \left[ \sum_{k=1}^{K} W_{\text{MRI},k} - \sum_{k=1}^{K} W_{\text{MRIR},k} \right], \quad (10)$

where $W_{\text{MRIR},k}$ are the wavelet planes of a low-resolution MRI image $I_{\text{MRIR}}$ (a spatially degraded version of $I_{\text{MRI}}$), obtained by filtering out the high frequencies with a smoothing filter. The major limitations of the above schemes include the injection of redundant high/low frequencies and, consequently, spatial degradation.
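As an illustration of the proportional injection in (9), a sketch along the following lines could be used; the array names pet and mri, the band axis convention, and the level count are assumptions for illustration, not details from [22].

```python
# Illustrative AWLP fusion per (9): inject MRI detail in proportion to
# each band's share of the PET intensity.
import numpy as np

def awlp_fuse(pet, mri, levels=3):
    """pet: (M, N, 3) pseudocolor array; mri: (M, N) gray-scale array."""
    _, mri_planes = atrous_decompose(mri, levels)      # reuses the sketch above
    detail = sum(mri_planes)                           # sum_k W_MRI,k
    intensity = pet.mean(axis=2, keepdims=True)        # (1/B) sum_beta I_PET(beta)
    ratio = pet / (intensity + 1e-12)                  # per-band proportion
    return pet + ratio * detail[..., np.newaxis]
```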

3. Proposed Technique

The proposed scheme first decomposes the MRI and PET images into low and high frequencies using the à-trous wavelet. The high and low frequencies are then fused separately according to defined criteria. The overall fused image $I_F$ in terms of its high frequency $I_{F,H}$ and low frequency $I_{F,L}(\beta)$ components is

$I_F(\beta) = I_{F,L}(\beta) + I_{F,H}. \quad (11)$

3.1. Fusion of Low Frequencies

Fusion of the low frequencies $I_{\text{MRI},L}$ and $I_{\text{PET},L}$ is a critical and challenging task. Various schemes utilize different criteria for fusing low frequencies. For instance, one choice is to totally discard the low frequencies of one image; another is to take the average or a weighted average of both, and so forth. However, such schemes provide limited performance as they do not account for the spatial properties of the image. We propose fusing the low frequencies using a different weighted average for each pixel location. The weights are computed based on the amount of information contained in the vicinity of each pixel.

3.1.1. Local Features

Local variance (LV) and local blur (LB) features are used with a fuzzy inference engine to compute the desired weights for fusing the low frequencies.

LV [24] is used to evaluate the regional characteristics of the $I_{\text{PET},L}$ image and is defined as $I_{\text{LV}}$:

$I_{\text{LV}}(\beta, m, n) = \frac{1}{(2m_1 + 1)(2n_1 + 1)} \sum_{m_2 = m - m_1}^{m + m_1} \sum_{n_2 = n - n_1}^{n + n_1} \left( I_{\text{PET}}(\beta, m_2, n_2) - \bar{I}_{\text{PET}}(\beta) \right)^2, \quad (12)$

where $\bar{I}_{\text{PET}}(\beta)$ is the mean value over the local window centered at pixel $(m, n)$. Note that a region containing sharp edges results in a higher value (and vice versa).
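A sketch of the local variance feature (12), computed with sliding-window means, is given below; the window half-sizes m1 and n1 are illustrative defaults, not values from the paper.

```python
# Local variance (12) via uniform filtering: Var = E[x^2] - (E[x])^2
# over each (2*m1+1) x (2*n1+1) window.
from scipy.ndimage import uniform_filter

def local_variance(img, m1=3, n1=3):
    size = (2 * m1 + 1, 2 * n1 + 1)
    mean = uniform_filter(img, size=size, mode='mirror')          # local mean
    mean_sq = uniform_filter(img ** 2, size=size, mode='mirror')  # local E[x^2]
    return mean_sq - mean ** 2
```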

LB $I_{\text{LB}}$ is computed using the local Rényi entropy [25] of the $I_{\text{PET},L}$ image. Let $P_{\beta mn}(k)$ be the probability (or normalized histogram) of intensity value $k = 1, 2, \ldots, K$ within a local window (of size $m_1 \times n_1$) centered at pixel $(\beta, m, n)$. $I_{\text{LB}}$ is defined as [25]

$I_{\text{LB}}(\beta, m, n) = -\frac{1}{2} \ln \left( \sum_{k=1}^{K} P_{\beta mn}^{3}(k) \right). \quad (13)$
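A direct (unoptimized) sketch of (13) follows; the histogram bin count and window size are assumed parameters.

```python
# Local blur (13): order-3 Renyi entropy of the normalized local histogram.
import numpy as np

def local_blur(img, m1=3, n1=3, bins=32):
    padded = np.pad(img, ((m1, m1), (n1, n1)), mode='reflect')
    out = np.empty(img.shape, dtype=np.float64)
    rows, cols = img.shape
    for m in range(rows):
        for n in range(cols):
            window = padded[m:m + 2 * m1 + 1, n:n + 2 * n1 + 1]
            hist, _ = np.histogram(window, bins=bins)
            p = hist / hist.sum()                       # P_{beta m n}(k)
            out[m, n] = -0.5 * np.log(np.sum(p ** 3) + 1e-12)
    return out
```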

High values of $I_{\text{LV}}$ and $I_{\text{LB}}$ indicate that $I_{\text{PET},L}$ contains more information and needs to be assigned a larger weight than the $I_{\text{MRI},L}$ image.

3.1.2. Fuzzy Inference Engine

Let the high $\zeta_{\text{LV},1}(u)$ and low $\zeta_{\text{LV},2}(u)$ Gaussian membership functions (MFs) for LV, having means $\bar{u}^{(1)}, \bar{u}^{(2)}$ and variances $\sigma_u^{(1)}, \sigma_u^{(2)}$, be [26]

$\zeta_{\text{LV},1}(u) = e^{-\left((u - \bar{u}^{(1)})/\sigma_u^{(1)}\right)^2}, \quad \zeta_{\text{LV},2}(u) = e^{-\left((u - \bar{u}^{(2)})/\sigma_u^{(2)}\right)^2}. \quad (14)$

Similarly, let the high $\zeta_{\text{LB},1}(v)$ and low $\zeta_{\text{LB},2}(v)$ Gaussian MFs for LB, having means $\bar{v}^{(1)}, \bar{v}^{(2)}$ and variances $\sigma_v^{(1)}, \sigma_v^{(2)}$, be

$\zeta_{\text{LB},1}(v) = e^{-\left((v - \bar{v}^{(1)})/\sigma_v^{(1)}\right)^2}, \quad \zeta_{\text{LB},2}(v) = e^{-\left((v - \bar{v}^{(2)})/\sigma_v^{(2)}\right)^2}. \quad (15)$

The inputs $I_{\text{LV}}(\beta, m, n)$ and $I_{\text{LB}}(\beta, m, n)$ are mapped into fuzzy sets using a Gaussian fuzzifier [27] as

$\zeta_{\text{LV,LB}}(u, v) = e^{-\left((u - I_{\text{LV}}(\beta, m, n))/\varsigma_1\right)^2} \times e^{-\left((v - I_{\text{LB}}(\beta, m, n))/\varsigma_2\right)^2}, \quad (16)$

where $\varsigma_1$ and $\varsigma_2$ are noise suppression parameters. The inputs are then processed by the fuzzy inference engine using predefined IF-THEN rules [26, 27] as follows.

  • Ru(1): IF $I_{\text{LV}}(\beta, m, n)$ is high and $I_{\text{LB}}(\beta, m, n)$ is high, THEN $I_{\text{WT}}(\beta, m, n)$ is high.

  • Ru(2): IF $I_{\text{LV}}(\beta, m, n)$ is low and $I_{\text{LB}}(\beta, m, n)$ is high, THEN $I_{\text{WT}}(\beta, m, n)$ is medium.

  • Ru(3): IF $I_{\text{LV}}(\beta, m, n)$ is high and $I_{\text{LB}}(\beta, m, n)$ is low, THEN $I_{\text{WT}}(\beta, m, n)$ is medium.

  • Ru(4): IF $I_{\text{LV}}(\beta, m, n)$ is low and $I_{\text{LB}}(\beta, m, n)$ is low, THEN $I_{\text{WT}}(\beta, m, n)$ is low.

The output MFs for high (having mean $\bar{y}^{(1)}$ and variance $\sigma_y^{(1)}$), medium (having mean $\bar{y}^{(2)}$ and variance $\sigma_y^{(2)}$), and low (having mean $\bar{y}^{(3)}$ and variance $\sigma_y^{(3)}$) are defined as

$\zeta_{W,1}(y) = e^{-\left((y - \bar{y}^{(1)})/\sigma_y^{(1)}\right)^2}, \quad \zeta_{W,2}(y) = e^{-\left((y - \bar{y}^{(2)})/\sigma_y^{(2)}\right)^2}, \quad \zeta_{W,3}(y) = e^{-\left((y - \bar{y}^{(3)})/\sigma_y^{(3)}\right)^2}. \quad (17)$

The output of the fuzzy inference engine is

$\zeta_{W}(y) = \max_{\{c,d,e\}} \left[ \sup_{\{u,v\}} \zeta_{\text{LV,LB}}(u, v)\, \zeta_{\text{LV},c}(u)\, \zeta_{\text{LB},d}(v)\, \zeta_{W,e}(y) \right], \quad (18)$

where $\{c, d\} \in \{1, 2\}$ and $e \in \{1, 2, 3\}$. The weights $I_{\text{WT}}(\beta, m, n)$ are obtained by processing the fuzzy outputs using a center average defuzzifier [27].
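The rule base Ru(1)–Ru(4) with center-average defuzzification can be sketched as follows. The MF means, spreads, and output centers are illustrative placeholders (the paper does not list its parameter values here), the features are assumed normalized to [0, 1], and the sup in (18) is simplified by evaluating the MFs at the crisp inputs.

```python
# Fuzzy weight computation sketch: Gaussian MFs, product t-norm rules,
# center-average defuzzifier.
import numpy as np

def gauss_mf(x, mean, sigma):
    return np.exp(-(((x - mean) / sigma) ** 2))

def fuzzy_weights(lv, lb):
    """Map local-variance and local-blur maps to per-pixel weights in [0, 1]."""
    lv_high, lv_low = gauss_mf(lv, 1.0, 0.5), gauss_mf(lv, 0.0, 0.5)
    lb_high, lb_low = gauss_mf(lb, 1.0, 0.5), gauss_mf(lb, 0.0, 0.5)
    r1 = lv_high * lb_high                     # Ru(1): weight high
    r2 = lv_low * lb_high                      # Ru(2): weight medium
    r3 = lv_high * lb_low                      # Ru(3): weight medium
    r4 = lv_low * lb_low                       # Ru(4): weight low
    y_high, y_med, y_low = 0.9, 0.5, 0.1       # output MF centers (assumed)
    num = r1 * y_high + (r2 + r3) * y_med + r4 * y_low
    return num / (r1 + r2 + r3 + r4 + 1e-12)   # I_WT
```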

The $I_{F,L}(\beta)$ image is obtained as the weighted sum of $I_{\text{PET},L}$ and $I_{\text{MRI},L}$:

$I_{F,L}(\beta, m, n) = I_{\text{WT}}(\beta, m, n)\, I_{\text{PET},L}(\beta, m, n) + \left(1 - I_{\text{WT}}(\beta, m, n)\right) I_{\text{MRI},L}(m, n). \quad (19)$
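Putting the pieces together for one band, the weighted fusion of (19) might read as follows; the names pet_low and mri_low are assumed to hold the low frequency components of one PET band and of the MRI image.

```python
# Hypothetical per-band use of the sketches above, implementing (19).
weights = fuzzy_weights(local_variance(pet_low), local_blur(pet_low))
fused_low = weights * pet_low + (1.0 - weights) * mri_low
```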

3.2. Fusion of High Frequencies

Let $W_{\text{MRI-MRIR},k}$ represent a wavelet plane of the difference image $I_{\text{MRI}} - I_{\text{MRIR}}$. This ensures that only those high frequency components that are absent from the spatially degraded image $I_{\text{MRIR}}$ are used for image fusion. By virtue of this, the proposed scheme not only avoids redundancy of information but also yields improved fusion results compared to earlier techniques. The fused high frequency image $I_{F,H}$ is

$I_{F,H} = \sum_{k=1}^{K} W_{\text{MRI-MRIR},k}. \quad (20)$

Note that $I_{F,H}$ does not depend on the band $\beta$ because $I_{\text{MRI}}$ is a gray-scale image.
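Under the same assumptions, (20) and the final composition (11) can be sketched as below; the Gaussian blur standing in for the spatial degradation that produces $I_{\text{MRIR}}$ is an assumed choice, since the smoothing filter's parameters are not specified here.

```python
# High-frequency fusion (20): sum the wavelet planes of I_MRI - I_MRIR.
from scipy.ndimage import gaussian_filter

mri_degraded = gaussian_filter(mri, sigma=2.0)                # I_MRIR (assumed)
_, diff_planes = atrous_decompose(mri - mri_degraded, levels=3)
fused_high = sum(diff_planes)                                 # I_F,H
fused_band = fused_low + fused_high                           # equation (11)
```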

4. Results and Discussion

Simulations of the proposed and existing schemes are performed on PET and MRI images obtained from the Harvard database [28]. The brain image fusion database is classified into normal, grade II astrocytoma, and grade IV astrocytoma images. The MRI and PET images are coregistered at a spatial resolution of 256 × 256 pixels. The proposed fusion scheme is compared visually and quantitatively (using entropy [29], mutual information (MI) [29], structural similarity (SSIM) [30], the Xydeas and Petrovic metric [31], and the Piella metric [32]) with the DWT [12], GIHS [6], IAWP [23], and GFF [33] schemes.

The original MRI images belonging to the normal brain, grade II astrocytoma, and grade IV astrocytoma classes are shown in Figures 1(a)–1(c), respectively. Fluorodeoxyglucose (FDG) is a radiopharmaceutical commonly used for PET scans. The PET-FDG images of normal, grade II, and grade IV astrocytoma are shown in Figures 1(d)–1(f), respectively. It can be seen that the different imaging modalities provide complementary information for the same region.

Figure 1. Original MRI and PET images: (a)–(c) MRI; (d)–(f) PET.

Figure 2 shows the fused images (of the normal brain) obtained using the different techniques. It can be seen from Figure 2(e) that the proposed technique preserves the complementary information of both modalities, and that the fuzzy based weight assessment yields less spectral information loss than the other state-of-the-art techniques.

Figure 2. Image fusion results for normal images: (a) DWT [12]; (b) GIHS [6]; (c) GFF [33]; (d) IAWP [23]; (e) proposed technique.

Figure 3 shows the fused images (of the grade II astrocytoma class) obtained using the different techniques. From Figure 3(e), it can be observed that the proposed technique preserves the complementary information contained in both modalities, with less spectral information loss than the other state-of-the-art techniques. The improvement in the fused images is most visible in the tumorous region (bottom right corner).

Figure 3. Image fusion results for grade II astrocytoma images: (a) DWT [12]; (b) GIHS [6]; (c) GFF [33]; (d) IAWP [23]; (e) proposed technique.

Figure 4 shows the fused images (of grade IV astrocytoma) obtained using the different techniques. Improvements similar to those in Figures 2(e) and 3(e) can be observed in Figure 4(e). The proposed scheme thus provides better visual quality than the existing schemes.

Figure 4. Image fusion results for grade IV astrocytoma images: (a) DWT [12]; (b) GIHS [6]; (c) GFF [33]; (d) IAWP [23]; (e) proposed technique.

Table 1 shows the quantitative comparison of the different fusion techniques; higher metric values represent better quality. The fused images obtained using the proposed technique provide better quantitative results in terms of the entropy [29], MI [29], SSIM [30], Xydeas and Petrovic [31], and Piella [32] metrics.

Table 1.

Quantitative measures for fused PET-MRI images.

Scenario               Technique   Entropy [29]  MI [29]  SSIM [30]  Xydeas and Petrovic [31]  Piella [32]
Normal brain           DWT [12]    5.403         1.6607   0.6083     0.4944                    0.7558
                       GIHS [6]    5.381         1.7017   0.7095     0.5362                    0.8014
                       GFF [33]    5.115         1.7479   0.6803     0.4825                    0.6741
                       IAWP [23]   5.152         1.7753   0.6735     0.3233                    0.3331
                       Proposed    5.738         1.7912   0.6788     0.5746                    0.8469

Grade II astrocytoma   DWT [12]    3.4820        1.3817   0.7287     0.6495                    0.8566
                       GIHS [6]    3.4679        1.3848   0.8149     0.6227                    0.8779
                       GFF [33]    3.5558        1.3758   0.8120     0.6417                    0.8561
                       IAWP [23]   3.6351        1.3770   0.8018     0.3757                    0.5405
                       Proposed    3.5762        1.4292   0.8133     0.6674                    0.9125

Grade IV astrocytoma   DWT [12]    5.4140        1.7487   0.6775     0.5727                    0.8434
                       GIHS [6]    5.7868        1.7084   0.6207     0.5697                    0.8547
                       GFF [33]    5.6628        1.7883   0.6819     0.5112                    0.7917
                       IAWP [23]   5.6831        1.8298   0.6718     0.3584                    0.5642
                       Proposed    5.8204        1.8683   0.6739     0.5885                    0.8755

5. Conclusion

An image fusion technique for MRI and PET using local features and fuzzy logic is presented. The proposed scheme maximally combines the useful information present in the MRI and PET images using image local features and fuzzy logic. Weights are assigned to the individual pixels for fusing the low frequencies. Simulation results based on visual and quantitative analysis show that the proposed scheme produces significantly better results than state-of-the-art schemes.

6. Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

1. Bhatnagar G, Wu QMJ, Liu Z. Human visual system inspired multi-modal medical image fusion framework. Expert Systems with Applications. 2013;40(5):1708–1720.
2. Yang L, Guo BL, Ni W. Multimodality medical image fusion based on multiscale geometric analysis of contourlet transform. Neurocomputing. 2008;72(1–3):203–211.
3. Amolins K, Zhang Y, Dare P. Wavelet based image fusion techniques—an introduction, review and comparison. ISPRS Journal of Photogrammetry and Remote Sensing. 2007;62(4):249–263.
4. Yang Y, Park DS, Huang S, Rao N. Medical image fusion via an effective wavelet-based approach. EURASIP Journal on Advances in Signal Processing. 2010;2010: Article ID 579341, 13 pages.
5. Bhatnagar G, Wu QMJ. An image fusion framework based on human visual system in framelet domain. International Journal of Wavelets, Multiresolution and Information Processing. 2012;10(1): Article ID 1250002. doi: 10.1142/S0219691312500403.
6. Li T, Wang Y. Biological image fusion using a NSCT based variable-weight method. Information Fusion. 2011;12(2):85–92.
7. Shivappa ST, Rao BD, Trivedi MM. An iterative decoding algorithm for fusion of multimodal information. EURASIP Journal on Advances in Signal Processing. 2008;2008: Article ID 478396, 10 pages.
8. Yang B, Li S. Pixel-level image fusion with simultaneous orthogonal matching pursuit. Information Fusion. 2012;13(1):10–19.
9. Daneshvar S, Ghassemian H. MRI and PET image fusion by combining IHS and retina-inspired models. Information Fusion. 2010;11(2):114–123.
10. Wang Z, Ziou D, Armenakis C, Li D, Li Q. A comparative analysis of image fusion methods. IEEE Transactions on Geoscience and Remote Sensing. 2005;43(6):1391–1402.
11. Li H, Manjunath BS, Mitra SK. Multisensor image fusion using the wavelet transform. Graphical Models and Image Processing. 1995;57(3):235–245.
12. Pajares G, de la Cruz JM. A wavelet-based image fusion tutorial. Pattern Recognition. 2004;37(9):1855–1872.
13. Do MN, Vetterli M. The contourlet transform: an efficient directional multiresolution image representation. IEEE Transactions on Image Processing. 2005;14(12):2091–2106. doi: 10.1109/tip.2005.859376.
14. Li D, Chongzhao H. Fusion for CT image and MR image based on nonsubsampled transformation. Proceedings of the IEEE International Conference on Advanced Computer Control (ICACC '10); March 2010; pp. 372–374.
15. Qu X-B, Yan J-W, Xiao H-Z, Zhu Z-Q. Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain. Acta Automatica Sinica. 2008;34(12):1508–1514.
16. Candès E, Demanet L, Donoho D, Ying LX. Fast discrete curvelet transforms. Multiscale Modeling and Simulation. 2006;5(3):861–899.
17. Miao Q-G, Shi C, Xu P-F, Yang M, Shi Y-B. A novel algorithm of image fusion using shearlets. Optics Communications. 2011;284(6):1540–1547.
18. Yu NN, Qiu TS, Liu WH. Medical image fusion based on sparse representation with KSVD. Proceedings of the World Congress on Medical Physics and Biomedical Engineering; 2013; pp. 550–553.
19. Rajkumar S, Kavitha S. Redundancy discrete wavelet transform and contourlet transform for multimodality medical image fusion with quantitative analysis. Proceedings of the 3rd International Conference on Emerging Trends in Engineering and Technology (ICETET '10); November 2010; pp. 134–139.
20. Teng J, Wang S, Zhang J, Wang X. Neuro-fuzzy logic based fusion algorithm of medical images. Proceedings of the 3rd International Congress on Image and Signal Processing (CISP '10); October 2010; pp. 1552–1556.
21. Wang R, Wu Y, Ding M, Zhang X. Medical image fusion based on spiking cortical model. Medical Imaging 2013: Digital Pathology; 2013.
22. Alparone L, Wald L, Chanussot J, Thomas C, Gamba P, Bruce LM. Comparison of pansharpening algorithms: outcome of the 2006 GRS-S data-fusion contest. IEEE Transactions on Geoscience and Remote Sensing. 2007;45(10):3012–3021.
23. Kim Y, Lee C, Han D, Kim Y, Kim Y. Improved additive-wavelet image fusion. IEEE Geoscience and Remote Sensing Letters. 2011;8(2):263–267.
24. Chang D-C, Wu W-R. Image contrast enhancement based on a histogram transformation of local standard deviation. IEEE Transactions on Medical Imaging. 1998;17(4):518–531. doi: 10.1109/42.730397.
25. Gabarda S, Cristóbal G. Blind image quality assessment through anisotropy. Journal of the Optical Society of America A. 2007;24(12):B42–B51. doi: 10.1364/josaa.24.000b42.
26. Riaz MM, Ghafoor A. Fuzzy logic and singular value decomposition based through wall image enhancement. Radioengineering. 2012;22(1):580.
27. Wang LX. A Course in Fuzzy Systems and Control. New York, NY, USA: Prentice Hall; 1997.
28. Harvard Medical Atlas Database. http://www.med.harvard.edu/AANLIB/home.html.
29. Qu G, Zhang D, Yan P. Information measure for performance of image fusion. Electronics Letters. 2002;38(7):313–315.
30. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing. 2004;13(4):600–612. doi: 10.1109/tip.2003.819861.
31. Xydeas CS, Petrović V. Objective image fusion performance measure. Electronics Letters. 2000;36(4):308–309.
32. Piella G. Image fusion for enhanced visualization: a variational approach. International Journal of Computer Vision. 2009;83(1):1–11.
33. Li S, Kang X, Hu J. Image fusion with guided filtering. IEEE Transactions on Image Processing. 2013;22(7):2864–2875. doi: 10.1109/TIP.2013.2244222.
