Computational and Mathematical Methods in Medicine. 2019 Mar 3;2019:3503267. doi: 10.1155/2019/3503267

Medical Image Fusion Based on Fast Finite Shearlet Transform and Sparse Representation

Ling Tan, Xin Yu
PMCID: PMC6421746  PMID: 30944576

Abstract

Clinical diagnosis places high demands on the visual quality of medical images. To obtain fused medical images with rich detail features and clear edges, and to address the poor edge-detail clarity of current fusion algorithms, an image fusion algorithm, FFST-SR-PCNN, based on the fast finite shearlet transform (FFST) and sparse representation is proposed. First, the source images are decomposed into low-frequency and high-frequency coefficients by FFST. Second, the K-SVD method is used to train the low-frequency coefficients to obtain an overcomplete dictionary D, and the OMP algorithm then sparsely encodes the low-frequency coefficients to complete their fusion. Next, the high-frequency coefficients are used to excite a pulse-coupled neural network, and the fused high-frequency coefficients are selected according to the number of firings. Finally, the fused low-frequency and high-frequency coefficients are reconstructed into the fused medical image by the inverse FFST. Experimental results show that the proposed algorithm improves the edge information transfer factor QAB/F by about 35% over the comparison algorithms and achieves good results in both subjective visual effect and objective evaluation indicators.

1. Introduction

With the development of imaging devices, different sensors can acquire different information from images of the same scene [1–4]. In medicine, images of different modalities are fused so that the source images complement each other, yielding more informative images [5, 6].

In recent years, image fusion methods based on multiscale geometric analysis have been widely used in image processing because of their multiresolution characteristics [7]. The wavelet transform [8, 9] is the most typical multiscale analysis method, but it offers only three directions (horizontal, vertical, and diagonal) when decomposing an image; it therefore cannot represent well a two-dimensional image with curve singularities or a high-dimensional function with surface singularities, and it easily produces the pseudo-Gibbs phenomenon. To solve this problem, multiscale geometric analysis methods such as the contourlet transform [10] and the shearlet transform [11] were proposed successively; they have good anisotropy and directional selectivity. Among them, the nonsubsampled contourlet transform (NSCT) performs best for image fusion: it is translation invariant and suppresses the Gibbs effect that arises in earlier transforms. However, its data volume and computational complexity are large, so its real-time performance is poor. Compared with NSCT, the shearlet transform [12] fusion algorithm has a more flexible structure, higher computational efficiency, and better fusion effect. However, because it uses subsampling in the discretization process, it is not translation invariant and easily produces the pseudo-Gibbs phenomenon near singular points during image fusion. By cascading a nonsubsampled pyramid filter with a shear filter, the fast finite shearlet transform (FFST) [13] retains all the advantages of the shearlet transform, avoids subsampling, and achieves translation invariance. However, FFST has a drawback: the low-frequency coefficients produced by its decomposition are not sparse. Sparse representation (SR) can express the deeper structural characteristics of the low-frequency coefficients and approximate them well by a linear combination of a small number of dictionary atoms [14]. To extract fine contour information at image edges, highlight edge features, and obtain richer information, this paper proposes FFST-SR-PCNN, a medical image fusion algorithm based on the fast finite shearlet transform (FFST) and sparse representation (SR).

2. Medical Image Fusion Algorithm Based on FFST-SR-PCNN

FFST-SR-PCNN first decomposes the registered source images by FFST into low-frequency coefficients {C_{k0}^1, C_{k0}^2} and high-frequency coefficients {C_{k,l}^1, C_{k,l}^2} (k > 0, l > 0), where k is the decomposition scale and l is the number of decomposition directions. Then, the low-frequency coefficients are fused by the SR-based fusion rule, and the high-frequency coefficients are fused by the rule based on the simplified PCNN model. Finally, the fused low-frequency and high-frequency coefficients are reconstructed by the inverse FFST to obtain the fused image. The process of FFST-SR-PCNN is illustrated in Figure 1.

Figure 1. FFST-SR-PCNN medical image fusion algorithm process.

2.1. Shearlet Transform

Let A_a be the dilation matrix and S_s the shear matrix, defined as

$$A_a = \begin{pmatrix} a & 0 \\ 0 & \sqrt{a} \end{pmatrix}, \qquad S_s = \begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix}, \tag{1}$$

where a ∈ R+ and s ∈ R.

For ψ ∈ L²(R²), the shearlet function obtained by dilating, shearing, and translating ψ is

$$\psi_{a,s,t}(x) := a^{-3/4}\,\psi\!\left(A_a^{-1} S_s^{-1}(x - t)\right). \tag{2}$$

For f ∈ L²(R²), its continuous shearlet transform and the corresponding Parseval relation are

$$\mathcal{SH}f(a,s,t) := \langle f, \psi_{a,s,t}\rangle = \langle \hat{f}, \hat{\psi}_{a,s,t}\rangle. \tag{3}$$

In particular, define a wavelet function ψ_1 and an impulse function ψ_2 whose Fourier transforms are ψ̂_1(ω) and ψ̂_2(ω), respectively.

Set ψ̂(ω) := ψ̂_1(ω_1) ψ̂_2(ω_2/ω_1); then ψ̂(ω) satisfies the admissibility condition. Choosing different ω_1 and ω_2 partitions the frequency domain into different regions, including a horizontal cone and a vertical cone.
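To make the geometry concrete, the following minimal numpy sketch builds the dilation and shear matrices of formula (1) and evaluates the argument passed to ψ in formula (2); the values of a, s, t, and x are arbitrary illustrative choices, not values used in the paper.

```python
import numpy as np

def dilation_matrix(a: float) -> np.ndarray:
    """Anisotropic dilation A_a = diag(a, sqrt(a)) from formula (1)."""
    return np.array([[a, 0.0], [0.0, np.sqrt(a)]])

def shear_matrix(s: float) -> np.ndarray:
    """Shear matrix S_s = [[1, s], [0, 1]] from formula (1)."""
    return np.array([[1.0, s], [0.0, 1.0]])

# Argument of the mother shearlet psi in formula (2) for one (a, s, t) triple.
a, s = 0.25, 0.5                      # illustrative scale and shear
t = np.array([0.1, -0.2])             # illustrative translation
x = np.array([0.4, 0.3])              # illustrative evaluation point
arg = np.linalg.inv(dilation_matrix(a)) @ np.linalg.inv(shear_matrix(s)) @ (x - t)
print(arg)  # psi is evaluated at this point and multiplied by a**(-3/4) as in formula (2)
```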

2.2. FFST

The shearlet transform generates shearlet functions with different features by scaling, shearing, and translating basis functions. Image decomposition based on the shearlet transform includes the following steps: (1) decompose the image into low-frequency and high-frequency subbands at different scales with the Laplacian pyramid algorithm; (2) directionally subdivide the subbands at each scale with shear filters to realize multiscale, multidirectional decomposition while keeping the size of the decomposed subband images consistent with the source image [15].

To obtain a discrete shearlet transform, the scaling, shearing, and translation parameters in formula (2) are discretized:

$$a_j := 2^{-2j} = \frac{1}{4^{j}},\ \ j = 0,\dots,j_0 - 1, \qquad s_{j,k} := k\,2^{-j},\ \ -2^{j} \le k \le 2^{j}, \qquad t_m := \frac{m}{N},\ \ m \in \vartheta, \tag{4}$$

where ϑ = {(m_1, m_2) : m_i = 0,…, N − 1, i = 1, 2} and j_0 is the number of decomposition scales; thus a discrete shearlet is obtained:

$$\psi_{j,k,m}(x) := \psi\!\left(A_{a_j}^{-1} S_{s_{j,k}}^{-1}(x - t_m)\right). \tag{5}$$

Its expression in the frequency domain is

$$\hat{\psi}_{j,k,m}(\omega) = \hat{\psi}\!\left(A_{a_j}^{T} S_{s_{j,k}}^{T}\,\omega\right)\exp\!\left(-2\pi i\,\langle \omega, t_m\rangle\right) = \hat{\psi}_1\!\left(4^{-j}\omega_1\right)\,\hat{\psi}_2\!\left(2^{j}\,\frac{\omega_2}{\omega_1} + k\right)\exp\!\left(-2\pi i\,\frac{\langle \omega, m\rangle}{N}\right), \tag{6}$$

where ω lies on the discrete frequency grid

$$\Omega := \left\{(\omega_1, \omega_2) : \omega_i = -\frac{N}{2},\dots,\frac{N}{2} - 1,\ i = 1, 2\right\}. \tag{7}$$

To obtain shearlets covering the whole frequency domain, |k| = 2^j is taken at the seams where the cones intersect, and the corresponding shearlets are summed:

$$\hat{\psi}_{j,k,m}^{h\times v} := \hat{\psi}_{j,k,m}^{h} + \hat{\psi}_{j,k,m}^{v} + \hat{\psi}_{j,k,m}^{\times}. \tag{8}$$

Thus, the discrete shearlet transform can be expressed as

$$\mathcal{SH}f(j,k,m) := \begin{cases} \langle f, \phi_m \rangle, & \tau = 0,\\ \langle f, \psi_{j,k,m}^{\tau} \rangle, & \tau \in \{h, v\},\\ \langle f, \psi_{j,k,m}^{h\times v} \rangle, & \tau = \times,\ |k| = 2^{j}, \end{cases} \tag{9}$$

where j = 0,…, j_0 − 1, −2^j + 1 ≤ k ≤ 2^j − 1, and m ∈ ϑ.

The shearlet transform defined by formula (9) can be realized by the two-dimensional fast Fourier transform with high computational efficiency. Since FFST involves no subsampling, it is translation invariant; it also has excellent localization characteristics and high directional sensitivity.

2.3. Sparse Representation

The basic idea of sparse representation is to represent, or approximately represent, any signal by a linear combination of a small number of atoms from a given dictionary [16]. If a signal can be represented or approximated by a linear combination of a small number of atoms in D ∈ R^{K×N}, the mathematical model of sparse representation [14] is given by the following formula:

$$\min_{A} \|A\|_0 \quad \text{s.t.} \quad \|X - DA\|_2^2 < \varepsilon, \tag{10}$$

where the dictionary D = [d_1, d_2,…, d_N] ∈ R^{K×N} is an overcomplete set; A is the sparse representation coefficient of the signal X; ‖A‖_0 is the L_0 norm of A; and ε is the approximation error tolerance.
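As a minimal illustration of how the model (10) is solved greedily, the sketch below implements a basic orthogonal matching pursuit for a single signal x and dictionary D on toy data. The paper relies on the standard OMP algorithm [17]; this is only a simplified rendering of that idea, and all data here are synthetic.

```python
import numpy as np

def omp(D: np.ndarray, x: np.ndarray, t0: int, eps: float = 1e-10) -> np.ndarray:
    """Greedy OMP: pick at most t0 atoms of D to approximate x (minimal sketch)."""
    residual = x.copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(t0):
        # Atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit of x on the selected atoms, then update the residual.
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
        if np.linalg.norm(residual) ** 2 < eps:
            break
    coeffs[support] = sol
    return coeffs

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))                  # toy overcomplete dictionary (K=64, N=256)
D /= np.linalg.norm(D, axis=0)                      # unit-norm atoms
x = D[:, [3, 57]] @ np.array([1.5, -0.7])           # signal that is exactly 2-sparse in D
a = omp(D, x, t0=5)
print(np.nonzero(a)[0], np.linalg.norm(x - D @ a))  # recovered support and residual norm
```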

In FFST-SR-PCNN, the K-singular value decomposition (K-SVD) method is first used to train the low-frequency coefficients and obtain the overcomplete dictionary matrix D. Then, the orthogonal matching pursuit (OMP) optimization algorithm approximates the original signal through locally optimal solutions and estimates the sparse representation coefficients A [17]. Finally, the sparse coefficients are fused adaptively according to the image features.

With the overcomplete dictionary D ∈ R^{K×N}, the objective function of the K-SVD algorithm can be written as follows:

$$\min_{D,\alpha} \|X - D\alpha\|_2^2 \quad \text{s.t.} \quad \forall i,\ \|\alpha_i\|_0 \le T_0, \tag{11}$$

where T_0 is the maximum number of nonzero entries allowed in each sparse coefficient vector, i.e., the maximum sparsity.

Formula (11) is solved iteratively. First, with the dictionary D fixed, the orthogonal matching pursuit (OMP) algorithm computes the sparse coefficient matrix; then, with the coefficients fixed, the dictionary is updated column by column, i.e., one atom at a time together with the coefficients that use it.
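The column-by-column dictionary update can be sketched as follows. This is a minimal rendering of the standard K-SVD atom update [14] (a rank-1 SVD of the restricted residual), assuming X holds the training samples as columns and alpha holds the sparse codes just produced by OMP; it is an illustration, not the authors' implementation.

```python
import numpy as np

def ksvd_update_dictionary(X: np.ndarray, D: np.ndarray, alpha: np.ndarray) -> None:
    """One K-SVD sweep: update each atom d_j (and the codes that use it) in place."""
    for j in range(D.shape[1]):
        users = np.nonzero(alpha[j, :])[0]          # samples whose code uses atom j
        if users.size == 0:
            continue                                # unused atom: leave it unchanged
        # Residual of the selected samples with the contribution of atom j removed.
        E = X[:, users] - D @ alpha[:, users] + np.outer(D[:, j], alpha[j, users])
        # Best rank-1 approximation of E gives the new atom and its coefficients.
        U, S, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, j] = U[:, 0]
        alpha[j, users] = S[0] * Vt[0, :]
```

In the full algorithm, this sweep alternates with OMP sparse coding of the training matrix until a fixed number of iterations or a target reconstruction error is reached.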

The fusion process of the low-frequency coefficients based on sparse representation is illustrated in Figure 2.

Figure 2. Low-frequency coefficient fusion process based on sparse representation.

In Figure 2, L_A and L_B are the low-frequency coefficients and n × n is the size of the sliding window.

2.4. Pulse-Coupled Neural Network

A pulse-coupled neural network (PCNN) combines the input high-frequency coefficients with human visual characteristics to capture detailed information such as texture, edges, and contours [18]. The mathematical expression of the simplified model is

$$\begin{aligned} F_{ij}[n] &= I_{ij},\\ L_{ij}[n] &= e^{-\alpha_L}\, L_{ij}[n-1] + V_L \sum_{k,l} W_{ijkl}\, Y_{kl}[n-1],\\ U_{ij}[n] &= F_{ij}[n]\left(1 + \beta L_{ij}[n]\right),\\ \theta_{ij}[n] &= e^{-\alpha_\theta}\, \theta_{ij}[n-1] + V_\theta\, Y_{ij}[n-1],\\ Y_{ij}[n] &= \begin{cases} 1, & U_{ij}[n] > \theta_{ij}[n],\\ 0, & U_{ij}[n] \le \theta_{ij}[n], \end{cases} \end{aligned} \tag{12}$$

where n is the iteration index; I_ij is the external stimulus; Y_ij and U_ij are the pulse output and the internal activity, respectively; F_ij is the feedback input; L_ij is the linking input; W_ijkl is the connection weight between neurons; β, θ_ij, α_L, and α_θ are the linking strength, the dynamic threshold, and the decay time constants of the linking input and the threshold, respectively; and V_L and V_θ are the amplification coefficients of the linking input and the threshold.
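A minimal numpy/scipy sketch of iterating the simplified model (12) and accumulating per-pixel firing counts (the ignition map used later for coefficient selection) follows. The kernel W, the constants alpha_l, alpha_t, v_l, v_t, beta, and the iteration count are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_fire_counts(stimulus: np.ndarray, n_iter: int = 200, beta: float = 0.2,
                     alpha_l: float = 1.0, alpha_t: float = 0.2,
                     v_l: float = 1.0, v_t: float = 20.0) -> np.ndarray:
    """Iterate the simplified PCNN of equation (12) and return per-pixel firing counts."""
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])                 # illustrative linking kernel W_ijkl
    F = stimulus.astype(float)                      # feedback input F_ij = I_ij
    L = np.zeros_like(F)                            # linking input L_ij
    theta = np.zeros_like(F)                        # dynamic threshold theta_ij
    Y = np.zeros_like(F)                            # pulse output Y_ij
    fire_count = np.zeros_like(F)                   # accumulated ignition map O_ij
    for _ in range(n_iter):
        L = np.exp(-alpha_l) * L + v_l * convolve(Y, W, mode="constant")  # uses Y[n-1]
        theta = np.exp(-alpha_t) * theta + v_t * Y                        # uses Y[n-1]
        U = F * (1.0 + beta * L)                                          # internal activity
        Y = (U > theta).astype(float)                                     # fire when U exceeds theta
        fire_count += Y
    return fire_count
```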

In high-frequency coefficient fusion, each pixel is used as the neuronal feedback input to stimulate the simplified PCNN model; the neighborhood spatial frequency SF is

$$F_{ij} = SF_{ij} = \sqrt{RF_{ij}^2 + CF_{ij}^2}, \tag{13}$$

where the window size is 3 × 3, and the row frequency RF_ij and column frequency CF_ij are

$$RF_{ij} = \sqrt{\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=2}^{N}\left[X(i,j) - X(i,j-1)\right]^2}, \qquad CF_{ij} = \sqrt{\frac{1}{M\times N}\sum_{i=2}^{M}\sum_{j=1}^{N}\left[X(i,j) - X(i-1,j)\right]^2}. \tag{14}$$
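Equations (13) and (14) can be computed for every pixel with local mean filters, as in the following sketch; the border handling (zero padding) and the 3 × 3 window are illustrative choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def neighborhood_spatial_frequency(X: np.ndarray, win: int = 3) -> np.ndarray:
    """SF_ij = sqrt(RF_ij^2 + CF_ij^2) with local means of squared differences (eqs. (13)-(14))."""
    X = X.astype(float)
    dx2 = np.zeros_like(X)
    dy2 = np.zeros_like(X)
    dx2[:, 1:] = (X[:, 1:] - X[:, :-1]) ** 2        # [X(i, j) - X(i, j-1)]^2
    dy2[1:, :] = (X[1:, :] - X[:-1, :]) ** 2        # [X(i, j) - X(i-1, j)]^2
    rf2 = uniform_filter(dx2, size=win, mode="constant")  # local mean over the window = RF^2
    cf2 = uniform_filter(dy2, size=win, mode="constant")  # local mean over the window = CF^2
    return np.sqrt(rf2 + cf2)
```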

The PCNN produces ignition (firing) maps, and the fusion coefficients are selected according to the number of ignitions.

3. Implementation of FFST-SR-PCNN

3.1. Rules of Low-Frequency Coefficient Fusion

The process was implemented as follows:

  • Step 1. Decompose the registered source images A and B of size M × N by FFST to obtain the low-frequency coefficients and the high-frequency coefficients.

  • Step 2. Using a sliding window of size n × n with a step of one pixel, partition the low-frequency coefficients L_A and L_B into (N + n − 1) × (M + n − 1) image subblocks, and convert the subblocks into column vectors to obtain the sample training matrices V_A and V_B.

  • Step 3. Iterate on the sample matrices with K-SVD to obtain the overcomplete dictionary matrix D of the low-frequency coefficients.

  • Step 4. Estimate the sparse coefficients of V_A and V_B with the OMP algorithm to obtain the sparse coefficient matrices α_A and α_B. The ith columns of the sparse coefficient matrices are fused as follows (a sketch of this rule follows the list).
    • Case 1. If the L_1 norm of α_A^i is larger than that of α_B^i, fuse with equation (15):

$$\alpha_F^i = \begin{cases} \alpha_A^i + \dfrac{1}{2}\alpha_B^i, & \text{if } \alpha_A^i < \alpha_B^i \text{ and } \alpha_A^i \cdot \alpha_B^i < 0,\\[4pt] \alpha_A^i, & \text{otherwise.} \end{cases} \tag{15}$$

    • Case 2. If the L_1 norm of α_A^i is smaller than that of α_B^i, fuse with equation (16):

$$\alpha_F^i = \begin{cases} \alpha_B^i + \dfrac{1}{2}\alpha_A^i, & \text{if } \alpha_A^i > \alpha_B^i \text{ and } \alpha_A^i \cdot \alpha_B^i < 0,\\[4pt] \alpha_B^i, & \text{otherwise.} \end{cases} \tag{16}$$

    • Case 3. If the L_1 norm of α_A^i is equal to that of α_B^i, fuse with equation (17):

$$\alpha_F^i = \begin{cases} \alpha_A^i + \dfrac{1}{2}\alpha_B^i, & \text{if } \alpha_A^i > \alpha_B^i \text{ and } \alpha_A^i \cdot \alpha_B^i < 0,\\[4pt] \alpha_B^i + \dfrac{1}{2}\alpha_A^i, & \text{if } \alpha_A^i < \alpha_B^i \text{ and } \alpha_A^i \cdot \alpha_B^i < 0,\\[4pt] \dfrac{\alpha_A^i + \alpha_B^i}{2}, & \text{otherwise,} \end{cases} \tag{17}$$

  •   where α_A^i and α_B^i are the ith columns of α_A and α_B, respectively, and α_F^i is the ith column of the fused sparse coefficient matrix.

  • Step 5. Multiply the overcomplete dictionary matrix D by the fused sparse coefficient matrix α_F; the fused sample training matrix V_F is

$$V_F = D\,\alpha_F. \tag{18}$$
  • Step 6. Reshape the columns of V_F into data subblocks, reconstruct the subblocks, and obtain the fused low-frequency coefficients.
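A minimal numpy sketch of the column-wise fusion rule of Step 4 (equations (15)–(17)) follows. It assumes the sparse coefficient matrices α_A and α_B have already been computed by OMP, and it reads the case conditions element-wise, which is one plausible interpretation of the rule rather than a verified reproduction of the authors' code.

```python
import numpy as np

def fuse_sparse_columns(a_col: np.ndarray, b_col: np.ndarray) -> np.ndarray:
    """Fuse the i-th sparse coefficient columns of the two sources (equations (15)-(17))."""
    opposite = a_col * b_col < 0                             # entries with opposing signs
    la, lb = np.sum(np.abs(a_col)), np.sum(np.abs(b_col))    # L1 norms of the two columns
    if la > lb:                                              # Case 1, equation (15)
        fused = a_col.copy()
        mask = opposite & (a_col < b_col)
        fused[mask] = a_col[mask] + 0.5 * b_col[mask]
    elif la < lb:                                            # Case 2, equation (16)
        fused = b_col.copy()
        mask = opposite & (a_col > b_col)
        fused[mask] = b_col[mask] + 0.5 * a_col[mask]
    else:                                                    # Case 3, equation (17)
        fused = 0.5 * (a_col + b_col)
        m1 = opposite & (a_col > b_col)
        m2 = opposite & (a_col < b_col)
        fused[m1] = a_col[m1] + 0.5 * b_col[m1]
        fused[m2] = b_col[m2] + 0.5 * a_col[m2]
    return fused

def fuse_sparse_matrices(alpha_a: np.ndarray, alpha_b: np.ndarray) -> np.ndarray:
    """Apply the rule column by column, as in Step 4."""
    return np.column_stack([fuse_sparse_columns(alpha_a[:, i], alpha_b[:, i])
                            for i in range(alpha_a.shape[1])])
```

The fused matrix α_F is then multiplied by the dictionary D as in equation (18), and the columns of V_F are reshaped back into n × n blocks, which are typically averaged over overlapping positions to rebuild the fused low-frequency band (Step 6).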

3.2. Rules of High-Frequency Coefficient Fusion

The process was implemented as follows:

  • Step 1. Calculate the neighborhood spatial frequencies SF_A and SF_B of the high-frequency coefficients H_A and H_B according to equations (13) and (14) and use them as the link strength values of the neurons.

  • Step 2. Initialization: L_ij(0) = U_ij(0) = θ_ij(0) = 0. All neurons start in the off state, i.e., Y_ij(0) = 0, and the accumulated firing map is O_ij(0) = 0.

  • Step 3. Compute L_ij[n], U_ij[n], θ_ij[n], and Y_ij[n] according to equation (12).

  • Step 4. Compare the firing counts at each pixel of the firing maps O_A and O_B (a code sketch of this selection follows equation (19)); the fused high-frequency coefficient H_F(i, j) is

$$H_F(i,j) = \begin{cases} H_A(i,j), & \text{if } O_A(i,j) > O_B(i,j),\\ H_B(i,j), & \text{if } O_A(i,j) < O_B(i,j),\\ \dfrac{H_A(i,j) + H_B(i,j)}{2}, & \text{if } O_A(i,j) = O_B(i,j). \end{cases} \tag{19}$$
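Combining the pieces for one high-frequency subband, the sketch below strings together Steps 1–4 using the pcnn_fire_counts and neighborhood_spatial_frequency helpers sketched earlier; it is an illustration under those earlier assumptions, not the authors' exact configuration.

```python
import numpy as np

def fuse_high_frequency(h_a: np.ndarray, h_b: np.ndarray) -> np.ndarray:
    """Select fused high-frequency coefficients from PCNN firing maps (equation (19))."""
    # Steps 1-3: the neighborhood spatial frequency of each subband stimulates the PCNN,
    # which is iterated to produce the accumulated firing maps O_A and O_B.
    o_a = pcnn_fire_counts(neighborhood_spatial_frequency(h_a))
    o_b = pcnn_fire_counts(neighborhood_spatial_frequency(h_b))
    # Step 4: keep the coefficient whose neuron fired more often; average on ties.
    fused = np.where(o_a > o_b, h_a, h_b)
    return np.where(o_a == o_b, 0.5 * (h_a + h_b), fused)
```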

4. Experimental Results and Analysis

In order to verify the effectiveness of FFST-SR-PCNN, five representative algorithms were selected as controls for the medical image fusion experiments. Five indicators, including spatial frequency (SF), average gradient (AG), mutual information (MI), the edge information transfer factor QAB/F (a high-weight evaluation indicator) [19–22], and running time (RT), were used for objective evaluation. Comparison algorithm 1 is the PCNN-based image fusion algorithm proposed in [23]. Comparison algorithm 2 is the improved medical image fusion algorithm based on NSCT and adaptive PCNN proposed in [24]. Comparison algorithm 3 is the medical image fusion algorithm based on SR and a neural network proposed in [25]. Comparison algorithm 4 is the multimodal medical image fusion algorithm based on NSCT and Log-Gabor energy proposed in [26]. Comparison algorithm 5 is the medical image fusion algorithm based on the nonsubsampled shearlet transform and a parameter-adaptive pulse-coupled neural network proposed in [27].
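For orientation, the following sketch gives commonly used forms of two of the simpler indicators, spatial frequency (SF) and average gradient (AG); the exact formulations used in the paper follow [19–22], so these definitions are illustrative rather than the authors' evaluation code.

```python
import numpy as np

def spatial_frequency(img: np.ndarray) -> float:
    """Global SF: root-mean-square of row and column first differences."""
    img = img.astype(float)
    rf = np.sqrt(np.mean((img[:, 1:] - img[:, :-1]) ** 2))  # row frequency
    cf = np.sqrt(np.mean((img[1:, :] - img[:-1, :]) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))

def average_gradient(img: np.ndarray) -> float:
    """AG: mean magnitude of the local gradient over the image."""
    img = img.astype(float)
    gx = img[:-1, 1:] - img[:-1, :-1]                        # horizontal differences
    gy = img[1:, :-1] - img[:-1, :-1]                        # vertical differences
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
```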

4.1. Gray Image Fusion Experiment

In this experiment, six pairs of brain images in different states were selected for fusion. The first three pairs are CT/MR-T2 images and the last three pairs are MR-T1/MR-T2 images. The resulting images fused by the different algorithms are shown in Figures 3–8, and their objective quality evaluation indicators are listed in Tables 1–6.

Figure 3. CT/MR-T2 medical image fusion results. (a) CT original image. (b) MR-T2 original image. (c) Method 1. (d) Method 2. (e) Method 3. (f) Method 4. (g) Method 5. (h) FFST-SR-PCNN.

Figure 4. CT/MR-T2 medical image fusion results. (a) CT original image. (b) MR-T2 original image. (c) Method 1. (d) Method 2. (e) Method 3. (f) Method 4. (g) Method 5. (h) FFST-SR-PCNN.

Figure 5. CT/MR-T2 medical image fusion results. (a) CT original image. (b) MR-T2 original image. (c) Method 1. (d) Method 2. (e) Method 3. (f) Method 4. (g) Method 5. (h) FFST-SR-PCNN.

Figure 6. MR-T1/MR-T2 medical image fusion results. (a) MR-T1 original image. (b) MR-T2 original image. (c) Method 1. (d) Method 2. (e) Method 3. (f) Method 4. (g) Method 5. (h) FFST-SR-PCNN.

Figure 7. MR-T1/MR-T2 medical image fusion results. (a) MR-T1 original image. (b) MR-T2 original image. (c) Method 1. (d) Method 2. (e) Method 3. (f) Method 4. (g) Method 5. (h) FFST-SR-PCNN.

Figure 8. MR-T1/MR-T2 medical image fusion results. (a) MR-T1 original image. (b) MR-T2 original image. (c) Method 1. (d) Method 2. (e) Method 3. (f) Method 4. (g) Method 5. (h) FFST-SR-PCNN.

Table 1.

Quality assessment of CT/MR-T2 medical image fusion.

Index Method 1 Method 2 Method 3 Method 4 Method 5 FFST-SR-PCNN
SF 34.2861 34.3181 30.7780 30.6366 36.6748 36.9338
AG 9.6513 8.2010 10.0870 9.6272 9.2317 9.6804
MI 2.1254 2.2705 2.0769 2.2199 2.9953 2.5248
QAB/F 0.5190 0.4850 0.4939 0.5257 0.5843 0.5995
RT/s 16.2069 32.4554 30.5628 22.5256 8.7295 11.3624

Table 2.

Quality assessment of CT/MR-T2 medical image fusion.

Index Method 1 Method 2 Method 3 Method 4 Method 5 FFST-SR-PCNN
SF 27.0626 26.9760 26.3291 23.9678 28.7623 29.3344
AG 7.5929 6.7873 7.9253 7.2387 6.7479 7.2640
MI 2.1941 2.6101 2.2457 2.3168 2.9609 3.0805
QAB/F 0.4617 0.4088 0.5313 0.4733 0.5161 0.5473
RT/s 16.2266 32.3894 29.9843 22.0427 7.9451 10.1755

Table 3.

Quality assessment of CT/MR-T2 medical image fusion.

Index Method 1 Method 2 Method 3 Method 4 Method 5 FFST-SR-PCNN
SF 37.5717 41.0988 36.5295 38.8050 40.0197 41.7215
AG 9.8877 8.9200 10.0862 10.2808 10.4221 10.4347
MI 2.0886 2.3744 2.0724 2.1990 2.4176 2.4719
QAB/F 0.5559 0.5582 0.5242 0.6148 0.6300 0.6516
RT/s 16.2690 31.5377 30.1011 22.4004 8.3358 11.7004

Table 4.

Quality assessment of MR-T1/MR-T2 medical image fusion.

Index Method 1 Method 2 Method 3 Method 4 Method 5 FFST-SR-PCNN
SF 22.7160 22.6900 21.5347 22.7164 24.3787 24.4962
AG 6.4857 6.3629 6.6983 6.4737 6.5910 6.8862
MI 2.3686 2.6473 2.4908 2.4517 2.9919 2.7319
QAB/F 0.5204 0.5614 0.6105 0.5686 0.6261 0.6416
RT/s 15.3972 27.2017 29.6786 22.8392 7.8440 8.5966

Table 5.

Quality assessment of MR-T1/MR-T2 medical image fusion.

Index Method 1 Method 2 Method 3 Method 4 Method 5 FFST-SR-PCNN
SF 33.0368 35.8930 26.6605 30.3170 33.7416 34.2538
AG 12.8183 13.6667 10.2990 11.7227 12.8446 13.3015
MI 2.7059 4.0357 2.4716 2.6096 2.9629 3.2689
QAB/F 0.6086 0.6540 0.4126 0.5333 0.5285 0.6353
RT/s 16.2790 31.6667 31.1026 22.0803 8.0882 12.4248

Table 6.

Quality assessment of MR-T1/MR-T2 medical image fusion.

Index Method 1 Method 2 Method 3 Method 4 Method 5 FFST-SR-PCNN
SF 25.6557 25.1765 23.7800 23.2217 26.4393 27.8123
AG 9.8227 9.3917 8.9306 8.9716 9.5509 10.2213
MI 2.5279 3.1267 3.1352 2.5935 3.7742 3.1781
QAB/F 0.4643 0.4707 0.5005 0.4656 0.5478 0.5724
RT/s 16.2837 34.0250 31.1384 22.7084 8.2634 12.1475

According to Figures 3–8, comparison algorithm 1 presented the detailed feature information of the source images poorly and produced horizontal and vertical blocking effects (Figures 3(c), 4(c), 5(c), 6(c), 7(c), and 8(c)). Comparison algorithm 2 presented the detailed edge information of the source MR-T2 image poorly and produced blurred edge details (Figures 3(d), 4(d), 5(d), 6(d), 7(d), and 8(d)). Comparison algorithm 3 had low overall contrast and blurred edge details (Figures 3(e), 4(e), 5(e), 6(e), 7(e), and 8(e)). Comparison algorithm 4 had blurred edge details (Figures 3(f), 4(f), 5(f), 6(f), 7(f), and 8(f)). Comparison algorithm 5 had low contrast in the upper right corner (Figures 3(g), 4(g), 5(g), 6(g), 7(g), and 8(g)). FFST-SR-PCNN fully retained the feature information of the source images, without dark lines or low contrast (Figures 3(h), 4(h), 5(h), 6(h), 7(h), and 8(h)). From the evaluation indicators in Tables 1–6, FFST-SR-PCNN outperformed the other five comparison algorithms on QAB/F by an average increase of 15.5%. FFST-SR-PCNN is not always the best in every individual evaluation indicator, but it always ranks in the top three. Its computational efficiency was lower than that of comparison algorithm 5 (on average 34.8% lower) but higher than those of the other four methods (on average 34.6%, 65%, 63.7%, and 48.5% higher, respectively). This is because comparison algorithm 5 uses relatively few iterations, yet its other indicators were not as good as those of FFST-SR-PCNN. Overall, FFST-SR-PCNN had the best effect and can provide better fused medical images at a relatively low computing cost.

4.2. Color Image Fusion Experiment

In this experiment, six pairs of brain images in different states were selected for fusion. The first three pairs are MR-T2/PET images and the last three pairs are MR-T2/SPECT images. The resulting images fused by the different algorithms are shown in Figures 9–14, and their objective quality evaluation indicators are listed in Tables 7–12.

Figure 9. MR-T2/PET medical image fusion results. (a) MR-T2 original image. (b) PET original image. (c) Method 1. (d) Method 2. (e) Method 3. (f) Method 4. (g) Method 5. (h) FFST-SR-PCNN.

Figure 10. MR-T2/PET medical image fusion results. (a) MR-T2 original image. (b) PET original image. (c) Method 1. (d) Method 2. (e) Method 3. (f) Method 4. (g) Method 5. (h) FFST-SR-PCNN.

Figure 11. MR-T2/PET medical image fusion results. (a) MR-T2 original image. (b) PET original image. (c) Method 1. (d) Method 2. (e) Method 3. (f) Method 4. (g) Method 5. (h) FFST-SR-PCNN.

Figure 12. MR-T2/SPECT medical image fusion results. (a) MR-T2 original image. (b) SPECT original image. (c) Method 1. (d) Method 2. (e) Method 3. (f) Method 4. (g) Method 5. (h) FFST-SR-PCNN.

Figure 13. MR-T2/SPECT medical image fusion results. (a) MR-T2 original image. (b) SPECT original image. (c) Method 1. (d) Method 2. (e) Method 3. (f) Method 4. (g) Method 5. (h) FFST-SR-PCNN.

Figure 14. MR-T2/SPECT medical image fusion results. (a) MR-T2 original image. (b) SPECT original image. (c) Method 1. (d) Method 2. (e) Method 3. (f) Method 4. (g) Method 5. (h) FFST-SR-PCNN.

Table 7.

Quality assessment of MR-T2/PET medical image fusion.

Index Method 1 Method 2 Method 3 Method 4 Method 5 FFST-SR-PCNN
SF 27.9886 24.4599 24.5435 28.1239 28.1658 28.7088
AG 8.6039 7.9211 7.0367 8.6999 8.5409 8.9234
MI 3.1791 3.0843 3.2350 3.2391 3.3424 3.7138
QAB/F 0.5920 0.4431 0.4673 0.6186 0.5797 0.6882
RT/s 15.5014 39.0562 31.0281 26.6560 9.1450 9.3991

Table 8.

Quality assessment of MR-T2/PET medical image fusion.

Index Method 1 Method 2 Method 3 Method 4 Method 5 FFST-SR-PCNN
SF 32.6319 27.9935 28.0245 33.7764 34.2547 34.5131
AG 10.9999 9.4439 8.8861 11.5490 11.8755 11.5588
MI 3.3142 3.2200 3.2471 3.4545 3.7286 4.0487
QAB/F 0.5886 0.4417 0.4566 0.6541 0.6331 0.7145
RT/s 15.4448 39.8451 30.4667 25.3966 7.7208 9.9195

Table 9.

Quality assessment of MR-T2/PET medical image fusion.

Index Method 1 Method 2 Method 3 Method 4 Method 5 FFST-SR-PCNN
SF 32.1710 26.2209 26.6609 32.8288 33.5603 34.0538
AG 11.0293 9.3185 8.5356 11.3710 11.4366 11.6015
MI 3.3244 3.2592 3.3420 3.4270 3.6002 3.9820
QAB/F 0.5580 0.4026 0.4468 0.6060 0.5819 0.6898
RT/s 15.4377 40.0521 30.5628 26.3830 7.7023 10.0894

Table 10.

Quality assessment of MR-T2/SPECT medical image fusion.

Index Method 1 Method 2 Method 3 Method 4 Method 5 FFST-SR-PCNN
SF 22.1254 17.6301 17.4476 21.6041 22.2932 21.7161
AG 7.4238 6.2468 5.9028 7.1767 7.4044 7.0624
MI 2.7010 2.6122 2.7375 2.7932 2.9168 3.8262
QAB/F 0.6647 0.3967 0.4266 0.6481 0.6849 0.7154
RT/s 15.4280 40.6601 30.8525 26.2191 7.4911 8.9699

Table 11.

Quality assessment of MR-T2/SPECT medical image fusion.

Index Method 1 Method 2 Method 3 Method 4 Method 5 FFST-SR-PCNN
SF 19.0298 15.1207 15.2854 18.7206 19.0044 18.3672
AG 5.8012 5.1296 4.8812 5.5770 5.7915 5.3554
MI 2.5702 2.3812 2.5202 2.6742 2.7730 3.4318
QAB/F 0.6692 0.3700 0.4581 0.6589 0.6668 0.6773
RT/s 15.4179 38.9686 30.0630 26.3533 7.5471 8.3015

Table 12.

Quality assessment of MR-T2/SPECT medical image fusion.

Index Method 1 Method 2 Method 3 Method 4 Method 5 FFST-SR-PCNN
SF 22.0008 18.5584 17.6392 21.7631 22.2414 21.5873
AG 7.1240 6.3536 5.5604 6.9306 7.1145 6.9346
MI 22.4242 2.2906 2.3527 2.5019 2.6839 3.5690
QAB/F 0.6753 0.4438 0.4188 0.6873 0.6915 0.7207
RT/s 15.4405 39.9952 30.1950 25.8698 7.5521 8.6035

According to Figures 9–14, comparison algorithm 1 presented the detailed feature information of the source images poorly and produced widespread blocking effects (Figures 9(c), 10(c), 11(c), 12(c), 13(c), and 14(c)). Comparison algorithm 2 retained most feature information from the source images, but the fused images had low overall contrast (Figures 9(d), 10(d), 11(d), 12(d), 13(d), and 14(d)). Comparison algorithm 3 had blurred edge contours compared to the source images (Figures 9(e), 10(e), 11(e), 12(e), 13(e), and 14(e)). Comparison algorithm 4 retained most of the feature information from the source images, but the edge contours are blurred (Figures 9(f), 10(f), 11(f), 12(f), 13(f), and 14(f)). Comparison algorithm 5 has clearer details than the first four algorithms (methods 1 to 4), but its contrast is still somewhat low (Figures 9(g), 10(g), 11(g), 12(g), 13(g), and 14(g)). FFST-SR-PCNN fully retained the feature information from the source images, without low contrast or blocking effects (Figures 9(h), 10(h), 11(h), 12(h), 13(h), and 14(h)). From the evaluation indicators in Tables 7–12, FFST-SR-PCNN outperformed the other five comparison algorithms on QAB/F by an average increase of 31.7%. FFST-SR-PCNN is not always the best in every individual evaluation indicator, but it always ranks in the top two. Its computational efficiency was lower than that of comparison algorithm 5 (on average 17.7% lower) but higher than those of the other four methods (on average 40.35%, 76.8%, 69.8%, and 64.4% higher, respectively). This is because comparison algorithm 5 uses relatively few iterations, yet its other indicators were not as good as those of the proposed algorithm. Overall, FFST-SR-PCNN had the best effect and can provide better fused medical images at a relatively low computing cost.

Taking the above gray-image and color-image fusion results together, FFST-SR-PCNN achieves better fusion performance in edge sharpness, intensity variation, and contrast.

5. Conclusion

To improve the fusion performance for multimodal medical images, this paper proposed FFST-SR-PCNN, an algorithm based on FFST, sparse representation, and a pulse-coupled neural network. It delineates details well and efficiently extracts the feature information of images, thus enhancing the overall performance of the fusion results. The performance of FFST-SR-PCNN was evaluated in several experiments. In comparative experiments against five comparison algorithms, every single-evaluation index of our algorithm ranked in the top three; the comprehensive evaluation gave the best result, and its QAB/F was higher than that of the other five comparison algorithms. Subjectively, FFST-SR-PCNN expresses the edge information of images efficiently and makes the details of the fused image clearer, with smoother edges; thus it has better subjective visual effects.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  • 1. Goshtasby A. A., Nikolov S. Image fusion: advances in the state of the art. Information Fusion. 2007;8(2):114–118. doi: 10.1016/j.inffus.2006.04.001.
  • 2. Shabanzade F., Ghassemian H. Multimodal image fusion via sparse representation and clustering-based dictionary learning algorithm in nonsubsampled contourlet domain. Proceedings of the 2016 8th International Symposium on Telecommunications (IST); September 2016; Tehran, Iran. pp. 472–477.
  • 3. Mohammed A., Nisha K. L., Sathidevi P. S. A novel medical image fusion scheme employing sparse representation and dual PCNN in the NSCT domain. Proceedings of the 2016 IEEE Region 10 Conference (TENCON); November 2016; Singapore. pp. 2147–2151.
  • 4. Yang Y., Que Y., Huang S., Lin P. Multimodal sensor medical image fusion based on type-2 fuzzy logic in NSCT domain. IEEE Sensors Journal. 2016;16(10):3735–3745. doi: 10.1109/jsen.2016.2533864.
  • 5. James A. P., Dasarathy B. V. Medical image fusion: a survey of the state of the art. Information Fusion. 2014;19:4–19. doi: 10.1016/j.inffus.2013.12.002.
  • 6. Bhateja V., Moin A., Srivastava A., Bao L. N., Lay-Ekuakille A., Le D.-N. Multispectral medical image fusion in contourlet domain for computer based diagnosis of Alzheimer's disease. Review of Scientific Instruments. 2016;87(7):1–4. doi: 10.1063/1.4959559.
  • 7. Shingadiya R. P., Rahul J. Review on multimodality medical image fusion. International Journal of Engineering Sciences & Research Technology. 2015;4(1):628–631.
  • 8. Mehra I., Nishchal N. K. Wavelet-based image fusion for securing multiple images through asymmetric keys. Optics Communications. 2015;335(4):153–160. doi: 10.1016/j.optcom.2014.09.040.
  • 9. Francis M., Suraj A. A., Kavya T. S., Nirmal T. M. Discrete wavelet transform based image fusion and denoising in FPGA. Journal of Electrical Systems and Information Technology. 2014;1(1):72–81. doi: 10.1016/j.jesit.2014.03.006.
  • 10. Huang H., Feng X. A., Jiang J. Medical image fusion algorithm based on nonlinear approximation of contourlet transform and regional features. Journal of Electrical and Computer Engineering. 2017;2017:6807473. doi: 10.1155/2017/6807473.
  • 11. Liu X., Zhou Y., Wang J. Image fusion based on shearlet transform and regional features. AEU-International Journal of Electronics and Communications. 2014;68(6):471–477. doi: 10.1016/j.aeue.2013.12.003.
  • 12. Labate D., Lim W. Q., Kutyniok G., Weiss G. Sparse multidimensional representation using shearlets. Proceedings of Optics and Photonics 2005 (Wavelets XI, SPIE); September 2005; San Diego, CA, USA. pp. 254–262.
  • 13. Hauser S., Steidl G. Fast finite shearlet transform. 2012. http://arxiv.org/abs/1202.1773.
  • 14. Aharon M., Elad M., Bruckstein A. K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing. 2006;54(11):4311–4322. doi: 10.1109/tsp.2006.881199.
  • 15. Guo K., Labate D. Optimally sparse multidimensional representation using shearlets. SIAM Journal on Mathematical Analysis. 2007;39(1):298–318. doi: 10.1137/060649781.
  • 16. Olshausen B. A., Field D. J. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature. 1996;381(6583):607–609. doi: 10.1038/381607a0.
  • 17. Ouyang N., Zheng X.-Y., Yuan H. Multi-focus image fusion based on NSCT and sparse representation. Computer Engineering and Design. 2017;38(1):177–182.
  • 18. Sneha S., Deep G., Anand R. S., Kumar V. Nonsubsampled shearlet based CT and MR medical image fusion using biologically inspired spiking neural network. Biomedical Signal Processing and Control. 2015;18:91–101. doi: 10.1016/j.bspc.2014.11.009.
  • 19. Xydeas C. S., Petrović V. Objective image fusion performance measure. Electronics Letters. 2000;36(4):308–309. doi: 10.1049/el:20000267.
  • 20. Qu G., Zhang D., Yan P. Information measure for performance of image fusion. Electronics Letters. 2002;38(7):313–315. doi: 10.1049/el:20020212.
  • 21. Piella G., Heijmans H. A new quality metric for image fusion. Proceedings of the IEEE International Conference on Image Processing; September 2003; Barcelona, Spain. pp. 173–176.
  • 22. Liu Z., Blasch E., Xue Z., Zhao Z., Laganiere R., Wu W. Objective assessment of multiresolution image fusion algorithms for context enhancement in night vision: a comparative study. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2012;34(1):94–109. doi: 10.1109/tpami.2011.109.
  • 23. Chen H., Zhu J., Liu Y.-Y., et al. Image fusion based on pulse coupled neural network. Optics and Precision Engineering. 2010;18(4):995–1001.
  • 24. Chen J., Huang D. A medical image fusion improved algorithm based on NSCT and adaptive PCNN. Journal of Changchun University of Science and Technology (Natural Science Edition). 2015;38(3):152–159.
  • 25. Chen Y., Xia J., Chen Y., et al. Medical image fusion combining sparse representation and neural network. Journal of Henan University of Science and Technology (Natural Science). 2018;39(2):40–48.
  • 26. Yang Y., Tong S., Huang S., Lin P. Log-Gabor energy based multimodal medical image fusion in NSCT domain. Computational and Mathematical Methods in Medicine. 2014;2014:835481. doi: 10.1155/2014/835481.
  • 27. Yin M., Liu X., Liu Y., Chen X. Medical image fusion with parameter-adaptive pulse coupled neural network in nonsubsampled shearlet transform domain. IEEE Transactions on Instrumentation and Measurement. 2018;68(1):49–64. doi: 10.1109/TIM.2018.2838778.
