Sensors. 2020 Apr 11;20(8):2169. doi: 10.3390/s20082169
Algorithm 1 Proposed NSST-based multi-sensor image fusion framework.
Input:
        source images A and B;
        parameters: pyramid decomposition level l, number of pyramid levels N, similarity threshold T
Output:
        the fused image F
1:  Input the two source images A and B, of arbitrary size, to the trained Siamese network.
2:  Generate a dense prediction map S, where each prediction has two dimensions.
3:  for each prediction S_i do
4:      Normalize the prediction to obtain the corresponding image-block weight with a dimension value of 1.
5:  end for
6:  for each overlapping region of two adjacent predictions S_j and S_{j+1} do
7:      Average the overlapping image-block weights to obtain their mean value.
8:      Output a weight map W of the same size as the source images.
9:  end for
10: for each source image A, B and the weight map W do
11:     Perform pyramid decomposition to obtain the contrast sub-images C_A, C_B and the Gaussian sub-image G_W, respectively.
12:     for each decomposition level l of the contrast pyramid of the source image do
13:         Calculate the energy E^l_{A,B}(x, y) of the corresponding local area.
14:         Determine the similarity of the fusion mode, M^l(x, y).
15:         Apply the similarity threshold T (T1 = 3 when 0 ≤ l < N; T2 = 0.6 when l = N) to select the coefficient-fusion strategy.
16:     end for
17: end for
18: Obtain the fused image F by the inverse pyramid transform of the fused sub-image C^l_F.
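The pipeline above can be sketched in numpy. This is a minimal illustration, not the paper's implementation: the Siamese network of steps 1–2 is replaced by a given prediction grid, the softmax normalization, 2×2 block-average pyramid, 3×3 energy window, and the single similarity threshold T = 0.6 are all assumptions filling in details the listing does not specify.

```python
import numpy as np

def patch_weight_map(pred, patch, stride, shape):
    """Steps 2-9: softmax-normalise each two-dimensional prediction and
    average the resulting weights over overlapping image blocks."""
    W = np.zeros(shape)
    cnt = np.zeros(shape)
    for i in range(pred.shape[0]):
        for j in range(pred.shape[1]):
            a, b = pred[i, j]
            w = np.exp(a) / (np.exp(a) + np.exp(b))  # two weights sum to 1
            y, x = i * stride, j * stride
            W[y:y + patch, x:x + patch] += w
            cnt[y:y + patch, x:x + patch] += 1
    return W / np.maximum(cnt, 1)                    # mean over overlaps

def upsample(img, shape):
    """Nearest-neighbour upsampling to a target shape."""
    ys = np.arange(shape[0]) * img.shape[0] // shape[0]
    xs = np.arange(shape[1]) * img.shape[1] // shape[1]
    return img[np.ix_(ys, xs)]

def gaussian_pyramid(img, levels):
    """Pyramid via 2x2 block averaging (a stand-in for Gaussian filtering)."""
    pyr = [img.astype(float)]
    for _ in range(levels):
        g = pyr[-1]
        h, w = g.shape[0] // 2 * 2, g.shape[1] // 2 * 2
        pyr.append(g[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return pyr

def contrast_pyramid(img, levels):
    """Detail lost between adjacent Gaussian levels, plus the coarsest level."""
    g = gaussian_pyramid(img, levels)
    return [g[l] - upsample(g[l + 1], g[l].shape) for l in range(levels)] + [g[-1]]

def local_sum(a, r=1):
    """Sum over a (2r+1)x(2r+1) window around each pixel (edge-padded)."""
    p = np.pad(a, r, mode='edge')
    return sum(p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
               for dy in range(2 * r + 1) for dx in range(2 * r + 1))

def fuse(A, B, W, levels=3, T=0.6, eps=1e-12):
    """Steps 10-18: per-level local energy, similarity, thresholded
    coefficient fusion, then inverse pyramid reconstruction."""
    CA, CB = contrast_pyramid(A, levels), contrast_pyramid(B, levels)
    GW = gaussian_pyramid(W, levels)
    fused = []
    for l in range(levels + 1):
        EA, EB = local_sum(CA[l] ** 2), local_sum(CB[l] ** 2)   # step 13
        M = 2 * local_sum(CA[l] * CB[l]) / (EA + EB + eps)      # step 14
        w = GW[l]
        avg = w * CA[l] + (1 - w) * CB[l]        # similar: weighted average
        pick = np.where(EA >= EB, CA[l], CB[l])  # dissimilar: max energy
        fused.append(np.where(M > T, avg, pick)) # step 15: threshold T
    F = fused[-1]
    for l in range(levels - 1, -1, -1):          # step 18: inverse transform
        F = upsample(F, fused[l].shape) + fused[l]
    return F
```

Fusing an image with itself returns that image (both fusion branches select the same coefficients), which is a quick sanity check on the reconstruction.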