Journal of Digital Imaging. 2012 May 19;26(2):361–370. doi: 10.1007/s10278-012-9483-5

Development of Automated Image Stitching System for Radiographic Images

Salbiah Samsudin, Somaya Adwan, H. Arof, N. Mokhtar, F. Ibrahim
PMCID: PMC3597956  PMID: 22610151

Abstract

Standard X-ray images using conventional screen-film technique have a limited field of view that is insufficient to show the full bone structure of large hands on a single frame. To produce images containing the whole hand structure, digitized images from the X-ray films can be assembled using image stitching. This paper presents a new medical image stitching method that utilizes minimum average correlation energy filters to identify and merge pairs of hand X-ray medical images. The effectiveness of the proposed method is demonstrated in the experiments involving two databases which contain a total of 40 pairs of overlapping and non-overlapping hand images. The experimental results are compared with that of the normalized cross-correlation (NCC) method. It is found that the proposed method outperforms the NCC method in classifying and merging the overlapping and non-overlapping medical images. The efficacy of the proposed method is further indicated by its average execution time, which is about five times shorter than that of the other method.

Keywords: Medical image stitching, Image registration, Panoramic image, MACE filter

Introduction

X-ray images are very useful in diagnosing fractured bones or joint dislocation, looking for injury or arthritis, and in guiding orthopedic surgery for joint replacement and fracture reductions.

Conventional X-ray equipment provides only a limited field of view. To obtain high-resolution, larger images with the conventional screen-film technique, special cassettes and films of limited size are used. A single X-ray image is sometimes insufficient to detect an abnormality in the human body. To produce images containing an entire body part, digitized images from the cassettes and films that each cover a portion of that part can be assembled. For example, in conventional radiography, a large image can be assembled from X-ray images taken with multiple exposures and a small spatial overlap. This technique is commonly referred to as stitching.

Image stitching is the process of combining two or more overlapping images, taken from different viewpoints or at different times, to generate a wider panoramic image. It consists of image registration and image blending. In image registration, portions of adjacent or consecutive images are compared to find the merge position and the transformation that aligns the images [1]. Once the images have been successfully matched, they are merged seamlessly to create a panorama with a wider field of view.

Image stitching plays an important role in panorama creation, super resolution image formation, medical image analysis [2, 3], and many other computer vision applications [1]. Image stitching can be classified into feature-based and direct-based registration methods. Direct-based methods are based on pixel-to-pixel matching to maximize a measure of image similarity to find a parametric transformation between the two images [1, 4, 5].

Feature-based methods first extract distinctive features, such as corners, from the two images, match these features by comparing their descriptors to establish a global correspondence, and then warp the images according to parametric transformations estimated from those correspondences [4]. Direct methods have the advantage of using all of the available image data and hence can provide very accurate registration, but being iterative, they require initialization. Feature-based methods do not require initialization, but they are time-consuming, and in many cases finding reliable features inside the component images is difficult [6]. Other methods combine the two approaches [1, 4, 5].

Direct- or pixel-based methods using the full image content are the most interesting methods in current research. Theoretically, these are the most flexible of the registration methods since, unlike all the other methods mentioned, they do not start by reducing the gray-level image to relatively sparse extracted information but use all of the available information throughout the registration process [1, 4, 5].

Kumar et al. [7] proposed a method for stitching medical images using histogram matching coupled with the sum of squared differences to overcome the drawbacks of feature-based alignment. Although their method improves the efficiency of the similarity measure and the search, its complexity still grows as the degrees of freedom of the transformation increase. Furthermore, since the sum of squared differences is not differentiable at the origin, it is not well suited to gradient descent approaches [1, 4].

Yu and Mingquan [8] adopted a grid-based registration method for medical infrared images, using the sum of squared differences metric to measure the similarity between pixels in the two images. To improve registration accuracy and reduce computational time, they divided the registration process into two steps: a rough registration, which records the best registration point position, followed by a precise registration in which, with the current best registration point as the center, the template is moved over n grids and the squared differences of the corresponding pixels in the two images are computed. The two-step procedure slightly reduces the processing time, but the method still suffers from high complexity. An alternative to taking intensity differences is to perform correlation, i.e., to maximize the product (or cross-correlation) of the two aligned images [8].

Čapek et al. [9] utilized point matching together with normalized cross-correlation (NCC) to evaluate a similarity measure for X-ray images. They claim that their method gives precise and correct results, but its processing time is long.

The NCC score is always guaranteed to be in the range [−1, 1], which makes it easier to handle in some higher-level applications (such as deciding which patches truly match). However, the NCC score is undefined if either of the two patches has zero variance, and its performance degrades in noisy, low-contrast regions.
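As a concrete illustration of these failure modes, below is a minimal NCC implementation with an explicit guard for the zero-variance case (the function name `ncc` and the guarded return value of 0.0 are our choices, not part of the paper):

```python
import numpy as np

def ncc(a, b, eps=1e-12):
    """Normalized cross-correlation of two equally sized patches.

    Returns a score in [-1, 1]; returns 0.0 when either patch has
    (near-)zero variance, where the NCC score is otherwise undefined.
    """
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a @ a) * (b @ b))
    if denom < eps:  # zero-variance patch: NCC undefined, so guard it
        return 0.0
    return float((a @ b) / denom)

patch = np.array([[1, 2], [3, 4]], dtype=float)
print(ncc(patch, patch))             # identical patches -> 1.0
print(ncc(patch, -patch))            # inverted patch -> -1.0
print(ncc(patch, np.ones((2, 2))))   # flat patch: undefined, guarded -> 0.0
```

The flat-patch case is exactly the zero-variance situation described above: without the guard, the division produces NaN.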

Correlation is a basic statistical approach to direct-based image registration. It is usually used for template matching or pattern recognition. It is a match metric, i.e., it gives a measure of the degree of similarity between an image and a template. This similarity measure method has been widely used because it can be computed using the fast Fourier transform (FFT); thus, for combining large images of the same size, it can be implemented efficiently. Furthermore, both direct correlation and correlation using FFT have costs which grow at least linearly with the image area [4].

Correlation filters are a direct-based method that has found applications in automatic target recognition [10] and biometric identification [11, 12]. The simplest form of correlation filter is the matched spatial filter (MSF) [13, 14]. It performs well at detecting a reference image corrupted by additive white noise, but it suffers from distortion variance and poor generalization and localization properties, because the MSF is built from a single training image and generates broad correlation peaks [11]. This shortcoming is addressed by the synthetic discriminant function (SDF) filter, a linear combination of MSFs. It combines a set of training images into one filter and allows users to constrain the filter output at the origin of the correlation plane [15]; these pre-specified constraints are known as “peak constraints.” SDF filters provide some degree of distortion invariance, but like MSFs, they produce large side lobes and broad correlation peaks that make localization difficult.

To reduce the large side lobes observed in SDFs and to maximize peak sharpness for better object localization and detection, minimum average correlation energy (MACE) filters were introduced. A MACE filter minimizes the average correlation energy of the correlation outputs of the training images while producing a sharp peak for the trained object patterns [16–18]. This results in correlation plane values very close to zero everywhere except at the location of the trained object.

Based on the attributes of the MACE filter, we develop a stitching method, the details of which will be presented in the following sections. The proposed system employs correlation filters to find the best-matched position for two X-ray images that will be combined to form a single image.

In this paper, a robust method is proposed for medical image stitching that uses the MACE filter and the peak-to-side lobe ratio as a similarity measure. In our experiments, it is assumed that adjacent images overlap by at least 30 % so that precise stitching can be achieved. The method proves its efficacy in matching accuracy and processing time. The method and algorithm are discussed in “Image Stitching Method”; “Experiment and Results” describes our experiments and presents the results; discussion of the results is presented in “Performance Evaluations”; and conclusions are presented in “Conclusion.”

Image Stitching Method

The functional flow of the image stitching method presented in this paper is shown in Fig. 1. It consists of seven components: (1) image preprocessing—the image repository holds the input images after the preprocessing strategy has been applied; (2) frequency domain transformation—converts the images from the spatial domain to the frequency domain; (3) image filter computing—implements the MACE filter and produces the correlation plane of the input images; (4) correlation filter module—takes the test image and the MACE filter correlation plane of the input object and finds the relation between them; (5) time domain transformation—transforms frequency domain values back to the spatial (time) domain; (6) peak-to-side lobe ratio calculation—enhances the correlation peak, measures the performance of the filter, and supports the decision on the best match between the two images; (7) image stitching module—uses the information from all the other components to create the panoramic image and output the stitched result. These modules are explained in detail in the following sections.

Fig. 1 Image stitching workflow

Image Preprocessing

Intensities in the images are highly sensitive to external factors such as illumination. These external factors affect the distribution of intensities in the histogram of the images, which in turn greatly affects the matching accuracy. To make intensities relatively insensitive to the particular contrast, brightness, etc., of the original image, we apply histogram equalization with a flat envelope to redistribute the intensities throughout the image. The equalized image is then Fourier-transformed.
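A minimal sketch of this preprocessing step, assuming 8-bit grayscale input (the function name `histogram_equalize` is ours; the paper's implementation is in MATLAB, while this sketch uses NumPy):

```python
import numpy as np

def histogram_equalize(img, levels=256):
    """Redistribute intensities by mapping each gray level through the
    image's normalized cumulative histogram (flat-envelope equalization)."""
    img = np.asarray(img)
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    return (cdf[img] * (levels - 1)).astype(np.uint8)

# A low-contrast image occupying only gray levels 100..120
rng = np.random.default_rng(0)
img = rng.integers(100, 121, size=(64, 64)).astype(np.uint8)
eq = histogram_equalize(img)
print(img.min(), img.max())  # narrow input range
print(eq.min(), eq.max())    # output stretched toward the full 0..255 range
```

After equalization, the image would be passed to the frequency domain transformation stage (e.g., `np.fft.fft2(eq)`).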

Frequency Domain Transformation

Frequency domain transformation implements the fast Fourier transform in 2D (FFT2). It calculates the FFT2 of each image in the “Image repository” and stores the result. Consider two images, f(n1, n2) and g(n1, n2), where for mathematical simplicity we assume the index ranges n1 = −M1, …, M1 (M1 > 0) and n2 = −M2, …, M2 (M2 > 0); hence, N1 = 2M1 + 1 and N2 = 2M2 + 1. Let F(k1, k2) and G(k1, k2) denote the 2D fast Fourier transforms (FFT2) of f and g, given by Eqs. 1 and 2, respectively.

$$F(k_1,k_2)=\sum_{n_1=-M_1}^{M_1}\sum_{n_2=-M_2}^{M_2} f(n_1,n_2)\,W_{N_1}^{k_1 n_1}\,W_{N_2}^{k_2 n_2} \qquad (1)$$

$$G(k_1,k_2)=\sum_{n_1=-M_1}^{M_1}\sum_{n_2=-M_2}^{M_2} g(n_1,n_2)\,W_{N_1}^{k_1 n_1}\,W_{N_2}^{k_2 n_2} \qquad (2)$$

where k1 = −M1, …, M1; k2 = −M2, …, M2; and $W_{N_1} = e^{-j2\pi/N_1}$, $W_{N_2} = e^{-j2\pi/N_2}$. These results are sent to “Image filter computing.”
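The double sums in Eqs. 1 and 2 are the standard 2-D DFT, which is exactly what `np.fft.fft2` computes (over the equivalent index range 0, …, N−1). A quick sanity check of the direct evaluation against the FFT (the function name `dft2_direct` is ours):

```python
import numpy as np

def dft2_direct(f):
    """Direct evaluation of the 2-D DFT in Eq. 1, via the DFT matrices
    W1[k1, n1] = W_N1^{k1 n1} and W2[n2, k2] = W_N2^{n2 k2}."""
    n1, n2 = f.shape
    i1 = np.arange(n1).reshape(-1, 1)
    i2 = np.arange(n2).reshape(-1, 1)
    w1 = np.exp(-2j * np.pi * i1 * i1.T / n1)  # N1 x N1 DFT matrix
    w2 = np.exp(-2j * np.pi * i2 * i2.T / n2)  # N2 x N2 DFT matrix
    return w1 @ f @ w2

rng = np.random.default_rng(1)
f = rng.random((8, 8))
print(np.allclose(dft2_direct(f), np.fft.fft2(f)))  # True
```

The FFT evaluates the same sums in O(N log N) time per dimension instead of O(N²), which is why the system performs all correlation in the frequency domain.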

Image Filter Computing

Image filter computing implements the MACE filter. It takes all the input images and minimizes the average correlation energy to increase peak sharpness, suppressing the side lobes of the correlation plane while emphasizing the peak correlation of the pattern.

The logic of the MACE filter is defined by the following equation:

$$\mathrm{MACE}(k_1,k_2)=\sum_{i=1}^{N} F_i(k_1,k_2)\,F_i^{*}(k_1,k_2) \qquad (3)$$

where MACE represents the MACE filter, Fi(k1, k2) is the FFT2-transformed ith input image, and Fi*(k1, k2) represents its complex conjugate.

Image filter computing sums over all the input images in the frequency domain, taking the complex conjugate of each, and uses Eq. 3 to calculate the MACE filter correlation plane of the input images.
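The paper does not spell out its exact MACE formulation. Classically, the MACE filter minimizes the average correlation energy spectrum D, which is built from each training image's power spectrum F·F* as above, subject to unit peak constraints at the correlation origin; for a diagonal D, this gives the closed form h = D⁻¹X(X⁺D⁻¹X)⁻¹u. A NumPy sketch under that classical formulation (function names and the choice u = 1 for every training image are our assumptions):

```python
import numpy as np

def average_energy_spectrum(train_images):
    """D(k1, k2) = (1/N) * sum_i F_i(k1, k2) * conj(F_i(k1, k2)):
    the average correlation-energy spectrum the MACE filter minimizes."""
    D = np.zeros(train_images[0].shape)
    for img in train_images:
        F = np.fft.fft2(img)
        D += (F * np.conj(F)).real  # |F|^2, real by construction
    return D / len(train_images)

def mace_filter(train_images, u=None):
    """Classical MACE filter h = D^{-1} X (X^+ D^{-1} X)^{-1} u, written
    element-wise for the diagonal matrix D (hypothetical sketch)."""
    N = len(train_images)
    X = np.stack([np.fft.fft2(im).ravel() for im in train_images], axis=1)
    if u is None:
        u = np.ones(N)  # unit correlation peak for every training image
    d = average_energy_spectrum(train_images).ravel()
    d = np.maximum(d, 1e-12)        # avoid division by zero
    Dinv_X = X / d[:, None]         # D^{-1} X
    A = X.conj().T @ Dinv_X         # X^+ D^{-1} X  (N x N)
    h = Dinv_X @ np.linalg.solve(A, u)
    return h.reshape(train_images[0].shape)

rng = np.random.default_rng(0)
train = [rng.random((16, 16)) for _ in range(3)]
h = mace_filter(train)
# Verify the peak constraints X^+ h = u are satisfied:
X = np.stack([np.fft.fft2(im).ravel() for im in train], axis=1)
print(np.allclose(X.conj().T @ h.ravel(), np.ones(3)))  # True
```

The constraint check confirms that each training image's frequency-domain inner product with the filter equals the prescribed peak value, while the D⁻¹ weighting is what suppresses the average side-lobe energy.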

Correlation Filter Module

This module defines the relation between the test image and the registered image patterns, generated by the correlation filter in the form of a “correlation plane.” The test image is sent to the frequency domain transformation to be converted by FFT2, as depicted in Eq. 2. Eq. 4 is then used to compute the cross-correlation between the test image and the correlation plane of the input images so that the relation between them can be found.

$$C(k_1,k_2)=\mathrm{MACE}(k_1,k_2)\,G^{*}(k_1,k_2) \qquad (4)$$

where MACE represents the MACE filter and G*(k1, k2) represents the complex conjugate of G(k1, k2).

Time Domain Transformation

The inverse transformation back to the time (spatial) domain is given in Eq. 5.

$$c(n_1,n_2)=\frac{1}{N_1 N_2}\sum_{k_1=-M_1}^{M_1}\sum_{k_2=-M_2}^{M_2} \mathrm{MACE}(k_1,k_2)\,G^{*}(k_1,k_2)\,W_{N_1}^{-k_1 n_1}\,W_{N_2}^{-k_2 n_2} \qquad (5)$$

Then, to find the peak value, Eqs. 6 and 7 are applied, as follows:

$$\tilde{c}(n_1,n_2)=\left|c(n_1,n_2)\right| \qquad (6)$$

$$\mathrm{Peak}=\max_{n_1,n_2}\tilde{c}(n_1,n_2) \qquad (7)$$

Once the absolute value of the correlation between the input image and the test image is calculated, the location of the highest peak is used to calculate the peak-to-side lobe ratio (PSR).
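Putting Eqs. 4–7 together: multiply the filter plane by the conjugate spectrum of the test image, invert with IFFT2, take the magnitude, and locate the highest peak. A NumPy sketch in which a single reference image's spectrum stands in for the MACE plane (an illustrative simplification; the function name is ours):

```python
import numpy as np

def correlation_peak(filter_plane, test_image):
    """Eqs. 4-7: frequency-domain correlation, inverse FFT2, magnitude,
    and the location/value of the highest peak."""
    G = np.fft.fft2(test_image)
    C = filter_plane * np.conj(G)                 # Eq. 4
    c = np.abs(np.fft.ifft2(C))                   # Eqs. 5 and 6
    loc = tuple(int(v) for v in
                np.unravel_index(np.argmax(c), c.shape))  # Eq. 7
    return c, loc, c[loc]

rng = np.random.default_rng(2)
ref = rng.random((64, 64))
test = np.roll(ref, shift=(5, 9), axis=(0, 1))  # circularly shifted copy
c, loc, peak = correlation_peak(np.fft.fft2(ref), test)
print(loc)  # (59, 55): the (5, 9) shift recovered modulo the 64-pixel size
```

The peak location encodes the translation between the two inputs, which is the information the stitching module later uses to place the images.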

Peak-to-Side Lobe Ratio Calculation

PSR is used as a performance metric for the correlation filter [19]. In some cases, the correlation plane of a pair of non-overlapping images contains many non-dominant local peaks, which can cause the pair to be wrongly classified as overlapping. This problem can be corrected by examining the PSR: a decision on whether a pair of images is overlapping is made by comparing their PSR against a threshold. A pair of test images is declared overlapping if its PSR is higher than the threshold; otherwise, it is deemed non-overlapping. The PSR value is calculated using Eq. 8, as follows:

$$\mathrm{PSR}=\frac{\mathrm{Peak}-\mathrm{Mean}}{\sigma} \qquad (8)$$

where Peak is the largest value in the correlation output, and Mean and σ are the average and the standard deviation of the correlation outputs in a rectangular region (of size 21 × 21) centered on the peak, excluding the central peak region (a 5 × 5 area), as indicated in Fig. 2.

Fig. 2 Peak-to-side lobe ratio

In our system, the PSR threshold is set at 15, a value chosen by trial and error as it gave the highest accuracy on the databases used. When the threshold was set below 15, more non-overlapping images were wrongly classified as overlapping, whereas when it was set above 15, some overlapping images were mistakenly identified as non-overlapping.
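The PSR computation of Eq. 8, with the 21 × 21 sidelobe window excluding the central 5 × 5 peak region and the threshold of 15, can be sketched as follows (function names are ours):

```python
import numpy as np

def psr(corr, win=21, exclude=5):
    """Eq. 8: PSR = (Peak - Mean) / sigma, where Mean and sigma are taken
    over a win x win window centred on the peak, excluding the central
    exclude x exclude peak region."""
    corr = np.abs(corr)
    p1, p2 = np.unravel_index(np.argmax(corr), corr.shape)
    h = win // 2
    r1, r2 = max(p1 - h, 0), max(p2 - h, 0)
    region = corr[r1:p1 + h + 1, r2:p2 + h + 1]
    mask = np.ones(region.shape, dtype=bool)
    c1, c2 = p1 - r1, p2 - r2          # peak position inside the window
    e = exclude // 2
    mask[max(c1 - e, 0):c1 + e + 1, max(c2 - e, 0):c2 + e + 1] = False
    side = region[mask]                # sidelobe samples only
    return (corr[p1, p2] - side.mean()) / side.std()

def is_overlapping(corr, threshold=15.0):
    """Decision rule used by the system: overlapping iff PSR > threshold."""
    return psr(corr) > threshold

# A sharp synthetic peak on a noisy floor yields a very high PSR
rng = np.random.default_rng(3)
plane = rng.normal(0, 1, (128, 128))
plane[64, 64] = 200.0
print(psr(plane) > 15, is_overlapping(plane))  # True True
```

A broad, noisy correlation plane without a dominant peak would instead give a PSR near zero and be rejected as non-overlapping.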

Image Stitching Module

As mentioned previously, image stitching consists of image registration and image blending. The location of the peak in the correlation plane is used to translate the images to the right position before merging them. The intensities of pixels along the borderline between the two images are then adjusted so that the transition from one image to the next is smooth. Finally, the panoramic image is created and the stitched image is displayed.

Image Blending

Once all of the input images have been matched with respect to each other, the final panoramic image is produced. This step determines how to blend the pixels of the two images optimally to minimize visible seams and create an attractive-looking panorama. Image blending is the process of adjusting the pixel values in two registered images such that, when the images are joined, the transition from one image to the next is invisible. At the same time, the merged image should preserve the quality of the input images as much as possible [1, 20].

One of our objectives in this paper was to merge the images so that the seam between them is visually undetectable. A seam is the artificial edge produced by intensity differences between pixels immediately adjacent to where the images are joined [1, 5]. In our image stitching system, median filtering is employed to adjust the local pixel intensities around the borderline area. The matching position determined in the previous stages is used, and the two images are then merged after translation and warping [20].
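A much-simplified, translation-only sketch of this merge step (names, the horizontal-only layout, and the seam-band 3 × 3 median smoothing details are our assumptions; the paper's system also applies warping):

```python
import numpy as np

def stitch_horizontal(left, right, overlap):
    """Merge two images sharing `overlap` columns: place `right` so its
    first `overlap` columns coincide with the last `overlap` columns of
    `left`, then median-smooth a small band around the borderline."""
    h = left.shape[0]
    width = left.shape[1] + right.shape[1] - overlap
    out = np.zeros((h, width), dtype=float)
    out[:, :left.shape[1]] = left
    out[:, left.shape[1] - overlap:] = right  # right overwrites the overlap
    seam = left.shape[1] - overlap // 2       # borderline column (our choice)
    band = slice(max(seam - 2, 0), seam + 3)
    # 3x3 median filter applied only within the seam band
    padded = np.pad(out, 1, mode="edge")      # copy: reads are unaffected
    for j in range(*band.indices(width)):
        for i in range(h):
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out

left = np.tile(np.arange(8.0), (6, 1))  # 6x8 horizontal ramp
right = left + 0.5                      # slightly brighter shifted copy
pano = stitch_horizontal(left, right, overlap=3)
print(pano.shape)  # (6, 13): 8 + 8 - 3 columns
```

Outside the seam band the two source images pass through unchanged; only the few columns around the borderline are median-smoothed, mirroring the local adjustment described above.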

Experiment and Results

To conduct the experiments and demonstrate the effectiveness of our system, the proposed algorithm was implemented in MATLAB 7.4 running on a computer with a 2.0-GHz dual-core Opteron processor and 1.87 GB of RAM. Two sets of experiments were carried out using two databases containing 20 pairs of overlapping and 20 pairs of non-overlapping images, respectively. The radiographic images were obtained from the medical faculty of King Fahd University Hospital. During preprocessing, histogram equalization was applied to each image.

The system successfully detected the matched images and classified the pair as overlapping. It also shows the PSR value and the peak value of the test image. Figure 3 shows the image stitching system presented in this paper; it is clear from Fig. 3 that the PSR of this particular pair of images is greater than the threshold value of 15.

Fig. 3 Result of peak and PSR of a pair of overlapped images

Figure 4 shows the correlation graph of the X-ray image pair shown in Fig. 3. It can be observed in Fig. 4 that a sharp correlation peak results in a high PSR value. The PSR and peak values were calculated and are shown in Fig. 3. For this pair of images, PSR = 17.1952, which is greater than the threshold; the pair is therefore declared overlapping, and the best match is the position that gives the highest peak. The stitching result of the two images is presented in Fig. 5.

Fig. 4 Correlation graph for correctly overlapped image

Fig. 5 Panoramic image for images in Fig. 3

Another pair of overlapping test images is shown in Fig. 6; the correlation plane and the final merged image are shown in Figs. 7 and 8, respectively. In the second set of experiments, 20 pairs of non-overlapping images were presented to the system. The system ruled that all of these pairs were uncorrelated, since their PSR values were lower than the threshold. An example of a pair of non-overlapping images is shown in Fig. 9, and Fig. 10 shows their correlation plane. Many local peaks can be observed in the correlation graph of Fig. 10, but none is dominant, and the PSR threshold supports the decision. We conducted further experiments with different X-ray images, and the system produced positive results in matching accuracy.

Fig. 6 A pair of overlapping images

Fig. 7 Correlation plane for images in Fig. 6

Fig. 8 Panoramic image for images in Fig. 6

Fig. 9 Result of peak and PSR value for non-overlapped image

Fig. 10 Correlation graph for non-overlapped image

Performance Evaluations

In this paper, the MACE filter is utilized for hand X-ray image stitching. To evaluate its performance and capability in the image stitching algorithm, the PSR is calculated in each experiment and the performance is measured for each class. The MACE filter produces a significant margin that discriminates the true overlapped area between the two images. Experimental results showed that increasing the number of training images helped in choosing the discriminant PSR threshold and, as a consequence, widened the margin for the true overlapped images. It also helped detect the false class of overlapped images and thus gave better discriminating ability. The variance of the false matches was likewise observed to decrease as the number of training images increased.

In the stitching stage, the following five criteria were used to select the proper threshold: true-positive rate (TPR), semi-true positive rate (STPR), false-negative rate (FNR), false-positive rate (FPR), and true-negative rate (TNR). A true positive occurs when the two images share a common area and overlap, and the system correctly stitches them into a panoramic image. A false negative occurs when the two images overlap, but the system fails to match them and rejects creating a panoramic image. A semi-true positive occurs when the two images overlap and a panoramic image is created, but the merge is not at the exact position and a seam appears in the panoramic window. A false positive, as defined here, occurs when the two images do not match at any point—there is no overlapping area between them. Finally, a true negative, as defined here, occurs when a panoramic image is created and the two images are stitched even though neither the input image nor the test image contains an overlapped area; i.e., the system wrongly identified them as overlapping.

Next, each image in the database was verified against every other image. For a given matching PSR threshold for a class, the performance can be measured by calculating the five values for the previously mentioned criteria, as illustrated in Tables 1 and 2 below.

Table 1.

Evaluation of matching precision of medical stitching system using TPR, STPR, and FNR

| Method | Total no. of overlapped images | No. of true-positive overlapped images (TPR) | No. of semi-overlapped images (STPR) | No. of false-negative overlapped images (FNR) | TPR (%) | STPR (%) | FNR (%) | Average processing time (s) |
|---|---|---|---|---|---|---|---|---|
| Proposed | 20 | 19 | 1 | 0 | 95.0 | 5.0 | 0 | 0.3 |
| NCC | 20 | 17 | 2 | 1 | 85 | 10 | 5 | 1.3 |

Table 2.

Evaluation of matching precision of stitching system using TNR and FPR

| Method | Total no. of non-overlapped images in the database | No. of false-positive overlapped images (FPR) | No. of true-negative overlapped images (TNR) | FPR (%) | TNR (%) | Average processing time (s) |
|---|---|---|---|---|---|---|
| Proposed | 20 | 20 | 0 | 100 | 0 | 0.18 |
| NCC | 20 | 19 | 1 | 95.0 | 5.0 | 0.85 |

In our experiment, the error rate (ER) is calculated by aggregating the FNR and TNR counts over the whole database; the lower the ER, the higher the overall performance of the stitching method. As shown in Tables 1 and 2, both the FNR and TNR counts of the proposed method are 0, resulting in a 0 % ER. This result demonstrates the strong performance of our system. The STPR score serves as a further measure of performance: as shown in Table 1, one pair of images overlapped but a seam appeared in the panoramic image. This case can be considered a true match, but it needs further processing to remove the noise and overcome the cause of the semi-matching. The PSR value reflects the MACE filter’s ability to recognize and verify the similarity between two overlapped images; the true overlapped images consistently have higher PSR values than the false category.

The performance of the proposed method is compared against that of the NCC method using the same databases. The NCC method gives higher FNR and TNR counts, which result in a 5 % ER. Moreover, the execution time of the MACE correlation method is about one fifth of that of the NCC method.
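Under the paper's own definitions of FNR and TNR, the reported error rates follow directly from the counts in Tables 1 and 2 (0 misclassified pairs out of 40 for the proposed method, 1 + 1 = 2 out of 40 for NCC); the function name below is ours:

```python
# ER = (FNR count + TNR count) / total image pairs, expressed as a
# percentage, using the counts reported in Tables 1 and 2.
def error_rate(fn_count, tn_count, total=40):
    return 100.0 * (fn_count + tn_count) / total

print(error_rate(0, 0))  # proposed method -> 0.0 % ER
print(error_rate(1, 1))  # NCC method -> 5.0 % ER
```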

Conclusion

A medical image stitching framework based on the MACE filter is presented in this paper. The ability of the method to classify overlapping and non-overlapping hand images is demonstrated using two databases, and its performance is compared with that of the NCC method. The proposed method displays comparable or superior performance to the NCC method in all instances on the two databases. These results are promising and demonstrate the potential of advanced correlation filters as an attractive option for direct-based image stitching. However, the proposed method can be further improved to tackle more advanced stitching problems that may involve image warping, rotation and scale variations, slant, and tilt.

Acknowledgments

This work is supported by the Research Grant of the Ministry of Higher Education of Malaysia under Project UM C/HIR/MOHE/ENG/16.

Contributor Information

Salbiah Samsudin, Email: salbiah2000@gmail.com, Email: ctsalbiah@hotmail.com.

Somaya Adwan, Email: somi@siswa.um.edu.my.

H. Arof, Email: ahamzah@um.edu.my

N. Mokhtar, Email: norrimamokhtar@um.edu.my

F. Ibrahim, Email: Fatimah@um.edu.my

References

1. Chen C-Y: Image Stitching—Comparisons and New Techniques. CITR-TR-30, October 1998
2. Pluim JPW, Maintz JBA, Viergever MA: Mutual-information-based registration of medical images: a survey. IEEE Trans Med Imaging 22:986–1004, 2003. doi: 10.1109/TMI.2003.815867
3. Gooßen A, Pralow T, Grigat RR: Automatic stitching of digital radiographies using image interpretation. Medical Image Understanding and Analysis 2008, Proceedings of the Twelfth Annual Conference, University of Dundee, 2008, pp 204–208
4. Brown LG: A survey of image registration techniques. ACM Computing Surveys 24:325–376, 1992. doi: 10.1145/146370.146374
5. Szeliski R: Image Alignment and Stitching: A Tutorial. Handbook of Mathematical Models in Computer Vision. New York: Springer, 2005, pp 273–292
6. Li Y, Ma L: A fast and robust image stitching algorithm. Proceedings of the 6th World Congress on Intelligent Control and Automation, Dalian, China, 21–23 June 2006
7. Kumar A, SekharBandaru R, Rao BM, Kulkarni S, Ghatpande N: Automatic image alignment and stitching of medical images with seam blending. World Academy of Science, Engineering and Technology 65, 2012
8. Yu W, Mingquan W: Research on stitching technique of medical infrared images. 2010 International Conference on Computer Application and System Modeling (ICCASM 2010)
9. Čapek M, Wegenkittl R, Felkel P: A fully automatic stitching of 2D medical data sets. BIOSIGNAL 16:326–328, 2002
10. Mahalanobis A, Carlson DW, Vijaya Kumar BVK: Evaluation of MACH and DCCF correlation filters for SAR ATR using the MSTAR public database. In: Zelnio EG, Ed. Algorithms for Synthetic Aperture Radar Imagery V, Vol. 3370, Proc SPIE, 1998, pp 460–468
11. Riedel DE, Liu W, Tjahyadi R: Correlation filters for facial recognition login access control. PCM (1) 3331:385–393, 2004
12. Savvides M, et al.: Biometric technologies for human identification. SPIE Defense and Security Symposium 5404, August 2004, pp 124–135
13. Vanderlugt AB: Signal detection by complex matched spatial filtering. IEEE Trans Inf Theory IT-10:139–145, 1964
14. Vijaya Kumar BVK, Mahalanobis A, Juday R: Correlation Pattern Recognition. Cambridge: Cambridge University Press, 2005
15. Mahalanobis A, Vijaya Kumar BVK, Casasent D: Minimum average correlation energy filters. Applied Optics 26(17):3633–3640, 1987. doi: 10.1364/AO.26.003633
16. Savvides M, Vijaya Kumar BVK, Khosla P: Face verification using correlation filters. Proceedings of the Third IEEE Automatic Identification Advanced Technologies, 2002, pp 56–61
17. Vijaya Kumar BVK, Savvides M, Xie C: Correlation pattern recognition for face recognition. Proc IEEE 94:1963–1976, 2006. doi: 10.1109/JPROC.2006.884094
18. Savvides M, Venkataramani K, Vijaya Kumar BVK: Incremental updating of advanced correlation filters for biometric authentication systems. Proceedings of the IEEE International Conference on Multimedia and Expo 3, 2003, pp 229–232
19. Vijaya Kumar BVK, Savvides M, Venkataramani K, Xie C: Spatial frequency domain image processing for biometric recognition. IEEE ICIP, 2002, pp 53–56
20. Rankov V, Locke RJ, Edens RJ, Barber PR, Vojnovic B: An algorithm for image stitching and blending. Proceedings of SPIE, Vol. 5701, Three-Dimensional and Multidimensional Microscopy: Image Acquisition and Processing XII, March 2005
