Abstract
This paper presents a theoretical analysis of the effect of spatial resolution on image registration. Assuming additive Gaussian noise on the images, we estimate the mean and variance of the distribution of the sum of squared differences (SSD). Using these estimates, we evaluate a distance between the SSD distributions of aligned and non-aligned image pairs. The experimental results show that matching the resolutions of the moving and fixed images yields a better image registration result. The results agree with our theoretical analysis of SSD and suggest that the analysis may hold for mutual information as well.
Keywords: Image registration, Resolution
1. INTRODUCTION
Image registration is the process of transforming the coordinate system of a given moving image to that of a fixed image. It is a key component of medical image analysis with applications including segmentation, multi-modality fusion, longitudinal studies, population modeling, and statistical atlases.1–10 Typically, the moving and fixed images have identical digital resolution, though it is common for interpolation to be used to upsample the lower digital resolution image to the higher resolution one. Interpolation blurs the edge information; intuitively, it follows that it is more difficult to align two edges with different spatial resolutions than two edges with the same resolution. However, the effect of spatial resolution on image registration has not previously been analyzed theoretically.
There is a long history of work on multi-resolution registration schemes. The advantages of these “pyramid representations” are reduced computational cost and the ability to link global and local information.11,12 However, the effect of a spatial resolution mismatch between the two images has not been studied.
2. NEW WORK TO BE PRESENTED
This paper presents a theoretical analysis of the effect of spatial resolution on image registration. We develop quantitative guidance for preprocessing the images in order to match their resolutions, and we additionally discuss a measure of anisotropic spatial resolution based on the idea of isotropic spatial resolution.13 We also experimentally explore the effects of random noise and spatial resolution on image registration. We assume that the random noise is additive Gaussian, and hence the SSD between the two images can be considered a random variable. The separability of the SSD distributions of perfectly aligned image pairs and misaligned (in our case, translated) image pairs determines how well the images can be registered. Under the additive Gaussian noise assumption, we can estimate the mean and variance of each distribution. From there, we evaluate a distance between the SSD distributions of aligned image pairs and shifted image pairs. We also present experimental results for mutual information (MI).14–16
3. METHOD
3.1 Theoretical prediction of the effect of spatial resolution on image registration
Let x be a voxel coordinate, and let n1(x) and n2(x) be independent additive Gaussian noise with distribution N(0, σ²). Let z1(x) be the fixed image and z2(x) be the moving image, both instances of the same true high resolution (HR) image f(x) with additive noise n1(x) and n2(x), respectively; i.e., zi(x) = f(x) + ni(x), i = 1, 2. The low resolution (LR) image derived from f is f̃(x) = f(x) ∗ h(x), where h(x) is a low-pass filter, with corresponding noisy observations z̃1(x) and z̃2(x), z̃i(x) = f̃(x) + ni(x), i = 1, 2. If z2(x − v(x)) is a transformed noisy version of f, then registering z1 and z2 aims to recover v(x).
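To make the image model concrete, the following minimal numpy sketch generates the four noisy observations used in the three cases below. The function name, the choice of a Gaussian kernel for h, and all parameter values are illustrative assumptions, not taken from the original experiments.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def make_observations(f, sigma_noise=10.0, sigma_blur=1.5):
    """Noisy HR observations z_i = f + n_i and LR observations
    z~_i = f~ + n_i, with f~ = f * h for a Gaussian low-pass h."""
    f_tilde = gaussian_filter(f, sigma_blur)                 # f~ = f * h
    z1 = f + rng.normal(0.0, sigma_noise, f.shape)           # fixed HR image
    z2 = f + rng.normal(0.0, sigma_noise, f.shape)           # moving HR image
    z1_lr = f_tilde + rng.normal(0.0, sigma_noise, f.shape)  # fixed LR image
    z2_lr = f_tilde + rng.normal(0.0, sigma_noise, f.shape)  # moving LR image
    return z1, z2, z1_lr, z2_lr
```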
Case 1: Registration of two HR images. We wish to compute the mean and variance of SSD(z1(x), z2(x − v(x))) = Σ_x (z1(x) − z2(x − v(x)))². For convenience, we denote this quantity by SSD_HH(v).
Case 2: Registration of two LR images: SSD(z̃1(x), z̃2(x − v(x))), denoted SSD_LL(v).
Case 3: Registration of one HR and one LR image: SSD(z1(x), z̃2(x − v(x))), denoted SSD_HL(v).
Since n1(x) and n2(x) are independent, n1(x) − n2(x − v(x)) ~ N(0, 2σ²) at each voxel. Let N denote the number of voxels in the image domain; the mean and variance of each SSD can then be calculated from the moments of a Gaussian, and the results are listed in Table 1.
Table 1.
Mean and Variance of SSD

| | Mean | Variance |
|---|---|---|
| Case 1: SSD_HH(v) | Σ_x (f(x) − f(x − v(x)))² + 2Nσ² | 8σ² Σ_x (f(x) − f(x − v(x)))² + 8Nσ⁴ |
| Case 2: SSD_LL(v) | Σ_x (f̃(x) − f̃(x − v(x)))² + 2Nσ² | 8σ² Σ_x (f̃(x) − f̃(x − v(x)))² + 8Nσ⁴ |
| Case 3: SSD_HL(v) | Σ_x (f(x) − f̃(x − v(x)))² + 2Nσ² | 8σ² Σ_x (f(x) − f̃(x − v(x)))² + 8Nσ⁴ |
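These moments follow from the fact that, at each voxel, the image difference is Gaussian with a deterministic mean and variance 2σ². As a sanity check, the following sketch compares Monte Carlo estimates against the Case 1 formulas on a toy 1D signal with a circular shift; all signal and parameter choices are illustrative assumptions.

```python
# Monte Carlo check of the Table 1 moments for Case 1 (two HR images).
# Predicted mean: sum(df^2) + 2*N*sigma^2;
# predicted variance: 8*sigma^2*sum(df^2) + 8*N*sigma^4.
import numpy as np

rng = np.random.default_rng(1)
f = np.sin(np.linspace(0, 8 * np.pi, 512)) * 100   # toy true image f
sigma, v, N = 10.0, 2, f.size
fv = np.roll(f, v)                                 # f(x - v), circular shift
df2 = np.sum((f - fv) ** 2)

ssd = np.array([
    np.sum((f + rng.normal(0, sigma, N) - (fv + rng.normal(0, sigma, N))) ** 2)
    for _ in range(5000)
])
print("mean:", ssd.mean(), "predicted:", df2 + 2 * N * sigma**2)
print("var: ", ssd.var(),  "predicted:", 8 * sigma**2 * df2 + 8 * N * sigma**4)
```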
For a correct registration result to be obtained, we need the mean of SSD(0) to be less than the mean of SSD(v) for any v ≠ 0. How well we can distinguish the distributions of SSD(0) and SSD(v) determines the quality of the registration output. We use the sensitivity index, d′, defined as

d′ = (μ_v − μ_0) / sqrt((σ_v² + σ_0²)/2),

where μ_0, σ_0² and μ_v, σ_v² denote the mean and variance of the SSD distribution at alignment (v = 0) and at shift v, respectively.
We evaluate d′ for the three cases. After some math, we get:

d′ (HR, HR) = Σ_x Δ(x)² / sqrt(8Nσ⁴ + 4σ² Σ_x Δ(x)²),
d′ (LR, LR) = Σ_x Δ̃(x)² / sqrt(8Nσ⁴ + 4σ² Σ_x Δ̃(x)²),
d′ (HR, LR) = (Σ_x e_v(x)² − Σ_x e_0(x)²) / sqrt(8Nσ⁴ + 4σ² (Σ_x e_v(x)² + Σ_x e_0(x)²)),

where Δ(x) = f(x) − f(x − v(x)), Δ̃(x) = f̃(x) − f̃(x − v(x)), e_v(x) = f(x) − f̃(x − v(x)), and e_0(x) = f(x) − f̃(x).
We note that if d′ (HR, LR) < 0, then the expectation of SSD_HL(v) is less than that of SSD_HL(0). When this happens, any registration algorithm that uses SSD will misregister the images. The larger d′ is, the more confidence we can have in a registration result. This gives us an optimality criterion for matching the resolutions of images during registration. We compare the d′ values to understand the effect of resolution on image registration.
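Given simulated SSD samples at alignment and at a shift v, d′ can be estimated directly. The following is a minimal sketch following the definition above; the function name and the pooled-variance form are our choices.

```python
# Estimate d' from two sets of SSD samples: aligned (v = 0) vs. shifted.
import numpy as np

def d_prime(ssd_aligned, ssd_shifted):
    mu0, mu_v = np.mean(ssd_aligned), np.mean(ssd_shifted)
    s0, sv = np.var(ssd_aligned), np.var(ssd_shifted)
    return (mu_v - mu0) / np.sqrt((s0 + sv) / 2.0)
```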
Claim 1
d′ (LR, LR) < d′ (HR, HR)
Proof
Using a Taylor expansion and assuming v(x) is small, |f(x) − f(x − v(x))| ≈ |∇f(x) · v(x)|; i.e., it is governed by the gradient of the image, ∇f(x). Since a smoother image has a smaller gradient, we have

Σ_x Δ̃(x)² < Σ_x Δ(x)²,

and because d′ is an increasing function of this sum, d′ (LR, LR) < d′ (HR, HR).
If we also assume the image is wide sense stationary (WSS), with the low resolution image f̃(x) = f(x) ∗ h(x), in which h(x) is a low-pass filter, then the autocorrelation R_f̃f̃(l) = R_ff(l) ∗ h(l) ∗ h(−l). In other words, R_f̃f̃(l) is a low-pass filtered version of R_ff(l), and is therefore flatter near the origin, giving E[Δ̃(x)²] = 2(R_f̃f̃(0) − R_f̃f̃(v)) ≤ 2(R_ff(0) − R_ff(v)) = E[Δ(x)²]. This gives the same result as the Taylor expansion, that Σ_x Δ̃(x)² < Σ_x Δ(x)². ▮
Implication
Since d′ (LR, LR) < d′ (HR, HR), one has a better chance of obtaining a correct registration result with high resolution images.
Claim 2
d′ (HR, LR) < d′ (HR, HR)
Proof
From the d′ expressions above, the mismatch term Σ_x e_0(x)² enlarges the denominator of d′ (HR, LR) relative to that of d′ (HR, HR), while for small v the separation of the means, Σ_x e_v(x)² − Σ_x e_0(x)², is no larger than Σ_x Δ(x)². Therefore,

d′ (HR, LR) < d′ (HR, HR). ▮
Implication
Thus d′ (HR, LR) < d′ (HR, HR): one has less confidence registering an HR image to an LR image than registering two HR images.
Claim 3
d′ (HR, LR) < d′ (LR, LR) unless the resolutions of the two images are only slightly different.
Proof
Whether

Σ_x e_v(x)² − Σ_x e_0(x)² < Σ_x Δ̃(x)²

holds depends on the relationship between f̃(x) and f(x). ▮
Implications
If there is enough difference between f̃(x) and f(x), which implies a large enough Σ_x e_0(x)², then d′ (HR, LR) < d′ (LR, LR) < d′ (HR, HR).
If Σ_x e_0(x)² ≈ 0, then d′ (HR, LR) ≈ d′ (HR, HR) > d′ (LR, LR). However, this situation does not concern us, as it indicates that there is only a slight difference between the resolutions.
If there is a large difference between f̃(x) and f(x), which makes Σ_x e_v(x)² < Σ_x e_0(x)², then d′ (HR, LR) < 0 < d′ (LR, LR) < d′ (HR, HR), which indicates that misregistration is more likely to occur between HR and LR images.
Conclusions
We claim that d′ (LR, LR) < d′ (HR, HR), which shows that the higher the resolution of the images, the more confidence we can have in the registration results. We also claim that d′ (HR, LR) < d′ (HR, HR), and that if the resolution difference between f and f̃ is large enough, then d′ (HR, LR) < d′ (LR, LR). This last result may be counterintuitive: the HR image appears to carry more information, so one might expect two LR images to produce a worse registration result than an HR-LR pair. However, our analysis reveals the opposite: images with similar resolutions are more likely to produce a better registration result than images with different resolutions, unless the resolution difference is very small.
3.2 Measure of anisotropic resolution
In order to better appreciate the potential for good or bad registration results due to resolution differences, we need to understand the underlying physical resolutions of the images (not just their digital resolutions). Therefore, we develop a measure of anisotropic spatial resolution from edge-based sharpness metrics.17–25 We can then test experimentally whether matching the resolutions of two images (perhaps even by lowering the resolution of the high resolution image) increases our confidence in acquiring an accurate registration result.
We consider the edges and gradient profiles of two images with different resolutions (see Fig. 1 in Sun et al.13). The gradient profile across an LR edge is more spread out; therefore, we use the full width at half maximum (FWHM) of the gradient profile as a measure of resolution. Our algorithm to identify gradient profiles and their FWHMs is as follows (a simplified code sketch is given after the list):
1. Use a Canny edge detector to identify edge voxels.
2. Find the gradient direction at each edge voxel, and collect the edge voxels whose gradient directions are similar to the target direction.
3. Apply blob matching to the gradient profile to find the center and extent of each edge, and calculate the FWHM.
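The following simplified Python sketch illustrates steps 1 to 3 for edges oriented along the x direction of a 2D image. For brevity, the blob-matching step is replaced by a direct half-maximum search around each edge voxel; the direction threshold, profile half-width, and function names are illustrative assumptions rather than the exact implementation.

```python
import numpy as np
from skimage.feature import canny

def fwhm_of_profile(profile):
    """Discrete FWHM (in voxels) of a 1D gradient-magnitude profile."""
    peak = int(np.argmax(profile))
    half = profile[peak] / 2.0
    above = np.where(profile >= half)[0]
    return (above[-1] - above[0] + 1) if above.size else np.nan

def local_sharpness_x(img, half_width=6):
    """FWHM-based local sharpness for edges oriented along x."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    edges = canny(img / img.max(), sigma=1.0)   # step 1: edge voxels
    widths = []
    for r, c in zip(*np.nonzero(edges)):
        # step 2: keep edges whose gradient is close to the x direction
        if abs(gx[r, c]) < 0.9 * mag[r, c]:
            continue
        if c < half_width or c >= img.shape[1] - half_width:
            continue
        # step 3 (simplified): FWHM of the profile across the edge
        widths.append(fwhm_of_profile(mag[r, c - half_width:c + half_width + 1]))
    return np.array(widths)   # histogram these to locate the dominant peak
```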
4. EXPERIMENTS
In our experiments, we seek to verify the claims of Sec. 3.1. Specifically, we aim to verify that (a) d′ (HR, HR) > d′ (LR, LR), (b) d′ (HR, HR) > d′ (LR, HR), and (c) d′ (LR, LR) > d′ (LR, HR). To do so, we performed multiple simulations with input images containing random noise. We used a skull-stripped two-dimensional (2D) slice of a T1-weighted image of one subject from the Multi-modal Reproducibility Resource dataset.26
Using this image as the true HR image f, we simulated a noisy HR image, z1 (see Fig. 1). The second noisy HR image, z2, is a shifted version of f with different random noise. We considered four different shifts v, namely translations in the y direction by 0, 1, 2, and 3 voxels. For each of these shifts, we calculated the SSD between z1 and z2. We did this for 500 simulations of z1 and z2 and built a distribution of SSD values for each of the four shifts. We then calculated the sensitivity index d′ (HR, HR) between the SSD distributions for the different shifts; these values are recorded in the first row of the SSD portion of Table 2. Similarly, we simulated LR images by blurring f (see Fig. 1) and adding noise to calculate d′ (LR, LR), which we show in the second row of the SSD portion of Table 2. For all shifts, it is apparent that d′ (HR, HR) > d′ (LR, LR), thus verifying our first claim. Next, we chose z1 as a noisy HR image and z2 as a noisy LR image and carried out the same simulations. We then calculated d′ (HR, LR), which is shown in the last row of the SSD portion of Table 2. Comparing this row to the first and second rows, it is clear that for all shifts, d′ (HR, HR) > d′ (LR, HR) and d′ (LR, LR) > d′ (LR, HR), thus verifying our second and third claims.
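A compact sketch of this simulation loop is given below. The helper names and parameters are illustrative, the shift is applied along the y axis as in the experiment, and the data-loading line is a placeholder.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ssd_samples(fix, mov, sigma_noise, v, n_sims=500, rng=None):
    """SSD distribution over repeated noise draws, moving image shifted by v."""
    rng = rng or np.random.default_rng()
    mov_v = np.roll(mov, v, axis=0)  # translate along y by v voxels
    return np.array([
        np.sum((fix + rng.normal(0, sigma_noise, fix.shape)
                - (mov_v + rng.normal(0, sigma_noise, mov.shape))) ** 2)
        for _ in range(n_sims)
    ])

def d_prime(s0, sv):  # as in Sec. 3.1
    return (sv.mean() - s0.mean()) / np.sqrt((s0.var() + sv.var()) / 2)

def table2_row(fix, mov, sigma_noise=10.0):
    s0 = ssd_samples(fix, mov, sigma_noise, v=0)
    return [d_prime(s0, ssd_samples(fix, mov, sigma_noise, v=v)) for v in (1, 2, 3)]

# f_hr = <skull-stripped 2D T1-w slice>; f_lr = gaussian_filter(f_hr, 1.5)
# rows: table2_row(f_hr, f_hr), table2_row(f_lr, f_lr), table2_row(f_hr, f_lr)
```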
Figure 1. Spatial Resolution.

(a) The HR image is a 256 × 256 2D MR brain image, while the LR image is obtained by filtering the HR image with a Gaussian kernel with a standard deviation of 1.5. All image intensities are normalized to [0, 255]. The additive Gaussian noise has a standard deviation of 10. (b) The distributions of SSD(z1, z2) (upper row) and MI(z1, z2) (lower row). The left column shows the results for a pair of HR images, the middle column for a pair of LR images, and the right column for an HR image and an LR image. In the right column, the distributions of SSD and MI with v = 0 (blue curve) and v = 1 (orange curve) are too close to distinguish, which indicates that misregistration is more likely when v = 1.
Table 2.
Spatial Resolution: d′ for SSD and MI for image pairs shifted by v = 1, 2, and 3 voxels.

| SSD | v = 1 | v = 2 | v = 3 | MI | v = 1 | v = 2 | v = 3 |
|---|---|---|---|---|---|---|---|
| d′ (HR, HR) | 96.0 | 201.5 | 272.9 | d′ (HR, HR) | 37.5 | 65.0 | 76.1 |
| d′ (LR, LR) | 31.8 | 102.2 | 172.6 | d′ (LR, LR) | 21.0 | 50.0 | 68.2 |
| d′ (HR, LR) | 0.34 | 61.6 | 146.0 | d′ (HR, LR) | 1.4 | 20.0 | 37.4 |
If instead of SSD we calculate mutual information (MI) in our simulations, we observe that our claims still hold, as demonstrated in the MI part of Table 2. This is an empirical result that points to interesting connections between SSD and MI as similarity measures, but we do not yet have a theoretical proof of the relationships between d′ (HR, HR), d′ (LR, LR), and d′ (HR, LR) for the MI distributions.
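For completeness, the following is a minimal plug-in MI estimator of the kind commonly used in such simulations; the histogram bin count is an illustrative choice, and the MI implementation used in the cited registration literature14–16 may differ.

```python
# Plug-in MI estimate from a joint intensity histogram.
import numpy as np

def mutual_information(a, b, bins=64):
    pxy, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy /= pxy.sum()                              # joint probability
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)     # marginals
    nz = pxy > 0                                  # avoid log(0)
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))
```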
Figure 1(b) shows our fits to the SSD (first row) and MI (second row) distributions for the four shifts (0, 1, 2, 3). The columns correspond to the HR-HR, LR-LR, and HR-LR image pairs. Visually, the SSD distribution for v = 0 is well separated from the SSD distribution for v = 1 for HR-HR and LR-LR. However, for the HR-LR case, the SSD distributions for v = 0 and v = 1 overlap, indicating that a registration algorithm can obtain a lower SSD at a shift of 1 voxel, which is clearly not the correct result and is undesirable behavior.
To further verify Claim 3, we used the measure of Sec. 3.2 to obtain the local sharpness of the edges in a 3D BrainWeb image and in a blurred LR version of it. We then estimated the resolutions along x, y, and z, and finally checked whether matching the resolutions can improve registration accuracy. The HR image is a 3D BrainWeb image without noise or intensity non-uniformity, while the LR image is obtained by filtering the HR image with a Gaussian kernel with standard deviations of 1 in the x direction, 0.5 in the y direction, and 0 in the z direction. All image intensities are normalized to [0, 255]. The additive Gaussian noise has a standard deviation σ = 1.0, which makes the SNR ≈ 38. We then repeated the experiment above with a translation distance of 1 voxel in each of the three directions. The results are displayed in Figure 2 and Table 3. The histograms of local sharpness in the x, y, and z directions are shown in Figure 3.
Figure 2. Distributions of SSD.

(a) The HR image is the 3D BrainWeb image without noise or intensity non-uniformity, while the LR image is obtained by filtering the HR image with a Gaussian kernel. The left figure shows the results for a pair of HR images. (b) The middle figure displays the results for a pair of LR images. (c) The right figure shows the results for an HR image and an LR image; the distributions of SSD with v = 0 (blue curve) and v = 1 (other curves) are closer together than in (a) and (b).
Table 3.
Spatial Resolution: d′ for SSD of image pairs shifted by v = 1 voxel in the x, y, and z directions.
| v = 1 | X | Y | Z |
|---|---|---|---|
| d′ (HR, HR) | 42.1 | 36.7 | 33.9 |
| d′ (LR, LR) | 38.1 | 29.7 | 30.2 |
| d′ (HR, LR) | 29.6 | 22.6 | 22.6 |
| d′ (blurred HR, LR) | 36.7 | 28.8 | 29.3 |
Figure 3. Histograms of local sharpness.

(a) After collecting all the edges in the x direction of the HR brain image, the local sharpness values are calculated; there are two peaks in the histogram. (b) Histogram in the y direction of the HR brain image. (c) Histogram in the z direction of the HR brain image. (d) Histogram in the x direction of the LR brain image. (e) Histogram in the y direction of the LR brain image. (f) Histogram in the z direction of the LR brain image.
There are two or more peaks in the histograms. Ideally, each edge comes from a blurred unit step edge.17 However, if an edge is naturally gradual, then its local sharpness should not be used to measure the resolution. We therefore chose the center of the leftmost peak in each histogram as the measure of resolution. In most histograms of Figure 3, the leftmost peak is easy to recognize, except in Figure 3(d), which is for the LR image: there the leftmost peak (centered at about 1.9) shrinks considerably in comparison to Figure 3(a), because blurring can merge edges. The results are displayed in the first two rows of Table 4, rHR and rLR.
Table 4.
Spatial Resolution: results of the resolution measure for the HR and LR BrainWeb images.
| resolution [mm] | X | Y | Z |
|---|---|---|---|
| HR Brain rHR | 1.1 | 1.0 | 1.2 |
| LR Brain rLR | 1.9 | 1.1 | 1.2 |
| matching kernel σ | 1.55 | 0.45 | 0 |
We then applied a lowpass filter to the HR image using a Gaussian blur kernel with per-axis standard deviations chosen to match the measured resolutions (listed in the third row of Table 4); the values are consistent with σ = sqrt(rLR² − rHR²) in each direction. The d′ of SSD for the blurred HR image and the LR image, d′ (blurred HR, LR), is listed in the fourth row of Table 3. It can be seen that d′ (blurred HR, LR) > d′ (HR, LR). Therefore, matching the resolutions of the two subject images gives a better image registration result.
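A small sketch of this resolution-matching step follows; it treats the per-axis resolution measures as Gaussian widths that add in quadrature, which is consistent with the values in Table 4, and the variable names are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

r_hr = np.array([1.1, 1.0, 1.2])  # measured HR resolution (x, y, z), mm
r_lr = np.array([1.9, 1.1, 1.2])  # measured LR resolution (x, y, z), mm

# Gaussian widths add in quadrature: sigma = sqrt(r_LR^2 - r_HR^2) per axis
sigma_match = np.sqrt(np.maximum(r_lr**2 - r_hr**2, 0.0))
print(sigma_match)  # approximately [1.55, 0.46, 0.0], cf. Table 4

# hr_matched = gaussian_filter(hr_image, sigma=sigma_match)  # per-axis blur
```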
5. CONCLUSION
In this work, we analyzed the effect of resolution on image registration. Our theoretical analysis and experiments show that 1) images with the same resolution can be registered with more confidence, and 2) matching the resolutions of the two subject images gives a better image registration result.
Future work will include a theoretical analysis of other cost functions and a deeper analysis of the edge-based resolution metric.
Acknowledgments
This work was supported by NIH/NIBIB under grant R01 EB017743.
References
- 1. Rohde GK, Aldroubi A, Dawant BM. The adaptive bases algorithm for intensity-based nonrigid image registration. IEEE Trans Med Imag. 2003;22(11):1470–1479. doi: 10.1109/TMI.2003.819299.
- 2. Penney GP, Blackall JM, Hamady MS, Sabharwal T, Adam A, Hawkes DJ. Registration of freehand 3D ultrasound and magnetic resonance liver images. Medical Image Analysis. 2004;8(1):81–91. doi: 10.1016/j.media.2003.07.003.
- 3. Avants BB, Epstein CL, Grossman M, Gee JC. Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. Medical Image Analysis. 2008;12(1):26–41. doi: 10.1016/j.media.2007.06.004.
- 4. Heinrich MP, Jenkinson M, Bhushan M, Matin T, Gleeson FV, Brady M, Schnabel JA. MIND: Modality independent neighbourhood descriptor for multi-modal deformable registration. Medical Image Analysis. 2012;16(7):1423–1435. doi: 10.1016/j.media.2012.05.008.
- 5. Wachinger C, Navab N. Entropy and Laplacian images: Structural representations for multi-modal registration. Medical Image Analysis. 2012;16(1):1–17. doi: 10.1016/j.media.2011.03.001.
- 6. Crum WR, Hartkens T, Hill DLG. Non-rigid image registration: theory and practice. The British Journal of Radiology. 2004;77(S2). doi: 10.1259/bjr/25329214.
- 7. Sotiras A, Davatzikos C, Paragios N. Deformable medical image registration: A survey. IEEE Trans Med Imag. 2013;32(7):1153–1190. doi: 10.1109/TMI.2013.2265603.
- 8. Chen M, Jog A, Carass A, Prince JL. Using image synthesis for multi-channel registration of different image modalities. In: Proc. SPIE Medical Imaging. International Society for Optics and Photonics; 2015. p. 94131Q.
- 9. Chen M, Lang A, Ying HS, Calabresi PA, Prince JL, Carass A. Analysis of macular OCT images using deformable registration. Biomed Opt Express. 2014;5(7):2196–2214. doi: 10.1364/BOE.5.002196.
- 10. Bilgel M, Carass A, Resnick SM, Wong DF, Prince JL. Deformation field correction for spatial normalization of PET images. NeuroImage. 2015;119:152–163. doi: 10.1016/j.neuroimage.2015.06.063.
- 11. Mallat SG. A theory for multiresolution signal decomposition: the wavelet representation. IEEE Trans Pattern Anal Mach Intell. 1989;11(7):674–693.
- 12. Rosenfeld A. Multiresolution Image Processing and Analysis. Vol. 12. Springer Science & Business Media; 2013.
- 13. Sun J, Sun J, Xu Z, Shum H-Y. Image super-resolution using gradient profile prior. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE; 2008. pp. 1–8.
- 14. Wells WM III, Viola P, Atsumi H, Nakajima S, Kikinis R. Multi-modal volume registration by maximization of mutual information. Medical Image Analysis. 1996;1(1):35–51. doi: 10.1016/s1361-8415(01)80004-9.
- 15. Viola P, Wells WM III. Alignment by maximization of mutual information. International Journal of Computer Vision. 1997;24(2):137–154.
- 16. Studholme C, Hill DLG, Hawkes DJ. An overlap invariant entropy measure of 3D medical image alignment. Pattern Recognition. 1999;32(1):71–86.
- 17. Wu S, Lin W, Jian L, Xiong W, Chen L. An objective out-of-focus blur measurement. In: Fifth International Conference on Information, Communications and Signal Processing. IEEE; 2005. pp. 334–338.
- 18. Chung Y-C, Wang J-M, Bailey RR, Chen S-W, Chang S-L. A non-parametric blur measure based on edge analysis for image processing applications. In: IEEE Conference on Cybernetics and Intelligent Systems. Vol. 1. IEEE; 2004. pp. 356–360.
- 19. Vu CT, Phan TD, Chandler DM. A spectral and spatial measure of local perceived sharpness in natural images. IEEE Trans Image Process. 2012;21(3):934–945. doi: 10.1109/TIP.2011.2169974.
- 20. Marziliano P, Dufaux F, Winkler S, Ebrahimi T. A no-reference perceptual blur metric. In: International Conference on Image Processing. Vol. 3. IEEE; 2002. p. III-57.
- 21. Ong E, Lin W, Lu Z, Yang X, Yao S, Pan F, Jiang L, Moschetti F. A no-reference quality metric for measuring image blur. In: Seventh International Symposium on Signal Processing and Its Applications. Vol. 1. IEEE; 2003. pp. 469–472.
- 22. Dijk J, van Ginkel M, van Asselt RJ, van Vliet LJ, Verbeek PW. A new sharpness measure based on Gaussian lines and edges. In: Computer Analysis of Images and Patterns. Springer; 2003. pp. 149–156.
- 23. Zhong S-H, Liu Y, Liu Y, Chung F-L. A semantic no-reference image sharpness metric based on top-down and bottom-up saliency map modeling. In: 17th IEEE International Conference on Image Processing (ICIP). IEEE; 2010. pp. 1553–1556.
- 24. Narvekar ND, Karam LJ. A no-reference perceptual image sharpness metric based on a cumulative probability of blur detection. In: International Workshop on Quality of Multimedia Experience (QoMEx). IEEE; 2009. pp. 87–91.
- 25. Ferzli R, Karam LJ. A no-reference objective image sharpness metric based on the notion of just noticeable blur (JNB). IEEE Trans Image Process. 2009;18(4):717–728. doi: 10.1109/TIP.2008.2011760.
- 26. Landman BA, Huang AJ, Gifford A, Vikram DS, Lim IAL, Farrell JAD, Bogovic JA, Hua J, Chen M, Jarso S, Smith SA, Joel S, Mori S, Pekar JJ, Barker PB, Prince JL, van Zijl P. Multi-parametric neuroimaging reproducibility: A 3-T resource study. NeuroImage. 2011;54(4):2854–2866. doi: 10.1016/j.neuroimage.2010.11.047.
