Abstract
This note describes a method to characterize the performance of image fusion software (Syntegra) with respect to accuracy and robustness. Computed tomography (CT), magnetic resonance imaging (MRI), and single-photon emission computed tomography (SPECT) studies were acquired from two phantoms and 10 patients. Image registration was performed independently by two pairs of observers, each composed of one radiotherapist and one physicist, by means of superposition of anatomic landmarks. Each pair performed the registration jointly and saved it; the two solutions were averaged to obtain the gold standard registration. A new set of estimators was defined to identify translation and rotation errors along the coordinate axes, independently of point position in the image field of view (FOV). The algorithms evaluated were local correlation (LC) for CT-MRI registrations and normalized mutual information (MI) for CT-MRI and CT-SPECT registrations. To evaluate accuracy, estimator values were compared to limiting values for the algorithms employed, both in phantoms and in patients. To evaluate robustness, different alignments between images taken from a sample patient were produced and registration errors determined. The LC algorithm proved accurate in CT-MRI registrations in phantoms, but exceeded limiting values in 3 of 10 patients. The MI algorithm proved accurate in CT-MRI and CT-SPECT registrations in phantoms; limiting values were exceeded in one case in CT-MRI registrations and never reached in CT-SPECT registrations. The evaluation of robustness was therefore restricted to the MI algorithm for both CT-MRI and CT-SPECT registrations. The MI algorithm proved robust: limiting values were not exceeded with translation perturbations up to 2.5 cm, rotation perturbations up to 10°, and roto-translation perturbations up to 3 cm and 5°.
Key words: Image registration, brain imaging, computed tomography, magnetic resonance imaging, single-photon emission computed tomography (SPECT)
BACKGROUND
In radiation oncology, noninvasive imaging is a central component of treatment planning. The information gained from different imaging modalities is usually complementary: combining morphologic (computed tomography [CT], magnetic resonance imaging [MRI]) and functional data (positron emission tomography [PET], single-photon emission computed tomography [SPECT]) improves the interpretation of 3-dimensional (3D) data. Several techniques have been developed for image fusion, and recent overviews can be found in the literature.1–4 Manual reslicing of data using functional-anatomic landmarks is frequently used to register reconstructed studies. Disadvantages of such methods are operator subjectivity and the time required to correct for six degrees of freedom. Techniques that match surface contours have found application in multimodality registration studies, but often require manual editing of the data to generate appropriate surfaces and have so far been largely limited to brain studies. More recently, a number of fully automated techniques based on voxel intensity have been described and validated; among them, mutual-information maximization is one of the most popular algorithms.
In the last few years, because of the increasing interest in the clinical use of image fusion, a number of commercially available software packages, often but not always integrated into radiation treatment planning systems, have appeared on the market. They usually offer a wide choice of registration methods, extending from manual techniques based on anatomical landmarks, through semiautomatic techniques based on external fiducial markers or surface matching, to fully automated techniques based on voxel intensity measures. To be clinically useful, registration algorithms should be accurate, robust (adaptable to different degrees of misregistration), and flexible (applicable to different situations). Ideally, they should also be automatic and fast. Validation of registration software is not trivial because ground truth is rarely known. Nonetheless, an effort should be made to perform software acceptance testing before introduction into clinical use.
In the present study, we evaluated the image fusion software Syntegra5 (Philips Medical Systems, Eindhoven, The Netherlands). In Syntegra, multiple 3D reconstructed studies of the same patient or phantom are used. An active floating image can be reoriented to be aligned with a stationary reference image in any direction using a rigid body transformation. The reoriented, aligned image can then be saved and the transformation matrix visualized. More than one secondary image can be selected for image fusion, but only one at a time can be viewed fused with the primary image set. When visually inspecting the registration results between two data sets, a cutout window tool is available that allows drawing a rectangle through which the secondary image set can be viewed. Syntegra allows interactive registration and registration based on fiducial points, and implements three rigid body algorithms of automatic registration: 1) cross correlation,6–8 which makes a direct comparison between image set values and is supposed to be effective for registering image sets of the same modality; 2) local correlation,6–8 which compares many areas of the image sets and works best for image sets of differing modalities in which equivalent features can easily be seen (as in the case of CT-MRI); 3) normalized mutual information,9,10 which, as it does not assume a functional relationship between the values, is effective for any multimodality image sets.
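Syntegra's internal implementation is not published in detail here, but the overlap-invariant normalized mutual information measure it names9 can be computed from a joint intensity histogram as NMI = (H(A) + H(B)) / H(A, B). The sketch below is illustrative only; all function names and test data are ours, not Syntegra's:

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """Overlap-invariant NMI (Studholme): (H(A) + H(B)) / H(A, B).

    a, b: intensity arrays of the overlapping voxels of the two
    image sets, already resampled onto a common grid."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint probability estimate
    px = pxy.sum(axis=1)               # marginal of image A
    py = pxy.sum(axis=0)               # marginal of image B

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy)

# NMI is largest when the voxel values are strongly related
# (as for well-aligned images) and near its minimum of 1 when
# the values are statistically independent (misaligned images).
rng = np.random.default_rng(0)
img = rng.random((64, 64))
aligned = normalized_mutual_information(img, img ** 2)   # monotone remap
shuffled = normalized_mutual_information(img, rng.permutation(img.ravel()))
```

An automatic registration then searches the six rigid-body parameters for the pose maximizing this measure.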
The aim of this study was to characterize the performance of Syntegra with respect to accuracy and robustness in registering CT, MRI, and 99mTc-methoxyisobutylisonitrile (MIBI) SPECT brain images. This work has two main characteristics. The first is the assessment of accuracy and robustness using both phantoms and patients, instead of only phantom data as is currently done in multimodality image registration quality assurance studies.11–13 The second is the definition of a new set of estimators that identify translation and rotation errors along the three coordinate axes, independently of point position in the image field of view (FOV).
METHODS
Phantoms
The Alderson Rando phantom is commonly used in CT for detailed mapping of dose distributions. The phantom is constructed with a natural human skeleton cast inside material that is radiologically equivalent to soft tissue. Hole grids are drilled through the phantom's soft tissue material. To better delineate the head boundary in SPECT and MRI imaging, the phantom's head was circled with a capillary tube of 3-mm inner diameter filled with a solution of 99mTc (2.2 MBq/ml) and gadolinium-diethylenetriamine penta-acetic acid (DTPA)-dimeglumine (Gd-DTPA; 0.5 mmol/l). Four capillaries of 1-mm inner diameter, filled with 99mTc (61 MBq/ml) and Gd-DTPA (0.5 mmol/l), were placed internally and used as fiducial markers.
The Hoffman 3D phantom™ (Data Spectrum Corporation, Hillsborough, NC, USA) allows anatomically accurate simulation of the radioactivity distribution for brain SPECT and brain PET studies and of the distribution of proton density and relaxation parameters for brain MRI studies. The phantom was filled with a solution of 99mTc (2.2 MBq/ml) and Gd-DTPA (0.5 mmol/l).
Patients
Ten consecutive patients affected by high-grade glioma were examined. All patients were candidates for 3D conformal radiotherapy. Positioning was secured by a customized thermoplastic mask and head rest to achieve accurate immobilization for simulation and reliable repositioning during each treatment session. CT, MRI, and SPECT studies were acquired within the same week.
Acquisitions
The CT investigation was performed on a helical CT scanner (Prospeed Plus, GE, Milwaukee, WI, USA). Axial slices of 3-mm thickness were acquired without gap from the vertex to the foramen magnum, using a 512 × 512 matrix and a pixel size of 0.49 mm.
MRI was performed on a 0.5-T system (Signa, GE, Milwaukee, WI, USA) with a standard head coil. The head was fixed using a Velcro band, as the thermoplastic mask did not fit into the head coil. A three-dimensional spoiled-GRASS fast sequence (repetition time 25 ms, minimum echo time 1 ms) was acquired after contrast material application (Gd-DTPA 0.1 mmol/kg body weight), with 3-mm slice thickness from the vertex to the foramen magnum, using a 256 × 256 matrix and a pixel size of 1.17 mm.
The SPECT scan was performed on a dual-head gamma camera (Varicam, GE, Milwaukee, WI, USA) equipped with a low-energy high-resolution collimator. Each patient was injected intravenously with 740 MBq of 99mTc-MIBI 8 min before scanning. Acquisition parameters were: 15% energy window centered on 140 keV, 120 projection angles over 360°, acquisition time of 20 s per frame, and matrix size of 128 × 128 with pixel size of 4.4 mm in the projection domain. The images were reconstructed by filtered backprojection using a Hanning filter (cutoff = 23, order 5). A uniform attenuation correction by Chang's first-order method (μ = 0.11 cm−1) was used. Images were reconstructed using a 128 × 128 matrix with 2.34-mm pixel size and 4.4-mm slice thickness. The images were transferred via network to a Sun Solaris workstation and matched using the image registration software Syntegra.
Definition of the Gold Standard
Image registration was performed by means of an interactive method based on 3D rigid body transformations. Axial, sagittal, and coronal slices were simultaneously displayed on a screen with three view ports. Oblique orientations were displayed through additional view ports. The CT slices were selected as reference images, and MRI and 99mTc-MIBI SPECT images were set as floating data sets. For each image modality, window level and width were independently adjusted and different chromatic scales were used for the overlay. Image registration was obtained by translating and rotating the floating images with respect to CT for each orientation. After every transformation of the registration procedure, each view port was immediately updated, showing the fusion between the reference and floating images. In phantoms, the superposition of the skull margins and of the internal and external capillaries was checked. A preliminary study conducted using a Jaszczak phantom showed that the registration errors of this interactive method were less than 2 mm, in agreement with other data in the literature.14,15 In the present study, the accuracy of image registration was verified in patients by direct superposition of anatomic landmarks: the basilar artery and foramens of the skull base for CT and MRI images; the pituitary gland, temporal horns of the choroid plexus, and skull surface for 99mTc-MIBI SPECT. Image registration was performed independently by two pairs of observers, each composed of one radiotherapist and one medical physicist. Each pair performed the image registration jointly and saved it. Depending on the spatial resolution of the analyzed images, the discrepancies between the two solutions ranged from 0.67 to 1.6 mm and from 0.47 to 1.67°.
The interobserver reproducibility between the two pairs of observers was evaluated by means of the intraclass correlation coefficient (ICC) using variance component estimation.16 Values above 0.70 indicate good reproducibility and little interobserver variability, whereas values below 0.50 indicate poor reproducibility or large interobserver variability. The values of ICC were 0.73 and 0.64 for the translational and rotational components of the registrations, respectively. A way of improving reproducibility is to take multiple observations and average the scores; this strategy divides the error-plus-observer variance by the number of observations. Since two independent pairs of observers were involved and their solutions averaged, the reliability of the averaged solution increased to ICC = 0.79 and 0.71 for the translational and rotational components of the registration, respectively.
Definition of Estimators
Because accuracy is defined as the closeness of agreement between the test method and the gold standard method, a way must be devised to compare these two image fusions. Among previously defined estimators, the Target Registration Error (TRE)17,18 is the most widespread: it measures the displacement between two corresponding points visible on the two data sets after applying the registering transformation, that is:
$$\mathrm{TRE} = \left\| T(p) - q \right\|$$
where p is a point on the first data set, T is the transformation applied to p, and q is the corresponding point on the second data set. This estimator does not allow accounting separately for translation and rotation errors. Moreover, TRE increases moving from the center toward the periphery of the image.
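The behavior of TRE described above can be made concrete with a short sketch (the helper name and test points are ours): a pure translation yields the same TRE everywhere, while a pure rotation yields a TRE that grows linearly with distance from the rotation axis.

```python
import numpy as np

def tre(T, p, q):
    """Target Registration Error: displacement between the mapped
    point T(p) and its true correspondence q.
    T is a 4x4 homogeneous rigid transformation, p and q are 3-vectors."""
    p_h = np.append(p, 1.0)
    return float(np.linalg.norm((T @ p_h)[:3] - q))

# A 2-mm residual translation along x leaves every point 2 mm off
# when the registering transformation is the identity:
T_identity = np.eye(4)
p = np.array([1.0, 2.0, 3.0])
q = p + np.array([0.2, 0.0, 0.0])   # cm

# A residual 1-degree rotation about z produces a TRE proportional
# to the distance from the axis: 7x larger at 7 cm than at 1 cm.
phi = np.deg2rad(1.0)
Rz = np.array([[np.cos(phi), -np.sin(phi), 0, 0],
               [np.sin(phi),  np.cos(phi), 0, 0],
               [0, 0, 1, 0],
               [0, 0, 0, 1]])
near = tre(Rz, np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
far = tre(Rz, np.array([7.0, 0.0, 0.0]), np.array([7.0, 0.0, 0.0]))
```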
We want to define estimators that separately identify errors caused by translation and rotation along the three coordinate axes, independently of point position in the image field of view.
Assuming no distortions, a certain rigid transformation (rotation and translation) of the floating image should bring it into perfect registration with the fixed image. A rigid transformation can be parameterized, eg, by the three components tx, ty, and tz of the translation vector and the three rotation angles φx, φy, and φz around the coordinate axes:
$$T(\mathbf{x}) = R_x(\varphi_x)\,R_y(\varphi_y)\,R_z(\varphi_z)\,\mathbf{x} + (t_x,\, t_y,\, t_z)^{T}$$
where:
$$R_x(\varphi_x) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\varphi_x & -\sin\varphi_x \\ 0 & \sin\varphi_x & \cos\varphi_x \end{pmatrix},\quad
R_y(\varphi_y) = \begin{pmatrix} \cos\varphi_y & 0 & \sin\varphi_y \\ 0 & 1 & 0 \\ -\sin\varphi_y & 0 & \cos\varphi_y \end{pmatrix},\quad
R_z(\varphi_z) = \begin{pmatrix} \cos\varphi_z & -\sin\varphi_z & 0 \\ \sin\varphi_z & \cos\varphi_z & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
If we define RTgs and RTt as the matrices linked to the gold standard transformation and to the transformation under consideration, we can easily define an error matrix ME as:
$$M_E = RT_t - RT_{gs} = \begin{pmatrix} \Delta t_x & \Delta t_y & \Delta t_z \\ \Delta\varphi_x & \Delta\varphi_y & \Delta\varphi_z \end{pmatrix}$$
where the first row represents translation errors along the x, y, and z axes and the second row rotation errors about the same axes. We defined the translation error estimator TE as:
$$TE = \sqrt{\Delta t_x^{2} + \Delta t_y^{2} + \Delta t_z^{2}}$$
and the rotation error estimator RE as the inner product of the two normalized vectors identifying corresponding points in their coordinate systems:
$$RE = \mathbf{n} \cdot \mathbf{n}' = \cos\varphi$$
If the transformation under evaluation closely matches the gold standard transformation then:
$$TE \to 0, \qquad RE \to 1$$
A third estimator A was defined as the area of the circular sector identified by TE and φ represented in polar coordinates in the estimator domain:
$$A = \frac{1}{2}\,TE^{2}\,\varphi$$
The estimator A can be considered a synthetic and global representation of the registration errors: by means of a single scalar incorporating both translation and rotation errors, it expresses the distance between the solution given by the algorithm under consideration and the gold standard throughout the whole data set, without depending on the position of the vectors n and n′ as is the case for TRE.
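A possible implementation of the three estimators under the parameterization above is sketched below. The choice of the probe unit vector n is not specified in the text and is an assumption here, as are all function names:

```python
import numpy as np

def estimators(params_gs, params_t, ):
    """TE, RE and A from two rigid-body parameter sets.

    Each parameter set is (tx, ty, tz, phi_x, phi_y, phi_z), with
    translations in cm and rotations in radians. RE is evaluated as
    the inner product of a unit vector with its image under the
    residual rotation between the two solutions."""
    d = np.subtract(params_t, params_gs)
    te = float(np.linalg.norm(d[:3]))           # translation error

    def rot(phix, phiy, phiz):
        cx, sx = np.cos(phix), np.sin(phix)
        cy, sy = np.cos(phiy), np.sin(phiy)
        cz, sz = np.cos(phiz), np.sin(phiz)
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        return Rx @ Ry @ Rz

    # Residual rotation between test and gold standard solutions.
    R_res = rot(*params_t[3:]) @ rot(*params_gs[3:]).T
    n = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)   # assumed probe vector
    re = float(n @ (R_res @ n))                 # rotation error, cos(phi)
    phi = float(np.arccos(np.clip(re, -1.0, 1.0)))
    a = 0.5 * te**2 * phi                       # circular-sector area
    return te, re, a

# Identical solutions give the ideal values TE = 0, RE = 1, A = 0.
gs = (0.1, -0.2, 0.3, 0.01, 0.0, -0.02)
```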
Quantitative Validation
The quantitative validation consisted of two parts: the assessment of the accuracy of the image registration results and of the robustness of the algorithm. Accuracy in this context means the algorithm's ability to find a fusion result near the gold standard result. Robustness is defined as the ability to find the same result on all trials, even when starting from different initial alignments of the two 3D data sets. The algorithms evaluated were normalized mutual information for CT-SPECT registrations, and normalized mutual information and local correlation for CT-MRI registrations. To evaluate accuracy, TE, RE, and A values were determined and compared to limiting values for the different algorithms employed, both in phantoms and in patients. To evaluate robustness, different alignments between images taken from one patient, randomly chosen among the 10 studied, were produced via software with the scheme reported in Table 1, for a total of 45 perturbations applied (15 translations, 18 rotations, and 12 roto-translations). According to the literature,10,19 image registrations were considered optimal if the registration error between the two images was within the voxel dimensions (px, py, pz) of the less resolved image. For the translation error this implies that the condition TE < TEmax must be fulfilled, where:
$$TE_{max} = \sqrt{p_x^{2} + p_y^{2} + p_z^{2}}$$
Table 1.
Summary of Perturbations Applied in the Robustness Evaluation
| Perturbation | Axis | Magnitude |
|---|---|---|
| Translation | x | 0.5, 1, 1.5, 2, 2.5 cm |
| | y | 0.5, 1, 1.5, 2, 2.5 cm |
| | z | 0.5, 1, 1.5, 2, 2.5 cm |
| Rotation | x | 0.5°, 1°, 2°, 3°, 5°, 10° |
| | y | 0.5°, 1°, 2°, 3°, 5°, 10° |
| | z | 0.5°, 1°, 2°, 3°, 5°, 10° |
| Roto-translation | x; y; z; xyz | 1 cm and 1° |
| | | 2 cm and 3° |
| | | 3 cm and 5° |
Note: Roto-translations of 1 cm and 1°, 2 cm and 3°, and 3 cm and 5° were applied independently over each axis and simultaneously over the three axes.
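The scheme of Table 1 can be enumerated programmatically to confirm the stated totals (variable names are ours):

```python
# Perturbations applied in the robustness evaluation (Table 1).
translations = [(axis, d) for axis in "xyz"
                for d in (0.5, 1.0, 1.5, 2.0, 2.5)]          # cm
rotations = [(axis, a) for axis in "xyz"
             for a in (0.5, 1.0, 2.0, 3.0, 5.0, 10.0)]       # degrees
# Each roto-translation magnitude is applied independently over each
# axis and simultaneously over the three axes (four axis settings).
rototranslations = [(axes, t, a) for axes in ("x", "y", "z", "xyz")
                    for t, a in ((1, 1), (2, 3), (3, 5))]    # cm, degrees

counts = (len(translations), len(rotations), len(rototranslations))
# 15 translations + 18 rotations + 12 roto-translations = 45 perturbations
```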
In our experimental setting, TEmax = 0.55 cm and 0.34 cm for CT-SPECT and CT-MRI registrations, respectively.
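These limiting values follow directly from the voxel dimensions given in the Acquisitions section (SPECT: 2.34-mm pixels, 4.4-mm slices; MRI: 1.17-mm pixels, 3-mm slices); the helper name is ours:

```python
import numpy as np

def voxel_diagonal(px, py, pz):
    """TE_max: diagonal of the voxel of the less resolved modality."""
    return float(np.sqrt(px**2 + py**2 + pz**2))

# All dimensions in cm.
te_max_ct_spect = voxel_diagonal(0.234, 0.234, 0.44)  # ~0.55 cm
te_max_ct_mri = voxel_diagonal(0.117, 0.117, 0.30)    # ~0.34 cm
```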
Point misalignments caused by a rotational registration error are amplified when moving from the center toward the periphery of the head. Assuming a spherical head of radius r = 7 cm, defined on the basis of standard reference data,20 the error m caused by a small rotation angle φ may be approximated by:
$$m \simeq r\,\varphi$$
Assuming the diagonal of the less resolved voxel as the reference value for m, the reference value for φ can be derived as:
$$\varphi_{max} = \frac{m}{r}$$
and, in turn, the reference value for RE as:
$$RE_{min} = \cos\varphi_{max} = \cos\!\left(\frac{m}{r}\right)$$
In our experimental setting REmin = 0.9969 and 0.9988 for CT-SPECT and CT-MRI registrations, respectively.
It is worth noting that these conditions are very restrictive, since an error of m at the head boundary translates into a progressively smaller error moving toward the head center.
Finally, the reference values for A can be derived as:
$$A_{max} = \frac{1}{2}\,TE_{max}^{2}\,\varphi_{max}$$
equaling 0.0119 cm2 and 0.0029 cm2 for CT-SPECT and CT-MRI registrations, respectively.
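The chain from voxel diagonal to REmin and Amax can be checked numerically against the values quoted above (the helper name is ours):

```python
import numpy as np

R_HEAD = 7.0  # cm, assumed head radius

def limits(px, py, pz, r=R_HEAD):
    """RE_min and A_max from the voxel (cm) of the less resolved modality."""
    m = np.sqrt(px**2 + py**2 + pz**2)   # voxel diagonal = TE_max
    phi_max = m / r                      # small-angle approximation m ~ r*phi
    re_min = np.cos(phi_max)
    a_max = 0.5 * m**2 * phi_max         # area of the circular sector
    return float(re_min), float(a_max)

re_spect, a_spect = limits(0.234, 0.234, 0.44)  # CT-SPECT
re_mri, a_mri = limits(0.117, 0.117, 0.30)      # CT-MRI
```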
RESULTS
Table 2 reports the results of the accuracy study in phantoms for each pair of image types and each registration algorithm employed. In CT-MRI registrations, the local correlation and mutual information algorithms perform equally well, never reaching limiting values. In CT-SPECT registrations, mutual information provided overall accurate registrations in both the Hoffman and Alderson Rando phantoms.
Table 2.
Accuracy Estimators TE, RE, and A in Phantoms, Together with Corresponding Limiting Values, for Each Pair of Image Types and Registration Algorithm
| Phantom | Registration | Algorithm | TE | TEmax | RE | REmin | A | Amax |
|---|---|---|---|---|---|---|---|---|
| Alderson Rando | CT-SPECT | MIa | 0.36 | 0.55 | 0.9981 | 0.9969 | 0.0040 | 0.0119 |
| | CT-MRI | LCb | 0.06 | 0.34 | 0.9999 | 0.9988 | 0.00003 | 0.0029 |
| | CT-MRI | MI | 0.10 | 0.34 | 0.9999 | 0.9988 | 0.0001 | 0.0029 |
| Hoffman | CT-SPECT | MI | 0.43 | 0.55 | 0.9991 | 0.9969 | 0.0039 | 0.0119 |
| | CT-MRI | LC | 0.22 | 0.34 | 0.9998 | 0.9988 | 0.0005 | 0.0029 |
| | CT-MRI | MI | 0.21 | 0.34 | 0.9992 | 0.9988 | 0.0009 | 0.0029 |
aMI: Mutual information
bLC: Local correlation
Figure 1 reports the results of the accuracy study in patients for each pair of image types and each registration algorithm employed. Box-and-whiskers plots of individual values are shown for each estimator.
Fig. 1.
Box-and-whiskers plots of the (A) TE, (B) RE, and (C) A estimators in patients. Horizontal lines represent limiting values for CT-MRI (solid) and CT-SPECT (dashed) registrations.
Mean values of TE were 0.35 ± 0.18 cm for CT-MRI registrations using local correlation, 0.22 ± 0.18 cm for CT-MRI registrations using mutual information, and 0.22 ± 0.12 cm for CT-SPECT registrations using mutual information. Limiting values of TE in CT-MRI registrations were exceeded in three patients for the local correlation algorithm and in one patient for the mutual information algorithm. Limiting values were never reached in CT-SPECT registrations employing mutual information.
Mean values of RE were 0.9992 ± 0.0016 for CT-MRI registrations using local correlation, 0.9993 ± 0.0006 for CT-MRI registrations using mutual information, and 0.9931 ± 0.0208 for CT-SPECT registrations using mutual information. Limiting values of RE in CT-MRI registrations were exceeded in two patients for local correlation and in one patient for mutual information. The limiting value was reached in one patient for CT-SPECT registrations employing mutual information.
Mean values of A were 0.0040 ± 0.0066 cm2 for CT-MRI registrations using local correlation, 0.0015 ± 0.0016 cm2 for CT-MRI registrations using mutual information, and 0.0036 ± 0.0092 cm2 for CT-SPECT registrations using mutual information. Limiting values of A in CT-MRI registrations were exceeded in three patients for local correlation and in one patient for mutual information. The limiting value was reached in one patient in CT-SPECT registrations employing mutual information.
The local correlation algorithm proved accurate in CT-MRI registrations in phantoms but, when applied to patients, exceeded limiting values in 3 of 10 cases. Thus, the evaluation of robustness was restricted to the mutual information algorithm for both CT-MRI and CT-SPECT registrations.
Figure 2 reports the results of the robustness study in one patient, randomly chosen among the 10 studied, with respect to estimator A. None of the A values obtained during the robustness evaluation exceeded the limiting values of 0.0029 and 0.0119 cm2 established for CT-MRI and CT-SPECT registrations, respectively.
Fig. 2.
Robustness study. Estimator A values for translation, rotation and roto-translation perturbations, applied according to the scheme reported in Table 1, for CT-MRI registrations (upper row) and CT-SPECT registrations (lower row).
Table 3 reports the times required to achieve image registration in patients by human observers and by the various algorithms implemented in Syntegra. Results are expressed as mean times and standard deviations.
Table 3.
Times to Achieve Image Registration in Patients
| Images | Algorithm | Mean (s) | Standard Deviation (s) |
|---|---|---|---|
| CT-SPECT | MIa | 34.1 | 19.3 |
| CT-MRI | LCb | 24.4 | 15.3 |
| CT-MRI | MI | 69.2 | 34.6 |
| | Human observers | 852 | 94 |
aMI: mutual information
bLC: local correlation
DISCUSSION
In the assessment of the accuracy of retrospective intermodality image registration there is no established gold standard, so any registration system whose accuracy is known to be high can be adopted. In this study, the gold standard for the image registration accuracy study was defined by means of a manual interactive method based on 3D rigid body transformations. The advantage of interactive techniques is that they are fully retrospective and universally applicable in various situations combining different image modalities. Although many automated registration methods based on different approaches, such as surface matching, mutual information, and voxel intensity correlation, have been developed in the last decade, the accuracy of interactive image registration, which relies on the human visual system, is still unsurpassed by automated techniques. Studies conducted to assess the accuracy of visual inspection in the detection of failures in retrospective intermodality registration showed that the human visual system can detect misregistrations ranging from 2 mm for CT-MRI17 to 4 mm for MRI-PET.21 A major criticism of interactive matching is its dependence on the judgment and training of the operator. Nevertheless, when a suitable user interface is available and many details are visible on the images, the introduced subjectivity is negligible. Pfluger et al,14 in an intercomparison of automatic and interactive methods for brain MRI-SPECT image registration, reported intraobserver/interobserver variabilities of 1.5 mm/1.6 mm for interactive matching and overall registration errors of 2.2 and 2.3 mm for two automatic methods based on surface matching and Woods' algorithm, respectively.
On the basis of the high accuracy required in the radiotherapy treatment of intracranial tumors and the rigidity of the examined anatomy, the results of image fusion were considered clinically acceptable if the mismatch with the solution provided by the gold standard was below the voxel size of the less resolved modality for all points of the two registered data sets. As the TRE depends on the position in the data set and generally decreases moving toward the center of the image, this constraint may appear too severe; however, it must be considered that the solutions provided by the gold standard are not the ground truth, but are themselves affected by an error, which adds in quadrature to the one generated by the tested method, so that the imposed limits are, in our opinion, consistent with the clinical tasks of radiotherapy.
The decomposition of registration errors into translation and rotation components is of clinical value in radiotherapy applications of multimodality image registration. Radiotherapy planning involves extended and irregularly shaped volumes, and uncertainties in the spatial orientation of these volumes can heavily affect the shape of the treatment fields. Rotation errors are more difficult to manage than translation errors, which can simply be taken into account by adding a given margin, so it is important to estimate their weight in the registration procedure. TRE by itself does not express this kind of information. The fiducial registration error (FRE) relies on point-based registration systems, but it must be emphasized that the estimators we propose are measured directly by comparing the transformation matrices with the gold standard matrix, which was determined by means of an interactive method based on internal landmarks. This technique incorporates the selection of multiple internal landmarks and surfaces. During the interactive registration, the operator uses visual criteria, such as matching of contours around morphological structures in three dimensions. The superposition of the internal anatomical landmarks is evaluated to check the result of the registration, but it is not used to determine the transformation itself; therefore we cannot define an FRE, because we do not measure fiducial localization errors (FLEs).
Local correlation performed as well as, or even better than, the mutual information algorithm when applied to CT-MRI registration in phantoms. Nevertheless, when accuracy was evaluated in patients, local correlation led to unacceptable registration errors in 3 of 10 patients, compared with only one misregistration obtained using mutual information, which thus seems to be the best performing algorithm even when registering differing modalities in which equivalent features can easily be seen (as is the case of CT-MRI registration). This discrepancy may be explained by considering that the gray level histograms are much more dispersed in patient images than in phantom images, which roughly corresponds to a situation of subsampled data in which the mutual information algorithm is known to be less accurate. This, in turn, is a warning against the exclusive use of phantoms in the quality assurance of multimodality image registration.
The mutual information algorithm, applied to CT-SPECT image registrations, exceeded reference values in one patient. It is worth noting that in this case the mismatch corresponds to a maximum TRE of 5.8 mm, well below the spatial resolution of SPECT imaging (10–15 mm) and of the same order as that of state-of-the-art PET tomographs.22 Thus, one may wonder whether such an error is of clinical relevance in the context of anatomical-functional image registration.
The mutual information algorithm implemented in the Syntegra software proved to be not only accurate, but also robust: limiting values were not exceeded with translation perturbations up to 2.5 cm, rotation perturbations up to 10°, and roto-translation perturbations up to 3 cm and 5°. This is of great relevance in clinical practice because, notwithstanding immobilization systems, repositioning errors of a few millimeters and movement artefacts are almost unavoidable.
The registration algorithms implemented in the Syntegra software proved to be fast: the time to achieve image registration is one order of magnitude lower than that needed by expert human observers to perform manual registration.
CONCLUSION
The normalized mutual information algorithm implemented in the Syntegra software proved accurate for CT-MRI and CT-SPECT registrations, both in phantoms and in patients. It also proved fast and robust with respect to perturbations that may be encountered in the clinical setting.
Acceptance testing of image registration software, before its introduction into clinical practice, proved useful in the choice of algorithms for the different tasks and in the characterization of performance.
References
- 1.Hajnal JV, Hill DLG, Hawkes DJ: Medical Image Registration. CRC Press, 2001
- 2.Hill DL, Batchelor PG, Holden M, Hawkes DJ. Medical image registration. Phys Med Biol. 2001;46:R1–R45. doi: 10.1088/0031-9155/46/3/201. [DOI] [PubMed] [Google Scholar]
- 3.Hutton BF, Braun M. Software for image registration: algorithms, accuracy, efficacy. Semin Nucl Med. 2003;33:180–192. doi: 10.1053/snuc.2003.127309. [DOI] [PubMed] [Google Scholar]
- 4.Pluim JP, Maintz JB, Viergever MA. Mutual-information-based registration of medical images: a survey. IEEE Trans Med Imag. 2003;22:986–1004. doi: 10.1109/TMI.2003.815867. [DOI] [PubMed] [Google Scholar]
- 5.Sykes JR, Amer A, Czajka J, Moore CJ. A feasibility study for image guided radiotherapy using low dose, high speed, cone beam X-ray volumetric imaging. Radiother Oncol. 2005;77:45–52. doi: 10.1016/j.radonc.2005.05.005. [DOI] [PubMed] [Google Scholar]
- 6.Junck L, Moen JG, Hutchins GD, Brown MB, Kuhl DE. Correlation methods for the centering, rotation and alignment of functional brain images. J Nucl Med. 1990;31:1220–1226. [PubMed] [Google Scholar]
- 7.Andersson JLR, Sundin A, Valind S. A method for coregistration of PET and MRI brain images. J Nucl Med. 1995;36:1307–1315. [PubMed] [Google Scholar]
- 8.Rizzo G, Pasquali P, Gilardi MC. Multimodality biomedical image integration: use of cross-correlation technique. Proc IEEE Eng Med Biol Soc. 1991;13:219–220. [Google Scholar]
- 9.Studholme C, Hill DLG, Hawkes DJ. An overlap invariant entropy measure of 3D medical image alignment. Pattern Recogn. 1998;32:71–86. doi: 10.1016/S0031-3203(98)00091-0. [DOI] [Google Scholar]
- 10.Maes F, Collignon A. Multimodality image registration by maximization of mutual information. IEEE Trans Med Imag. 1997;16:187–198. doi: 10.1109/42.563664. [DOI] [PubMed] [Google Scholar]
- 11.Mutic S, Dempsey JF, Bosch WR, Low DA, Drzymala RE, Chao KS, Goddu SM, Cutler PD, Purdy JA. Multimodality image registration quality assurance for conformal three-dimensional treatment planning. Int J Radiat Oncol Biol Phys. 2001;51:255–260. doi: 10.1016/s0360-3016(01)01659-5. [DOI] [PubMed] [Google Scholar]
- 12.Lavely WC, Scarfone C, Cevikalp H, Li R, Byrne DW, Cmelak AJ, Dawant B, Price RR, Hallahan DE, Fitzpatrick JM. Phantom validation of coregistration of PET and CT for image-guided radiotherapy. Med Phys. 2004;31:1083–1092. doi: 10.1118/1.1688041. [DOI] [PubMed] [Google Scholar]
- 13.Moore CS, Liney GP, Beavis AW. Quality assurance of registration of CT and MRI data sets for treatment planning of radiotherapy for head and neck cancers. J Appl Clin Med Phys. 2004;5:25–35. doi: 10.1120/jacmp.26.147. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Pfluger T, Vollmar C, Wismuller A, Dresel S, Berger F, Suntheim P, Leinsinger G, Hahn K. Quantitative comparison of automatic and interactive methods for MRI-SPECT image registration of the brain based on 3-dimensional calculation of error. J Nucl Med. 2000;41:1823–1829. [PubMed] [Google Scholar]
- 15.Pietrzyk U, Herholz K, Fink G, Jacobs A, Mielke R, Slansky I, Wurker M, Heiss WD. An interactive technique for three-dimensional image registration: validation for PET, SPECT, MRI and CT brain studies. J Nucl Med. 1994;35:2011–2018. [PubMed] [Google Scholar]
- 16.Hays WL. Statistics. 4. Fort Worth, TX: Holt, Rinehart and Winston Inc; 1988. [Google Scholar]
- 17.Fitzpatrick JM. Handbook of medical imaging, Volume 2: medical image processing and analysis, chapter 6. Bellingham, WA: SPIE Press; 2000. [Google Scholar]
- 18.Maurer CR, Aboutanos GB, Dawant BM, Maciunas RJ, Fitzpatrick JM. Registration of 3-D images using weighted geometrical features. IEEE Trans Med Imaging. 1996;15:836–849. doi: 10.1109/42.544501. [DOI] [PubMed] [Google Scholar]
- 19.West J, et al. Comparison and evaluation of retrospective intermodality brain image registration techniques. J Comput Assist Tomogr. 1997;21:554–566. doi: 10.1097/00004728-199707000-00007. [DOI] [PubMed] [Google Scholar]
- 20.Annals of the ICRP: Recommendations of the International Commission on Radiological Protection, ICRP Publication 26, 1977
- 21.Wong JC, Studholme C, Hawkes DJ, Maisey MN. Evaluation of the limits of visual detection of image misregistration in a brain fluorine-18fluorodeoxyglucose PET-MRI study. Eur J Nucl Med. 1997;24:642–650. doi: 10.1007/BF00841402. [DOI] [PubMed] [Google Scholar]
- 22.Brambilla M, Secco C, Dominietto M, Matheoud R, Sacchetti G, Inglese E. Performance characteristics obtained for a new 3-dimensional lutetium oxyorthosilicate-based whole-body PET/CT scanner with the National Electrical Manufacturers Association NU 2-2001 standard. J Nucl Med. 2005;46:2083–2091. [PubMed] [Google Scholar]