IEEE Trans Med Imaging. Author manuscript; available in PMC: 2008 Nov 10.
Published in final edited form as: IEEE Trans Med Imaging. 2006 Jul;25(7):838–844. doi: 10.1109/tmi.2006.871397

Use of Three-Dimensional Gaussian Interpolation in the Projector/Backprojector Pair of Iterative Reconstruction for Compensation of Known Rigid-Body Motion in SPECT

Bing Feng 1, Howard C Gifford 1, Richard D Beach 1, Guido Boening 1, Michael A Gennert 1, Michael A King 1
PMCID: PMC2581802  NIHMSID: NIHMS77313  PMID: 16827485

Abstract

Due to the extended imaging times employed in SPECT and PET, patient motion during imaging is a common clinical occurrence. The fast and accurate correction of three-dimensional (3D) translational and rotational patient motion in iterative reconstruction is thus necessary to address this important cause of artifacts. We propose a method of incorporating 3D Gaussian interpolation in the projector/backprojector pair to facilitate compensation for rigid-body motion in addition to attenuation and distance-dependent blurring. The method serves as the interpolation step for moving the current emission voxel estimates and attenuation maps in the global coordinate system to the new patient location in the rotating coordinate system when calculating the expected projection. It is also employed for moving the backprojection of the ratio of the measured projection to the expected projection, and the backprojection of the unit value (sensitivity factor), back to the original location. MCAT simulations with known six-degree-of-freedom (6DOF) motion were employed to evaluate the accuracy of our method of motion compensation. We also tested the method with acquisitions of the Data Spectrum Anthropomorphic phantom in which motion during the SPECT acquisition was measured using the Polaris IR motion-tracking system. No motion artifacts were seen in the reconstructions with motion compensation.

I. Introduction

Patient motion during SPECT acquisition causes inconsistent projection data and reconstruction artifacts which can significantly affect the diagnostic accuracy of SPECT if not corrected [1-4]. There has been a significant amount of research on motion detection and correction in recent years. A number of investigators have explored determining patient motion solely by analyzing the emission data collected for reconstruction [5-12]. Besides these strictly data-driven approaches, optical and infrared cameras have been utilized to track patient motion [13-16]. Our current hypothesis is that detection of patient motion through optical or infrared monitoring of small spherical reflectors on elastic belts about the patient, combined with a data-driven determination of the actual motion of the internal organ of interest, is capable of providing clinically robust correction for patient motion in emission imaging [17].

Known motion can be corrected within the image reconstruction by transforming the projector to account for the changing position of the patient. Fulton et al. and Hutton et al. [18, 19] have proposed a method of modeling known rigid-body patient motion in iterative reconstruction by dividing the projection data into subsets within which no motion occurred, and then using bilinear interpolation to move the object estimate for each subset to match the patient motion. We follow their lead by performing motion compensation within an iterative reconstruction process. In addition to motion compensation, we incorporate compensation for attenuation and distance-dependent resolution in the reconstruction.

To accomplish the combined rotation of the SPECT detector with gantry angle and possible rotation and translation of the patient, interpolation is required when transforming between the global (stationary) and rotating coordinate systems. We propose the use of three-dimensional (3D) Gaussian interpolation to provide the interpolation required in these steps. Two-dimensional (2D) Gaussian interpolation is widely used in iterative reconstruction because of its outstanding properties as a slice rotator [20]. It rotates a slice of the image grid (2D coordinate system) to be aligned with the camera head. For parallel-hole collimators this allows projection and backprojection to occur just along columns and/or rows, thereby facilitating modeling of the distance-dependent blurring of the imaging system [21]. Furthermore, the additional smoothing caused by interpolation [20] can be accounted for as part of detector resolution compensation (DRC) by setting the FWHM of the Gaussian blurring employed in DRC equal to the square root of the squared FWHM of the modeled system resolution minus the squared FWHM of the Gaussian interpolation. The convolution of the Gaussian function employed in DRC with the Gaussian function used in interpolation is then equal to the Gaussian function which models the actual system resolution. Wallis and Miller [20] also demonstrated the superiority of Gaussian interpolation as a rotator, in terms of metrics such as impulse constancy, uniformity, position error, and angular dependence of blurring, over more commonly employed forms of interpolation such as nearest-neighbor, bilinear, bicubic-polynomial, and cubic-spline interpolation. As rigid-body motion often contains out-of-plane components, the Gaussian interpolation must be extended to 3D to handle arbitrary motion in three-dimensional space.
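As a concrete illustration of this resolution bookkeeping, a minimal sketch follows (the function name and the example numbers are ours, not taken from the reconstruction code): the DRC Gaussian is chosen so that its convolution with the interpolation Gaussian reproduces the modeled system resolution.

```python
import math

def drc_fwhm(system_fwhm, interp_fwhm):
    """FWHM of the Gaussian to apply in detector resolution compensation (DRC).

    Convolving two Gaussians adds their variances (and hence their squared FWHMs),
    so the DRC blur is chosen such that
        drc_fwhm**2 + interp_fwhm**2 == system_fwhm**2.
    Both arguments must be in the same units (e.g., mm or pixels).
    """
    if interp_fwhm >= system_fwhm:
        raise ValueError("interpolation blur must be narrower than the system resolution")
    return math.sqrt(system_fwhm ** 2 - interp_fwhm ** 2)

# Hypothetical example: a 10.0 mm system FWHM and a 4.0 mm interpolation FWHM
# would call for a DRC Gaussian of about 9.17 mm FWHM.
print(drc_fwhm(10.0, 4.0))
```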

Herein, we use 3D Gaussian interpolation to describe a forward mapping between the original object in the global coordinate system and the transformed object in the rotating coordinate system that results from translational and rotational patient motion and from rotation of the camera. The 3D Gaussian is used to distribute the value at a point in the original object to the corresponding location and its three-dimensional neighbors in the transformed object, as viewed in the rotating coordinate system, as a function of exactly how the point is positioned in the rotating coordinate system. As an extension of the 2D Gaussian rotator, 3D Gaussian interpolation gives an accurate representation of the transformed object with a small uniform blurring over 3D space. Replacing the 2D Gaussian rotator in our current reconstruction code, the 3D Gaussian interpolation models the motion of the object and the rotation of the coordinate system with the camera head through a single interpolation, which separates the motion compensation from other compensations, such as attenuation compensation and compensation for the distance-dependent blurring. It thus allows them to be performed as in the no-motion case (assuming the attenuation maps are also moved prior to the projection and backprojection steps). The method also has the advantage that interpolation errors do not accumulate over angles or with iteration, since in each case it is a single estimate of the source distribution which is being projected or updated by backprojection.

II. Methods

A. Modeling the Translational and Rotational Motion of an Object, and Rotation of the Coordinate System with Acquisition Angle, Using 3D Gaussian Interpolation

Rigid-body motion of an object can be described by three translational degrees-of-freedom and three rotational degrees-of-freedom. We will herein focus on the use of frame-mode acquisition and assume no motion takes place during imaging at each gantry angle. To correct for motion which occurs during acquisition at a given gantry angle, a list-mode acquisition could be used and the list-mode data binned into sub-frames within which the patient was temporarily stationary at that gantry angle. Reconstruction would be the same except for replacing one frame with several sub-frames at the given gantry angle.

To start with, we assume that during acquisition of an object rigid-body motion occurs (possibly many times) and is recorded. We also assume that $f(\mathbf{x})$ is the tracer distribution before any motion occurs, which is the quantity we are trying to reconstruct. Hereafter it will be called the "template" object or image. Using the conventional voxel basis, we can approximate $f(\mathbf{x})$ by

$f(\mathbf{x}) \approx \sum_i f_i v_i(\mathbf{x}),$  (1)

where $v_i(\mathbf{x}) = 1$ if $\mathbf{x} \in$ voxel $i$, and $v_i(\mathbf{x}) = 0$ otherwise. We define $f_i \equiv f(\mathbf{x}_i)$, where $\mathbf{x}_i$ is the center of voxel $i$.

With rigid-body motion, an arbitrary point at coordinates $\mathbf{x}$ in the object moves to new coordinates

$\mathbf{x}^m = R\mathbf{x} + \mathbf{t},$  (2)

where $R$ is the rotation matrix and $\mathbf{t}$ is the translation vector. The tracer distribution after motion is

$g(\mathbf{x}^m) = f(\mathbf{x})$, or $g(\mathbf{x}^m) = f(R^T(\mathbf{x}^m - \mathbf{t})),$  (3)

where $R^T$ is the transpose of $R$. $g(\mathbf{x}^m)$ is called the "target" object or image, and can be approximated by

$g(\mathbf{x}^m) \approx \sum_{i'} g_{i'} v_{i'}(\mathbf{x}^m),$  (4)

where $v_{i'}(\mathbf{x}^m)$ is the voxel basis in the target image: $v_{i'}(\mathbf{x}^m) = 1$ if $\mathbf{x}^m \in$ voxel $i'$, and $v_{i'}(\mathbf{x}^m) = 0$ otherwise. We define $g_{i'} \equiv g(\mathbf{x}^m_{i'})$, where $\mathbf{x}^m_{i'}$ is the center of voxel $i'$ in the target image. Making use of (2) and (3), we therefore have

$f_i \equiv f(\mathbf{x}_i) = g(R\mathbf{x}_i + \mathbf{t}) \approx g(\mathbf{x}^m_{\tilde{i}'}) = g_{\tilde{i}'},$  (5)

where $\mathbf{x}^m_{\tilde{i}'} \approx R\mathbf{x}_i + \mathbf{t}$ is the center of voxel $\tilde{i}'$ in the target image.

Equation (5) states that, due to the motion, the center of voxel $i$ in the template image moves into voxel $\tilde{i}'$ in the target image. Using forward interpolation, we diffuse $f_i$ into voxel $\tilde{i}'$ of the target image and its neighbors. We can write the forward interpolation as

$\mathbf{g} = T\mathbf{f},$  (6)

where the motion operator $T = \{T_{i'i}\}$ maps the template image $\mathbf{f} = \{f_i\}$ forward into the target image $\mathbf{g} = \{g_{i'}\}$ through interpolation. The contents of $T$ depend on the motion and on the interpolation function employed, but not on $\mathbf{f}$.

The measurement model is

$p_j \sim \mathrm{Poisson}\{\bar{y}_j\}, \quad \bar{y}_j = [A\mathbf{g}]_j + r_j,$  (7)

where $\mathrm{Poisson}\{\cdot\}$ denotes the Poisson distribution; $\bar{y}_j$, $p_j$, and $r_j$ are the average counts, measured counts, and average background (including scatter and/or cross-talk) detected at the $j$th detector bin, respectively; and the system matrix $A = \{A_{ji}\}$ describes the probability of detecting a photon emitted at voxel $i$ in the $j$th detector bin. We can express the system matrix as

$A = A_{\mathrm{sens}} A_{\mathrm{geo\_atten}},$  (8)

where $A_{\mathrm{sens}}$ is a diagonal matrix that accounts for the variation of the detector sensitivity, and $A_{\mathrm{geo\_atten}}$ is a projection matrix that describes the imaging geometry and attenuation. To calculate $A_{\mathrm{geo\_atten}}$, we must move the attenuation map in the same way as we move the object. In the case of modeling the distance-dependent resolution, the contribution of detector blurring can be absorbed into $A_{\mathrm{geo\_atten}}$.

Making use of (6), we can rewrite (7) as

$\bar{y}_j = [\tilde{A}\mathbf{f}]_j + r_j,$  (9)

where à = AT is the equivalent system matrix that incorporates motion. The OSEM algorithm [22] can be written as

$f_i^{(n+1)} = f_i^{(n)} \frac{\sum_{j \in S} \tilde{A}_{ji} \, (p_j / \bar{y}_j)}{\sum_{j \in S} \tilde{A}_{ji}} = f_i^{(n)} \frac{\sum_{m} T_{mi} \left( \sum_{j \in S} A_{jm} \, p_j / \bar{y}_j \right)}{\sum_{m} T_{mi} \left( \sum_{j \in S} A_{jm} \right)},$  (10)

where $f_i^{(n+1)}$ and $f_i^{(n)}$ are the image estimates at the $(n+1)$th and $n$th updates, respectively, and $S$ is one of the subsets of rays. In each iteration the algorithm cycles through all subsets.

Equation (10) shows that motion compensation can be achieved by first moving the template object $\mathbf{f}$ prior to the projection operation (shifting voxel $i$ to voxel $\tilde{i}'$, as in (5)), and then, after backprojection at each angular view, moving the backprojected quantities (the ratio of the measurement to the expected projection, and the unit value used in normalization) back.
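The order of operations in (10) can be sketched as follows. This is a schematic of one subset update only, assuming hypothetical helpers `move` (the motion-plus-rotation interpolation, i.e., the operator T at a given view), `move_back` (its transpose), `project`, and `backproject`; it illustrates the structure of the update and is not the authors' implementation.

```python
import numpy as np

def osem_subset_update(f, subset, measured, background,
                       move, move_back, project, backproject, eps=1e-8):
    """One OS-EM subset update with rigid-body motion compensation, per Eq. (10).

    f           : current estimate of the template image (3D ndarray)
    subset      : iterable of projection-angle indices in this subset S
    measured    : dict angle -> measured projection p_j
    background  : dict angle -> scatter/background estimate r_j
    move        : (image, angle) -> image moved into the rotating frame (operator T)
    move_back   : (image, angle) -> image moved back to the global frame (T transpose)
    project     : (image, angle) -> expected projection [A g]_j at that angle
    backproject : (projection, angle) -> backprojection into the rotating frame
    """
    ratio_sum = np.zeros_like(f, dtype=float)  # numerator:   sum_m T_mi ( sum_{j in S} A_jm p_j / ybar_j )
    sens_sum = np.zeros_like(f, dtype=float)   # denominator: sum_m T_mi ( sum_{j in S} A_jm )
    for angle in subset:
        g = move(f, angle)                               # template -> target, Eq. (6)
        ybar = project(g, angle) + background[angle]     # expected projection, Eq. (7)
        ratio = measured[angle] / np.maximum(ybar, eps)  # p_j / ybar_j
        ratio_sum += move_back(backproject(ratio, angle), angle)
        sens_sum += move_back(backproject(np.ones_like(ratio), angle), angle)
    return f * ratio_sum / np.maximum(sens_sum, eps)
```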

To compensate for the distance-dependent blurring of the imaging system, it is convenient to adopt a coordinate system rotating with the camera head when performing the projection/backprojection (Fig. 1). In this rotating system the image grid is parallel or perpendicular to the coordinate axes.

Fig. 1. In the global (left) and rotating (right) coordinate systems, the template object (the object before motion, dashed outline) is shown having moved to the target object (the object after motion, solid outline) location. We propose the use of a single 3D Gaussian interpolation to account for both patient motion and camera rotation, as illustrated by the arrow between the two coordinate systems.

It is in the global stationary coordinate system that we represent the template image $\{f_i\}$. From (2), the point moved from $\mathbf{x}$ to $\mathbf{x}^m$ has coordinates $M\mathbf{x}^m = MR\mathbf{x} + M\mathbf{t}$ in the rotating coordinate system, where $M$ is the rotation matrix for the coordinate transformation between the rotating and global coordinate systems (Fig. 1). To simplify notation we absorb $MR$ into $R$ and $M\mathbf{t}$ into $\mathbf{t}$, and continue to denote the transformed coordinates by $\mathbf{x}^m$. By doing this we arrive at equations (2)-(10) once more. In the projector, at each angular step we map the template slices $\{f_i\}$ and the attenuation maps $\{\mu_i\}$ into the rotating coordinate system using the 3D Gaussian interpolation, which moves a function value at voxel $i$ (in the global coordinate system) to voxel $\tilde{i}'$ (in the rotating coordinate system) and its 26 neighboring voxels in 3D, with weights calculated from a 3D Gaussian distribution function. The weights depend on the FWHM of the Gaussian function and on the location within voxel $\tilde{i}'$ (the sub-voxel indices). The sigma of the Gaussian function was empirically chosen as 0.5 pixels, which maximizes count uniformity and allows use of a 3×3×3 interpolation without significant truncation of the tails of the Gaussians [20]. Since a 3D Gaussian function can be separated into the product of three orthogonal 1D Gaussians, only the weights for the 1D Gaussians were pre-computed (Fig. 2). Implementation of the 3D Gaussian interpolation is a direct extension of the 2D Gaussian rotator [20]. Once the object and the attenuation map were mapped into the rotating coordinates, compensations for the attenuation and distance-dependent blurring were performed using standard methods. In the backprojector, we mapped the backprojections (the ratio of measurement to expected projection, and the unit value) at each angular step from voxel $\tilde{i}'$ in the rotating coordinate system to voxel $i$ in the global coordinate system, also using the 3D Gaussian interpolation. Thus only two interpolation steps are needed, one in the projector and another in the backprojector, as in reconstruction using the 2D Gaussian rotator.
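The push (forward) step can be sketched as follows. This is an unoptimized illustration only (the array layout, the rotation about the array origin, and the brute-force loops are our simplifications): per Fig. 2, each 1D weight is the area under a Gaussian of sigma 0.5 voxels over a voxel's extent, and the 3D weight is the product of the three 1D weights.

```python
import numpy as np
from math import erf, sqrt

SIGMA = 0.5  # voxels, as in the 2D Gaussian rotator [20]

def gauss_area_weights_1d(z, centers):
    """Normalized weights for the three nearest voxel centers along one axis.

    Each weight is the area under a 1D Gaussian centered at the moved
    coordinate z over the voxel extent [c - 0.5, c + 0.5] (cf. Fig. 2).
    """
    s = SIGMA * sqrt(2.0)
    w = np.array([0.5 * (erf((c + 0.5 - z) / s) - erf((c - 0.5 - z) / s)) for c in centers])
    return w / w.sum()

def push_forward(template, R, t):
    """Forward (push) 3D Gaussian interpolation of `template` under x -> R x + t.

    Voxel indices are used directly as coordinates and the rotation is taken
    about the array origin; a real implementation would rotate about the image
    center and pre-compute the 1D weight tables.
    """
    target = np.zeros_like(template, dtype=float)
    shape = np.array(template.shape)
    for idx in np.argwhere(template != 0):              # loop over non-empty template voxels
        xm = R @ idx + t                                # moved voxel center, Eq. (2)
        base = np.rint(xm).astype(int)                  # index of voxel i-tilde-prime
        w = [gauss_area_weights_1d(xm[a], base[a] + np.arange(-1, 2)) for a in range(3)]
        for d0 in range(3):                             # distribute into the 3x3x3 neighborhood
            for d1 in range(3):
                for d2 in range(3):
                    j = base + np.array([d0, d1, d2]) - 1
                    if np.all(j >= 0) and np.all(j < shape):
                        target[tuple(j)] += template[tuple(idx)] * w[0][d0] * w[1][d1] * w[2][d2]
    return target
```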

Fig. 2. The 1D Gaussian interpolation diffuses the function value into the new location and the two neighboring pixels, with weights equal to the area under the 1D Gaussian function enclosed by each of the three pixels.

B. MCAT-Based Simulations of the Rigid-Body Motion

Using a numerical projector which models attenuation and distance-dependent blurring, we generated projections of the MCAT digital phantom [23]. A 204-degree acquisition of the MCAT phantom was simulated to mimic the IRIX SPECT system in our clinic. The source and attenuator distributions were repositioned by known changes in position and orientation prior to the calculation of the projection images. The motion was simulated as having occurred three times during the acquisition, as illustrated in Figure 3. Poisson noise was added to the projection data after scaling to half a million counts. The OSEM algorithm was used to reconstruct the template object from the projection data with: 1) no compensation for motion, and 2) compensation for motion. For comparison, a motion-free acquisition was also simulated and reconstructed. To quantitatively evaluate the impact of motion compensation, the motion-free data were reconstructed as the "gold standard" for the count (or activity concentration) distribution. The root-mean-square error (RMSE) of reconstruction (with and without motion compensation) was calculated by

$\mathrm{RMSE} = \sqrt{\frac{\sum_{i \in \mathrm{ROI}} (\mathrm{counts}_i - \mathrm{true\_counts}_i)^2}{N}},$  (11)

where the ROI is a region of interest including the entire heart, $N$ is the number of voxels inside the ROI, and $\mathrm{counts}_i$ and $\mathrm{true\_counts}_i$ are the reconstructed counts and true counts, respectively, at voxel $i$ inside the ROI.
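A minimal sketch of (11), assuming the gold-standard and test reconstructions are stored as NumPy volumes and the heart ROI as a boolean mask (the names are ours):

```python
import numpy as np

def rmse(recon, gold_standard, roi):
    """Root-mean-square error of Eq. (11) over a region of interest.

    recon, gold_standard : reconstructed and motion-free ("true") volumes
    roi                  : boolean mask selecting the ROI voxels (e.g., the heart)
    """
    diff = recon[roi] - gold_standard[roi]
    return np.sqrt(np.mean(diff ** 2))
```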

Fig. 3. Translations and rotations as simulated during the four motion phases of the acquisition. Motion was quantified in the global coordinate system, relative to the template object.

C. Acquisition of the Anthropomorphic Phantom With Motion as Monitored by the Infrared Polaris System

We performed a series of measurements of the Data Spectrum Anthropomorphic phantom with a cardiac insert on our IRIX system. The objective was to simulate rigid-body motion of the phantom during acquisition. In preparation for the experiment, 1 mCi, 10.2 mCi, and 6.3 mCi of Tc-99m were added to the heart, liver, and body of the phantom, respectively. This resulted in approximate concentrations of 1.0, 0.1, and 1.0 in these three structures. A Polaris (Northern Digital Inc.) tool consisting of four IR-reflecting spherical markers was attached tightly to the top of the anthropomorphic phantom. An initial position as measured by the Polaris was chosen as the baseline for the motion-free case. A simultaneous emission-transmission acquisition was then performed. Transmission imaging for attenuation correction was performed using the Beacon transmission system on the IRIX [24]. Subsequent acquisitions were performed after changing the anthropomorphic phantom position and/or orientation. The Polaris was again used to determine the 6DOF motion of the phantom. In each acquisition, two heads of the IRIX acquired a total of 204 degrees of data with a 102-degree gantry rotation over 34 angular steps. A scatter window (5% width centered at 120 keV) was also acquired. We combined projection data from these emission measurements, with the phantom positioned differently, to simulate acquisition of a moving anthropomorphic phantom which was tracked by the Polaris.

A calibration study was performed to allow the conversion of Polaris coordinates to the SPECT coordinates of our IRIX system [16]. This consisted of placing a small volume of concentrated Tc-99m into a hole at the tip of each mount used to hold the IR-reflecting spheres in place. The location of the spheres was then recorded by the Polaris, and a SPECT acquisition was performed to determine the corresponding centers of the spheres in SPECT coordinates. The translational and rotational mapping required to convert Polaris coordinates into SPECT coordinates was then determined from these data. For routine use, the Polaris system can be pre-calibrated and used for all patient studies as long as the Polaris system remains stationary relative to the SPECT system. This can be achieved by attaching the Polaris system firmly to the wall in a low-traffic area and monitoring the location of reflective spheres attached to a stationary portion of the gantry to determine whether the transformation from the Polaris system to the SPECT system needs to be recalibrated. Thus a recalibration is needed only when indicated.
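One standard way to determine such a rigid (rotation plus translation) mapping from paired marker centers is an SVD-based least-squares fit; the sketch below illustrates that generic fit under the assumption of matched Polaris and SPECT point lists, and is not the calibration code of [16].

```python
import numpy as np

def fit_rigid_transform(polaris_pts, spect_pts):
    """Least-squares rigid transform mapping Polaris points onto SPECT points.

    polaris_pts, spect_pts : (N, 3) arrays of corresponding marker centers.
    Returns (R, t) such that spect ~= R @ polaris + t for each point.
    """
    p_mean = polaris_pts.mean(axis=0)
    s_mean = spect_pts.mean(axis=0)
    H = (polaris_pts - p_mean).T @ (spect_pts - s_mean)           # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = s_mean - R @ p_mean
    return R, t
```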

III. Results

A. MCAT Phantom Simulations

For the simulated acquisition of the MCAT phantom which underwent a series of translations and rotations (Fig. 3), reconstruction with accurate motion compensation shows no artifacts (Fig. 4), while severe motion artifacts are seen without motion compensation. Using the motion-free simulation as the gold standard, motion compensation greatly increases the quantitative accuracy of the reconstruction in terms of the count profile (Fig. 5) and the root-mean-square error (RMSE) of the count distribution. Using a region of interest (ROI) containing the entire heart, the RMSE is reduced from 1.68 without motion compensation to 0.233 with motion compensation, each in units of counts/time interval per view/voxel.

Fig. 4. The MCAT simulations. (Left) A trans-axial slice from reconstruction of the motion-free projection data. (Middle) The same slice from reconstruction of the motion-present data without motion compensation. (Right) The same slice from reconstruction of the motion-present data with motion compensation. On each slice a line is shown, along which the count profile was obtained for comparison in Fig. 5.

Fig. 5. The count profiles along the line shown in Fig. 4 are plotted from lower right to upper left for the motion-free data (dotted line), the motion-present data without motion compensation (dashed line), and the motion-present data with motion compensation (solid line).

Compensations for attenuation and distance-dependent blurring were included in all reconstructions. Each reconstruction used 10 iterations of the OSEM algorithm with 15 subsets of 4 projections per subset. Only 180 degrees (from 135 to 315 degrees) of the 204-degree acquisition were used in reconstruction. Each reconstruction was post-smoothed by a Gaussian filter with a 1-pixel sigma. Compensation for the motion slows reconstruction by approximately a factor of two. On a 2.8-GHz Xeon computer running Linux, the time to reconstruct a set of 128 slices of a 128 × 128 image using 60 projections over 180 degrees increased from 29.7 seconds per iteration with the 2D Gaussian rotator to 69.5 seconds per iteration with correction for the 3D motion relative to the initial location.

B. Motion Compensation for the Acquisition of the Anthropomorphic Phantom

As described in Section II-C of the Methods, we performed four emission measurements with the anthropomorphic phantom at different positions and/or orientations, which were recorded by the Polaris. By combining the projection data, we generated an acquisition of the anthropomorphic phantom whose motion, as detected by the Polaris and compared with the original orientation, was as follows:

  • (1) For the first 8 angular steps, no motion;

  • (2) Before acquiring the next angular step, the phantom moved axially by −6.3 cm and remained stationary during acquisition of the next 8 steps;

  • (3) Before acquiring the next angular step, the phantom rotated around the vertical axis by 5.9 degrees and remained stationary during acquisition of the next 8 steps; and

  • (4) Before acquiring the next angular step, the phantom rotated around the horizontal axis (left to right on the patient) by −15.6 degrees and remained stationary during acquisition of the last 10 steps of the gantry rotation.

There were also sub-pixel translations, not listed above, in steps (3) and (4). For rigid-body motion consisting of simultaneous translation and rotation, the translation generally depends on the reference point chosen as the origin of the co-moving coordinate system, which is a standard approach to describing rigid-body motion. In our case the sub-pixel translations were measured at the center-of-mass of the four markers and were also included in the motion compensation. Using the OSEM algorithm incorporating 3D Gaussian interpolation, we reconstructed the motion-present data with compensations for the motion, attenuation, detector resolution, and scatter. Scatter correction was performed by the TEW method [25]. For comparison, we also reconstructed the motion-free data and the motion-present data without motion compensation. Scatter compensation (SC), attenuation compensation (AC), and detector resolution compensation (DRC) were also included in the latter two cases. A trans-axial slice from each reconstruction is shown in Figure 6. Severe motion artifacts are present in the reconstruction of the motion-present data without motion compensation. The motion-compensated reconstruction of the same data is almost identical to the reconstruction of the motion-free data.

Fig. 6. (Left) Reconstruction of the motion-free projection data of the anthropomorphic phantom. (Middle) The same trans-axial slice from reconstruction of the motion-present projection data without motion compensation. (Right) The same trans-axial slice from reconstruction of the motion-present projection data with motion compensation. On each slice a line is shown, along which the count profile of Fig. 7 was obtained.

Using the motion-free acquisition as the gold standard, motion compensation greatly increases the quantitative accuracy of the reconstruction in terms of the count profile (Fig. 7) and the RMSE of the activity concentration. Using a region of interest (ROI) containing the entire heart, the root-mean-square error is reduced from 3.48 without motion compensation to 0.683 with motion compensation, each in units of counts/time interval per view/voxel. The trans-axial images from the reconstructions were reoriented into short-axis, vertical-long-axis, and horizontal-long-axis images, which are shown in Figure 8. Due to the exaggerated motion in the motion-present data, the reconstruction without motion compensation shows severe artifacts. The motion-compensated reconstruction shows no obvious artifact.

Fig. 7. The count profiles along the lines shown in Fig. 6 are plotted from lower right to upper left for the motion-free data (dotted line), the motion-present data without motion compensation (dashed line), and the motion-present data with motion compensation (solid line).

Fig. 8. The short-axis (left), vertical-long-axis (middle), and horizontal-long-axis (right) images reoriented from the trans-axial images of Fig. 6. From top to bottom are images for the motion-free case (top), the motion-present case without motion compensation (middle), and the motion-present case with motion compensation (bottom). Due to the exaggerated motion in the motion-present data, the reconstruction without motion compensation shows severe artifacts, whereas the motion-compensated reconstruction shows no obvious artifact.

IV. Discussion

Motion tracking using the Polaris is described in more detail in Beach et al. [16]. The Polaris system has a manufacturer-stated accuracy of 0.35 mm, which is much finer than the resolution of SPECT (approximately 12 mm) or PET (approximately 4-6 mm). In experiments detailed elsewhere [16] we determined the positional accuracy of the Polaris to be in the range stated by the manufacturer. Thus, the small errors in the Polaris determination of the motion of the phantom should not greatly affect the motion compensation herein. Other external tracking systems, such as the optically based visual-tracking system (VTS) [26] we are developing, can also provide sufficient accuracy to facilitate motion compensation. Recently we also proposed estimating the rigid-body motion directly from a time series of three-dimensional images, which could be available in PET or gated SPECT studies, using mathematically defined landmarks in the images (so-called "generalized center-of-mass points") [27]. The rigid-body motion derived from the images could be compensated for in reconstruction in the same way as the motion measured by the Polaris.

In this paper we have investigated the correction of rigid-body motion. Other forms of motion, such as respiratory motion of the heart or other structures, could be compensated for by our proposed method if the motion is known. However, external markers may not exactly reflect the motion of the internal organs. Thus, in our opinion, considerable work still needs to be done to robustly determine the respiratory motion of internal organs such as the heart from external markers.

Though our investigations employed SPECT, the 3D Gaussian interpolation can also be applied to motion compensation for PET. In principle other interpolations, such as tri-linear and cubic-spline interpolation, could also be employed. The advantage of the 3D Gaussian interpolation over these is that the 3D Gaussian causes a uniform 3D blurring of the interpolated object, while the blurring from the other interpolations is spatially variant, which may lead to reconstruction artifacts [20]. The speed of the 3D Gaussian interpolation is between that of tri-linear and cubic-spline interpolation.

We implemented the 3D Gaussian interpolation as a forward interpolation, which pushes the function value at each voxel center in the template object forward into the target object (the smoothing is applied in the target object). Conversely, a 3D Gaussian inverse interpolation pulls the function value at each voxel center in the target object back from the template object (the smoothing is applied in the template image). The 3D Gaussian inverse interpolation is expected to work as well as the forward version, since the two are equivalent for describing rigid-body motion. The transformation matrix associated with rigid-body motion has a unit determinant, which implies that the interpolation will change neither the sampling density nor the local function value. The difference between the 3D Gaussian forward interpolation and the 3D Gaussian inverse interpolation is analogous to the difference between ray-driven and pixel-driven projectors. We also implemented the 3D Gaussian inverse interpolation and determined that it works as well as the forward interpolation in correcting for rigid-body motion. The major reason we concentrated on the forward interpolation herein is that the conventional 2D Gaussian rotator is usually implemented as a forward interpolation [20]. For more general motions which are not rigid-body motions (such as cardiac motion), the determinant of the transformation matrix is not unity. The 3D Gaussian forward interpolation does not apply to these cases, since it will change the local function value due to over-sampling or under-sampling. To compensate for a known non-rigid-body motion, the 3D Gaussian inverse interpolation might be used, since it always preserves the local function value.
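For contrast with the push version sketched in the Methods, a minimal pull (inverse) interpolation might look like the following; point-sampled Gaussian weights are used here for brevity rather than the area weights of Fig. 2, and the array-origin rotation convention is again our simplification.

```python
import numpy as np

SIGMA = 0.5  # voxels

def pull_back(template, R, t):
    """Inverse (pull) 3D Gaussian interpolation: each target value is gathered from the template.

    For each target voxel center x_m, the pre-motion location x = R^T (x_m - t)
    is found in the template (Eq. (3)) and the value is gathered from the 3x3x3
    neighborhood with normalized Gaussian weights, so the local function value
    is preserved.
    """
    target = np.zeros_like(template, dtype=float)
    shape = np.array(template.shape)
    for idx in np.ndindex(template.shape):
        x = R.T @ (np.array(idx) - t)               # pre-motion coordinates
        base = np.rint(x).astype(int)
        if np.any(base < 1) or np.any(base > shape - 2):
            continue                                # skip borders for simplicity
        val, wsum = 0.0, 0.0
        for off in np.ndindex(3, 3, 3):
            j = base + np.array(off) - 1
            w = np.exp(-0.5 * np.sum((j - x) ** 2) / SIGMA ** 2)
            val += w * template[tuple(j)]
            wsum += w
        target[idx] = val / wsum
    return target
```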

V. Conclusion

We developed and validated a method of incorporating 3D Gaussian interpolation in the projector/backprojector pair to facilitate 6DOF motion compensation, as well as compensation for attenuation and distance-dependent blurring, in SPECT.

Acknowledgments

This work was supported by the National Institute for Biomedical Imaging and Bioengineering grant R01 EB001457, and by grant HL50349 from the National Heart, Lung, and Blood Institute. The contents are solely the responsibility of the authors and do not necessarily represent the official views of the National Institutes of Health.

VI. References

  • 1. Botvinick EH, Zhu YY, O'Connell WJ, Dae MW. A quantitative assessment of patient motion and its effect on myocardial perfusion SPECT images. J Nucl Med. 1993 Feb;34(2):303–310.
  • 2. Prigent FM, Hyun M, Berman DS, Rozanski A. Effect of motion on thallium-201 SPECT studies: a simulation and clinical study. J Nucl Med. 1993 Nov;34(11):1845–1850.
  • 3. Cooper JA, Neumann PH, McCandless BK. Effect of patient motion on tomographic myocardial perfusion imaging. J Nucl Med. 1992 Aug;33(8):1566–1571.
  • 4. Bai C, Conwell R. A Systematic Simulation Study of the Effects of Patient Motion on Cardiac Perfusion Imaging Using Single Photon Emission Computed Tomography. Society of Nuclear Medicine 52nd Annual Meeting; Toronto, Canada; June 18–22, 2005; abstract.
  • 5. Eisner R, Churchwell A, Noever T, Nowak D, Cloninger K, Dunn D, et al. Quantitative analysis of the tomographic thallium-201 myocardial bullseye display: critical role of correcting for patient motion. J Nucl Med. 1988 Jan;29(1):91–97.
  • 6. O'Connor MK, Kanal KM, Gebhard MW, Rossman PJ. Comparison of four motion correction techniques in SPECT imaging of the heart: a cardiac phantom study. J Nucl Med. 1998 Dec;39(12):2027–2034.
  • 7. Leslie WD, Dupont JO, McDonald D, Peterdy AE. Comparison of motion correction algorithms for cardiac SPECT. J Nucl Med. 1997 May;38(5):785–790.
  • 8. Huang SC, Yu DC. Capability evaluation of a sinogram error detection and correction method in computed tomography. IEEE Trans Nucl Sci. 1992;39(4):1106–1110.
  • 9. Arata LK, Pretorius PH, King MA. Correction of organ motion in SPECT using reprojection data. Proceedings of the 1995 IEEE Nuclear Science Symposium; 1996. pp. 1456–1460.
  • 10. Lee KJ, Barber DC. Use of forward projection to correct patient motion during SPECT imaging. Phys Med Biol. 1998 Jan;43(1):171–187. doi: 10.1088/0031-9155/43/1/011.
  • 11. Matsumoto N, Berman DS, Kavanagh PB, Gerlach J, Hayes SW, Lewin HC, et al. Quantitative assessment of motion artifacts and validation of a new motion-correction program for myocardial perfusion SPECT. J Nucl Med. 2001 May;42(5):687–694.
  • 12. Kyme AZ, Hutton BF, Hatton RL, Skerrett DW, Barden LR. Practical aspects of a data-driven motion correction approach for brain SPECT. IEEE Trans Med Imag. 2003;22(6):722–729. doi: 10.1109/TMI.2003.814790.
  • 13. Fulton RR, Eberl S, Meikle SR, Hutton BF, Braun M. A practical 3D tomographic method for correcting patient head motion in clinical SPECT. IEEE Trans Nucl Sci. 1999;46(3):667–672.
  • 14. Lopresti BJ, Russo A, Jones WF, Fisher T, Crouch DG, Altenburger DE, et al. Implementation and performance of an optical motion tracking system for high resolution brain PET imaging. IEEE Trans Nucl Sci. 1999;46(6):2059–2067.
  • 15. Fulton RR, Meikle SR, Eberl S, Pfeiffer J, Constable RT, Fulham MJ. Correction for head movements in positron emission tomography using an optical motion-tracking system. IEEE Trans Nucl Sci. 2002;49(1):116–123.
  • 16. Beach RD, Pretorius PH, Boening G, Bruyant PP, Feng B, Fulton RR, Gennert MA, Nadella S, King MA. Feasibility of stereo-infrared tracking to monitor patient motion during cardiac SPECT imaging. IEEE Trans Nucl Sci. 2004;51:2693–2698. doi: 10.1109/TNS.2004.835786.
  • 17. Boening G, Bruyant PP, Beach RD, Byrne CL, King MA. Motion correction for cardiac SPECT using a RBI-ML partial-reconstruction approach. Proceedings of the 2004 IEEE Medical Imaging Conference; M4-6, in press.
  • 18. Fulton RR, Hutton BF, Braun M, Ardekani B, Larkin RS. Use of 3D reconstruction to correct for patient motion in SPECT. Phys Med Biol. 1994;39:563–574. doi: 10.1088/0031-9155/39/3/018.
  • 19. Hutton BF, Kyme AZ, Lau YH, Skerrett DW, Fulton RR. A hybrid 3-D reconstruction/registration algorithm for correction of head motion in emission tomography. IEEE Trans Nucl Sci. 2002;49(1):188–194.
  • 20. Wallis JW, Miller TR. An optimal rotator for iterative reconstruction. IEEE Trans Med Imag. 1997;16:118–123. doi: 10.1109/42.552061.
  • 21. McCarthy AW, Miller MI. Maximum likelihood SPECT in clinical computation times using mesh-connected parallel computers. IEEE Trans Med Imag. 1991;10:426–436. doi: 10.1109/42.97593.
  • 22. Hudson HM, Larkin RS. Accelerated image reconstruction using ordered subsets of projection data. IEEE Trans Med Imag. 1994;13(6):601–609. doi: 10.1109/42.363108.
  • 23. Pretorius PH, King MA, Tsui BMW, LaCroix KJ, Xia W. A mathematical model of motion of the heart for use in generating source and attenuation maps for simulating emission imaging. Med Phys. 1999;26:2323–2332. doi: 10.1118/1.598746.
  • 24. Gagnon D, Tung CH, Zeng GL, Hawkins W. Design and early testing of a new medium-energy transmission device for attenuation correction in SPECT and PET. Conference Record of the 1999 IEEE Nuclear Science Symposium; Seattle, WA; Oct. 24–30, 1999.
  • 25. Ogawa K, Ichihara T, Kubo A. Accurate scatter correction in single photon emission CT. Ann Nucl Med Sci. 1994;7:145–150.
  • 26. Bruyant PP, Gennert MA, Speckert GC, Beach RD, Morgenstern JD, Kumar N, Nadella S, King MA. A robust visual tracking system for patient motion detection in SPECT: Hardware solutions. IEEE Trans Nucl Sci. in press. doi: 10.1109/TNS.2005.858208.
  • 27. Feng B, Bruyant PP, Pretorius PH, Beach RD, Gifford HC, Dey J, Gennert M, King MA. Estimation of the rigid-body motion from three-dimensional images using a generalized center-of-mass points approach. Proceedings of the 2005 IEEE Medical Imaging Conference; M7-248, in press.
