Abstract
Purpose:
Head motion during PET brain imaging can cause significant degradation of image quality. Several authors have proposed ways to compensate for PET brain motion to restore image quality and improve quantitation. Head restraints can reduce movement but are unreliable; thus the need for alternative strategies such as data-driven motion estimation or external motion tracking. Herein, the authors present a data-driven motion estimation method using a preprocessing technique that allows the usage of very short duration frames, thus reducing the intraframe motion problem commonly observed in the multiple frame acquisition method.
Methods:
The list mode data for PET acquisition is uniformly divided into 5-s frames and images are reconstructed without attenuation correction. Interframe motion is estimated using a 3D multiresolution registration algorithm and subsequently compensated for. For this study, the authors used 8 PET brain studies that used F-18 FDG as the tracer and contained minor or no initial motion. After reconstruction and prior to motion estimation, known motion was introduced to each frame to simulate head motion during a PET acquisition. To investigate the trade-off in motion estimation and compensation with respect to frames of different length, the authors summed 5-s frames accordingly to produce 10 and 60 s frames. Summed images generated from the motion-compensated reconstructed frames were then compared to the original PET image reconstruction without motion compensation.
Results:
The authors found that their method is able to compensate for both gradual and step-like motions using frame times as short as 5 s with a spatial accuracy of 0.2 mm on average. Complex volunteer motion involving all six degrees of freedom was estimated with lower accuracy (0.3 mm on average) than the other types investigated. Preprocessing of the 5-s images was necessary for successful image registration. Since the method utilizes nonattenuation corrected frames, it is not susceptible to motion introduced between the CT and PET acquisitions.
Conclusions:
The authors have shown that they can estimate motion for frames with time intervals as short as 5 s using nonattenuation corrected reconstructed FDG PET brain images. Intraframe motion in 60-s frames degrades accuracy to about 2 mm, depending on the motion type.
Keywords: PET, motion tracking, motion compensation, registration, reconstruction
1. INTRODUCTION
PET brain imaging is negatively impacted by head motion during PET imaging as well as between CT and PET imaging. The loss of resolution and misalignment of the CT attenuation map due to motion can cause distortions affecting the image quality and quantitation of PET images. In healthy volunteers undergoing simulated imaging and in Parkinson’s disease (PD) patients undergoing PET imaging, continuous or drifting motions, repetitive motions about a mean position, and large step motions that resulted in a sustained change in position have been observed.1 In that study, long drift motions measured throughout healthy subject scans correlated with the subject falling asleep, whereas for PD subjects these motions often reflected a commonly observed tendency to pull to one side. Drifting motions of up to 13 mm were observed for one PD subject. Rotations of up to 3.0° were present in several PD data sets. In one case, brain drifts of up to 6 mm were measured in the striatum, and up to 15 mm in the occipital cortex region. In another study, about 15% of 500 2-h brain scans, each divided into 24 5-min frames, had more than 3 mm of intraframe motion.2 Such motion occurs despite head restraints intended to limit motion,3–5 with which typical translations in the range of 2–20 mm and rotations of 1°–4° were observed, depending on the type of mask and the duration of the scan. Note that the actual impact of motion due to rotations depends on the choice of the rotation center.
A number of authors have proposed methods to estimate and compensate for such head motion during PET brain imaging.6–12 Motion estimation can be broadly grouped into external-tracking based and data-driven methods. External tracking utilizing an electro-mechanical system was used in Green et al.;13 however, by far the most commonly used method to track motion is with infrared stereo cameras by affixing passive reflective markers1,3,4,6,7,10,11,14–21 or active markers that emit light22 to the head of the patient. Recently, researchers have begun investigating the use of structured light cameras, such as the Microsoft Kinect23 or other devices5,9,24 which can be used to track the surface of the head without the need for markers. Data-driven methods which estimate motion from temporal frames of reconstructed PET data using registration to a reference frame2,22,25–28 have also been reported. These methods have the advantage that no external equipment is necessary and motion compensation can be applied retrospectively. Nonattenuation corrected frames were used in Andersson et al.28 to avoid registration errors due to motion that may have occurred between PET emission and corresponding attenuation scan. Fulton et al.14 and Rahmim et al.29 provide extensive reviews of the various methods of motion tracking and compensation in PET.
In this work we report on a data-driven motion estimation strategy for PET brain imaging which also uses nonattenuation corrected images reconstructed from clinical patient list-mode PET data to avoid errors associated with misaligned CT images.2 The list-mode data are first divided into frames of 5 s, which are then reconstructed to produce a sequence of 3D nonattenuation corrected PET brain images. In this paper we use the terms “reconstructed frames” and “images” interchangeably. The images are then preprocessed (details discussed in Sec. 2.C) and spatially registered using an automated image registration algorithm30 to the first image in the sequence to estimate the motion relative to it. In the absence of prior knowledge about motion, we chose fixed 5-s frames for data-driven motion estimation in PET. In prior work, the multiple acquisition frame (MAF) approach refers to the division of PET data into multiple frames based on a motion threshold and compensation of motion by alignment and summation of reconstructed frames. Unlike MAF approaches,3,4,15,22,31,32 we derive the motion estimates from the PET data itself using image registration. Methods similar to ours have been proposed previously.2,25,26 However, none of these authors reported motion estimation using image registration from frames as short as 5 s. Jin et al.2 used 5-min frames; Chen et al.25 used frame sizes of 2 s to 5 min, but did not report the registration error for the first 22 s of data, which consisted of 2-s frames. Costes et al.26 used 2-min and longer frame sizes with intraframe motion occurring at least 30 s apart. One major argument against MAF has been the inability to compensate for intraframe motion due to the use of frame sizes of 30 s or longer.26 Jin et al.32 reported on a MAF-based method using a minimum frame size of 2–4 s, but the motion itself was estimated with an external tracking system.
Herein, our objective is to investigate the tradeoffs in data-driven motion estimation using frame sizes as small as 5 s. Short frames have the potential to increase the accuracy of motion estimation, but they also produce noisier images because of their poorer counting statistics compared to longer frames. Furthermore, our method can be used to compensate for misalignment errors in CT-based attenuation correction by aligning all other PET reconstructed frames to a reference image that is closest to the CT image, followed by estimating the motion between the reference and the CT image.
2. MATERIALS AND METHODS
2.A. Patient data
We developed and tested our motion estimation method using eight anonymized patient list-mode datasets which we obtained with the approval of the Institutional Review Board (IRB). The selected studies had very little intrinsic motion, as confirmed through visual assessment by an expert physician observer. All the studies used F-18 FDG as the PET tracer, and the purpose of imaging was to assess regional glucose metabolism in the brain. The injected dose was between 200 and 555 MBq (328 ± 126 MBq) and the time of injection was between 0.9 and 1.5 h before acquisition. All patient studies used a CT-based attenuation map which was acquired at the beginning of the study on a Philips Ingenuity TOF PET/CT system. Herein, it is assumed that these attenuation maps were aligned with the PET data at the start of the 600-s acquisition.
Our motion estimation and compensation methodology is schematically illustrated in Fig. 1. Each 10-min-long patient list-mode dataset was subdivided into 120 5-s time intervals (called frames) and reconstructed into 3D images. All processing (motion estimation and compensation) was performed on these images. Reconstruction was performed using PET/CT Ingenuity TF reconstruction software (version 4.0.1) provided by Philips Healthcare. The parameters of the protocols used for reconstructing these images were those provided to customers for use on the Ingenuity TOF PET/CT clinical systems. The reconstruction used the RAMLA 3D algorithm33 (3 iterations, 33 subsets, and smoothing parameter set to normal) without attenuation correction, producing reconstructed frames with 128 × 128 × 90 voxels of 2 × 2 × 2 mm3 volume.
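The division into fixed 5-s frames was performed within the vendor reconstruction software; the sketch below only illustrates the idea of binning time-stamped events into fixed-duration frames. The event structure, field names, and the frame_duration_s parameter are our own illustrative assumptions and do not reflect the Philips list-mode format.

```python
from collections import defaultdict

def bin_listmode_events(events, frame_duration_s=5.0):
    """Group time-stamped list-mode events into fixed-duration frames.

    `events` is assumed to be an iterable of (timestamp_s, payload) tuples;
    the actual list-mode format differs and is handled by the vendor software.
    """
    frames = defaultdict(list)
    for timestamp_s, payload in events:
        frame_index = int(timestamp_s // frame_duration_s)  # 0 for 0-5 s, 1 for 5-10 s, ...
        frames[frame_index].append(payload)
    # Return frames ordered by index; each group would then be reconstructed separately.
    return [frames[k] for k in sorted(frames)]
```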
FIG. 1.
Schematic diagram of our motion estimation and compensation pipeline. Our method divides PET list-mode acquisition data into a number of time frames (N) which are reconstructed. In our test strategy we apply artificial motion (box with dashed outline) to the images reconstructed with and without attenuation. We then use preprocessing and image registration to estimate motion relative to the first image (reference). The resulting N − 1 transforms are used to move N − 1 images to the pose of the reference image and all N images are then summed to produce a single motion compensated image.
2.B. Motion simulation
To simulate head motion during acquisition, we artificially added motion to the patient data. This was done by applying six-degree-of-freedom (6-DOF) rigid-body transforms to the 5-s images of each patient as shown with a box with dashed outline in Fig. 1. Therefore, our motion simulation only accounts for interframe motion at a granularity of 5 s.
Four different motions were selected based on movements reported clinically.1–4 The first was a Step motion, simulated as sudden movements. This motion was applied with a maximum extent of ∼13 mm in the axial direction (Z-axis) and about 4 mm in the lateral direction (X-axis). A small rotation of up to 6° about the X-axis was also added. The rotation center was fixed at the center of the first 5-s reconstructed frame. The second was a Gradual motion, applied as a slow drift along the lateral direction with a maximum extent of 10 mm. This transform is intended to simulate the motion of the head that can be associated with patients falling asleep and is similar to that in Olesen et al.24 Gradual drifts, not necessarily associated with patients falling asleep, have been widely reported.1,5,7,27 The third motion was that of an actual Volunteer. This motion was obtained using a marker-based visual tracking system34 that recorded the head movement of a human volunteer who was coached to move while lying in the gantry of a SPECT/CT system where the motion tracking system was installed. This movement consisted of a complex set of small and large motions with a maximum extent of 3 cm. The fourth set, called Baseline, was without artificially applied motion and contained only the pre-existing motion in the patient datasets. Intraframe motion was also simulated by summing the moved 5-s frames accordingly to obtain frames with longer time intervals, i.e., 10 and 60 s. Thus, for each of the eight patient datasets, we simulated three different frame sizes with four different motion transforms, resulting in 96 motion datasets.
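To make the simulation concrete, the sketch below shows one way the applied motion could be realized on reconstructed frames using SimpleITK. This is a minimal sketch under our own assumptions (the apply_rigid_motion helper, the file names, and the linear 10-mm drift schedule are illustrative), not the exact code used in this work.

```python
import numpy as np
import SimpleITK as sitk

def apply_rigid_motion(frame, rotation_deg, translation_mm, center_mm):
    """Move the content of one reconstructed frame by a 6-DOF rigid transform."""
    tfm = sitk.Euler3DTransform()
    tfm.SetCenter(center_mm)                              # fixed rotation center
    tfm.SetRotation(*np.deg2rad(rotation_deg).tolist())   # (rx, ry, rz) in radians
    tfm.SetTranslation(tuple(float(t) for t in translation_mm))
    # Resample maps output points back into the input frame, so the inverse
    # transform is used to move the head *by* the specified motion.
    return sitk.Resample(frame, frame, tfm.GetInverse(), sitk.sitkLinear, 0.0)

# Illustrative "Gradual" drift: 10 mm lateral drift spread linearly over 120 5-s frames.
frames = [sitk.ReadImage(f"frame_{i:03d}.nii.gz") for i in range(120)]  # hypothetical file names
center = frames[0].TransformContinuousIndexToPhysicalPoint((63.5, 63.5, 44.5))
moved = [apply_rigid_motion(f, (0.0, 0.0, 0.0), (10.0 * i / 119, 0.0, 0.0), center)
         for i, f in enumerate(frames)]

def sum_group(imgs):
    """Sum a group of moved 5-s frames to emulate intraframe motion in a longer frame."""
    out = imgs[0]
    for im in imgs[1:]:
        out = sitk.Add(out, im)
    return out

frames_10s = [sum_group(moved[i:i + 2])  for i in range(0, 120, 2)]
frames_60s = [sum_group(moved[i:i + 12]) for i in range(0, 120, 12)]
```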
2.C. Preprocessing steps
As shown in Fig. 1, the images were preprocessed before image registration to suppress noise and enhance the outer edges of the brain. Our preprocessing steps consisted of Gaussian smoothing coupled with a median filter, followed by a nonlinear gamma correction to enhance the contrast of the edges. Together, these operations smoothed the reconstructed frame while preserving the edges needed for image registration. All the preprocessing filters were 2D and therefore applied to the volume slice-by-slice. Gaussian smoothing with σ = 2 pixels was applied for noise reduction. A median filter with a wide square neighborhood (17 × 17 pixels) was applied for additional smoothing while preserving the edges of major structures. This smoothing degraded the contrast, which was regained using a gamma power function35 with an exponent of γ = 2.5 (γ > 1 signifies an expansion and γ < 1 signifies a compression). The gamma power function raises the normalized intensity of each pixel P to the power of γ and scales the result by the maximum voxel value M. Thus, the new pixel value was obtained as Pnew = M (P/M)^γ.
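This preprocessing chain can be written compactly with standard filters. Below is a minimal sketch using SciPy, assuming the volume is a NumPy array with the slice index on the first axis; the function name preprocess_volume is ours, and the default parameter values simply restate the settings given above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def preprocess_volume(vol, sigma=2.0, median_size=17, gamma=2.5):
    """Slice-by-slice preprocessing: Gaussian + median smoothing, then gamma contrast expansion."""
    out = np.empty(vol.shape, dtype=np.float64)
    for k in range(vol.shape[0]):                  # axis 0 is treated as the slice axis
        s = gaussian_filter(vol[k].astype(np.float64), sigma=sigma)
        s = median_filter(s, size=median_size)     # 17 x 17 neighborhood
        m = s.max()
        if m > 0:
            s = m * (s / m) ** gamma               # P_new = M * (P / M)^gamma
        out[k] = s
    return out
```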
An illustration of the alteration of the slices by our preprocessing method is shown in Fig. 2, where the two rows show the same slice from two different 5-s reconstructed frames of patient data. In each row, the first image on the left is the original reconstructed slice with pronounced noise, the second is after Gaussian filtering, the third after the additional median filtering, and the last image was obtained after gamma correction. Line profiles of all four images are plotted together on the right. The location of the outer edge of the brain in these profiles is indicated by red arrows. Note that for both slices the edge after preprocessing remains close to that in the unprocessed image, while other variations in the profile are suppressed.
FIG. 2.
Effect of our preprocessing method on 5-s reconstructed frames. The two rows show the same slice from two randomly selected 5-s images in a patient with small motion. In each row, the first image on the left is the reconstructed slice showing the level of noise in the frames, the second is after Gaussian filtering, the third after additional median filtering, and the last image is obtained after gamma correction. The line profiles of all four images are plotted together on the right for each row. The line profiles for the unprocessed slices are shown with dotted lines; dashed lines show the Gaussian filtered profiles, the dashed-dotted lines show the median filtered profiles, and the final gamma-corrected profiles are shown in bold black lines. The outer edge of the distribution is indicated with red arrows. Note that the preprocessing sequence keeps the outer edge intact.
Figure 3 further illustrates the stability of the outer edge of the brain in 5-s PET images after preprocessing. The images shown are edge maps obtained with the Sobel-operator-based extraction implemented in the image processing and analysis tool Fiji.36 These images are meant for illustration purposes only and were not used for image registration. The top two rows show the overlaid edges in five slices of three different preprocessed 5-s images selected from a patient, and the bottom row shows the same images after adding simulated motion. In all the images, the red, green, and blue color channels are used to overlay the three images. Therefore, the edges appear white where they overlap in all three images. The top row shows the edges after the first stage of preprocessing, i.e., Gaussian filtering, and the middle row shows the edges after the full preprocessing sequence illustrated in Fig. 2. From the top row, it is evident that the outer boundary of the count distribution indicated by the red arrows is consistent across images (due to its white color). Everywhere else, the edges do not perfectly overlap because of noise. Comparing the top and middle rows, it is evident that our preprocessing sequence improved the continuity and consistency of the outer edges compared to plain Gaussian smoothing. This presents as improved sharpness of the edges and brightness of the white color in the middle-row overlays. Additionally, the spurious edges in the background region indicated by green arrows in the top row are suppressed in the middle row due to the nonlinear histogram adjustment by the gamma function. Thus, gamma correction increased the consistency of the distribution across images when motion was small, facilitating image registration, which is based on maximizing mutual information. In the middle row, the orange arrows indicate better delineation of the ventricles in the brain after preprocessing, which are hard to identify in the top row. The bottom row shows the preprocessed slices from the same images after artificial motion was applied. The displacement of the outer edges is clearly seen as the red, green, and blue colors separate to different extents in proportion to the amount of motion.
FIG. 3.
Five different coronal slices (slices numbers are at the bottom) of the overlays of the edges from three 5-s images of a brain PET study. Edges were extracted using an edge enhancement operator. The top row shows edges extracted after Gaussian filtering only, middle row shows the same after our preprocessing, and the bottom row shows the edges after our preprocessing when artificial motion is applied to the images. The edges for each of the three images are displayed in a different color (red, blue, and green). Thus when there is overlap of all the images, the edge is displayed as white. Note that the outer boundary of the brain indicated by the red arrows is consistent across frames (in absence of artificial motion). The spurious edges in the background region indicated by green arrows in the top row are also suppressed by preprocessing in the middle row. When motion is applied, the displacement of the outer edges is clearly visible as the red, green, and blue colors separate to a different extent in proportion to the amount of motion. (See color online version.)
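The edge overlays of Fig. 3 were generated in Fiji; the following is only a rough re-creation of that illustration in Python (Sobel gradient magnitude placed into separate RGB channels), with the helper names being our own and not part of the original workflow.

```python
import numpy as np
from scipy.ndimage import sobel

def edge_map(slice2d):
    """Sobel gradient-magnitude edge map, normalized to [0, 1]."""
    gx = sobel(slice2d.astype(np.float64), axis=0)
    gy = sobel(slice2d.astype(np.float64), axis=1)
    mag = np.hypot(gx, gy)
    return mag / mag.max() if mag.max() > 0 else mag

def rgb_edge_overlay(slice_a, slice_b, slice_c):
    """Put the edges of three frames into the R, G, and B channels; overlap shows as white."""
    return np.stack([edge_map(slice_a), edge_map(slice_b), edge_map(slice_c)], axis=-1)
```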
2.D. Rigid-transform estimation
After preprocessing, the images were registered in 3D using the ITK multiresolution registration algorithm.37 In this algorithm, a multiresolution pyramid filter framework was used to progressively down-sample the image at successive levels. The transform parameters were estimated at the coarsest level by computing the best transform (in an optimization sense) that aligned one image onto the other and this estimate was then used to initialize the registration at the subsequent finer level. The registration method used a statistical mutual-information metric38 to align the images. Mutual information based registration was chosen so that the differences in reconstructed voxel values between images did not affect the registration process. This method has been previously shown to be effective for interframe registration methods.25 Image registration was performed using a regular step gradient-descent based optimization algorithm. The optimizer used 6-DOF rigid transform parameters, with 3-DOF for translations and 3-DOF for rotations. No scale parameters were applied or estimated as the images were from the same patient and a single modality. The center of rotation was fixed at the center of the reference image.
To avoid an accumulating error between successive interframe registrations, the transform was estimated using the first image in the sequence as a reference for registering subsequent images. We performed this operation for the three frame sets of different temporal duration (5, 10, and 60 s) and for every patient.
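A registration of this form can be expressed through SimpleITK, which wraps the ITK components named above (Mattes mutual information, regular step gradient descent, 6-DOF Euler 3D rigid transform, multiresolution pyramid). The sketch below is ours; the histogram bin count, step sizes, and pyramid schedule are illustrative assumptions rather than the exact values used in this study.

```python
import SimpleITK as sitk

def register_to_reference(reference, moving):
    """Estimate a 6-DOF rigid transform aligning `moving` to `reference` (both preprocessed)."""
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                                 minStep=1e-4,
                                                 numberOfIterations=200)
    reg.SetInterpolator(sitk.sitkLinear)
    # Multiresolution pyramid: coarse-to-fine registration.
    reg.SetShrinkFactorsPerLevel([4, 2, 1])
    reg.SetSmoothingSigmasPerLevel([2.0, 1.0, 0.0])
    # Rigid transform with the rotation center placed at the geometric center
    # of the reference (fixed) image.
    initial = sitk.CenteredTransformInitializer(reference, moving,
                                                sitk.Euler3DTransform(),
                                                sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(initial, inPlace=False)
    return reg.Execute(sitk.Cast(reference, sitk.sitkFloat32),
                       sitk.Cast(moving, sitk.sitkFloat32))

# Register every frame to the first (reference) frame to avoid accumulating error:
# transforms = [register_to_reference(frames_pp[0], f) for f in frames_pp[1:]]
```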
2.E. Motion compensation
For motion compensation, the rigid-transform estimates determined in Sec. 2.D were used to spatially transform each image in 3D. The transforms estimated by the ITK registration were of the following form:
x′i = Ri(xi − c) + c + ti,  (1)

where xi is the position of a point in the ith frame and x′i is its compensated position, i.e., transformed to align with the reference image; the rotation is specified by the 3 × 3 matrix Ri and the translation is given by the 3 × 1 vector ti. The transform was estimated relative to the fixed center c, i.e., [63.5, 63.5, 44.5] voxels. Therefore, x′i was obtained as

x′i = Rixi + oi,  with oi = ti + c − Ric,  (2)

where Ri and oi constitute the matrix and offset, respectively, that are used to specify the ITK transform for motion compensation. We resampled each image using a linear interpolation scheme in ITK. Once all the images were spatially aligned, we summed them to generate a single motion compensated image.
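Equations (1) and (2) and the compensation step translate directly into code. The following sketch (our own helper names, SimpleITK for resampling) shows the matrix/offset conversion and the resample-and-sum operation, assuming transforms[i] maps points of the reference image into frame i + 1 as returned by the registration described above.

```python
import numpy as np
import SimpleITK as sitk

def matrix_offset(R, t, c):
    """Convert x' = R (x - c) + c + t into ITK matrix/offset form x' = R x + o."""
    R, t, c = np.asarray(R, float), np.asarray(t, float), np.asarray(c, float)
    return R, t + c - R @ c          # offset o = t + c - R c, as in Eq. (2)

def compensate_and_sum(frames, transforms):
    """Resample each frame to the pose of the reference (first) frame and sum all frames."""
    reference = frames[0]
    summed = sitk.Image(reference)   # the reference frame needs no compensation
    for frame, tfm in zip(frames[1:], transforms):
        moved = sitk.Resample(frame, reference, tfm, sitk.sitkLinear, 0.0)
        summed = sitk.Add(summed, moved)
    return summed
```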
2.F. Assessment of accuracy of motion estimation and compensation
To assess the accuracy of our data-driven motion estimation method, we compared the estimated motion to the “ground truth.” This ground truth was in fact an approximation since the motion between images was a combination of the applied motion and the motion already present in the patient data, for which we only had an estimate. Registration of the no-motion-added baseline images provided this estimate. We compared the ground truth and estimated transforms by applying them both to a fixed point in 3D space and plotting its displacement along X, Y, and Z axes over time. This point was selected as the centroid of the first 5-s baseline image. The ground truth transform was applied as
xGT,i = TGT,i(x0),  (3)

where xGT,i is the position of the selected point x0 in the ith image when moved by the ground truth transform TGT,i. The ground truth transform, which is a combination of the baseline motion estimates and the applied motion, was obtained as

TGT,i = Tapplied,i ∘ (TBL,i)^−1,  (4)

where TBL,i is the baseline transform estimated by image registration and Tapplied,i is the true applied motion; i.e., the baseline transform was inverted and combined with the applied motion. The transforms estimated by image registration move the subsequent images to the pose of the reference image and therefore must be inverted in order to move a point in the reference image to the pose of the subsequent images. Thus, the displaced position xest,i of the same point in the ith image when moved by the estimated transform Test,i was obtained as

xest,i = (Test,i)^−1(x0).  (5)
For visual assessment of motion estimation, we plotted the pair of positions for the selected point in Fig. 4 for each of the four motion types and three frame durations in a patient study. We also measured the mean and variance of the Euclidean distance between the pair for all patients as a measure of motion estimation accuracy (Tables II and III). For quantitative assessment of the efficacy of the applied motion compensation, we transformed the reconstructed frames by the estimated transform and summed all the motion compensated images to obtain the registered transform compensated (RTC) image. We did the same using ground truth transformations which we will refer to as the ground truth corrected (GTC) image. We then computed the root-mean-squared difference (RMSD) of these images (RTC and GTC) with respect to the baseline transform compensated image (BTC). BTC was obtained by applying motion compensation to the 5-s baseline images using the motion estimated by image registration. BTC was thus expectedly compensated for pre-existing motion in the data. Using the BTC for RMSD computation allows the comparison of motion compensation accuracy between different motion types and frame sizes without the complicating factors of pre-existing motion or reconstruction bias which would be present if the full reconstruction were to be used. The difference images of RTC and GTC with respect to BTC for all motion datasets were also assessed visually.
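For completeness, the accuracy measures described above can be computed as in the sketch below. The function names are our own, and it is assumed that the ground truth and estimated transforms are available as SimpleITK transform objects and the compensated images as NumPy arrays.

```python
import numpy as np
import SimpleITK as sitk

def point_track(transforms, x0, invert=False):
    """Positions of the point x0 after moving it by each per-frame transform (Eqs. (3) and (5))."""
    pts = []
    for tfm in transforms:
        t = tfm.GetInverse() if invert else tfm
        pts.append(t.TransformPoint(tuple(float(v) for v in x0)))
    return np.asarray(pts)

def average_euclidean_distance(pts_gt, pts_est):
    """Mean and std of the Euclidean distance between ground-truth and estimated point tracks."""
    d = np.linalg.norm(pts_gt - pts_est, axis=1)
    return d.mean(), d.std()

def rmsd(image_a, image_b):
    """Root-mean-squared difference between two voxel arrays (e.g., RTC or GTC versus BTC)."""
    diff = np.asarray(image_a, float) - np.asarray(image_b, float)
    return float(np.sqrt(np.mean(diff ** 2)))

# Example: distance between the centroid moved by the ground truth transforms and by the
# estimated registration transforms (the latter are inverted, as in Eq. (5)).
# aed_mean, aed_std = average_euclidean_distance(point_track(gt_transforms, centroid),
#                                                point_track(est_transforms, centroid, invert=True))
```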
FIG. 4.
Change in position of the centroid of the 5-s reference image for estimated versus ground truth transforms for patient 1 in Table I. The first row at the top shows the displacement along X (lateral), second row along Y (vertical), and the third row along Z (axial) direction as estimated by registration of 5-, 10-, 60-s images, and as per the true displacement (ground truth). The columns show the same for each motion type simulated, i.e., Step, Gradual, and Volunteer motion. The Euclidean distance from ground truth is shown in the last row at the bottom. Registration of 5-s images produces the smallest distance error for all motion types.
TABLE II.
Motion estimation accuracy for ground truth versus estimated (mean ± std. in mm).
| Reconstructed frame size | ||||||||||||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Patient 1 | Patient 2 | Patient 3 | Patient 4 | Patient 5 | Overall (8 patients) | |||||||||||||
| Motion type | 5-s | 10-s | 60-s | 5-s | 10-s | 60-s | 5-s | 10-s | 60-s | 5-s | 10-s | 60-s | 5-s | 10-s | 60-s | 5-s | 10-s | 60-s |
| Step | 0.22 ± 0.18 | 0.42 ± 0.40 | 1.13 ± 2.18 | 0.09 ± 0.05 | 0.35 ± 0.50 | 1.39 ± 2.67 | 0.09 ± 0.06 | 0.38 ± 0.38 | 1.15 ± 2.18 | 0.22 ± 0.14 | 0.45 ± 0.41 | 1.19 ± 2.14 | 0.22 ± 0.17 | 0.46 ± 0.42 | 1.31 ± 2.36 | 0.20 ± 0.16 | 0.48 ± 0.42 | 1.30 ± 2.31 |
| Gradual | 0.23 ± 0.12 | 0.34 ± 0.13 | 0.60 ± 0.23 | 0.14 ± 0.05 | 0.20 ± 0.08 | 0.67 ± 0.32 | 0.05 ± 0.04 | 0.18 ± 0.08 | 0.53 ± 0.29 | 0.15 ± 0.07 | 0.23 ± 0.10 | 0.68 ± 0.30 | 0.11 ± 0.06 | 0.32 ± 0.14 | 0.65 ± 0.37 | 0.15 ± 0.10 | 0.30 ± 0.14 | 0.65 ± 0.31 |
| Volunteer | 0.35 ± 0.27 | 4.81 ± 1.01 | 2.18 ± 1.58 | 0.14 ± 0.10 | 4.31 ± 1.22 | 2.79 ± 1.76 | 0.19 ± 0.21 | 4.50 ± 0.97 | 2.11 ± 1.53 | 0.28 ± 0.19 | 4.80 ± 0.99 | 1.97 ± 1.59 | 0.23 ± 0.15 | 4.58 ± 1.07 | 2.26 ± 1.63 | 0.29 ± 0.25 | 4.70 ± 1.20 | 2.28 ± 1.67 |
TABLE III.
Motion estimation accuracy ground truth versus estimated (mean ± std. in mm) with and without preprocessing on 5-s reconstructed frames.
| Reconstructed frame size | ||||||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Patient 1 | Patient 2 | Patient 3 | Patient 4 | Patient 5 | Overall (8 patients) | |||||||
| Motion type | With PP | Without PP | With PP | Without PP | With PP | Without PP | With PP | Without PP | With PP | Without PP | With PP | Without PP |
| Step | 0.22 ± 0.18 | 0.53 ± 0.17 | 0.09 ± 0.05 | 0.31 ± 0.12 | 0.09 ± 0.06 | 0.46 ± 0.21 | 0.22 ± 0.14 | 0.69 ± 0.32 | 0.22 ± 0.17 | 3.75 ± 6.26 | 0.20 ± 0.16 | 2.80 ± 4.01 |
| Gradual | 0.23 ± 0.12 | 0.47 ± 0.15 | 0.14 ± 0.05 | 0.25 ± 0.12 | 0.05 ± 0.04 | 0.40 ± 0.18 | 0.15 ± 0.07 | 0.58 ± 0.24 | 0.11 ± 0.06 | 1.22 ± 1.46 | 0.15 ± 0.10 | 1.10 ± 1.25 |
| Volunteer | 0.35 ± 0.27 | 0.60 ± 0.24 | 0.14 ± 0.10 | 0.34 ± 0.17 | 0.19 ± 0.21 | 0.49 ± 0.29 | 0.28 ± 0.19 | 0.55 ± 0.21 | 0.23 ± 0.15 | 4.61 ± 6.98 | 0.29 ± 0.25 | 6.20 ± 5.50 |
3. RESULTS
3.A. Baseline motion estimation
We estimated the motion already present in the patient studies (also referred to as Baseline) by registering 5-s reconstructed frames to the first one in the sequence. As the time intervals are made shorter, the data become photon-limited and nonlinearity in the reconstruction may introduce bias, which could have an impact on our motion compensation methodology. To assess the degree of such nonlinearity in the reconstruction, we compared the sum of the reconstructed voxel values over the whole brain between the summed reconstructed (SR) frames and the full reconstructed (FR) image. We determined that the SR from the 5-s list-mode subsets was on average 3% lower in total voxel value than the FR.
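This linearity check amounts to comparing total voxel values; a short sketch (array names are ours) is given below.

```python
import numpy as np

def total_value_ratio(summed_5s_frames, full_reconstruction):
    """Ratio of the total voxel value of the summed 5-s reconstructions (SR) to the full reconstruction (FR)."""
    return float(np.sum(summed_5s_frames) / np.sum(full_reconstruction))

# A ratio of about 0.97 corresponds to the ~3% deficit observed here for the 5-s subsets.
```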
We also examined the impact of using larger frame sizes on motion estimation. The 10 and 60 s frames were created by summing the 5-s frames. Baseline motion estimates for the 5-s images are summarized in Table I with X being the lateral direction, Y the anterior/posterior direction, and Z being the superior/inferior direction. The pre-existing motion in these eight patients is small as seen from the values of the maximum Euclidean distance from the start of the acquisition (all <2 mm).
TABLE I.
Baseline motion estimated: standard deviation of position about the mean along X, Y, Z and the maximum Euclidean distance from the start of the acquisition in mm.
| Reconstructed frame size | |||||||||||||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Patient 1 | Patient 2 | Patient 3 | Patient 4 | Patient 5 | Overall (8 patients) | ||||||||||||||
| Motion type | 5-s | 10-s | 60-s | 5-s | 10-s | 60-s | 5-s | 10-s | 60-s | 5-s | 10-s | 60-s | 5-s | 10-s | 60-s | 5-s | 10-s | 60-s | |
| Baseline-no motion added | Std. X | ±0.22 | ±0.20 | ±0.18 | ±0.14 | ±0.13 | ±0.08 | ±0.20 | ±0.19 | ±0.19 | ±0.15 | ±0.11 | ±0.07 | ±0.32 | ±0.24 | ±0.16 | ±0.19 | ±0.16 | ±0.12 |
| Std. Y | ±0.13 | ±0.11 | ±0.07 | ±0.20 | ±0.17 | ±0.16 | ±0.31 | ±0.31 | ±0.31 | ±0.24 | ±0.20 | ±0.18 | ±0.32 | ±0.25 | ±0.10 | ±0.33 | ±0.30 | ±0.25 | |
| Std. Z | ±0.40 | ±0.39 | ±0.38 | ±0.33 | ±0.31 | ±0.30 | ±0.15 | ±0.13 | ±0.11 | ±0.47 | ±0.48 | ±0.44 | ±0.27 | ±0.24 | ±0.20 | ±0.40 | ±0.38 | ±0.35 | |
| Maximum Euclidean distance | 1.95 | 1.80 | 1.41 | 1.59 | 1.39 | 1.10 | 1.30 | 1.19 | 1.02 | 2.02 | 1.92 | 1.43 | 1.36 | 1.10 | 0.81 | 2.02 | 2.01 | 1.53 | |
The effect of frame size on motion estimation is also shown in Table I, where the standard deviation about the mean position is shown for 5-, 10-, and 60-s frames. The standard deviation is slightly higher for the 5-s frame-based estimation, in part due to increased noise in the estimates, but it may also indicate pre-existing motion that is estimated more accurately using short duration frames. In patient 3 the actual motion is very small, so most of the variance in the estimate using the 5-s frame size is due to noise. However, in patient 1 along the Z-direction, the higher standard deviation is due to the presence of a monotonic trend. This trend can be visualized in Fig. 4 (third row, middle column) as a displacement along the Z-axis for the case of Gradual motion that was simulated along the X-axis only. From Table I we can see that motion estimation using the 5-s temporal frame size was not disadvantaged by an increase in statistical variation compared to the longer frames. We also assessed the impact of larger frame sizes in Sec. 3.B using simulated motion datasets, with the ground truth transformation computed as in Sec. 2.F.
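The summary statistics reported in Table I can be derived directly from the estimated centroid positions; the sketch below uses our own function name and assumes positions is an N × 3 array of per-frame positions in mm.

```python
import numpy as np

def baseline_motion_summary(positions):
    """Std of position about the mean along X, Y, Z and the maximum Euclidean distance from the start."""
    positions = np.asarray(positions, float)   # shape (N, 3), one row per frame
    std_xyz = positions.std(axis=0)            # per-axis standard deviation about the mean
    max_dist = np.linalg.norm(positions - positions[0], axis=1).max()
    return std_xyz, float(max_dist)
```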
3.B. Accuracy of motion estimation versus time interval
Figure 4 shows plots of the spatial displacement of the centroid of the first 5-s baseline image in patient 1 when moved by the ground truth transformation (solid lines) as defined in Eq. (3); and the estimated transformation obtained by registration of 5-s (dotted lines), 10-s (dashed-dotted lines), and 60-s images (dashed lines).
As is evident, the motion estimated from 60-s images is rather coarse and has larger bias probably due to intraframe motion. The motion estimated from 5-s frames (which matches the frequency at which the motion was simulated) is the closest to the ground truth motion. Table II quantitatively summarizes the motion estimation accuracy for various motion types shown in Fig. 4 using the average Euclidean distance (AED) between the position of the centroid when moved by the ground truth and the estimated transforms.
Larger AED implies lower accuracy. The standard deviation of the Euclidean distance is also shown in Table II, with larger values indicating larger deviations of the estimates from ground truth during the imaging time. The AED was largest for the 60-s frame size due to intraframe motion for all motion types except the Volunteer motion. For the Volunteer motion type, the 10-s frame size had an AED of about 5 mm for all patients. This was caused by a reference image containing large uncompensated motion (∼10 mm). The reference 10-s image for the Volunteer motion type was formed by adding two 5-s images displaced by about 15 mm along the Z-direction. On one hand, this caused the centroid of the 10-s reference image to be displaced with respect to the 5-s reference image, seen in Fig. 4 as a bias in the Z-axis plot (right column, third row from top). On the other hand, using a reference image with motion artifacts resulted in underestimated motion. This scenario can be avoided in practice by ensuring, at a minimum, that the reference image is reconstructed from a frame with small intraframe motion. The maximum position error at the centroid was approximately 2 mm for the 60-s images for all motion types. Additionally, the larger standard deviation at the 60-s frame size indicates even larger position errors during the imaging time due to underestimation of motion. At the 5-s frame size, the position error for the centroid was lower than 0.6 mm for all motion types. Table III shows the motion estimation accuracy with and without preprocessing before image registration.
As is evident, the errors without preprocessing are larger in all patients, especially patients 5–8. This is due to complete failure of image registration caused by noise in some reconstructed frames, as shown in Fig. 5. Figure 5 is similar to Fig. 4 and shows the plots of the spatial displacement of the centroid of the first 5-s baseline image in patient 5 when moved by the ground truth transformation (solid lines) as defined in Eq. (3), and by the estimated transformation obtained by registration of 5-s images with (dotted lines) and without preprocessing (dashed lines). When no preprocessing is used, there are large deviations from ground truth, on the order of 1 cm or more, for some frames. This demonstrates the positive impact of preprocessing on the registration of 5-s images.
FIG. 5.
Change in position of the centroid of the 5-s reference image for motion estimated with and without preprocessing versus the ground truth transform for patient 5 in Table I. The first row at the top shows the displacement along X (lateral), second row along Y (vertical), and the third. row along Z (axial) direction as estimated by registration of 5-s images with preprocessing (dotted lines), without preprocessing (dashed lines), and as per the ground truth (solid lines). The columns show the same for each motion type simulated, i.e., Step, Gradual, and Volunteer motion. The Euclidean distance from ground truth is shown in the last row at the bottom. Registration of 5-s images without preprocessing fails completely due to noise for some frames as seen by the large error.
3.C. Motion compensation efficacy
For a quantitative assessment of the efficacy of the applied motion compensation, the RMSD between GTC and RTC images with BTC is shown in Table IV for each patient, interval, and motion type.
TABLE IV.
RMSD between motion-corrected and baseline transform corrected images from 5-s reconstructed frames. The uncompensated column is without any motion compensation. GTC and RTC use ground truth and estimated motion, respectively, for motion compensation.
| Reconstructed frame size | ||||||||||||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Patient 1 | Patient 2 | Patient 3 | Patient 4 | Patient 5 | Overall (average of 8 patients) | |||||||||||||
| Motion type | Uncompensated | GTC | RTC | Uncompensated | GTC | RTC | Uncompensated | GTC | RTC | Uncompensated | GTC | RTC | Uncompensated | GTC | RTC | Uncompensated | GTC | RTC |
| Baseline | 123.9 | 0.0 | 0.0 | 113.2 | 0.0 | 0.0 | 38.0 | 0.0 | 0.0 | 48.2 | 0.0 | 0.0 | 26.7 | 0.0 | 0.0 | 63.6 | 0.0 | 0.0 |
| Step | 457.1 | 92.2 | 92.5 | 891.8 | 99.9 | 100.1 | 488.5 | 55.9 | 55.8 | 200.7 | 34.5 | 34.8 | 255.0 | 71.0 | 71.3 | 388.7 | 59.5 | 59.6 |
| Gradual | 390.5 | 6.7 | 6.8 | 757.8 | 11.7 | 14.2 | 470.0 | 10.4 | 10.9 | 200.5 | 3.4 | 3.7 | 222.0 | 4.2 | 4.3 | 355.7 | 6.7 | 7.3 |
| Volunteer | 553.1 | 111.3 | 111.6 | 897.0 | 202.9 | 204.6 | 560.8 | 47.4 | 50.3 | 256.7 | 9.7 | 9.8 | 297.4 | 13.5 | 13.6 | 444.2 | 61.6 | 62.4 |
In the studies used herein, we had one patient with a large perfusion defect (patient 4), which may have facilitated the image registration, leading to the smallest RMSDs among all the patients. The RMSDs of all patients were smaller after motion compensation and very close to the RMSD when ground truth was used for compensation. This suggests that the registration of 5-s frames is accurate. Note that adding 5-s registered reconstructions for motion compensation introduces some bias compared to the full reconstruction, in addition to that from the small pre-existing motion in the patient data. With respect to motion compensation, Faber et al.21 demonstrated deconvolution as a good postreconstruction compensation strategy which may be employed once the motion is estimated via image registration. For a future clinical software implementation, we therefore envision that the motion transforms will be used to correct LORs within reconstruction using the full dataset.39
Figure 6 shows an example of our data-driven motion compensation performed on patient 1 using 5-s reconstructed frames.
FIG. 6.
Motion compensation in patient 1 using 5-s reconstructed frames. Colorbar at the bottom of the figure shows the range of the voxel values in each column.
The baseline reconstructions (no motion added) are shown in the top row, with the leftmost being the full reconstruction (without subdividing the list-mode data into frames), followed by the summed reconstruction (SR) in the middle, and the BTC image on the right. The corresponding difference images with respect to BTC are also shown alongside each reconstructed slice. The second row shows uncompensated (left), corrected with ground truth (middle), and compensated with estimated motion (right) slices for the Step motion. The third row shows the same for the Gradual motion, and the last row for the Volunteer motion as described in Sec. 2.B. The uncompensated slices for all motion types have the largest difference, as expected. The difference image for each uncompensated slice illustrates where the variations result from motion. The compensated images using the motion estimates (RTC, right column) are very similar to the images corrected with the ground truth motion (GTC, middle column, rows 2–4). Note that the RMSDs in Table IV for the RTC images are only slightly higher than for the GTC images, confirming the visual impression of how close the two images appear in Fig. 6; this is also evident in the smaller difference images. The Volunteer motion type was estimated with larger error compared to the others; its complexity was also higher than that of the other two types, involving all 6-DOFs. For all motion types, the RTC was qualitatively restored to the BTC image, indicating the efficacy of the motion compensation.
4. DISCUSSION
As can be seen in Table II, we were able to obtain motion estimates within 1 mm accuracy using the 5-s images. As a result, in Fig. 6 there is very little visual difference between the images compensated using our estimates and those compensated using the ground truth, whereas both differ markedly from the uncompensated images. The RMSD values further support the utility of our motion estimation strategy. To appreciate the difficulty of the task of motion estimation, note the high noise level of the 5-s transverse reconstructions illustrated at the left in Fig. 2. The 5-s duration of these frames represents 1/120th of the data a physician would use for clinical diagnosis. To the best of our knowledge, data-driven motion estimation from static PET frames as short as 5 s has not been reported before. Using frame sizes of 5 s allowed us to estimate gradual monotonic drifts with sub-mm accuracy. Motion estimation using registration allows full 6-DOF motion compensation, which cannot be achieved using sinogram-based motion estimation as performed in data-driven gating methods.40 Sinogram-based motion estimation can, however, be used to identify when the motion occurred and improve the efficiency of our motion estimation scheme by matching the frames to instances of motion occurrence. Alternatively, automatic frame division using tracking methods41 can be used to divide frames based on motion occurrence. Recent work in marker-less tracking42 using inexpensive consumer-grade depth-sensing cameras shows that motion occurrence can be reliably detected. Further, if an external motion tracking device is available, the registration-based estimates can be used to augment the motion capture accuracy. This is especially useful when acquisition conditions cause a loss of motion capture accuracy, such as obstruction of the line of sight, tracking device malfunction, loss of time synchronization with the PET list-mode data, or an insufficient signal-to-noise ratio due to patient features, garments, etc.
In Jin et al.2 it was noted that image registration may work as well as having true motion information for high count frames, but accuracy may be lower for noisy frames. Herein, we have demonstrated a preprocessing method that suppresses noise effectively and allows image registration of low count PET frames (2–4 × 106 prompts in the 5-s frames). Table III demonstrates the efficacy of our preprocessing method in reducing the impact of the high noise level in the 5-s frames. Registration of the gross shape of the count distribution using mutual information is not affected by any bias in the reconstructed voxel values which may occur for OSEM-based reconstruction in low count data.43 However, registering to a reference image with motion artifacts, as in the case of the 10-s frames for the Volunteer motion, causes degradation of accuracy. As the registration used reconstructions without attenuation and scatter correction, similar to the methods described in Jin et al.,2 the motion estimates were not affected by motion between the CT and PET images. Although we used a fixed 5-s frame size for registration, which is computationally expensive due to the increased number of reconstructions, future parallelization using GPUs on clinical Philips PET systems is expected to reduce the time taken to obtain the motion estimates. Our preprocessing and registration scheme takes about 8 s on a system with a 2.7 GHz Intel processor and 8 GB RAM for registering two reconstructed frames with 128 × 128 × 90 voxels, irrespective of the frame duration. On the same system, reconstruction of 5-s frames takes 4–5 min using a non-TOF protocol. However, migrating from a CPU to a GPU implementation has improved the reconstruction times by at least a factor of 10.
A limitation of data-driven motion estimation is that it is dependent on the specific radiotracer and may not work equally well for images of other radiotracers. In this work we used F-18 FDG brain images and observed that in all the studies, the brain boundary of the reconstructed voxel distribution was consistent between images when no motion occurred. The success of the current data-driven estimation scheme with other radiotracer distributions will rely on the presence of similarly consistent features. A further underlying assumption was that in practice the motion of a patient’s head is fairly rigid. However, if a change in the voxel intensity distribution is not associated with movement, such as a change due to tracer kinetics (e.g., in dynamic studies), or if the movement does not present itself as a rigid pose change, then the accuracy of data-driven estimation will be reduced. In dynamic studies, motion may be estimated using our scheme by registering adjacent frames in which the shape of the voxel intensity distribution appears fairly similar.
5. CONCLUSION
In conclusion, we have shown that we can estimate and compensate for motion for frames with time intervals as short as 5 s with an accuracy of 0.3 mm using nonattenuation corrected F-18 FDG PET brain images. Intraframe motion in 60-s frames results in degradation of accuracy to 2 mm or more, depending on the motion type. Using a reference image with motion artifacts lowers the accuracy, as in the case of the estimation of the complex Volunteer motion from 10-s frames. Finally, appropriate preprocessing is necessary for successful image registration of short 5-s frames.
In future work, we will use patient data with motion during emission and move the CT to the position of the frames using registration-based motion estimates for attenuation and scatter corrected reconstruction. We will also test the effect of the varied count distributions of other radiotracers (such as FDA-approved tracers for amyloid imaging) on data-driven motion estimation. We will investigate even shorter frames, for example 1 s, for registration-based motion estimation to determine the threshold at which image registration fails due to insufficient count statistics, and we will integrate the motion compensation transformations into the reconstruction software or apply them to the list-mode events.
ACKNOWLEDGMENTS
This work was supported by the National Institute of Biomedical Imaging and Bioengineering (NIBIB) Grant No. R01 EB001457 and a research grant from Philips Medical Systems. The contents are solely the responsibility of the authors and do not represent the official views of the NIBIB or Philips Medical Systems. All patient data used were pre-existing and anonymized, and were obtained under IRB approval. A preliminary version of this study was presented at SNMMI 2014 and published in the conference proceedings in abstract form. We would also like to thank Xiyun (Steven) Song from Philips Healthcare for his help in providing the PET brain studies.
REFERENCES
- 1. Dinelle K., Blinder S., Cheng J.-C., Lidstone S., Buckley K., Ruth T. J., and Sossi V., “Investigation of subject motion encountered during a typical positron emission tomography scan,” in Nuclear Science Symposium Conference Record (IEEE, San Diego, CA, 2006), pp. 3283–3287.
- 2. Jin X., Mulnix T., Gallezot J. D., and Carson R. E., “Evaluation of motion correction methods in human brain PET imaging—A simulation study based on human motion data,” Med. Phys. 40, 102503 (12pp.) (2013). 10.1118/1.4819820
- 3. Lopresti B. J., Russo A., Jones W. F., Fisher T., Crouch D. G., Altenburger D. E., and Townsend D. W., “Implementation and performance of an optical motion tracking system for high resolution brain PET imaging,” IEEE Trans. Nucl. Sci. 46, 2059–2067 (1999). 10.1109/23.819283
- 4. Fulton R. R., Meikle S. R., Eberl S., Pfeiffer J., and Constable C. J., “Correction for head movements in positron emission tomography using an optical motion-tracking system,” IEEE Trans. Nucl. Sci. 49, 116–123 (2002). 10.1109/TNS.2002.998691
- 5. Wiersma R. D., Tomarken S. L., Grelewicz Z., Belcher A. H., and Kang H., “Spatial and temporal performance of 3D optical surface imaging for real-time head position tracking,” Med. Phys. 40, 111712 (8pp.) (2013). 10.1118/1.4823757
- 6. Goldstein S. R., Daube-Witherspoon M. E., Green M. V., and Eidsath A., “A head motion measurement system suitable for emission computed tomography,” IEEE Trans. Med. Imaging 16, 17–27 (1997). 10.1109/42.552052
- 7. Bloomfield P. M., Spinks T. J., Reed J., Schnorr L., Westrip A. M., Livieratos L., Fulton R., and Jones T., “The design and implementation of a motion correction scheme for neurological PET,” Phys. Med. Biol. 48, 959–978 (2003). 10.1088/0031-9155/48/8/301
- 8. Zhou V., Kyme A., Meikle S., and Fulton R., “An event-driven motion correction method for neurological PET studies of awake laboratory animals,” Mol. Imaging Biol. 10, 315–324 (2008). 10.1007/s11307-008-0157-0
- 9. Olesen O. V., Sullivan J. M., Mulnix T., Paulsen R. R., Hojgaard L., Roed B., Carson R. E., Morris E. D., and Larsen R., “List-mode PET motion correction using markerless head tracking: Proof-of-concept with scans of human subject,” IEEE Trans. Med. Imaging 32, 200–209 (2013). 10.1109/TMI.2012.2219693
- 10. Bühler P., Just U., Will E., Kotzerke J., and Van den Hoff J., “An accurate method for correction of head movement in PET,” IEEE Trans. Med. Imaging 23(9), 1176–1185 (2004). 10.1109/tmi.2004.831214
- 11. Langner J., “Event-driven motion compensation in positron emission tomography: Development of a clinically applicable method,” Ph.D. dissertation, University of Technology, Dresden, 2008.
- 12. Ullisch M. G., Scheins J. J., Weirich C., Rota Kops E., Celik A., Tellmann L., Stocker T., Herzog H., and Shah N. J., “MR-based PET motion correction procedure for simultaneous MR-PET neuroimaging of human brain,” PLoS One 7(11), e48149 (13pp.) (2012). 10.1371/journal.pone.0048149
- 13. Green M. V., Seidel J., Stein S. D., Tedder T. E., Kempner K. M., Kertzman C., and Zeffiro T. A., “Head movement in normal subjects during simulated PET brain imaging with and without head restraint,” J. Nucl. Med. 35, 1538–1546 (1994).
- 14. Fulton R., Tellmann L., Pietrzyk U., Winz O., Stangier I., Nickel I., Schmid A., Meikle S., and Herzog H., “Accuracy of motion correction methods for PET brain imaging,” in Nuclear Science Symposium Conference Record (IEEE, Rome, Italy, 2004), pp. 4226–4230.
- 15. Herzog H., Tellman L., Fulton R., and Pietrzyk U., “Motion correction in PET brain studies,” in The Fourth International Workshop on Multidimensional Systems (IEEE, Wuppertal, Germany, 2005), pp. 178–181.
- 16. Tellmann L., Fulton R. R., Bente K., Stangier I., Winz O., Just U., Herzog H., and Pietrzyk U. K., “Motion correction of head movements in PET: Realisation for routine usage,” in Nuclear Science Symposium Conference Record (IEEE, Portland, OR, 2003), pp. 3105–3107.
- 17. Verhaeghe J., Gravel P., Mio R., Fukasawa R., Rosa-Neto P., Soucy J. P., Thompson C. J., and Reader A. J., “Motion-compensated fully 4D PET reconstruction using PET data supersets,” in Nuclear Science Symposium Conference Record (NSS/MIC) (IEEE, Orlando, FL, 2009), pp. 3000–3004.
- 18. Keller S. H., Sibomana M., Olesen O. V., Svarer C., Holm S., Andersen F. L., and Hojgaard L., “Methods for motion correction evaluation using 18F-FDG human brain scans on a high-resolution PET scanner,” J. Nucl. Med. 53, 495–504 (2012). 10.2967/jnumed.111.095240
- 19. Mohy-ud-Din H., Karakatsanis N. A., Goddard J. S., Baba J., Wills W., Tahari A. K., Wong D. F., and Rahmim A., “Generalized dynamic PET inter-frame and intraframe motion correction: Phantom and human validation studies,” in Nuclear Science Symposium Conference Record (NSS/MIC) (IEEE, Anaheim, CA, 2012), pp. 3067–3078.
- 20. Mohy-ud-Din H., Karakatsanis N. A., Willis W., Tahari A. K., Wong D. F., and Rahmim A., “Intraframe motion compensation in multi-frame brain PET imaging,” Front. Biomed. Technol. 2(2), 366–380 (2015).
- 21. Faber T. L., Raghunath N., Tudorascu D., and Votaw J. R., “Motion correction of PET brain images through deconvolution. I. Theoretical development and analysis in software simulations,” Phys. Med. Biol. 54(3), 797–811 (2009). 10.1088/0031-9155/54/3/021
- 22. Picard Y. and Thompson C. J., “Motion correction of PET images using multiple acquisition frames,” IEEE Trans. Med. Imaging 16, 137–144 (1997). 10.1109/42.563659
- 23. Noonan P. J., Howard J., Cootes T. F., Hallett W. A., and Hinz R., “Realtime markerless rigid body head motion tracking using the Microsoft Kinect,” in Nuclear Science Symposium Conference Record (NSS/MIC) (IEEE, Anaheim, CA, 2012), pp. 2241–2246.
- 24. Olesen O., Paulsen R., Jensen R., Keller S., Sibomana M., Højgaard L., Roed B., and Larsen R., “3D surface realignment tracking for medical imaging: A phantom study with PET motion correction,” in Image-Based Geometric Modeling and Mesh Generation, edited by Zhang Y. (Springer, Netherlands, 2013), Vol. 3, pp. 11–19.
- 25. Chen K., Smilovici O., Lee W., Reschke C., Zhu Z., Bandy D., and Reiman E., “Inter-frame co-registration of dynamically acquired fluoro-deoxyglucose positron emission tomography human brain data,” in IEEE International Conference on Complex Medical Engineering (IEEE, Beijing, China, 2007), pp. 901–906.
- 26. Costes N., Dagher A., Larcher K., Evans A. C., Collins D. L., and Reilhac A., “Motion correction of multi-frame PET data in neuroreceptor mapping: Simulation based validation,” NeuroImage 47, 1496–1505 (2009). 10.1016/j.neuroimage.2009.05.052
- 27. Montgomery A. J., Thielemans K., Mehta M. A., Turkheimer F., Mustafovic S., and Grasby P. M., “Correction of head movement on PET studies: Comparison of methods,” J. Nucl. Med. 47, 1936–1944 (2006).
- 28. Andersson J. L., Vagnhammar B. E., and Schneider H., “Accurate attenuation correction despite movement during PET imaging,” J. Nucl. Med. 36, 670–678 (1995).
- 29. Rahmim A., Rousset O., and Zaidi H., “Strategies for motion tracking and correction in PET,” PET Clin. 2, 251–266 (2007). 10.1016/j.cpet.2007.08.002
- 30. Pluim J. P. W., Maintz J. B. A., and Viergever M. A., “Mutual-information-based registration of medical images: A survey,” IEEE Trans. Med. Imaging 22, 986–1004 (2003). 10.1109/TMI.2003.815867
- 31. Herzog H., Tellmann L., Fulton R., Stangier I., Rota Kops E., Bente K., Boy C., Hurlemann R., and Pietrzyk U., “Motion artifact reduction on parametric PET images of neuroreceptor binding,” J. Nucl. Med. 46, 1059–1065 (2005).
- 32. Jin X., Mulnix T., Sandiego C. M., and Carson R. E., “Evaluation of frame-based and event-by-event motion-correction methods for awake monkey brain PET imaging,” J. Nucl. Med. 55, 287–293 (2014). 10.2967/jnumed.113.123299
- 33. Browne J. and De Pierro A. R., “A row-action alternative to the EM algorithm for maximizing likelihood in emission tomography,” IEEE Trans. Med. Imaging 15, 687–699 (1996). 10.1109/42.538946
- 34. McNamara J. E., Pretorius P. H., Johnson K., Mukherjee J. M., Dey J., Gennert M. A., and King M. A., “A flexible multicamera visual-tracking system for detecting and correcting motion-induced artifacts in cardiac SPECT slices,” Med. Phys. 36, 1913–1923 (2009). 10.1118/1.3117592
- 35. Sonka M., Hlavac V., and Boyle R., Image Processing, Analysis, and Machine Vision, 3rd ed. (Thompson Learning, Toronto, 2008).
- 36. Schindelin J., Arganda-Carreras I., Frise E., Kaynig V., Longair M., Pietzsch T., Preibisch S., Rueden C., Saalfeld S., Schmid B., Tinevez J. Y., White D. J., Hartenstein V., Eliceiri K., Tomancak P., and Cardona A., “Fiji: An open-source platform for biological-image analysis,” Nat. Methods 9, 676–682 (2012). 10.1038/nmeth.2019
- 37. Yoo T. S., Ackerman M. J., Lorensen W. E., Schroeder W., Chalana V., Aylward S., Metaxas D., and Whitaker R., “Engineering and algorithm design for an image processing API: A technical report on ITK–the Insight Toolkit,” Stud. Health Technol. Inf. 85, 586–592 (2002). 10.3233/978-1-60750-929-5-586
- 38. Mattes D., Haynor D. R., Vesselle H., Lewellen T. K., and Eubank W., “PET-CT image registration in the chest using free-form deformations,” IEEE Trans. Med. Imaging 22, 120–128 (2003). 10.1109/TMI.2003.809072
- 39. Gagnon D., Patrick O., and Parmeshwar K. K., U.S. patent 8,515,148 (20 August 2013).
- 40. Schleyer P. J., Thielemans K., and Marsden P. K., “Extracting a respiratory signal from raw dynamic PET data that contain tracer kinetics,” Phys. Med. Biol. 59, 4345–4356 (2014). 10.1088/0031-9155/59/15/4345
- 41. Olesen O. V., Keller S. H., Sibomana M., Larsen R., Roed B., and Hojgaard L., “Automatic thresholding for frame-repositioning using external tracking in PET brain imaging,” in Nuclear Science Symposium Conference Record (NSS/MIC) (IEEE, Knoxville, TN, 2010), pp. 2669–2675.
- 42. Lindsay C., Mukherjee J. M., Johnson K., Olivier P., Song X., Shao L., and King M. A., “Marker-less multi-frame motion tracking and compensation in PET-brain imaging,” Proc. SPIE 9417, 94170J (7pp.) (2015). 10.1117/12.2082080
- 43. Reilhac A., Tomei S., Buvat I., Michel C., Keheren F., and Costes N., “Simulation-based evaluation of OSEM iterative reconstruction methods in dynamic brain PET studies,” NeuroImage 39, 359–368 (2008). 10.1016/j.neuroimage.2007.07.038






