Abstract
Purpose
With the recent introduction of the MR‐LINAC, an MR‐scanner combined with a radiotherapy LINAC, MR‐based motion estimation has become of increasing interest to (retrospectively) characterize tumor and organs‐at‐risk motion during radiotherapy. To this end, we introduce low‐rank MR‐MOTUS, a framework to retrospectively reconstruct time‐resolved nonrigid 3D+t motion fields from a single low‐resolution reference image and prospectively undersampled k‐space data acquired during motion.
Theory
Low‐rank MR‐MOTUS exploits spatiotemporal correlations in internal body motion with a low‐rank motion model, and inverts a signal model that relates motion fields directly to a reference image and k‐space data. The low‐rank model reduces the degrees‐of‐freedom, memory consumption, and reconstruction times by assuming a factorization of space‐time motion fields in spatial and temporal components.
Methods
Low‐rank MR‐MOTUS was employed to estimate motion in 2D/3D abdominothoracic scans and 3D head scans. Data were acquired using golden‐ratio radial readouts. Reconstructed 2D and 3D respiratory motion fields were, respectively, validated against time‐resolved and respiratory‐resolved image reconstructions, and the head motion against static image reconstructions from fully sampled data acquired right before and right after the motion.
Results
Results show that 2D+t respiratory motion can be estimated retrospectively at 40.8 motion fields per second, 3D+t respiratory motion at 7.6 motion fields per second, and 3D+t head‐and‐neck motion at 9.3 motion fields per second. The validations show good consistency with image reconstructions.
Conclusions
The proposed framework can estimate time‐resolved nonrigid 3D motion fields, which allows characterization of drifts and intra‐ and inter‐cycle patterns in breathing motion during radiotherapy, and could form the basis for real‐time MR‐guided radiotherapy.
Keywords: motion estimation, model‐based reconstruction, MR‐guided radiotherapy, MR‐LINAC
1. INTRODUCTION
Uncertainty in tumor and organs‐at‐risk locations due to unknown respiratory‐induced organ motion diminishes the efficacy of radiotherapy in the abdomen and thorax in two ways. First, tumors are irradiated with larger treatment margins, which results in increased radiation dose and toxicity to healthy tissue. Second, it prevents an accurate (retrospective) estimation of the actual dose accumulated in the targeted tumor and healthy surrounding tissue during the treatment.
Recently, the MR‐LINAC was introduced as the combination of an MR‐scanner and a linear accelerator (LINAC) in a single device, 1 , 2 , 3 , 4 which has the potential to address both points above. Achieving this goal, however, poses the following technical challenge: real‐time reconstructions at 5 Hz 5 , 6 of internal body motion during the treatments. A fundamental step toward real‐time reconstructions is the retrospective estimation of time‐resolved motion fields. Additionally, these retrospectively reconstructed motion fields are valuable for the calculation of accumulated dose and can be taken into account for more accurate radiation planning of subsequent treatments. To this end, we focus on the retrospective reconstruction of time‐resolved 3D+t respiratory motion with a temporal resolution of 5 motion fields per second. We envision that this framework could eventually be adapted to prospective real‐time reconstructions. 7
In MR‐guided radiotherapy, tumor and organs‐at‐risk motion is typically estimated from cine‐MR‐images followed by image registration. For time‐resolved motion estimation, these cine‐MR‐images would thus require sufficient temporal resolution and spatial coverage to resolve the targeted motion. This is in general achievable in 2D, and also in 3D for slowly moving targets such as pelvic tumors. 8 However, in 3D it is more challenging for faster moving targets such as lung tumors, which require at least 5 motion fields per second. 5 , 6
Several strategies have previously been proposed to extract tumor and organ‐at‐risk motion from MR‐images, three of which will be reviewed below. With the first strategy, average respiratory motion is estimated from a respiratory‐resolved 3D+t MRI. This approach retrospectively sorts image slices or k‐space readouts in 3D acquisitions according to their respective respiratory phases, extracted using a respiratory motion surrogate (eg, pneumatic belt, self‐navigation signal or navigator). Examples include the works in Refs. [9, 10, 11, 12, 13] (see Ref. [14] for a more complete overview). Although the retrospective sorting in these methods allows for efficient use of all acquired data, it makes strong assumptions on the periodicity of respiratory motion and characterizes only average 3D+t breathing motion. Although this is useful to reduce treatment margins, it may not be sufficient for accurate accumulation of the delivered dose.
A different strategy uses multislice/orthogonal 2D+t cine‐MRI for 3D+t motion estimation. 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 The reduction in the spatial dimension allows for higher temporal resolution, and is combined with a model that links the lower dimensional image data to 3D motion fields. This strategy assumes, however, that a good fit on lower dimensional images implies a good fit in the full 3D domain. Although this is reasonable for small volumes, since slices cover a large fraction of the volume in such a case, it may be less valid for larger volumes which may be required for dose accumulation.
The third strategy does not rely on sorting, but reconstructs images from highly undersampled k‐space data. Even with parallel imaging, 23 , 24 this typically results in lower SNR, lower spatial resolution, and/or undersampling artifacts. Nevertheless, it has been shown that motion fields can be estimated from these images with sufficient accuracy. 25 , 26 , 27 , 28 Additionally, iterative reconstructions based on compressed sensing 29 have been proposed to exploit the spatiotemporal sparsity of images. However, for the intended application the reported temporal resolution was too low, 28 , 30 , 31 or the FOV was too small. 32 , 33
Following a different approach, we have previously introduced MR‐MOTUS 34 (Model‐based Reconstruction of MOTion from Undersampled Signal), a framework that allows nonrigid 3D motion fields to be reconstructed directly from k‐space data. The key ingredient of MR‐MOTUS is a signal model that explicitly relates dynamic k‐space data to the combination of a static reference image and dynamic motion fields. Assuming a reference image is available, and data are acquired in steady state, motion fields can be reconstructed directly from k‐space data by solving the corresponding nonlinear inverse problem. Since motion fields are spatially correlated and therefore compressible, only a small amount of data is required for the reconstructions.
The ability to reconstruct motion from a small amount of k‐space data makes MR‐MOTUS a natural candidate for time‐resolved 3D+t motion estimation, since it is not directly restricted by the achievable temporal resolution of MR‐images. Our work presented in Ref. [34], however, represents a proof‐of‐concept, and demonstrates MR‐MOTUS in an experimental setting. Four points of improvement should be addressed for the extension of MR‐MOTUS to time‐resolved 3D+t motion estimation:
Only spatial correlation in motion fields was exploited, and a single static motion‐field was reconstructed from each snapshot of k‐space data. Additionally exploiting temporal correlation, and jointly reconstructing the 3D+t motion‐field series at once, could improve the reconstruction quality and reduce computing time and memory requirements.
Only the body coil was used for data acquisition to obtain a homogeneous coil sensitivity. Since multicoil acquisitions are typically favored, this did not represent a practical setting.
The required reference image was obtained from a separate MR‐scan during breath‐hold. Ideally, no breath‐holds are required and reconstructions can be performed on data acquired in free‐breathing conditions.
3D motion fields were previously reconstructed from retrospectively undersampled Cartesian k‐space data, while the motion estimation application requires prospectively undersampled acquisitions with an efficient non‐Cartesian trajectory.
In this work, we address the aforementioned points of improvement and extend the framework to experiments in a realistic setting, in which the reference image and time‐resolved 3D+t motion fields can be reconstructed from linearly combined multicoil, free‐breathing, prospectively undersampled non‐Cartesian 3D k‐space data. Time‐resolved 3D+t motion‐field reconstructions require the representation of 3D motion fields over a large number of timepoints (>100), and thereby introduce a large number of unknowns. We propose to use a spatiotemporal low‐rank motion model to compress the representation of 3D+t motion fields. Several works have previously proposed low‐rank motion models for motion estimation, 19 , 35 , 36 , 37 , 38 , 39 and the analyses in Refs. [19, 35, 36] suggest that a rank‐2 motion model can accurately describe respiratory motion. Consequently, the low‐rank motion model can reduce the number of unknowns by two orders of magnitude, thereby introducing a regularization in both space and time and significantly reducing memory consumption and reconstruction times for 3D+t reconstructions. We will refer to the extended framework as low‐rank MR‐MOTUS.
We demonstrate and validate low‐rank MR‐MOTUS in a total of 6 in vivo experiments on 2 healthy subjects and several moving anatomies. 2D/3D abdominothoracic respiratory motion is included in view of the MR‐guided radiotherapy application, and 3D head‐and‐neck motion is included for additional validation and as a demonstration of the ability to handle different types of motion. The 2D respiratory motion reconstruction is validated against 2D time‐resolved compressed sensing, the 3D respiratory motion reconstruction against respiratory‐resolved 3D image reconstruction, and the 3D head‐and‐neck motion against 3D static images acquired right before and right after the motion.
2. THEORY
2.1. Background MR‐MOTUS
We assume a general d‐dimensional setting, with targeted case d = 3, and we follow the convention that bold‐faced characters denote vectorizations. We define $T_t: \mathbb{R}^d \to \mathbb{R}^d$ as the mappings from coordinates $\mathbf{r}$ in a reference image to new locations at time t. The mappings are characterized by the motion fields $\mathbf{d}_t$ through $T_t(\mathbf{r}) = \mathbf{r} + \mathbf{d}_t(\mathbf{r})$. This will be written in concatenated vector‐form as
| $\mathbf{X}_t = \mathbf{R} + \mathbf{D}_t$ | (1) |
where $\mathbf{X}_t, \mathbf{R}, \mathbf{D}_t \in \mathbb{R}^{Nd}$ denote the vertical concatenations over N spatial points in a d‐dimensional setup. The MR‐MOTUS forward model 34 explicitly relates the motion fields $\mathbf{D}_t$ and a static reference image $\mathbf{q} \in \mathbb{C}^{N}$ to dynamic, single‐channel (and possibly non‐Cartesian) k‐space measurements $\mathbf{s}_t$:
| $\mathbf{s}_t = F(\mathbf{D}_t \mid \mathbf{q}) + \boldsymbol{\epsilon}$ | (2) |
Here $\boldsymbol{\epsilon}$ is the complex noise vector and $F$ is the vectorization of the forward operator defined as
| $F(\mathbf{D}_t \mid \mathbf{q})\big|_i = \sum_{j=1}^{N} q_j\, e^{-i 2\pi\, \mathbf{k}_i \cdot \left(\mathbf{r}_j + \mathbf{d}_t(\mathbf{r}_j)\right)}$ | (3) |
where $\mathbf{k}_i$ denotes the i‐th k‐space coordinate. By fitting the nonlinear signal model in Equations (2) and (3) to acquired k‐space data, motion fields can be reconstructed directly from k‐space measurements.
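For illustration, the forward operator in Equation (3) can be evaluated as an explicit non‐uniform DFT of the reference image at the warped coordinates. The NumPy sketch below uses hypothetical names and toy sizes and is not the authors' implementation; for realistic matrix sizes the dense phase matrix would be replaced by a non‐uniform FFT.

```python
import numpy as np

def mr_motus_forward(q, r, d_t, k):
    """Evaluate the MR-MOTUS forward operator of Equation (3) for one dynamic.

    q   : (N,)   complex reference image, vectorized over N spatial points
    r   : (N, d) reference coordinates (normalized, e.g., in [-0.5, 0.5))
    d_t : (N, d) motion field at time t, in the same units as r
    k   : (Nk, d) k-space sample coordinates of the (non-Cartesian) readouts

    Returns the (Nk,) vector of modeled k-space samples s_t.
    """
    x_t = r + d_t                       # warped coordinates T_t(r) = r + d_t(r), Eq. (1)
    phase = -2j * np.pi * (k @ x_t.T)   # (Nk, N) matrix of phases k_i . x_t(r_j)
    return np.exp(phase) @ q            # explicit non-uniform DFT of the reference image

# Tiny synthetic 2D example with hypothetical sizes.
rng = np.random.default_rng(0)
N, Nk, d = 64, 32, 2
q = rng.standard_normal(N) + 1j * rng.standard_normal(N)
r = rng.uniform(-0.5, 0.5, size=(N, d))
d_t = 0.01 * rng.standard_normal((N, d))
k = rng.uniform(-32, 32, size=(Nk, d))
s_t = mr_motus_forward(q, r, d_t, k)    # noiseless version of Eq. (2)
print(s_t.shape)                        # (32,)
```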
2.2. Reconstruction problem formulation for space‐time reconstructions
In this work, we follow Ref. [40] and formulate the reconstruction problem for space‐time motion fields $\mathbf{D}$ as follows:
| $\underset{\mathbf{D}}{\min}\; \sum_{t=1}^{M} \left\| F(\mathbf{D}_t \mid \mathbf{q}) - \mathbf{s}_t \right\|_2^2 + \lambda\, \mathcal{R}(\mathbf{D})$ | (4) |
Here $\mathcal{R}$ is a regularization functional, with corresponding parameter $\lambda \geq 0$, which models a priori assumptions in order to exploit correlations in both space and time.
2.2.1. Parameterization with a low‐rank space‐time motion model
A straightforward parameterization of $\mathbf{D}$ considers one motion‐field per dynamic, ie, $\mathbf{D} = [\mathbf{D}_1, \ldots, \mathbf{D}_M]$. This is, however, impractical from a computational point‐of‐view, since the number of parameters scales with the number of dynamics M. For a typical 3D scenario (d = 3) with N spatial points per motion field and several hundred dynamics, the total number of parameters is
| $|\mathbf{D}| = NMd \sim M.$ | (5) |
Hence, this parameterization results in high memory consumption and long reconstruction times.
We observe that internal body motion is typically highly structured: tissue tends to move along similar directions over time, with mainly the motion amplitude varying. We therefore hypothesize that representing motion fields as a sum of per‐voxel motion directions (with relative magnitudes), each multiplied by a global scaling along these directions over time, can lead to an efficient representation with a reduced number of parameters. As an example, motion with fixed directions (and relative magnitudes) per voxel can be represented with just one motion‐field and one global 1D scaling along these directions over time. This representation can mathematically be captured with a low‐rank motion model. The low‐rank model simultaneously reduces the number of parameters for the reconstruction and introduces a natural regularization in both space and time by enforcing the following factorization into spatial and temporal contributions:
| $\mathbf{D} = \boldsymbol{\Phi}\boldsymbol{\Psi}^T$ | (6) |
Here R denotes the number of components of the model; $\boldsymbol{\Phi} \in \mathbb{R}^{Nd \times R}$ denotes the matrix with spatial components, and $\boldsymbol{\Psi} \in \mathbb{R}^{M \times R}$ denotes the matrix with temporal components. See also Figure 1 for a more intuitive explanation of the model. The model (6) will be referred to as the low‐rank model, since rank(D) ≤ R. The upper limit is achieved for R linearly independent components. A similar explicit low‐rank factorization was recently proposed in the context of image reconstruction in Ref. [30], with the same motivations as mentioned above.
FIGURE 1.

Overview of the low‐rank MR‐MOTUS framework. First, data are acquired during free‐breathing with a golden‐ratio radial trajectory (2D: golden‐angle radial, 44 3D: golden‐mean radial Koosh ball 45 ). Then, DC‐based phase binning is performed and the end‐inhale bin is used to reconstruct a motion‐free reference image. Finally, the reference image and free‐breathing data are fed into the low‐rank MR‐MOTUS reconstruction, resulting in time‐resolved 3D motion fields. The motion fields are reconstructed with an explicit constraint on the maximum rank, that is, as a sum of component motion fields, each with a different temporal behavior. The number of such components is predetermined
The number of parameters in the low‐rank model is $|\mathbf{D}| = |\boldsymbol{\Phi}| + |\boldsymbol{\Psi}| = (Nd+M)R$. Analyses in Refs. [19, 35, 36] suggest that a motion model with rank 2 is sufficient to accurately model respiratory motion. For the typical scenario considered above (R = 2, $Nd \gg M$), this would then imply
| $|\mathbf{D}| = (Nd+M)R \approx 2Nd,$ | (7) |
which is two orders of magnitude lower than Equation (5). In Supporting Information Section 1, analyses are performed to confirm that the low‐rank motion model with a small number of components R can indeed accurately represent motion fields.
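For intuition, the NumPy sketch below (with hypothetical sizes) constructs a rank‐R motion‐field matrix as in Equation (6) and compares the parameter counts of Equations (5) and (7).

```python
import numpy as np

N, d, M, R = 10_000, 3, 250, 2            # hypothetical: voxels, dimensions, dynamics, rank

rng = np.random.default_rng(1)
Phi = rng.standard_normal((N * d, R))     # spatial components, (Nd x R)
Psi = rng.standard_normal((M, R))         # temporal components, (M x R)

D = Phi @ Psi.T                           # space-time motion fields, rank(D) <= R
D_t = D[:, 10].reshape(N, d)              # motion field of dynamic t = 10

full_params = N * M * d                   # Eq. (5): independent motion field per dynamic
lowrank_params = (N * d + M) * R          # Eq. (7): low-rank parameterization
print(full_params / lowrank_params)       # ~ M / R, i.e., roughly two orders of magnitude
```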
We follow a standard approach in nonrigid medical image registration 41 and represent both the spatial components Φ and the temporal components Ψ of the motion fields in cubic B‐spline bases. This results in representation coefficients α and β for Φ and Ψ, respectively.
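A minimal sketch of such a cubic B‐spline representation for the 1D (temporal) case is given below; the uniform kernel, the number of control points, and the variable names are illustrative assumptions rather than the exact basis configuration used in this work.

```python
import numpy as np

def cubic_bspline(u):
    """Uniform cubic B-spline kernel, supported on |u| < 2."""
    u = np.abs(np.asarray(u, dtype=float))
    out = np.zeros_like(u)
    inner = u < 1
    outer = (u >= 1) & (u < 2)
    out[inner] = 2.0 / 3.0 - u[inner] ** 2 + 0.5 * u[inner] ** 3
    out[outer] = (2.0 - u[outer]) ** 3 / 6.0
    return out

def bspline_design_matrix(x, n_ctrl):
    """B[i, j] = B3((x_i - c_j) / h) for n_ctrl uniformly spaced control points c_j."""
    c = np.linspace(x.min(), x.max(), n_ctrl)
    h = c[1] - c[0]                     # control-point spacing
    return cubic_bspline((x[:, None] - c[None, :]) / h)

# Example: smooth temporal components Psi from coarse spline coefficients beta.
M, n_ctrl, R = 250, 40, 2                                            # hypothetical sizes
B_time = bspline_design_matrix(np.arange(M, dtype=float), n_ctrl)    # (M, n_ctrl)
beta = np.random.default_rng(2).standard_normal((n_ctrl, R))         # coefficients
Psi = B_time @ beta                                                  # (M, R) temporal components
```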
2.2.2. Regularization functional
The motion‐field reconstruction problem in Equation (4) is typically ill‐posed, and requires incorporation of a priori knowledge of the motion fields. Since organs such as the liver, spleen, and kidneys consist of fluid‐filled tissue structures, they can be assumed incompressible and thus volume‐preserving under motion. 42 The Jacobian determinant $|\nabla T_t(\mathbf{r})|$ is the fraction of the volume at spatial coordinate $\mathbf{r}$ after deformation by $T_t$, with respect to the reference volume before deformation. Hence, values between 0 and 1 indicate shrinkage, values around 1 indicate neither compression nor expansion, and values above 1 indicate expansion. We enforce the incompressibility assumption by penalizing deviations of the Jacobian determinant from unity 43 :
| $\mathcal{R}(\mathbf{D}) = \sum_{t=1}^{M} \left\| \mathbf{W}\left( |\nabla T_t| - \mathbf{1} \right) \right\|_2^2$ | (8) |
Here $|\nabla T_t| \in \mathbb{R}^{N}$ computes the determinant of the Jacobian per spatial point, and W is a diagonal matrix with weights per voxel. The weights are added to exclude regions where the regularization is less realistic, eg, in the lungs. As weights we have taken the magnitude of the reference image, scaled to unit norm. For the implementation, we follow Ref. [43] and compute spatial derivatives analytically using the spline parameterization of the motion fields.
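The sketch below illustrates the penalty of Equation (8) in NumPy on a regular grid. For simplicity it approximates the spatial derivatives with finite differences rather than the analytic spline derivatives used here, and the toy inputs and shapes are assumptions.

```python
import numpy as np

def jacobian_determinant(d_field, spacing=(1.0, 1.0, 1.0)):
    """Determinant of the Jacobian of T(r) = r + d(r) on a regular 3D grid.

    d_field : (3, Nx, Ny, Nz) displacement field (same units as spacing).
    Returns an (Nx, Ny, Nz) array of Jacobian determinants.
    """
    J = np.empty(d_field.shape[1:] + (3, 3))
    for i in range(3):                        # displacement component
        grads = np.gradient(d_field[i], *spacing)
        for j in range(3):                    # derivative direction
            J[..., i, j] = grads[j] + (1.0 if i == j else 0.0)   # d(T_i)/d(r_j)
    return np.linalg.det(J)

def incompressibility_penalty(d_fields, weights, spacing=(1.0, 1.0, 1.0)):
    """Weighted penalty on deviations of the Jacobian determinant from unity, per Eq. (8)."""
    return sum(
        np.sum((weights * (jacobian_determinant(d_t, spacing) - 1.0)) ** 2)
        for d_t in d_fields
    )

# Toy example: two dynamics on a small grid, weights = |reference image| scaled to unit norm.
rng = np.random.default_rng(3)
ref_mag = np.abs(rng.standard_normal((16, 16, 16)))
weights = ref_mag / np.linalg.norm(ref_mag)
d_fields = 0.1 * rng.standard_normal((2, 3, 16, 16, 16))
print(incompressibility_penalty(d_fields, weights))
```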
2.2.3. Final reconstruction problem formulation
Substituting the spline representation, the low‐rank model (6), and the regularization (8) into the objective function (4) results in the following minimization problem to reconstruct space‐time motion fields:
| $\hat{\boldsymbol{\alpha}}, \hat{\boldsymbol{\beta}} = \underset{\boldsymbol{\alpha},\boldsymbol{\beta}}{\arg\min}\; \sum_{t=1}^{M} \left\| F\!\left(\mathbf{D}_t(\boldsymbol{\alpha},\boldsymbol{\beta}) \mid \mathbf{q}\right) - \mathbf{s}_t \right\|_2^2 + \lambda\, \mathcal{R}\!\left(\mathbf{D}(\boldsymbol{\alpha},\boldsymbol{\beta})\right)$ | (9) |
where $\lambda \geq 0$ is the regularization parameter that balances the two terms. Note that no additional temporal regularization is added, since the low‐rank model already acts as a strong regularization in both space and time.
3. METHODS
The following data were acquired in three different experiments per volunteer for two volunteers:
2D+t abdominothoracic data;
3D+t abdominothoracic data;
3D+t head‐and‐neck data.
The 2D+t abdominothoracic data allow for a validation against time‐resolved image reconstruction at a high temporal resolution. The 3D+t abdominothoracic data are the targeted case for the application in MR‐guided radiotherapy. The 3D+t head‐and‐neck data are included as a demonstration of the ability to handle different types of motion, and for additional validation. All reconstructions are analyzed by comparison with image reconstructions on the same data. As an additional sanity check, the Jacobian determinant $|\nabla T_t|$ of the transformations corresponding to the motion fields is analyzed. More details regarding the experiments are provided below, organized per subsection.
All data were acquired on a 1.5T MRI scanner (Ingenia, Philips Healthcare, Best, The Netherlands) using a steady‐state spoiled gradient echo sequence (SPGR) with anterior and posterior receive arrays. We employed golden‐angle radial readouts for 2D, 44 and golden‐mean Koosh ball radial readouts for 3D. 45 The volunteers provided written informed consent prior to the scans, and all scans were approved by the institutional review board of the University Medical Center Utrecht and carried out in accordance with the relevant guidelines and regulations. See Table 1 for all relevant acquisition parameters.
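For illustration, the sketch below generates 2D golden‐angle radial spoke coordinates as in Ref. [44]; the 3D golden‐mean Koosh‐ball trajectory of Ref. [45] follows the same principle but increments two spherical angles with generalized (2D) golden means. The normalization and function name are illustrative assumptions.

```python
import numpy as np

def golden_angle_radial_2d(n_spokes, n_samples):
    """2D golden-angle radial spokes: successive spokes rotated by ~111.25 degrees.

    Returns k-space coordinates of shape (n_spokes, n_samples, 2), with the
    radial coordinate normalized to [-0.5, 0.5).
    """
    golden_angle = np.pi / ((1.0 + np.sqrt(5.0)) / 2.0)      # pi / golden ratio, ~1.9416 rad
    angles = np.arange(n_spokes) * golden_angle
    kr = np.linspace(-0.5, 0.5, n_samples, endpoint=False)
    kx = kr[None, :] * np.cos(angles)[:, None]
    ky = kr[None, :] * np.sin(angles)[:, None]
    return np.stack([kx, ky], axis=-1)

traj = golden_angle_radial_2d(n_spokes=5, n_samples=264)     # 5 spokes per dynamic (Table 1)
print(traj.shape)                                            # (5, 264, 2)
```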
TABLE 1.
Details of the in vivo experiments as described in Sections 3.1‐3.3: the top half lists acquisition details, and the bottom half lists reconstruction details for the time‐resolved experiments
| Parameter | 2D resp. motion | 3D resp. motion | 3D head‐and‐neck motion |
|---|---|---|---|
| FOV (m) | 0.50 × 0.50 × 0.01 | 0.44 × 0.44 × 0.44 | 0.38 × 0.38 × 0.38 |
| Acquisition matrix size | 164 × 164 × 1 | 146 × 146 × 146 | 126 × 126 × 126 |
| Number of samples on readout | 264 | 264 | 232 |
| Spatial acq. resolution (mm) | 3.00 × 3.00 × 10.00 | 3.00 × 3.00 × 3.00 | 3.00 × 3.00 × 3.00 |
| Repetition time (ms) | 4.90 | 4.40 | 5.40 |
| Echo time (ms) | 2.30 | 1.80 | 2.30 |
| Flip angle (°) | 20 | 20 | 20 |
| Bandwidth (Hz) | 298.72 | 541.48 | 284.73 |
| Trajectory | 2D golden‐angle radial | 3D golden‐mean radial Koosh ball | 3D golden‐mean radial Koosh ball |
| Pulse sequence | 2D SPGR | 3D SPGR | 3D SPGR |
| Coils (#Channels) | Anterior + Posterior (24) | Anterior + Posterior (24) | Anterior + Posterior (24) |
| Scanner | Philips Ingenia 1.5T | Philips Ingenia 1.5T | Philips Ingenia 1.5T |
| Motion model components | R = 3 | R = 3 | R = 6 |
| Reference image resolution (mm) | 6.70 × 6.70 × 10.00 | 6.70 × 6.70 × 6.70 | 9.05 × 9.05 × 9.05 |
| Regularization parameter |  |  |  |
| Number of iterations | 50 | 50 | 300 |
| Splines per spatial dimension | 18 | 16 | 3 |
| Splines in time | 1.28/second | 8.25/second | 5/second |
| Temporal motion resolution | 40.8 Hz: 5 spokes/dynamic | 7.6 Hz: 30 spokes/dynamic | 9.3 Hz: 20 spokes/dynamic |
| Reconstructed motion duration (s) | 20 | 33 | 40 |
| Reconstruction time | 4 minutes | 50 minutes | 2 hours |
Notes: For the respiratory‐resolved reconstruction in Section 3.2, the same parameters were used as listed in the “3D resp. motion” column, but effectively resulted in a temporal motion resolution of about 5 Hz, with 18062 spokes per dynamic, due to the sorting.
We followed the approach outlined in Section 2.2, and reconstructed motion fields from linearly combined multicoil k‐space data acquired during motion by solving the minimization problem (9) with L‐BFGS, 46 using the MATLAB implementation from Ref. [47]. The low‐rank MR‐MOTUS workflow is schematically summarized in Figure 1. We refer to Table 1 for all parameter settings and to the Supporting Information of this work and the Supporting Information in Ref. [34] for more implementation details. Code that produces similar results as presented in this study is openly available at https://github.com/nrfhuttinga/LowRank_MRMOTUS.git.
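The sketch below mimics this optimization step with SciPy's L‐BFGS on a small, purely linear stand‐in for the objective in Equation (9): the spline coefficients for the spatial and temporal components are flattened into one parameter vector and optimized jointly. All operators, shapes, and the regularizer are toy assumptions, and gradients are obtained numerically rather than analytically.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
Nd, M, R, n_alpha, n_beta = 60, 20, 2, 30, 10       # toy sizes
B_space = rng.standard_normal((Nd, n_alpha))        # spatial spline design matrix
B_time = rng.standard_normal((M, n_beta))           # temporal spline design matrix
A = rng.standard_normal((40, Nd))                   # toy linear stand-in for the forward model
s = rng.standard_normal((40, M))                    # toy "k-space" data, one column per dynamic
lam = 1e-2                                          # regularization weight

def unpack(x):
    alpha = x[: n_alpha * R].reshape(n_alpha, R)
    beta = x[n_alpha * R :].reshape(n_beta, R)
    return alpha, beta

def objective(x):
    alpha, beta = unpack(x)
    D = (B_space @ alpha) @ (B_time @ beta).T        # low-rank motion fields (Nd x M)
    data_fit = np.sum((A @ D - s) ** 2)              # stand-in for the data term in Eq. (9)
    reg = lam * np.sum(D ** 2)                       # stand-in for the motion regularizer
    return data_fit + reg

x0 = 0.01 * rng.standard_normal((n_alpha + n_beta) * R)
res = minimize(objective, x0, method="L-BFGS-B", options={"maxiter": 50})
print(res.fun, res.nit)
```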
3.1. Experiment 1: 2D+t in vivo respiratory motion reconstructions from abdominothoracic data
In the first experiment, a reference image and motion fields were reconstructed from the same 2D+t data acquired during 20 seconds of free‐breathing. The reference image was reconstructed from the end‐inhale bin after phase binning based on the self‐navigation signal formed by the k = 0 sample of each readout (denoted as DC‐values); see Supporting Information Section 3 for more details. The motion fields were reconstructed at 40.8 Hz, that is, 24.5 ms/frame, by assigning every 5 consecutive nonoverlapping spokes to one dynamic. The low‐rank model (6) was employed with R = 3, yielding motion fields with rank ≤3. Additional relevant reconstruction and acquisition parameters can be found in Table 1.
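A minimal sketch of such DC‐based self‐navigation is given below; it uses the k = 0 magnitude per spoke and a simple amplitude criterion to select end‐inhale spokes. The smoothing window, the bin definition, and which amplitude extreme corresponds to end‐inhale are illustrative assumptions; the actual binning is described in Supporting Information Section 3.

```python
import numpy as np

def respiratory_surrogate(kspace_spokes):
    """DC (k = 0) magnitude per spoke as a respiratory surrogate.

    kspace_spokes : (n_spokes, n_samples) single-channel radial data with the
    DC sample assumed at the center of each readout.
    """
    dc = np.abs(kspace_spokes[:, kspace_spokes.shape[1] // 2])
    kernel = np.ones(31) / 31.0                   # light moving-average smoothing
    return np.convolve(dc, kernel, mode="same")

def end_inhale_spokes(surrogate, n_bins=10):
    """Return indices of spokes in the bin with the largest surrogate amplitude."""
    edges = np.quantile(surrogate, np.linspace(0.0, 1.0, n_bins + 1))
    bins = np.digitize(surrogate, edges[1:-1])    # bin index 0 .. n_bins-1 per spoke
    return np.flatnonzero(bins == n_bins - 1)     # which extreme is inhale is setup-dependent

# Example usage on hypothetical single-channel spoke data:
rng = np.random.default_rng(7)
spokes = rng.standard_normal((4000, 264)) + 1j * rng.standard_normal((4000, 264))
ref_spokes = end_inhale_spokes(respiratory_surrogate(spokes))
print(ref_spokes.size)
```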
The motion fields were analyzed by comparison with a time‐resolved compressed sensing 2D+t reconstruction (CS2Dt) on the same free‐breathing data, and by means of the Jacobian determinant. For the comparison with CS2Dt, the MR‐MOTUS reference image was warped with the reconstructed motion fields to obtain a dynamic image sequence as follows. First, the motion fields were interpolated to the same spatial resolution as the image reconstruction using cubic interpolation. Second, the forward model (2) was evaluated on a Cartesian k‐space grid using the reconstructed motion fields. Finally, an inverse Fourier transform was performed to obtain one image per dynamic. The CS2Dt was reconstructed at a temporal resolution of 122.5 ms/frame by assigning every 25 consecutive nonoverlapping spokes to one dynamic, and was performed with the BART toolbox 65 using spatial $\ell_1$‐wavelet and temporal total variation regularization. The temporal resolution of the CS2Dt was chosen as an integer multiple of the MR‐MOTUS resolution to allow comparison at the coarser CS2Dt temporal resolution. The comparison was performed by means of the relative error norm (REN), defined between vectors a and b as $\mathrm{REN}(\mathbf{a},\mathbf{b}) = \|\mathbf{a}-\mathbf{b}\|_2 / \|\mathbf{b}\|_2$.
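The comparison metric and the frame pairing can be sketched as follows; the array sizes and the rule of matching every fifth MR‐MOTUS frame to one CS2Dt frame are illustrative assumptions.

```python
import numpy as np

def ren(a, b):
    """Relative error norm, REN(a, b) = ||a - b||_2 / ||b||_2."""
    return np.linalg.norm(a - b) / np.linalg.norm(b)

# Hypothetical image series (reduced matrix size for the sketch): MR-MOTUS warped
# reference images at 40.8 frames/s and CS2Dt frames at one fifth of that rate.
rng = np.random.default_rng(5)
warped = rng.standard_normal((815, 64, 64))       # ~20 s at 40.8 frames/s
cs2dt = rng.standard_normal((163, 64, 64))        # ~20 s at 8.16 frames/s
ren_per_frame = [ren(warped[5 * i + 2], cs2dt[i]) for i in range(cs2dt.shape[0])]
print(np.mean(ren_per_frame))
```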
3.2. Experiment 2: 3D+t in vivo respiratory motion reconstructions from abdominothoracic data
In the second experiment, we considered the targeted case for MR‐guided radiotherapy: a reference image and motion fields were reconstructed from 3D+t data acquired during 33 seconds of free‐breathing. The targeted high temporal resolution does not allow for a straightforward validation by comparison with dynamic 3D image reconstruction. For validation purposes, we therefore compared MR‐MOTUS with respiratory‐resolved image reconstruction by performing both reconstructions on respiratory‐sorted data.
Finally, we performed 3D+t time‐resolved motion reconstruction to demonstrate the ability to reconstruct motion at high temporal resolution from time‐resolved k‐space data. The reference image for both reconstructions was reconstructed from the end‐inhale bin after phase binning based on the DC‐value per readout (see Supporting Information Section 3), and the low‐rank model (6) was employed with R = 3. See Table 1 for all reconstruction and acquisition parameters.
For the respiratory‐resolved reconstructions, phase binning was performed in 20 equal‐sized bins based on the DC‐value per readout. The images were independently reconstructed for each bin using 28 iterations of CG‐SENSE. 50 The motion fields were reconstructed over all bins simultaneously with low‐rank MR‐MOTUS by solving (9) with 20 dynamics. The quality of the MR‐MOTUS reconstruction was assessed by means of the Jacobian determinant and by comparison with the respiratory‐resolved image reconstruction. For the latter, a reference image was warped with the reconstructed motion fields to obtain a dynamic image sequence, as described in Section 3.1, and the two image sequences were compared in terms of REN. The reference image that was warped using the MR‐MOTUS motion fields was selected as the end‐inhale phase of the respiratory‐resolved image reconstruction (motion state #10) in order to reduce effects of image intensity, image quality, or contrast differences on the comparison of the two image sequences.
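A simplified sketch of this sorting into 20 equal‐sized bins is given below; it bins spokes by surrogate amplitude, which is a simplification of the phase binning described in Supporting Information Section 3, and the surrogate signal is synthetic.

```python
import numpy as np

def equal_sized_bins(surrogate, n_bins=20):
    """Sort spoke indices by respiratory surrogate value into n_bins equal-sized bins."""
    order = np.argsort(surrogate)
    return np.array_split(order, n_bins)        # list of n_bins arrays of spoke indices

# Example: bins[b] holds the original spoke indices of respiratory bin b; these are
# used both for the per-bin CG-SENSE images and as the 20 dynamics in Equation (9).
rng = np.random.default_rng(6)
surrogate = np.sin(np.linspace(0, 40 * np.pi, 7500)) + 0.05 * rng.standard_normal(7500)
bins = equal_sized_bins(surrogate)
print(len(bins), [len(b) for b in bins[:3]])
```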
For the time‐resolved 3D+t reconstructions, motion fields were reconstructed at 7.6 Hz, ie, 132 ms/frame, by assigning every 30 consecutive nonoverlapping spokes to one dynamic. The reconstructions were analyzed by means of the Jacobian determinant, and the average motion of the kidney was compared between the time‐resolved and respiratory‐resolved MR‐MOTUS reconstructions. This motion was computed as the mean of the displacements over a manually segmented mask of the right kidney. For the comparison between the respiratory‐resolved and time‐resolved reconstructions, the motion magnitudes of each respiratory bin in the respiratory‐resolved reconstruction were assigned to the original, time‐resolved spoke indices that were sorted into that particular bin.
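The kidney‐motion analysis and the back‐projection of the respiratory‐resolved values onto the time axis can be sketched as follows; the mask, the motion fields, and the spoke‐to‐bin assignment are hypothetical stand‐ins with illustrative shapes.

```python
import numpy as np

rng = np.random.default_rng(8)
nx = ny = nz = 32
mask = np.zeros((nx, ny, nz), dtype=bool)
mask[10:16, 12:18, 8:14] = True                              # stand-in "right kidney" mask

def mean_displacement(d_field, mask):
    """Mean displacement (3 components) over a mask; d_field has shape (3, nx, ny, nz)."""
    return np.array([d_field[i][mask].mean() for i in range(3)])

# Respiratory-resolved MR-MOTUS: one motion field per bin (20 bins, values in mm).
d_bins = 5.0 * rng.standard_normal((20, 3, nx, ny, nz))
kidney_per_bin = np.stack([mean_displacement(d, mask) for d in d_bins])   # (20, 3)

# Assign each bin's value back to the original time-resolved spoke indices.
n_spokes = 7500
spoke_bin = rng.integers(0, 20, size=n_spokes)               # stand-in for the actual sorting
kidney_vs_time = kidney_per_bin[spoke_bin]                   # (n_spokes, 3) FH/AP/LR traces
print(kidney_vs_time.shape)
```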
3.3. Experiment 3: 3D+t in vivo head‐and‐neck motion reconstructions
With the third experiment, 3D+t motion fields were reconstructed from data acquired during head‐and‐neck motion. The subject was instructed to hold still in position 1 during the first 70 seconds of the acquisition, then move to position 2 and hold still for 70 seconds, then move freely for 40 seconds, and finally, hold still in position 3 for 70 seconds. Data acquired in position 1 were used to reconstruct a reference image, data acquired during the movement from position 2 to position 3 were used to reconstruct motion fields, and positions 2 and 3 were used as fully sampled "checkpoints" to serve as validation; the beginning and end of the dynamic motion reconstruction should, respectively, coincide with positions 2 and 3. To verify this, the reference was warped with the reconstructed motion fields as described in Section 3.1, and the first and last dynamic of the resulting image sequence were visually compared with the fully sampled checkpoints. As a second analysis, the mean and standard deviation of the determinant of the Jacobian were computed for all dynamics, over all voxels within the body. The latter were determined by a threshold on the magnitude of the signal per voxel. The low‐rank motion model was employed with R = 6 to accommodate the head‐and‐neck motion, which includes rotations in multiple planes. The motion fields were reconstructed at a temporal resolution of 9.3 Hz, that is, 108 ms/frame, by assigning every 20 consecutive nonoverlapping spokes to one dynamic. Additional reconstruction and acquisition parameters can be found in Table 1.
4. RESULTS
4.1. Experiment 1: 2D in vivo respiratory motion reconstructions from abdominothoracic data
The time‐resolved 2D respiratory motion was reconstructed with 40.8 motion fields per second. The Jacobian determinant and the comparison with CS2Dt are shown in Figure 2. The visual comparison with the 2D+t compressed sensing image reconstruction corresponding to Figure 2B is shown in Supporting Information Video S1. It can be observed that good agreement is obtained for most phases of the respiratory cycle, with a small mismatch in end‐exhale in the upper back near the spine‐liver interface. The Jacobian determinants show small deviations from unity within the organs (green), and compression in the lungs (blue) except for the arteries. The qualitative results are supported by the quantitative results in Figure 2B, which show that the warped MR‐MOTUS images considerably reduce the REN.
FIGURE 2.

A, Jacobian determinants of the reconstructed motion fields in end‐inhale (left) and end‐exhale (right). The first end‐exhale and second end‐inhale positions were selected from all dynamics for this visualization. B, Relative error norm (REN) between MR‐MOTUS warped reference images and the CS2Dt reconstruction over all dynamics (blue), and a baseline REN between the fixed MR‐MOTUS end‐exhale warped reference image and CS2Dt. The top row (I) shows the results for volunteer 1, whereas the bottom row (II) shows the results for volunteer 2. The comparison is also visualized in Supporting Information Video S1, and the reconstructed motion fields decomposed into the low‐rank model components are visualized in Supporting Information Videos S2 and S3
The warped reference images corresponding to the reconstructed motion fields, overlaid with the motion fields, are shown in Supporting Information Videos S2 and S3. Moreover, these show the decomposition into the reconstructed low‐rank components. For volunteer 1, the first two components show pseudo‐periodic temporal behavior, and the first is the most prominent in magnitude. Both components show not only realistic movement of organs such as the liver and kidney, but also small unrealistic motion in the spine near the liver in end‐exhale. Interestingly, the third component shows a temporal behavior with a slight upward drift, and the corresponding spatial motion‐field indicates a global rotation. Similar movement can also be observed in the ground‐truth CS2Dt reconstruction in Supporting Information Video S1. This movement could be caused by relaxation of the gluteus maximus muscle in the upper leg and buttocks. Similar motion patterns can be observed in Supporting Information Video S3 for volunteer 2, but the global rotation is less pronounced in the ground‐truth CS2Dt reconstruction.
4.2. Experiment 2: 3D in vivo respiratory motion reconstructions from abdominothoracic data
The comparison between MR‐MOTUS and the respiratory‐resolved image reconstruction is shown in Figures 3 and 4 and Supporting Information Videos S4 and S5. It can be observed that good visual agreement is obtained between the two reconstructions for both volunteers. This is especially visible from the position of the top of the liver dome. The Jacobian determinants of the reconstructed motion fields are shown in Figure 4A. The lungs show compression (blue), except for the arteries, and small deviations from unity can be observed in the rest of the body. A deviation from unity can be observed at the spine‐liver interface, where a large volumetric compression is reconstructed. We expect this is related to the attachment of liver tissue to the spine during exhalation. The quantitative comparison in Figure 4B shows the best agreement at motion state 10 (inhale) and the worst agreement at motion state 19 (exhale). The sharp peak at motion state 10 can be explained by the fact that we took motion state 10 as the reference image to compute the warped reference images for MR‐MOTUS. The warped reference images reconstructed from the respiratory‐sorted data, overlaid with the motion fields, are visualized for both volunteers in Supporting Information Videos S6 and S7. Moreover, these show the decomposition into the reconstructed low‐rank components. For both volunteers, the first component shows a pseudo‐periodic behavior in time and is the most prominent in magnitude; the other components make only minor contributions. These large contributions of pseudo‐periodic components could be due to the periodicity assumption underlying the respiratory sorting. Small unrealistic motion can be observed for volunteer 1 at the spine‐liver interface and at the back of the spine, similar to the 2D reconstructions. Additionally, a small rotating motion can be observed in the motion fields for volunteer 1 at the interface with the rib cage in the coronal slice on the bottom right. We expect the latter is caused by a combination of the volume‐preserving regularization and the inability of the motion model to resolve the sliding motion that is present in this area.
FIGURE 3.

Respiratory‐resolved image reconstruction (Resp. Resolved IR, left), MR‐MOTUS warped reference image (middle), and pixel‐wise absolute difference between the two reconstructions (right), as mentioned in Sections 3.2 and 4.2. The red and yellow horizontal lines indicate, respectively, the end‐exhale and end‐inhale positions. A video corresponding to this figure for volunteer 1 is provided in Supporting Information Video S4. A similar video for volunteer 2 is provided in Supporting Information Video S5
FIGURE 4.

A, Jacobian determinants of the reconstructed respiratory‐resolved motion fields in end‐inhale (top) and end‐exhale (bottom). The first end‐exhale and second end‐inhale positions were selected from all dynamics for this visualization. B, Relative error norm (REN) with respiratory‐resolved image reconstruction (RR‐IR) for every motion state. The blue graph indicates the REN between MR‐MOTUS and respiratory‐resolved image reconstruction. The orange graph indicates a baseline comparison between the (fixed) end‐inhale image of the MR‐MOTUS reconstruction and the (dynamic) respiratory‐resolved image reconstruction. The sharp peak is caused by taking the 10th dynamic as the reference image for this comparison. The top row shows the results for volunteer 1, and the bottom row shows the results for volunteer 2. Videos corresponding to the comparisons in (B) are provided in Supporting Information Videos S4 and S5
The time‐resolved 3D respiratory motion was reconstructed with 7.6 motion fields per second. The warped reference images reconstructed from the time‐resolved data, overlaid with the motion fields, are visualized for both volunteers in Supporting Information Videos S8 and S9. Similar motion is obtained as with the respiratory‐sorted data, but the reconstructed motion components are now similar in magnitude. All components show pseudo‐periodic temporal behavior, and the first component of volunteer 1 indicates a small drift. Similar to the respiratory‐resolved reconstructions, small unrealistic motion at the spine‐liver interface and the anterior side of the spine can be observed for volunteer 1. Additionally, the same small rotation can be observed near the rib cage in the bottom right of the coronal slice. The Jacobian determinants of the reconstructed motion fields are shown in Figure 5. Similar patterns can be observed in end‐exhale as for the respiratory‐resolved motion reconstructions. Interestingly, the end‐inhale image for volunteer 1 shows a small expansion in the lungs, possibly indicating that a deeper inhale than in the reference image was reconstructed, even though the reference image was obtained by respiratory sorting on end‐inhale. Finally, the comparison between the average kidney motion in the time‐resolved and respiratory‐resolved MR‐MOTUS reconstructions is visualized in Figure 6. The phases of the reconstructions are most similar in the feet‐head (FH) and anterior‐posterior (AP) directions, while in the left‐right (LR) direction different patterns can be observed. However, it should be noted that the motion in FH and AP is two orders of magnitude larger than in LR. The motion magnitude is similar for both reconstructions, but the respiratory‐resolved reconstruction shows a constant amplitude over time since it only reconstructs an average breathing cycle. The time‐resolved reconstruction shows changing motion amplitudes over time. The phase difference between the two reconstructions may be explained by imperfect respiratory sorting.
FIGURE 5.

Jacobian determinants of the reconstructed time‐resolved motion fields in end‐inhale (top) and end‐exhale (bottom). The left figure shows the results for volunteer 1 and the right figure the results for volunteer 2. Videos corresponding to the reconstructions in this figure are provided in Supporting Information Videos S8 and S9
FIGURE 6.

Average motion of the right kidney over time, for both the respiratory‐resolved and the time‐resolved MR‐MOTUS reconstructions as mentioned in Sections 3.2 and 4.2. The respiratory‐resolved MR‐MOTUS reconstruction was projected back on the time axis, as described in Sections 3.2 and 4.2. The average motion magnitudes were computed over a manually segmented mask of the right kidney. Videos of reconstructions corresponding to these figures are provided in Supporting Information Videos S8 and S9
4.3. Experiment 3: 3D in vivo head‐and‐neck motion reconstructions
The time‐resolved 3D head‐and‐neck motion was reconstructed with 9.3 motion fields per second. The MR‐MOTUS warped reference images from 3D data acquired during head‐and‐neck motion are visualized for both volunteers in Supporting Information Videos S10 and S11. Clearly, rigid motion fields are reconstructed within the skull, and nonrigid motion fields at the neck. Figure 7 shows the Jacobian determinants of the reconstructed motion fields over time (A), and the reconstructed temporal components (B) for both volunteers. The Jacobian determinant is close to 1 over the whole reconstructed time, with slightly more deviations for volunteer 1. These can be attributed to the larger and more irregular motion of volunteer 1 compared with volunteer 2. The temporal components are relatively flat at the start and the end, corresponding to the static begin and end positions. The more extreme motion of volunteer 1 can also be observed from the larger magnitudes of the temporal components and from Supporting Information Video S10. Figure 8 shows the checkpoint validation for volunteer 2. It can be observed that good agreement is obtained between the fully sampled checkpoint images and the MR‐MOTUS reconstructions.
FIGURE 7.

Head‐and‐neck reconstructions as described in Sections 3.3 and 4.3. A, The mean (solid line) and standard deviation (shaded area) of the Jacobian determinants of the reconstructed motion fields over time. B, The reconstructed temporal profiles Ψ, scaled by the norm of the corresponding spatial components Φ to allow a comparison of their magnitudes. The top and bottom rows, respectively, show the results for volunteers 1 and 2. Videos corresponding to the reconstructions in these figures are provided in Supporting Information Videos S10 and S11
FIGURE 8.

Checkpoint validation for the head‐and‐neck reconstructions of volunteer 2, as mentioned in Sections 3.3 and 4.3. The left column shows the fully sampled checkpoint images, the middle column shows the MR‐MOTUS warped reference images, and the right column shows the absolute pixel‐wise differences. The top part corresponds to the comparison with the checkpoint acquired right before the start of the motion, and the bottom part corresponds to the checkpoint acquired right after the end of the motion. A video corresponding to this figure is provided in Supporting Information Video S11. A similar video for volunteer 1 is provided in Supporting Information Video S10
5. DISCUSSION
We have previously introduced MR‐MOTUS, 34 a framework to estimate motion directly from minimal k‐space data and a reference image by exploiting spatial correlation in internal body motion. In this work, we introduce low‐rank MR‐MOTUS: an extension of MR‐MOTUS from 3D to 3D+t reconstructions in a realistic experimental setting, where both the reference image and the motion fields are reconstructed from data acquired during free‐breathing. Low‐rank MR‐MOTUS employs a low‐rank motion model that constrains the degrees of freedom in space and time, thereby reducing memory consumption and functioning as a regularization in both space and time. It was demonstrated that the proposed method can reconstruct high‐quality 3D motion fields with a temporal resolution of at least 7.6 motion fields per second, while showing consistency with static, respiratory‐resolved, and time‐resolved image reconstructions. Prospectively undersampled data were acquired with a non‐Cartesian trajectory and multichannel receivers, thereby bridging the gap toward clinical application.
The ability of the proposed framework to estimate time‐resolved rather than respiratory‐resolved motion is promising, as it allows characterization of drifts and intra‐ and inter‐cycle breathing patterns. This is in contrast with respiratory‐resolved methods that require sorting to obtain suitable images. 9 , 10 , 11 , 12 , 13 , 19 , 50 , 51 , 52 , 66 The sorting effectively results in (a motion model for) average breathing motion, which may have trouble capturing drifts and inter‐cycle variations. Several methods have been proposed to reconstruct time‐resolved MR‐images without the need for retrospective sorting. However, the reported temporal resolution was too low, 28 , 30 , 31 or the FOV was too small. 32 , 33 The time‐resolved motion estimation of low‐rank MR‐MOTUS in combination with an MR‐LINAC can be particularly beneficial for MR‐guided radiotherapy; the (retrospective) reconstruction of 3D+t time‐resolved tumor and organs‐at‐risk motion during treatment can be used for accurate dose accumulation, 38 allowing for an accurate assessment of the treatments.
The resulting motion model explicitly separates a high‐dimensional static spatial component from a low‐dimensional dynamic temporal component. The low dimensionality of the dynamic behavior could be exploited to reduce the number of parameters and reconstruction times of future real‐time reconstructions, analogously to recently proposed approaches in Refs. [7, 19, 22, 53]. Our method could thereby form the basis for future work on real‐time MR‐based motion estimation, where reconstructions are performed on‐the‐fly to track tumor and organs‐at‐risk motion.
Low‐rank models in the context of motion estimation have been investigated before in several works, most of which retrospectively perform compression to a low‐rank model using principal component analysis. 19 , 35 , 37 , 38 Others decouple the motion fields into spatial components and temporal components based on surrogate signals. 21 , 22 , 39 The approach in this work is different in the sense that it explicitly and a priori enforces a structure that yields low‐rank motion fields, and does not assume dependence on surrogate signals for the motion model. Similar approaches have been studied in the context of image reconstruction. 30 , 32 , 56 , 57 , 58 , 59 , 60
This work is subject to some limitations and assumptions that should be addressed. Both the respiratory‐resolved and time‐resolved 3D respiratory motion reconstructions in Sections 3.2 and 4.2 look realistic in general. Yet, small unrealistic motion is reconstructed near discontinuities in the true motion fields that are present near sliding or attaching/detaching organ surfaces. This can be observed, for example, in Supporting Information Video S2, at the spine‐liver interface in end‐exhale. This could possibly be resolved with region‐specific 61 or nonparametric motion models, 62 but this is beyond the scope of this work.
The respiratory motion reconstruction parameters in this work were fixed and were based on the results of a grid search for a single volunteer. We have empirically observed that the parameters from the grid search for a single volunteer are generalizable and yield acceptable results for all respiratory motion reconstructions in this work. Hence, in a realistic setting where grid searches may not be feasible due to time constraints, similar reconstruction parameter settings could be employed. The number of components R may be more subject‐dependent, and can be determined with analyses similar to the ones described in Supporting Information Section 1.
This work is based on the assumption that a motion‐free reference image is available, and that it can be warped with unknown motion fields into dynamic image series of a moving anatomy. For respiratory motion, these reference images could be extracted by means of respiratory binning, but this may be less straightforward for other body motion such as bladder filling, bowel movements, and prostate movement. However, these types of motion take place on a slower temporal scale, so more time would be available to gather information and update the reference image. Additionally, the modeling assumption regarding the warping of a single reference image into the dynamic image series may be partly violated due to unwanted contrast effects such as susceptibility‐induced variations. Yet, these effects can be assumed small at the targeted field strength of 1.5T. Nevertheless, future work will consider incorporating such additional contrast effects in the signal model, and performing joint reconstructions of the motion fields and the reference image.
Resulting from the grid search for optimal reconstruction parameters, we have used relatively low‐resolution reference images. This delivered the best performance and allowed a threefold reduction in reconstruction time. These low‐resolution reference images can still yield high‐resolution motion fields by means of interpolation, but provide less precise information regarding organ boundaries.
Contrary to standard coil compression techniques, the aim of the compression of multichannel data to a single channel in this work is to obtain a homogeneous coil sensitivity. Consequently, this compression is suboptimal in terms of SNR. 63 , 64 Supporting Information Figure S3 analyzes the SNR loss between a Roemer coil combination 63 and the proposed coil compression on the 2D data. This shows an SNR loss factor between 1.5 and 2.5 in most of the body, which increases toward the boundary of the body. Good results were obtained with the coil compression introduced in this work, but more advanced techniques could possibly be used to improve the SNR after the compression. We refer to Supporting Information Section 2 for a more extensive discussion.
The last point of improvement is the validation of time‐resolved 3D+t motion fields. In general, this is not straightforward, and we considered three viable options for this: (a) in silico validation with a digital phantom, (b) validation with an MR‐compatible motion phantom, and (c) in vivo validation against respiratory‐resolved image reconstruction. We have opted for the third option, since this was considered the closest to a practical use‐case. The in silico validation does not consider real acquisition‐related data corruption (eg, eddy currents, flow effects), and can, in the case of, for example, the XCAT phantom, 65 yield unstable motion fields. 66 MR‐compatible motion phantoms, although useful for proof‐of‐principle validations, represent simplified in vivo anatomies.
The intended application of MR‐MOTUS is MR‐guided radiotherapy, possibly in real‐time. However, the current reconstruction times in MATLAB on a desktop workstation are around 4 min for 2D+t with 40 motion fields/second, around 6 min for the respiratory‐resolved 3D reconstruction, and around 50 min for 3D+t time‐resolved respiratory motion with 7.6 motion fields/second. Hence, the current implementation of the method is not directly applicable for real‐time processing, but reconstruction times may be reduced with a different programming language, improved hardware, GPU‐accelerations, or deep learning.
6. CONCLUSION
We have introduced low‐rank MR‐MOTUS, an extension of MR‐MOTUS that allows retrospective reconstruction of time‐resolved 3D+t motion fields from prospectively undersampled k‐space data and one reference image. Reconstructions were performed for 2D/3D respiratory motion and 3D head‐and‐neck motion. A temporal resolution of at least 7.6 motion fields per second was obtained for the 3D reconstructions, and the motion fields were consistent with image reconstructions. For MR‐guided radiotherapy, the time‐resolved 3D motion fields could be used to reconstruct the respiratory‐motion‐compensated accumulated dose during the treatment. Furthermore, the explicit decomposition of motion fields into static and dynamic components could form the basis for future work toward real‐time MR‐guided radiotherapy.
DATA AVAILABILITY STATEMENT
Code that produces similar results as presented in this study is openly available at https://github.com/nrfhuttinga/LowRank_MRMOTUS.git.
Supporting information
FIGURE S1 Results of the singular value analyses in Supporting Information Section 1 for 3D+t respiratory motion (left), and 3D+t head‐and‐neck motion (right). This figure clearly indicates that 3D+t motion fields possess the low‐rank property; models with R = 3 and R = 6 can, respectively, capture 97.9% and 99.9% of the variance of 3D+t respiratory motion and 3D+t head‐and‐neck motion, allowing for a significant reduction in the number of unknowns
FIGURE S2 This figure shows the effect of a different number of components R = 1, …, 20 on the respiratory motion reconstructions, as discussed in Supporting Information Section 1. The metrics were evaluated on respiratory‐resolved MR‐MOTUS and image reconstructions, and the means over all 20 reconstructed respiratory phases are visualized in this figure. Only a minimal change can be observed in both metrics, showing that the effect of R on the results is minimal for respiratory motion. NMI = Normalized Mutual Information, REN = Relative Error Norm
FIGURE S3 A, Roemer reconstruction. 62 B, The proposed coil compression, as discussed in Supporting Information Section 2. C, SNR loss factor between (A) and (B). D, The histogram of the SNR loss factor in (C). The SNR loss factor is between 1.5 and 2.5 in most of the body and increases toward the boundary of the body
FIGURE S4 Results of the parameter search as mentioned in Supporting Information Sections 5 and 3.2. A, The effect of the reference image resolution and reference image respiratory binning phase on the reconstruction quality. B, The effect of the reference image resolution on the reconstruction time. In the figure, "InhaleBinned" refers to the binning phase for the reference image (InhaleBinned = 1 for inhale, InhaleBinned = 0 for exhale), "Resolution" denotes the spatial resolution of the reference image, and "SplineOrder" denotes the number of spline basis functions defined per spatial dimension
VIDEO S1 This is an animated figure and should be viewed under Supporting Information. 2D‐t compressed sensing reconstruction (left), MR‐MOTUS warped reference images (middle), and pixel‐wise absolute differences between the two reconstructions (right), as mentioned in Sections 3.1 and 4.1. The top row shows reconstructions for volunteer 1, and the bottom row for volunteer 2
VIDEO S2 This is an animated figure and should be viewed under Supporting Information. MR‐MOTUS warped reference images overlaid with reconstructed dynamic motion fields from 2D time‐resolved data, as mentioned in Sections 3.1 and 4.1. The image shows a decomposition into the reconstructed components Φ (spatial) and Ψ (temporal) for volunteer 1. For visualization purposes, the components were scaled to make their magnitudes comparable
VIDEO S3 This is an animated figure and should be viewed under Supporting Information. MR‐MOTUS warped reference images overlaid with reconstructed dynamic motion fields from 2D time‐resolved data, as mentioned in Sections 3.1 and 4.1. The image shows a decomposition into the reconstructed components Φ (spatial) and Ψ (temporal) for volunteer 2. For visualization purposes, the components were scaled to make their magnitudes comparable
VIDEO S4 This is an animated figure and should be viewed under Supporting Information. Respiratory‐resolved image reconstruction (Resp. resolved IR, left), MR‐MOTUS warped reference images (middle), and pixel‐wise absolute differences between the two reconstructions (right), as mentioned in Sections 3.2 and 4.2. The visualization shows data from volunteer 1
VIDEO S5 This is an animated figure and should be viewed under Supporting Information. Respiratory‐resolved image reconstruction (Resp. resolved IR, left), MR‐MOTUS warped reference images (middle), and pixel‐wise absolute differences between the two reconstructions (right), as mentioned in Sections 3.2 and 4.2. The visualization shows data from volunteer 2
VIDEO S6 This is an animated figure and should be viewed under Supporting Information. MR‐MOTUS warped reference images overlaid with reconstructed dynamic motion fields from respiratory‐sorted data, as mentioned in Sections 3.2 and 4.2. The image shows a decomposition into the reconstructed components Φ (spatial) and Ψ (temporal) for volunteer 1. For visualization purposes, the components were scaled to make their magnitudes comparable
VIDEO S7 This is an animated figure and should be viewed under Supporting Information. MR‐MOTUS warped reference images overlaid with reconstructed dynamic motion fields from respiratory‐sorted data, as mentioned in Sections 3.2 and 4.2. The image shows a decomposition into the reconstructed components Φ (spatial) and Ψ (temporal) for volunteer 2. For visualization purposes, the components were scaled to make their magnitudes comparable
VIDEO S8 This is an animated figure and should be viewed under Supporting Information. MR‐MOTUS warped reference images overlaid with reconstructed dynamic motion fields from 3D time‐resolved data, as mentioned in Sections 3.2 and 4.2. The image shows a decomposition into the reconstructed components Φ (spatial) and Ψ (temporal) for volunteer 1. For visualization purposes, the components were scaled to make their magnitudes comparable
VIDEO S9 This is an animated figure and should be viewed under Supporting Information. MR‐MOTUS warped reference images overlaid with reconstructed dynamic motion fields from 3D time‐resolved data, as mentioned in Sections 3.2 and 4.2. The image shows a decomposition into the reconstructed components Φ (spatial) and Ψ (temporal) for volunteer 2. For visualization purposes, the components were scaled to make their magnitudes comparable
VIDEO S10 This is an animated figure and should be viewed under Supporting Information. MR‐MOTUS warped reference images resulting from the 3D head‐and‐neck motion reconstructions for volunteer 1, as mentioned in Sections 3.3 and 4.3
VIDEO S11 This is an animated figure and should be viewed under Supporting Information. MR‐MOTUS warped reference images resulting from the 3D head‐and‐neck motion reconstructions for volunteer 2, as mentioned in Sections 3.3 and 4.3
VIDEO S12 This is an animated figure and should be viewed under Supporting Information. Respiratory‐resolved image reconstruction (Resp. resolved IR, left), MR‐MOTUS warped reference images (middle), and pixel‐wise absolute differences between the two reconstructions (right), as mentioned in Supporting Information Sections 5 and 3.2. The four blocks show reconstructions with different reconstruction parameter settings. “InhaleBinned” denotes whether the reference image is binned in inhale (1) or exhale (0). ‘Ref. resolution” denotes the resolution of the reference image in millimeters. All motion fields were reconstructed without regularization and with 9 cubic spline functions in every direction
Huttinga NRF, Bruijnen T, van den Berg CAT, Sbrizzi A. Nonrigid 3D motion estimation at high temporal resolution from prospectively undersampled k‐space data using low‐rank MR‐MOTUS. Magn Reson Med. 2021;85:2309‐2326. 10.1002/mrm.28562
REFERENCES
1. Lagendijk JJ, Raaymakers BW, Raaijmakers AJ, et al. MRI/linac integration. Radiother Oncol. 2008;86:25‐29.
2. Mutic S, Dempsey JF. The ViewRay system: magnetic resonance‐guided and controlled radiotherapy. Semin Radiat Oncol. 2014;24:196‐199.
3. Keall PJ, Barton M, Crozier S. The Australian magnetic resonance imaging‐linac program. Semin Radiat Oncol. 2014;24:203‐206.
4. Raaymakers B, Lagendijk J, Overweg J, et al. Integrating a 1.5 T MRI scanner with a 6 MV accelerator: proof of concept. Phys Med Biol. 2009;54:N229.
5. Keall PJ, Mageras GS, Balter JM, et al. The management of respiratory motion in radiation oncology report of AAPM Task Group 76. Med Phys. 2006;33:3874‐3900.
6. Murphy MJ, Isaksson M, Jaldén J. Adaptive filtering to predict lung tumor motion during free breathing. In: CARS 2002 Computer Assisted Radiology and Surgery. 2002:539‐544.
7. Huttinga NRF, Bruijnen T, van den Berg CAT, Sbrizzi A. Real‐time 3D respiratory motion estimation for MR‐guided radiotherapy using low‐rank MR‐MOTUS. In: Proceedings of the 28th Annual Meeting of ISMRM, Paris, France, 2020. Abstract 0598.
8. de Muinck Keizer DM, Kerkmeijer LGW, Maspero M, et al. Soft‐tissue prostate intrafraction motion tracking in 3D cine‐MR for MR‐guided radiotherapy. Phys Med Biol. 2019;64:235008.
9. Breuer K, Meyer CB, Breuer FA, et al. Stable and efficient retrospective 4D‐MRI using non‐uniformly distributed quasi‐random numbers. Phys Med Biol. 2018;63:075002.
10. Deng Z, Pang J, Yang W, et al. Four‐dimensional MRI using three‐dimensional radial sampling with respiratory self‐gating to characterize temporal phase‐resolved respiratory motion in the abdomen. Magn Reson Med. 2016;75:1574‐1585.
11. Han F, Zhou Z, Cao M, Yang Y, Sheng K, Hu P. Respiratory motion‐resolved, self‐gated 4D‐MRI using rotating cartesian k‐space (ROCK). Med Phys. 2017;44:1359‐1368.
12. Cai J, Chang Z, Wang Z, Paul Segars W, Yin FF. Four‐dimensional magnetic resonance imaging (4D‐MRI) using image‐based respiratory surrogate: a feasibility study. Med Phys. 2011;38:6384‐6394.
13. Feng L, Axel L, Chandarana H, Block KT, Sodickson DK, Otazo R. XD‐GRASP: golden‐angle radial MRI with reconstruction of extra motion‐state dimensions using compressed sensing. Magn Reson Med. 2016;75:775‐788.
14. Stemkens B, Paulson ES, Tijssen RH. Nuts and bolts of 4D‐MRI for radiotherapy. Phys Med Biol. 2018;63:21TR01.
15. Paganelli C, Lee D, Kipritidis J, et al. Feasibility study on 3D image reconstruction from 2D orthogonal cine‐MRI for MRI‐guided radiotherapy. J Med Imaging Radiat Oncol. 2018;62:389‐400.
16. Bjerre T, Crijns S, af Rosenschöld PM, et al. Three‐dimensional MRI‐linac intra‐fraction guidance using multiple orthogonal cine‐MRI planes. Phys Med Biol. 2013;58:4943.
17. Tryggestad E, Flammang A, Hales R, et al. 4D tumor centroid tracking using orthogonal 2D dynamic MRI: implications for radiotherapy planning. Med Phys. 2013;40:091712.
18. Brix L, Ringgaard S, Sørensen TS, Poulsen PR. Three‐dimensional liver motion tracking using real‐time two‐dimensional MRI. Med Phys. 2014;41:042302.
19. Stemkens B, Tijssen RH, de Senneville BD, Lagendijk JJ, van den Berg CAT. Image‐driven, model‐based 3D abdominal motion estimation for MR‐guided radiotherapy. Phys Med Biol. 2016;61:5335.
20. Seregni M, Paganelli C, Lee D, et al. Motion prediction in MRI‐guided radiotherapy based on interleaved orthogonal cine‐MRI. Phys Med Biol. 2016;61:872.
21. McClelland JR, Hawkes DJ, Schaeffter T, King AP. Respiratory motion models: a review. Med Image Anal. 2013;17:19‐42.
22. McClelland JR, Modat M, Arridge S, et al. A generalized framework unifying image registration and respiratory motion models and incorporating image reconstruction, for partial image data or full images. Phys Med Biol. 2017;62:4273.
23. Pruessmann KP, Weiger M, Scheidegger MB, Boesiger P. SENSE: sensitivity encoding for fast MRI. Magn Reson Med. 1999;42:952‐962.
24. Griswold MA, Jakob PM, Heidemann RM, et al. Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magn Reson Med. 2002;47:1202‐1210.
25. Glitzner M, de Senneville B, Lagendijk J, Raaymakers B, Crijns S. On‐line 3D motion estimation using low resolution MRI. Phys Med Biol. 2015;60:N301.
26. Stemkens B, Tijssen R, van den Berg CAT, et al. Optical flow analysis on undersampled radial acquisitions for real‐time tracking of the pancreas in MR‐guided radiotherapy. In: Proceedings of the 21st Annual Meeting of ISMRM, Salt Lake City, USA, 2013. Abstract 4325.
27. Roujol S, Ries M, Moonen C, de Senneville BD. Automatic nonrigid calibration of image registration for real‐time MR‐guided HIFU ablations of mobile organs. IEEE Trans Med Imaging. 2011;30:1737‐1745.
28. Yuan J, Wong OL, Zhou Y, Cheung KY, Yu SK. A fast volumetric 4D‐MRI with sub‐second frame rate for abdominal motion monitoring and characterization in MRI‐guided radiotherapy. Quant Imaging Med Surg. 2019;9:1303.
29. Lustig M, Donoho DL, Santos JM, Pauly JM. Compressed sensing MRI. IEEE Signal Process Mag. 2008;25:72‐82.
30. Ong F, Zhu X, Cheng JY, Johnson KM, Larson PEZ, Vasanawala SS, Lustig M. Extreme MRI: large‐scale volumetric dynamic imaging from continuous non‐gated acquisitions. Magn Reson Med. 2020;84:1763‐1780.
31. King AP, Buerger C, Tsoumpas C, Marsden PK, Schaeffter T. Thoracic respiratory motion estimation from MRI using a statistical model and a 2‐D image navigator. Med Image Anal. 2012;16:252‐264.
32. Fu M, Barlaz MS, Holtrop JL, et al. High‐frame‐rate full‐vocal‐tract 3D dynamic speech imaging. Magn Reson Med. 2017;77:1619‐1629.
33. Burdumy M, Traser L, Burk F, et al. One‐second MRI of a three‐dimensional vocal tract to measure dynamic articulator modifications. J Magn Reson Imaging. 2017;46:94‐101.
34. Huttinga NRF, van den Berg CAT, Luijten PR, Sbrizzi A. MR‐MOTUS: model‐based non‐rigid motion estimation for MR‐guided radiotherapy using a reference image and minimal k‐space data. Phys Med Biol. 2020;65:015004.
35. Zhang Q, Pevsner A, Hertanto A, et al. A patient‐specific respiratory model of anatomical motion for radiation treatment planning. Med Phys. 2007;34:4772‐4781.
36. Li R, Lewis JH, Jia X, et al. On a PCA‐based lung motion model. Phys Med Biol. 2011;56:6009.
37. Mishra P, Li R, Mak RH, et al. An initial study on the estimation of time‐varying volumetric treatment images and 3D tumor localization from single MV cine EPID images. Med Phys. 2014;41:081713.
38. Cai W, Hurwitz MH, Williams CL, et al. 3D delivered dose assessment using a 4DCT‐based motion model. Med Phys. 2015;42:2897‐2907.
39. Low DA, Parikh PJ, Lu W, et al. Novel breathing motion model for radiotherapy. Int J Radiat Oncol Biol Phys. 2005;63:921‐929.
40. Huttinga NRF, Bruijnen T, van den Berg CAT, Luijten PR, Sbrizzi A. Prospective 3D+t non‐rigid motion estimation at high frame‐rate from highly undersampled k‐space data: validation and preliminary in‐vivo results. In: Proceedings of the 27th Annual Meeting of ISMRM, Montréal, Canada, 2019. Abstract 1180.
41. Rueckert D, Sonoda LI, Hayes C, Hill DL, Leach MO, Hawkes DJ. Nonrigid registration using free‐form deformations: application to breast MR images. IEEE Trans Med Imaging. 1999;18:712‐721.
42. Zachiu C, de Senneville BD, Moonen CT, Raaymakers BW, Ries M. Anatomically plausible models and quality assurance criteria for online mono‐ and multi‐modal medical image registration. Phys Med Biol. 2018;63:155016.
43. Rohlfing T, Maurer CR, Bluemke DA, Jacobs MA. Volume‐preserving nonrigid registration of MR breast images using free‐form deformation with an incompressibility constraint. IEEE Trans Med Imaging. 2003;22:730‐741.
44. Winkelmann S, Schaeffter T, Koehler T, Eggers H, Doessel O. An optimal radial profile order based on the Golden Ratio for time‐resolved MRI. IEEE Trans Med Imaging. 2007;26:68‐76.
45. Chan RW, Ramsay EA, Cunningham CH, Plewes DB. Temporal stability of adaptive 3D radial MRI using multidimensional golden means. Magn Reson Med. 2009;61:354‐363.
46. Liu DC, Nocedal J. On the limited memory BFGS method for large scale optimization. Math Program. 1989;45:503‐528.
47. Becker S. L‐BFGS‐B, converted from Fortran to C, with Matlab wrapper. https://github.com/stephenbeckr/L-BFGS-B-C; 2019.
48. Uecker M, Ong F, Tamir JI, et al. Berkeley advanced reconstruction toolbox. In: Proceedings of the 23rd Annual Meeting of ISMRM, Toronto, Canada, 2015. Abstract 2486.
49. Jiang W, Ong F, Johnson KM, et al. Motion robust high resolution 3D free‐breathing pulmonary MRI using dynamic 3D image self‐navigator. Magn Reson Med. 2018;79:2954‐2967.
50. Pruessmann KP, Weiger M, Börnert P, Boesiger P. Advances in sensitivity encoding with arbitrary k‐space trajectories. Magn Reson Med. 2001;46:638‐651.
51. Sbrizzi A, Huttinga NRF, van den Berg CAT. Acquisition, reconstruction and uncertainty quantification of 3D non‐rigid motion fields directly from k‐space data at 100 Hz frame rate. In: Proceedings of the 27th Annual Meeting of ISMRM, Montréal, Canada, 2019. Abstract 0795.
52. Lingala SG, Hu Y, DiBella E, Jacob M. Accelerated dynamic MRI exploiting sparsity and low‐rank structure: k‐t SLR. IEEE Trans Med Imaging. 2011;30:1042‐1054.
53. Liang ZP. Spatiotemporal imaging with partially separable functions. In: 2007 4th IEEE International Symposium on Biomedical Imaging: From Nano to Macro. IEEE; 2007:988‐991.
54. Rank CM, Heußer T, Buzan MT, et al. 4D respiratory motion‐compensated image reconstruction of free‐breathing radial MR data with very high undersampling. Magn Reson Med. 2017;77:1170‐1183.
55. Zhao B, Haldar JP, Christodoulou AG, Liang ZP. Image reconstruction from highly undersampled (k, t)‐space data with joint partial separability and sparsity constraints. IEEE Trans Med Imaging. 2012;31:1809‐1820.
56. Zhao B, Haldar JP, Brinegar C, Liang ZP. Low rank matrix recovery for real‐time cardiac MRI. In: 2010 IEEE International Symposium on Biomedical Imaging: From Nano to Macro. IEEE; 2010:996‐999.
57. Haldar JP, Liang ZP. Spatiotemporal imaging with partially separable functions: a matrix recovery approach. In: 2010 7th IEEE International Symposium on Biomedical Imaging: From Nano to Macro. IEEE; 2010:716‐719.
58. Delmon V, Rit S, Pinho R, Sarrut D. Registration of sliding objects using direction dependent B‐splines decomposition. Phys Med Biol. 2013;58:1303‐1314.
59. Fu Y, Liu S, Li HH, Li H, Yang D. An adaptive motion regularization technique to support sliding motion in deformable image registration. Med Phys. 2018;45:735‐747.
60. Roemer PB, Edelstein WA, Hayes CE, Souza SP, Mueller OM. The NMR phased array. Magn Reson Med. 1990;16:192‐225.
61. Kellman P, McVeigh ER. Image reconstruction in SNR units: a general method for SNR measurement. Magn Reson Med. 2005;54:1439‐1447.
62. Segars WP, Sturgeon G, Mendonca S, Grimes J, Tsui BMW. 4D XCAT phantom for multimodality imaging research. Med Phys. 2010;37:4902‐4915.
63. Eiben B, Bertholet J, Menten MJ, Nill S, Oelfke U, McClelland JR. Consistent and invertible deformation vector fields for a breathing anthropomorphic phantom: a post‐processing framework for the XCAT phantom. Phys Med Biol. 2020;65:165005.
64. Feng L, Tyagi N, Otazo R. MRSIGMA: Magnetic Resonance SIGnature MAtching for real‐time volumetric imaging. Magn Reson Med. 2020;84:1280‐1292.
65. Barnett AH, Magland J, af Klinteberg L. A parallel nonuniform fast Fourier transform library based on an “exponential of semicircle” kernel. SIAM J Sci Comput. 2019;41:C479‐C504.
66. Pipe JG, Menon P. Sampling density compensation in MRI: rationale and an iterative numerical solution. Magn Reson Med. 1999;41:179‐186.
Associated Data
Supplementary Materials
FIGURE S1 Results of the singular value analyses in Supporting Information Section 1 for 3D+t respiratory motion (left), and 3D+t head‐and‐neck motion (right). This figure clearly indicates that 3D+t motion fields possess the low‐rank property; models with R = 3 and R = 6 can, respectively, capture 97.9% and 99.9% of the variance of 3D+t respiratory motion and 3D+t head‐and‐neck motion, allowing for a significant reduction in the number of unknowns
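For readers who wish to reproduce this type of singular value analysis on their own motion-field data, the sketch below shows one way to compute the fraction of variance captured by a rank-R model. It is an illustrative example only; the array name and layout (displacements stacked along the rows, one column per dynamic) are assumptions and not the authors' implementation.

import numpy as np

# motion_fields: space-time matrix of displacements; rows stack the x/y/z
# displacement of every voxel, columns index the dynamics (illustrative layout).
motion_fields = np.random.randn(3 * 1000, 200)  # placeholder data

# Singular values of the space-time motion matrix.
s = np.linalg.svd(motion_fields, compute_uv=False)

# Fraction of total variance captured by the first R components.
captured = np.cumsum(s**2) / np.sum(s**2)
for R in (3, 6):
    print(f"R = {R}: {100 * captured[R - 1]:.1f}% of the variance captured")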
FIGURE S2 This figure shows the effect of a different number of components R = 1, …, 20 on the respiratory motion reconstructions, as discussed in Supporting Information Section 1. The metrics were evaluated on respiratory‐resolved MR‐MOTUS and image reconstructions, and the means over all 20 reconstructed respiratory phases are visualized in this figure. Only a minimal change can be observed in both metrics, showing that the effect of R on the results is minimal for respiratory motion. NMI = Normalized Mutual Information, REN = Relative Error Norm
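The two metrics reported in this figure can be computed, for example, as follows. This is a hedged sketch using one common definition of each metric (NMI estimated from a joint intensity histogram, REN as a norm ratio), which may differ in detail from the implementation used for the paper.

import numpy as np

def relative_error_norm(recon, reference):
    # REN: norm of the difference, relative to the norm of the reference.
    return np.linalg.norm(recon - reference) / np.linalg.norm(reference)

def normalized_mutual_information(a, b, bins=64):
    # NMI = (H(A) + H(B)) / H(A, B), estimated from a joint intensity histogram.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())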
FIGURE S3 A, Roemer reconstruction. 62 B, The coil‐compressed reconstruction, as discussed in Supporting Information Section 2. C, SNR loss factor between (A) and (B). D, The histogram of the SNR loss factor in (C). The SNR loss factor is between 1.5 and 2.5 in most of the body and increases toward the boundary of the body
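As a small illustration of how such an SNR loss factor map and its histogram can be formed once voxel-wise SNR maps of both reconstructions are available, consider the sketch below; the variable names and placeholder data are assumptions for demonstration only.

import numpy as np

# snr_roemer, snr_compressed: voxel-wise SNR maps of the Roemer and the
# coil-compressed reconstructions (placeholder data for illustration).
snr_roemer = np.abs(np.random.randn(128, 128)) + 1.0
snr_compressed = snr_roemer / 2.0

# Voxel-wise SNR loss factor (cf. panel C) and its histogram (cf. panel D).
loss_factor = snr_roemer / np.maximum(snr_compressed, 1e-12)
counts, bin_edges = np.histogram(loss_factor, bins=50, range=(0.0, 5.0))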
FIGURE S4 Results of the parameter search as mentioned in Supporting Information Sections 5 and 3.2. A, The effect of the reference image resolution and reference image respiratory binning phase on the reconstruction quality. B, The effect of the reference image resolution on the reconstruction time. In the figure, “InhaleBinned” refers to the binning phase for the reference image (InhaleBinned = 1 for inhale, InhaleBinned = 0 for exhale), “Resolution” denotes the spatial resolution of the reference image, and “SplineOrder” denotes the number of spline basis functions defined per spatial dimension
VIDEO S1 This is an animated figure and should be viewed under Supporting Information. 2D‐t compressed sensing reconstruction (left), MR‐MOTUS warped reference images (middle), and pixel‐wise absolute differences between the two reconstructions (right), as mentioned in Sections 3.1 and 4.1. The top row shows reconstructions for volunteer 1, and the bottom row for volunteer 2
VIDEO S2 This is an animated figure and should be viewed under Supporting Information. MR‐MOTUS warped reference images overlaid with reconstructed dynamic motion fields from 2D time‐resolved data, as mentioned in Sections 3.1 and 4.1. The image shows a decomposition into the reconstructed spatial components and the temporal components Ψ for volunteer 1. For visualization purposes, the components were rescaled
VIDEO S3 This is an animated figure and should be viewed under Supporting Information. MR‐MOTUS warped reference images overlaid with reconstructed dynamic motion fields from 2D time‐resolved data, as mentioned in Sections 3.1 and 4.1. The image shows a decomposition into the reconstructed spatial components and the temporal components Ψ for volunteer 2. For visualization purposes, the components were rescaled
VIDEO S4 This is an animated figure and should be viewed under Supporting Information. Respiratory‐resolved image reconstruction (Resp. resolved IR, left), MR‐MOTUS warped reference images (middle), and pixel‐wise absolute differences between the two reconstructions (right), as mentioned in Sections 3.2 and 4.2. The visualization shows data from volunteer 1
VIDEO S5 This is an animated figure and should be viewed under Supporting Information. Respiratory‐resolved image reconstruction (Resp. resolved IR, left), MR‐MOTUS warped reference images (middle), and pixel‐wise absolute differences between the two reconstructions (right), as mentioned in Sections 3.2 and 4.2. The visualization shows data from volunteer 2
VIDEO S6 This is an animated figure and should be viewed under Supporting Information. MR‐MOTUS warped reference images overlaid with reconstructed dynamic motion fields from respiratory‐sorted data, as mentioned in Sections 3.2 and 4.2. The image shows a decomposition into the reconstructed spatial components and the temporal components Ψ for volunteer 1. For visualization purposes, the components were rescaled
VIDEO S7 This is an animated figure and should be viewed under Supporting Information. MR‐MOTUS warped reference images overlaid with reconstructed dynamic motion fields from respiratory‐sorted data, as mentioned in Sections 3.2 and 4.2. The image shows a decomposition into the reconstructed spatial components and the temporal components Ψ for volunteer 2. For visualization purposes, the components were rescaled
VIDEO S8 This is an animated figure and should be viewed under Supporting Information. MR‐MOTUS warped reference images overlaid with reconstructed dynamic motion fields from respiratory‐sorted data, as mentioned in Sections 3.2 and 4.2. The image shows a decomposition into the reconstructed spatial components and the temporal components Ψ for volunteer 1. For visualization purposes, the components were rescaled
VIDEO S9 This is an animated figure and should be viewed under Supporting Information. MR‐MOTUS warped reference images overlaid with reconstructed dynamic motion fields from respiratory‐sorted data, as mentioned in Sections 3.2 and 4.2. The image shows a decomposition into the reconstructed spatial components and the temporal components Ψ for volunteer 2. For visualization purposes, the components were rescaled
VIDEO S10 This is an animated figure and should be viewed under Supporting Information. MR‐MOTUS warped reference images resulting from the 3D head‐and‐neck motion reconstructions for volunteer 1, as mentioned in Sections 3.3 and 4.3
VIDEO S11 This is an animated figure and should be viewed under Supporting Information. MR‐MOTUS warped reference images resulting from the 3D head‐and‐neck motion reconstructions for volunteer 2, as mentioned in Sections 3.3 and 4.3
VIDEO S12 This is an animated figure and should be viewed under Supporting Information. Respiratory‐resolved image reconstruction (Resp. resolved IR, left), MR‐MOTUS warped reference images (middle), and pixel‐wise absolute differences between the two reconstructions (right), as mentioned in Supporting Information Sections 5 and 3.2. The four blocks show reconstructions with different reconstruction parameter settings. “InhaleBinned” denotes whether the reference image is binned in inhale (1) or exhale (0). “Ref. resolution” denotes the resolution of the reference image in millimeters. All motion fields were reconstructed without regularization and with 9 cubic spline functions in every direction
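Several of the video captions above refer to a factorization of the reconstructed space-time motion fields into spatial components and temporal components Ψ, which are then used to warp the reference image. The following is a minimal illustrative sketch of that idea, not the released implementation: the array names, shapes, and the linear-interpolation warp are assumptions made purely for demonstration.

import numpy as np
from scipy.ndimage import map_coordinates

# Illustrative sizes: an nx*ny*nz reference grid, R components, T dynamics.
nx = ny = nz = 64
N, R, T = nx * ny * nz, 3, 100
Phi = np.random.randn(3 * N, R)           # spatial components (placeholder)
Psi = np.random.randn(T, R)               # temporal components (placeholder)
reference = np.random.randn(nx, ny, nz)   # reference image (placeholder)

# Voxel coordinates of the reference grid, shape (3, N).
grid = np.stack(np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                            indexing="ij")).reshape(3, N)

# Rank-R motion field at dynamic t: one 3D displacement per voxel.
t = 0
dvf = (Phi @ Psi[t]).reshape(3, N)

# Warp the reference image by sampling it at the displaced coordinates.
warped = map_coordinates(reference, grid + dvf, order=1,
                         mode="nearest").reshape(nx, ny, nz)

The authors' actual implementation is available via the repository given in the Data Availability Statement below.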
Data Availability Statement
Code that produces results similar to those presented in this study is openly available at https://github.com/nrfhuttinga/LowRank_MRMOTUS.git.
