Author manuscript; available in PMC: 2025 Aug 30.
Published in final edited form as: Magn Reson Med. 2025 Aug 5;94(6):2475–2491. doi: 10.1002/mrm.70017

Accelerated free-breathing abdominal T2 mapping with deep learning reconstruction of radial turbo spin-echo data

Brian Toner 1, Simon Arberet 2, Shu Zhang 3, Fei Han 4, Eze Ahanonu 5, Ute Goerke 6, Kevin Johnson 7,8, Zeyad Abouelfetouh 3, Ion Codreanu 3,9, Sajeev Sridhar 3, Hina Arif-Tiwari 8, Vibhas Deshpande 10, Diego R Martin 3, Mariappan Nadar 2, Maria I Altbach 7,8,11, Ali Bilgin 1,5,8,11
PMCID: PMC12396147  NIHMSID: NIHMS2102136  PMID: 40762149

Abstract

Purpose:

To accelerate respiratory triggered free-breathing T2 mapping of the abdomen while maintaining high-quality anatomical images, accurate T2 maps, and fast reconstruction times.

Methods:

We developed a flexible deep learning framework that can be trained in a fully supervised manner to improve T2-weighted images or in a self-supervised manner to reconstruct T2 maps.

Results:

For retrospectively undersampled data, anatomical images and T2 maps reconstructed by the proposed deep learning method demonstrated reduced voxel-wise error compared to existing traditional and compressed sensing techniques. Reconstruction times were approximately 1 s per slice, significantly faster than existing compressed sensing techniques. Prospectively undersampled data were also acquired to assess the model.

Conclusion:

The proposed deep-learning framework reconstructed high-quality anatomical images and accurate T2 maps from datasets undersampled to only 160 total radial views (5 views per echo time), enabling full liver coverage in under three minutes on average with per-slice reconstruction times of approximately one second.

Keywords: abdomen, deep learning, self-supervised learning, T2 mapping

1 |. INTRODUCTION

T2-weighted MRI is a key acquisition that radiologists rely on to detect and characterize important pathologies within the abdomen. T2 mapping is a quantitative approach that provides relaxometry measurements that can be more standardized and reproducible than qualitative images. Cartesian spin-echo or turbo spin-echo methods have traditionally been used for T2-weighted imaging1,2 and T2 mapping,3–5 enabled by respiratory triggering to account for motion during lengthy acquisitions. T2 mapping with single-shot fast spin-echo or echo-planar techniques has been used to accelerate acquisitions to a breath hold,6–9 but Cartesian spin-echo, turbo spin-echo, and echo planar imaging (EPI) techniques are all limited by the balance between temporal and/or spatial resolution and acquisition time. They typically utilize only 2–4 TEs in favor of faster imaging, which compromises temporal resolution and leads to inaccurate T2 quantification. A more recent technique, GRAPPATINI,10 combines parallel imaging and model-based reconstruction to generate T2 maps directly from k-space, but acquisition is limited to 15 slices in 2:10–6:00 min depending on the efficiency of respiratory triggering.11

Non-Cartesian trajectories provide many benefits for abdominal T2 mapping, as they can accelerate scans while being more robust to motion and providing high temporal resolution. Magnetic resonance fingerprinting techniques that provide multiparametric capabilities have been applied for T2 mapping of the abdomen using non-Cartesian trajectories.12–15 In these methods, data acquisition requires approximately 13–24 s per slice, which is inefficient for achieving full abdominal coverage. A modified PROPELLER16 trajectory has also been reported for T2 mapping of the abdomen with minimum acquisition times ranging from 5:30 to 9:10 min per slice, which does not include dead time due to respiratory triggering.

The radial turbo spin-echo (RADTSE) pulse sequence is a non-Cartesian approach that has been proposed for obtaining a time series of co-registered multicontrast echo time (TE) images, which can be fit voxel-wise to generate a T2 map.17–19 The radial trajectory allows RADTSE to provide highly accelerated, motion-robust TE images with high spatiotemporal resolution, making it well suited for T2 mapping in the abdomen. Although RADTSE is typically acquired during a breath-hold, navigator-triggered prospective acquisition correction (PACE)20 provides free-breathing capability through respiratory triggering for subjects who cannot hold their breath, at the cost of lengthy scan times that depend on a subject's respiratory rate. With long acquisition times, achieving full coverage of the abdomen in a single scan is inefficient and not always feasible in clinical settings.

To accelerate RADTSE, fewer radial views can be acquired per TE. The accelerated TE images can be reconstructed with an echo-sharing21 technique, but echo-sharing can introduce T2 measurement error by mixing high spatial frequency data across different TEs. Compressed sensing (CS) techniques that rely on temporal correlations between neighboring TEs have been shown to reconstruct undersampled RADTSE data effectively without the limitations of echo-sharing,18,19,22–24 but they suffer from long reconstruction times.

In recent years, deep learning (DL)-based approaches have gained prominence as leading methods for MR image reconstruction.25,26 One such DL-based method was recently introduced for the reconstruction of RADTSE data.27 This approach utilizes patch-based, fully supervised learning, where CS-reconstructed magnitude TE images serve as the training targets. Notably, the method was developed and evaluated for breath-hold data acquisition. Despite its promise, this approach has several limitations, including the exclusion of phase information, the use of CS reconstructions as ground truth, and the absence of data consistency layers. Incorporating data consistency layers is crucial in DL-based MRI reconstruction,25,26,28–30 as they ensure the network's output remains faithful to the acquired k-space measurements and help mitigate artifacts introduced by relying solely on learned image priors.

Beyond the RADTSE sequence, DL methods have gained traction for the reconstruction of quantitative relaxometry parameters in many applications,31–39 but with their own limitations. End-to-end methods31–36 use DL techniques to estimate high-quality parametric maps directly from contrast images reconstructed by conventional methods with poor spatial and/or temporal resolution,34–36 undersampling artifacts,31–34,36 or motion-induced artifacts.36 One limitation of these methods is that they do not reconstruct high-quality contrast images that are, on their own, useful in radiological evaluations. Supervised learning methods31,32,34 rely on high-quality references that are often difficult to acquire, particularly in the abdomen. As previously noted, methods lacking data consistency layers31,32,34,37 often yield suboptimal reconstructions because they fail to explicitly enforce the physical constraints inherent in the measurement process. Subject-specific training33,38 leads to long reconstruction times during inference. NLINV-Net39 is a hybrid between iterative and DL methods, as it performs multiple Gauss-Newton iterations before introducing DL techniques, and it was not evaluated over a wide range of acceleration rates.

This paper proposes an efficient free-breathing RADTSE technique for accelerated full abdomen T2-mapping. The proposed flexible framework uses a cascaded unrolled network with data consistency layers and can be trained separately to create models for high-resolution T2-weighted imaging and quantitative T2 mapping. The model for T2 mapping incorporates a novel combination of a self-supervised learning scheme with a subspace constrained reconstruction of co-registered multicontrast images. This DL reconstruction approach provides a significant reduction in scan time allowing for full-abdominal respiratory-triggered RADTSE while maintaining high image quality, accurate T2 maps, and fast reconstruction times.

2 |. METHODS

2.1 |. RADTSE acquisition and reconstruction

Figure 1 summarizes the RADTSE acquisition and reconstruction pipelines. As shown in Figure 1A, data acquisition follows a radial trajectory in which different radial views are acquired after each refocusing pulse of the echo train as indicated by the solid radial views. Data for all radial views used for image reconstruction are acquired over multiple echo trains (dotted radial views). The angle of each radial view follows a pseudo-golden angle trajectory40 that ensures not only that all radial views are acquired at a unique angular orientation but also that the subset of data corresponding to each TE contains radial views with well-distributed angles to minimize gaps in k-space. When combined with respiratory triggering, a single echo train per slice is acquired for several slices during the expiratory phase of the respiratory cycle.

FIGURE 1.

(A) Respiratory triggered RADTSE acquisition scheme. (B) Overview of the two distinct reconstruction pipelines: Data from all TEs are combined into a single dataset to form a composite image (left). Data from each TE are reconstructed individually to form TE images (right). These can be compressed to PC images using SVD compression. Either the TE or PC images can be fit voxel-wise to a T2 parameter map using dictionary matching.

To reconstruct RADTSE data, first consider the sampling equation

$y_i = \mathcal{F}_i S \tilde{x}_i + \epsilon_i$,  (1)

where $y_i \in \mathbb{C}^{M \times 1}$ is the vectorized measured data corresponding to echo time $TE_i$, $i \in \{1, \ldots, ETL\}$, $ETL$ is the echo train length, $\tilde{x}_i \in \mathbb{C}^{N \times 1}$ is the vectorized ground truth image at $TE_i$, $\mathcal{F}_i$ is the nonuniform fast Fourier transform (NUFFT)41 from Cartesian image space to the radial spatial frequencies sampled at $TE_i$, $S$ encodes the coil sensitivity profiles, and $\epsilon_i$ represents measurement noise.

RADTSE data can be reconstructed in two ways, both of which are depicted in Figure 1B. The data from all TEs can be combined into a single dataset, which results in a single T2-weighted image with contrast corresponding to an average echo time $TE_{avg}$. The resulting TE-averaged image, referred to as the composite image $x_{comp}$, can be reconstructed with $\mathcal{F}_{comp}^*$, which applies to all spatial frequencies acquired during the scan, $y = [y_1^T\ y_2^T\ \cdots\ y_{ETL}^T]^T$, and a density compensation function (DCF),42 following the equation

$x_{comp} = S^* \mathcal{F}_{comp}^* D y$.  (2)

Here, the superscript $*$ denotes the Hermitian adjoint and $D \in \mathbb{R}^{(M \cdot ETL) \times (M \cdot ETL)}$ is a ramp DCF in the form of a diagonal operator that multiplies spatial frequencies by compensation factors proportional to their radius from the center of k-space. The density compensation factors $D$ are applied in Equation (2) to precondition the nonuniform k-space data prior to $\mathcal{F}_{comp}^*$, thereby allowing the adjoint to more closely approximate the true inverse of the forward model.
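
As an illustration of Equation (2), the sketch below assembles a ramp DCF and applies a generic adjoint NUFFT followed by coil combination. The `nufft_adjoint` callable and the array shapes are assumptions made for illustration only and do not reflect the released implementation.

```python
import numpy as np

def ramp_dcf(kr):
    """Ramp density compensation: weight each radial sample by its k-space radius.

    kr: (n_views, n_readout) array of radial distances from the k-space center.
    """
    dcf = np.abs(kr)
    return dcf / dcf.max()

def composite_adjoint(kdata, kr, smaps, nufft_adjoint):
    """Adjoint (gridding) reconstruction of the composite image, as in Eq. (2).

    kdata : (n_coils, n_views, n_readout) complex k-space from all TEs combined
    kr    : (n_views, n_readout) radial k-space radii
    smaps : (n_coils, ny, nx) coil sensitivity maps
    nufft_adjoint : hypothetical callable mapping density-compensated radial data
                    to per-coil images, standing in for an actual NUFFT library
    """
    d = ramp_dcf(kr)                                      # D in Eq. (2)
    coil_images = nufft_adjoint(kdata * d[None])          # F_comp^* D y, per coil
    return np.sum(np.conj(smaps) * coil_images, axis=0)   # S^* coil combination
```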

Another way to reconstruct RADTSE data is to reconstruct the time series of co-registered TE images individually following the equation:

$x_i = S^* \mathcal{F}_i^* \bar{D} y_i$,  (3)

where $x_i$ is the reconstructed TE image and $\bar{D} \in \mathbb{R}^{M \times M}$ is a smaller version of $D$ with diagonal entries only for the spatial frequencies acquired at a single TE ($\bar{D}$ and $S$ have no dependence on $i$). The TE images can subsequently be fit voxel-wise to a T2 parameter map by dictionary matching18,19,22 using a dictionary of signal decay curves generated with the slice-resolved extended phase graph (SEPG) model.24 Each $y_i$ is usually highly accelerated, so relying on the NUFFT alone for reconstruction leads to a low-quality image disrupted by undersampling artifacts.

Rather than using TE images as input and output channels of the proposed reconstruction model, the TE images are projected to a temporal subspace using singular value decomposition (SVD) compression of a signal-model basis generated by the SEPG model. This temporal compression not only reduces the computational demand of the model but also takes advantage of sparsity in the temporal dimension, as is the case in CS reconstructions.22,23 Only $P < ETL$ principal component (PC) images are used as input and output of the model.
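
A minimal sketch of how such a compression basis can be obtained is shown below, assuming an SEPG signal dictionary has already been simulated (dictionary generation itself is not shown); function names and array shapes are illustrative assumptions.

```python
import numpy as np

def svd_compression_basis(dictionary, P=4):
    """Compute a temporal subspace basis from a signal dictionary.

    dictionary: (n_atoms, ETL) array of simulated SEPG decay curves.
    Returns U_P: (ETL, P) orthonormal basis for the dominant temporal subspace.
    """
    # Right singular vectors of the dictionary span the temporal signal subspace.
    _, s, vt = np.linalg.svd(dictionary, full_matrices=False)
    explained = np.cumsum(s**2) / np.sum(s**2)
    print(f"First {P} PCs explain {100 * explained[P - 1]:.2f}% of variance")
    return vt[:P].conj().T                                    # (ETL, P)

def te_to_pc(te_images, U_P):
    """Project a TE image series (ETL, ny, nx) onto P principal-component images."""
    return np.tensordot(U_P.conj().T, te_images, axes=(1, 0))  # (P, ny, nx)

def pc_to_te(pc_images, U_P):
    """Expand PC images (P, ny, nx) back to the TE image series."""
    return np.tensordot(U_P, pc_images, axes=(1, 0))           # (ETL, ny, nx)
```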

Under the PC model, we refine Equation (3). Let $x = [x_1^T\ x_2^T\ \cdots\ x_{ETL}^T]^T$, let $\mathcal{F} \in \mathbb{C}^{(M \cdot ETL) \times (N \cdot ETL)}$ be a block-diagonal matrix with each $\mathcal{F}_i$ along the diagonal, and let $y$ be the same as in Equation (2). Let $U \in \mathbb{C}^{(N \cdot P) \times (N \cdot ETL)}$ be the SVD compression matrix that converts TE images to PC images. This gives the PC model:

$Ux = US^* \mathcal{F}^* D y$.  (4)

To generalize the proposed network architecture, we will unify the two models (Equations 2 and 4) under a single notation:

$x = A^* D y$,  (5)
$\hat{y} = A x$,  (6)

where the adjoint model reconstructs images $x$ (with $x := Ux$ in the PC case) from the acquired k-space data $y$, and the forward model projects the reconstructed images $x$ back to k-space, giving $\hat{y}$. When applied to composite images, let $A := \mathcal{F}_{comp} S$ and $U = I$ (the identity operator). When applied to PC images, let $A := \mathcal{F} S U^*$ and let $U$ be defined as in Equation (4). The differences between these two models are illustrated in Figure 1B.

It should be noted that $D$ is applied only in the adjoint image model (k-space to image) for the purpose of improving image reconstruction42 and is not needed in the forward image model (image to k-space). For the remainder of this work, we assume $A^* := A^* D$ even when not explicitly stated. Although $A$ and $A^*$ are then no longer true adjoints under this convention, this form is used in practice for image reconstruction.
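
To make the unified notation concrete, the sketch below wraps the forward and adjoint PC operators of Equations (5) and (6) in a small class, with the DCF entering only the adjoint as described above. The `nufft`/`nufft_adjoint` callables and the array layout are assumptions standing in for an actual NUFFT library, not the released code.

```python
import numpy as np

class PCModel:
    """Forward/adjoint imaging model for PC images (A and A*D of Eqs. 5-6).

    nufft / nufft_adjoint are hypothetical callables mapping
    (ETL, n_coils, ny, nx) images <-> (ETL, n_coils, n_samples) radial data.
    """

    def __init__(self, nufft, nufft_adjoint, smaps, U_P, dcf):
        self.nufft, self.nufft_adj = nufft, nufft_adjoint
        self.smaps = smaps          # (n_coils, ny, nx) coil sensitivities
        self.U_P = U_P              # (ETL, P) temporal basis
        self.dcf = dcf              # (ETL, n_samples) ramp DCF weights

    def forward(self, pc):          # A x : PC images -> k-space
        te = np.tensordot(self.U_P, pc, axes=(1, 0))              # U* : PC -> TE
        coil = te[:, None] * self.smaps[None]                      # S
        return self.nufft(coil)                                     # F

    def adjoint(self, y):           # A* D y : k-space -> PC images
        coil = self.nufft_adj(y * self.dcf[:, None])                # F* D
        te = np.sum(np.conj(self.smaps)[None] * coil, axis=1)       # S*
        return np.tensordot(self.U_P.conj().T, te, axes=(1, 0))     # U : TE -> PC
```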

2.2 |. Network

A cascaded unrolled neural network28 architecture was developed for the task of reconstructing high-quality images from highly undersampled PACE RADTSE datasets. The network architecture, illustrated in Figure 2A, consists of K cascades, which iterate between convolutional neural network (CNN) blocks for artifact suppression and data consistency (DC) blocks that promote fidelity to the acquired, undersampled data. Each CNN block consists of L convolution layers, which are each followed by a nonlinear activation, and ends with a residual layer.

FIGURE 2.

(A) The DL reconstruction model is a cascaded unrolled network that alternates between CNN and DC blocks. DC blocks follow Equation (9). For composite images τ=1 and for PC images τ=P (the number of principal components). Below, the training schemes for (B) the composite network and (C) the TE/T2 network. Both produce inputs by randomly zero-filling a fraction of radial views. The composite network calculates loss in image space following Equation (11). For the TE/T2 network, the outputs are projected back to k-space, where loss is calculated between the original raw data and the reconstructed k-space, following the loss function stated in Equation (13).

Two distinct models under this same architecture were developed: a composite network to reconstruct composite images and a TE/T2 model to reconstruct time-resolved images that can be fit to T2 maps. The network architectures for each are nearly identical, other than a different imaging model for DC layers (A and A* from Equations 6 and 5) and a different number of input/output channels for the CNN layers. The composite network has only one input and output channel, whereas the TE/T2 network requires a multichannel architecture to account for the temporal dimension.

Each $DC^{(j)}$ block uses gradient descent to minimize

$\frac{1}{2} \left\| A f_{CNN}^{(j)}\left(x^{(j-1)}\right) - y \right\|_2^2$,  (7)

where $f_{CNN}^{(j)}$ is the $j$th CNN block, $y \in \mathbb{C}^{(M \cdot ETL) \times 1}$ represents the k-space data, $x^{(j-1)} \in \mathbb{C}^{(N \cdot \tau) \times 1}$ is the image after $j-1$ cascades, and $\tau = 1$ for composite images or $\tau = P$ for PC images. The initial input is set to $x^{(0)} = A^* y$. The gradient with respect to $\hat{x}^{(j)} := f_{CNN}^{(j)}\left(x^{(j-1)}\right)$ is

$\frac{d}{d\hat{x}^{(j)}} \frac{1}{2} \left\| A \hat{x}^{(j)} - y \right\|_2^2 = A^* \left( A \hat{x}^{(j)} - y \right)$,  (8)

which informs our DC block update step

$x^{(j)} = DC^{(j)}\left(\hat{x}^{(j)}\right) = \hat{x}^{(j)} - \eta A^* \left( A \hat{x}^{(j)} - y \right)$.  (9)

Here, the step size $\eta$ is treated as a trainable parameter of the network and is learned through backpropagation. The final PC images after all cascades can be fit voxel-wise to T2 parameter maps and can also be projected to TE space to provide a co-registered multicontrast time series of T2-weighted images at different TEs.
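
A schematic PyTorch rendering of the unrolled cascade with the gradient-descent DC step of Equation (9) is given below; the CNN constructor and the `A`/`A_adj` operators are placeholders, not the exact architecture used in this work.

```python
import torch
import torch.nn as nn

class DataConsistency(nn.Module):
    """Gradient-descent DC block of Eq. (9) with a trainable step size eta."""
    def __init__(self):
        super().__init__()
        self.eta = nn.Parameter(torch.tensor(1.0))

    def forward(self, x_hat, y, A, A_adj):
        # x_hat - eta * A*(A x_hat - y)
        return x_hat - self.eta * A_adj(A(x_hat) - y)

class UnrolledNetwork(nn.Module):
    """K cascades alternating a CNN regularizer and a DC block (cf. Fig. 2A)."""
    def __init__(self, make_cnn, n_cascades=5):
        super().__init__()
        self.cnns = nn.ModuleList([make_cnn() for _ in range(n_cascades)])
        self.dcs = nn.ModuleList([DataConsistency() for _ in range(n_cascades)])

    def forward(self, y, A, A_adj):
        x = A_adj(y)                        # initial input x(0) = A* y
        for cnn, dc in zip(self.cnns, self.dcs):
            x = dc(cnn(x), y, A, A_adj)     # artifact suppression, then data consistency
        return x
```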

2.3 |. Training

During training, network input images are formed by zero-filling a random number of radial k-space views. Entire echo trains are randomly selected for zero-filling to ensure that all TEs retain the same number of radial views. Let $y$ be the original data, $x$ be the composite image reconstructed with an adjoint NUFFT from the full dataset, $y'$ be the zero-filled data, and $f_{comp}$ be the reconstruction model that takes the data $y'$ and sensitivity maps $S$ as input and produces the reconstructed composite image $\hat{x}$ as output:

$\hat{x} = f_{comp}\left(y', S\right)$.  (10)

For the composite images, the training data consists of well sampled images that can be used as labels in a supervised training scheme. For this model, the loss function used is in image space:

$\mathcal{L}_x(x, \hat{x}) = \alpha \dfrac{\|\hat{x} - x\|_1}{\|x\|_1} + (1 - \alpha) \dfrac{\|\hat{x} - x\|_2}{\|x\|_2}$,  (11)

where $\alpha \in [0, 1]$ balances the contributions of the l1 and l2 errors (set to $\tfrac{1}{2}$ in this work).
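
A direct transcription of Equation (11) into PyTorch (with alpha = 0.5, as used here) might read as follows; this is an illustrative sketch rather than the released training code.

```python
import torch

def image_loss(x_ref, x_hat, alpha=0.5):
    """Normalized l1 + l2 loss of Eq. (11), applicable to complex images."""
    diff = x_hat - x_ref
    l1 = torch.sum(torch.abs(diff)) / torch.sum(torch.abs(x_ref))
    l2 = torch.linalg.vector_norm(diff) / torch.linalg.vector_norm(x_ref)
    return alpha * l1 + (1 - alpha) * l2
```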

For the TE/T2 model, $f_{T2}$, there are no label images because the acquired TE datasets are highly undersampled (views/TE ≤ 12), so no high-quality targets exist for calculating an image-space loss in a fully supervised manner. Instead, a self-supervised scheme with a k-space loss is used. First, the imaging model $A$ projects the output images back to k-space:

$\hat{y} = A f_{T2}\left(y', S\right)$.  (12)

Then the loss is calculated between the reconstructed k-space $\hat{y}$ and the acquired training k-space $y$. The loss function used is

$\mathcal{L}_y(y, \hat{y}) = \alpha \dfrac{\|\hat{y} - y\|_1}{\|y\|_1} + (1 - \alpha) \dfrac{\|\hat{y} - y\|_2}{\|y\|_2}$,  (13)

where $\alpha$ is the same as in Equation (11). The training scheme is illustrated in Figure 2C; the self-supervised version was inspired by the self-supervised learning via data undersampling (SSDU)43 framework. Unlike the original SSDU framework, where loss is calculated on the subset of spatial frequencies disjoint from the subset used for input, the loss in this framework is calculated with respect to the entire acquired dataset to further enforce data consistency. The model was trained with 5 cascades, 5 convolutional layers per CNN block, kernel size = 3 × 3, and 64 channels in the hidden layers. ReLU activations modified for complex weights44 were used. The initial learning rate was set to 1e-4 and decayed according to a cosine annealing scheduler. The batch size varied from 1 to 4 based on model size and GPU availability. The NAdam optimizer45 was used during training. The composite network and TE/T2 network were trained for 450 and 150 epochs, respectively. The network hyperparameters were explored further in ablation studies reported in the Supporting Information.
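
The following sketch outlines one possible self-supervised training step under this scheme; `model`, `A`, `train_loader`, and `kspace_loss` are hypothetical placeholders, and only the optimizer and scheduler calls are standard PyTorch.

```python
import torch

# Hypothetical objects: model (the TE/T2 network), A (forward imaging model of Eq. 6),
# train_loader yielding (y_zerofilled, y_full, smaps) batches, and kspace_loss
# implementing Eq. (13).
optimizer = torch.optim.NAdam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=150)

for epoch in range(150):
    for y_zf, y_full, smaps in train_loader:
        optimizer.zero_grad()
        x_pc = model(y_zf, smaps)           # reconstruct PC images from undersampled data
        y_hat = A(x_pc, smaps)              # project back to k-space, Eq. (12)
        loss = kspace_loss(y_full, y_hat)   # self-supervised loss against all acquired data
        loss.backward()
        optimizer.step()
    scheduler.step()
```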

2.4 |. Dataset

A total of 121 consenting volunteers were imaged with PACE RADTSE at two imaging sites using 3T scanners (MAGNETOM Skyra, MAGNETOM Vida; Siemens Healthineers, Erlangen, Germany). The prototype RADTSE sequence with PACE triggering (PACE RADTSE) allows for the acquisition of 28–30 slices. The protocol in this study was set to acquire 28 axial 2D slices of 6 mm thickness with a 0.78 mm (13%) gap, giving a total coverage of 189.06 mm, which was sufficient to cover the abdomen for most subjects. Data were acquired with ETL = 32, echo spacing of 8.1 ms (MAGNETOM Skyra) or 8.46 ms (MAGNETOM Vida), flip angle = 150°, TR = 1 respiratory cycle, and prescribed field of view (FoV) = 380 × 380 mm2. All subjects were imaged with 384 radial views and 512 readout points per view. By default, the sequence uses 2× oversampling in the readout direction, which allows a maximum FoV of double the prescribed FoV. We elected to use images of matrix size 320 × 320 as a compromise between increased FoV (475 × 475 mm2) and reduced memory demand (compared to the maximum 512 × 512 matrix size). Data were acquired with fat suppression using a chemical-shift selective (CHESS) technique. Of the 121 subjects, 85 were randomly selected for training, 9 for validation, and 27 for testing.

Before being used as input for the network, the RADTSE k-space data went through several preprocessing steps. Radial de-streaking46 was performed on the raw data to remove streaking artifacts that commonly emanate from the arms in radial imaging due to gradient nonlinearity and B0 inhomogeneity at the periphery of the FoV. This step has been shown to enhance the performance of DL reconstructions.47 The datasets were also normalized so that the magnitude of the input images had a dynamic range of [0, 1]. To reduce the computational demand and to ensure the same number of coils for all datasets, the multicoil k-space data and sensitivity maps were projected to 6 virtual coils using SVD compression. For each training batch, an integer $r$ was selected uniformly at random from $\{2, 3, \ldots, 8\}$, and radial views from $r$ echo trains (out of 12 total) were used as input, while the rest were zero-filled. This led to training acceleration rates between 1.5× and 6×. Four PC images were used as input and output of the CNN blocks of the network. For the signal dictionary used, the first 4 PCs accounted for 99.96% of the explained variance.
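
Two of these preprocessing steps, SVD coil compression and random echo-train zero-filling, could be sketched as follows; the shapes and names are illustrative assumptions rather than the released code.

```python
import numpy as np

def compress_coils(kspace, n_virtual=6):
    """SVD-based coil compression to a fixed number of virtual coils.

    kspace: (n_coils, n_views, n_readout) complex multicoil radial data.
    """
    nc = kspace.shape[0]
    mat = kspace.reshape(nc, -1)
    u, _, _ = np.linalg.svd(mat @ mat.conj().T)        # coil correlation eigenvectors
    comp = u[:, :n_virtual].conj().T                   # (n_virtual, n_coils)
    return np.tensordot(comp, kspace, axes=(1, 0))

def random_echo_train_mask(n_trains=12, rng=np.random.default_rng()):
    """Keep r of the echo trains (r drawn uniformly from 2..8); zero-fill the rest."""
    r = rng.integers(2, 9)
    keep = rng.choice(n_trains, size=r, replace=False)
    mask = np.zeros(n_trains, dtype=bool)
    mask[keep] = True
    return mask
```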

Of the 27 test subjects, 22 were imaged with additional testing protocols. A T2 quantification protocol was designed and performed on 7 test subjects. This protocol consisted of a 5-slice PACE RADTSE acquisition with 8192 radial views. A dataset with 8192 views is not clinically reasonable (acquisition time ≈ 25 min), but by using these data, TE images could be reconstructed using an adjoint NUFFT, which served as the gold standard to which the accelerated T2 maps could be compared. Only 5 slices were acquired for this protocol due to the long scan time. All parameters other than the number of views and number of slices were kept the same as the training protocol. A prospectively accelerated protocol was designed and performed on 15 of the test subjects. This protocol consisted of 28-slice PACE RADTSE acquisitions with 192, 160, and 128 radial views that otherwise had identical sequence parameters to the 384-view training acquisition.

2.5 |. Validation on T2 error

After training was completed, the epoch with the lowest validation loss was chosen retrospectively as the optimal composite network. For the TE/T2 network, minimizing the training loss leads to high-quality images, but there is no explicit incorporation of T2 error in the loss function. To select a TE/T2 model that also minimizes T2 error, a second validation process was performed on the validation set. The output images were fit voxel-wise to T2 parameter maps, and the mean T2 values of fixed volume regions of interest (ROIs) were compared between the predictions and targets. For this step, the predictions were performed using the data retrospectively undersampled to 160 views and the targets were obtained from the full 384-view data reconstructed with a locally low rank (LLR) iterative technique,22 which reconstructs PC images using the same temporal compression as the proposed DL method. This validation was performed after each training epoch. After 150 training epochs were completed, the model from the epoch with the minimum mean T2 error within liver, kidney, spleen, and muscle ROIs was selected as the optimal model.

2.6 |. T2 quantification

To evaluate the reliability of T2-mapping estimates from the proposed method, the well-sampled reference scans with 8192 views described earlier were used. These data were first reconstructed using an adjoint NUFFT at each TE to create reference TE images and T2 maps. Then, they were retrospectively undersampled to simulate scans of 192, 160, and 128 views, and reconstructed using the DL and LLR methods. Unlike the random retrospective undersampling from training, the testing undersampling was performed by selecting entire echo trains for zero-filling in a deterministic manner that ensured each TE dataset had a well-distributed coverage of k-space to best mimic prospectively undersampled data.

Fixed volume ROIs were placed in the liver, right kidney, left kidney, spleen, and paraspinal muscle on each slice that the organ appeared. Mean T2 values within each ROI were compared between the well sampled reference and each of the accelerated reconstructions. ROI T2 values for each method at each acceleration were compared to the reference reconstruction in correlation plots, and linear regression analysis on the plotted points was used to assess both accuracy and precision.
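
Dictionary matching of the decay curves to T2 values can be sketched as a maximum normalized inner product search over a precomputed SEPG dictionary; the sketch below assumes the dictionary and its associated T2 values are already available.

```python
import numpy as np

def dictionary_match_t2(te_signals, dictionary, t2_values):
    """Voxel-wise T2 estimation by matching signals to a precomputed SEPG dictionary.

    te_signals: (ETL, n_voxels) measured decay curves (magnitude or complex).
    dictionary: (n_atoms, ETL) simulated decay curves, one per candidate T2.
    t2_values : (n_atoms,) T2 value associated with each dictionary atom.
    """
    d_norm = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s_norm = te_signals / np.linalg.norm(te_signals, axis=0, keepdims=True)
    corr = np.abs(d_norm.conj() @ s_norm)        # (n_atoms, n_voxels) match scores
    return t2_values[np.argmax(corr, axis=0)]    # best-matching T2 per voxel
```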

2.7 |. Anatomical image error

Because the composite network was trained in a fully supervised scheme, 384-view data from the test set could be used to create reference composite images and to evaluate voxel-wise metrics comparing the reference to retrospectively undersampled composite images. During evaluation, the testing datasets were undersampled by zero-filling entire echo trains selected in the same deterministic manner described earlier, ensuring well-distributed coverage of k-space for each TE so that the contrast contributions from each TE to the composite image were unchanged. Reference images were normalized so that the dynamic range was 0 to 1. The undersampled reconstructions were then normalized to have the same mean intensity value as the reference image. l1 error, l2 error, and peak signal-to-noise ratio (PSNR) were calculated between the 384-view composite image reconstructed by adjoint NUFFT and the images with 192, 160, and 128 radial views reconstructed by both the adjoint NUFFT and the DL methods.
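
As an illustration, these voxel-wise metrics can be computed as in the sketch below, which assumes the reference has already been scaled to [0, 1] so that the PSNR peak value is 1; the exact metric definitions used in this work may differ in detail.

```python
import numpy as np

def image_metrics(ref, test):
    """Normalized l1/l2 error and PSNR between a reference and a reconstructed image series.

    ref and test are magnitude images (or series) of the same shape; ref is assumed
    already scaled to [0, 1], and test is rescaled to match the mean of ref.
    """
    test = test * (ref.mean() / test.mean())            # intensity normalization
    l1 = np.abs(test - ref).sum() / np.abs(ref).sum()
    l2 = np.linalg.norm(test - ref) / np.linalg.norm(ref)
    mse = np.mean((test - ref) ** 2)
    psnr = 10 * np.log10(1.0 / mse)                      # peak value 1 after normalization
    return l1, l2, psnr
```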

Composite images are reconstructed from data from all TEs, which leads to an image with fewer undersampling artifacts compared to the TE images. However, it may also be beneficial for radiologists to be able to dynamically adjust the contrast by scrolling through co-registered TE images, a feature of RADTSE not available in traditional T2-weighted imaging. To evaluate the quality of the TE images as anatomical images, the T2 quantification testing subset (the 5-slice, 8192-view protocol performed on 7 subjects) was used to create ground truth TE images to which the undersampled reconstructions could be compared. The reference TE images were reconstructed by an adjoint NUFFT at each TE (256 views per TE image). The reference TE images were compared to TE images reconstructed by the NUFFT, LLR, and DL methods. The images reconstructed with NUFFT were projected to the PC subspace and then back to TE space, a process known as subspace filtering, to improve image quality and to ensure a fair comparison with LLR and DL, which also benefit from the SVD compression. As was done with the composite images, the reference TE images were first normalized so that the dynamic range of the entire time series of TE images was 0 to 1 for each slice, and then the undersampled reconstructions were normalized to have the same mean intensity value as the reference images. l1 error, l2 error, and PSNR were calculated between the reference and the TE images reconstructed by the NUFFT, LLR, and DL methods.
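
Subspace filtering itself amounts to projecting the TE series onto the temporal basis and back; a minimal sketch, assuming an (ETL × P) temporal basis from the SEPG dictionary SVD, is:

```python
import numpy as np

def subspace_filter(te_images, U_P):
    """Project TE images onto the P-dimensional temporal subspace and back (PC filtering).

    te_images: (ETL, ny, nx) complex TE image series from a NUFFT reconstruction.
    U_P:       (ETL, P) temporal basis from the SEPG dictionary SVD.
    """
    etl, ny, nx = te_images.shape
    ts = te_images.reshape(etl, -1)          # (ETL, n_voxels)
    filtered = U_P @ (U_P.conj().T @ ts)     # U U^H applied along the temporal dimension
    return filtered.reshape(etl, ny, nx)
```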

2.8 |. Prospective data

The prospectively accelerated dataset, which consisted of 15 subjects imaged with 384, 192, 160, and 128 views, was used to qualitatively evaluate the model on prospectively undersampled data. This removes the ability to make voxel-wise comparisons, as is possible with retrospectively undersampled data, but provides the opportunity to test how the model performs in more realistic settings where the trajectory is optimized for the number of views acquired. It also provides the ability to gather statistics on acquisition times in respiratory triggered data.

2.9 |. System and code availability

The network was implemented in PyTorch45 using the MERLIN44 library to enable the use of complex data and weights. Several versions of the model were trained as an ablation study, which is explained in the Supporting Information. Models were trained using NVIDIA Tesla P100, GeForce RTX 3090, or RTX A6000 GPUs. The evaluation of the speed of the network was performed using an NVIDIA RTX A6000 GPU.

3 |. RESULTS

3.1 |. T2 quantification

Using the T2 quantification testing set, Figure 3A demonstrates the difference in T2 values produced by the LLR and DL methods on retrospectively undersampled data compared to the 8192-view reference. When compared to the reference scan, the T2 maps reconstructed by DL show stronger correlation of mean ROI T2 values than those reconstructed by LLR. The regression line fitted to the DL T2 values lies closer to the identity line, indicating higher accuracy, and the R2 values are closer to 1 for the DL data points, indicating higher precision than the LLR data points. Figure 3B compares the T2 maps qualitatively. The DL method is visibly more robust to undersampling than the LLR method, as its maps are less disrupted by noise and streaks, as shown in the insets. Figure 3 also shows a drop-off in performance when going from 160 to 128 radial views, indicating that 160 radial views may be the ideal compromise between acceleration and reliable T2 quantification. It should be noted that regions such as subcutaneous fat are nulled by fat suppression; these regions have low, nearly constant signal across all TE images, leading to apparent high T2 values.

FIGURE 3.

(A) Correlation of T2 values at different retrospective acceleration rates, where reference T2 values (horizontal axis) are acquired with an NUFFT reconstruction of the 8192-view dataset and test T2 values are acquired with a LLR reconstruction (top row) and DL reconstruction (bottom row) at different retrospective acceleration rates. (B) T2 maps of reference data retrospectively undersampled at different rates. (i) Reference T2 map reconstructed with NUFFT. (ii)–(iv) T2 maps reconstructed with the LLR method. (v)–(vii) T2 maps reconstructed with the proposed DL method. Below, a zoomed in view shows that the maps reconstructed by LLR are noisier, more disrupted by streaks, and display more heterogeneity in the liver and spleen compared to the maps reconstructed by the proposed DL method. Both the correlation plots and the T2 maps show a drop in performance when going from 160 to 128 views using the DL model, indicating that 160 radial views is the ideal compromise between acceleration and ability to produce reliable T2 maps.

Table 1 shows the mean relative error of the mean ROI T2 values comparing each of the reconstruction methods to the reference scan. Here we observe that with 192 radial views (6 per TE image), DL and LLR provide comparable results. However, at increased acceleration rates of 160 and 128 views (5 and 4 per TE image), the error from the LLR reconstructions increases markedly, while the error from the DL reconstructions is more stable. When we combine all organs together, the DL method produces lower T2 error at all 3 acceleration rates.

TABLE 1.

Mean relative T2 error (%) of DL and LLR reconstructions at different retrospective acceleration rates of different organs of interest. Bold font indicates the smaller error between DL and LLR for a given organ and acceleration rate.

Views   Liver          Right Kidney   Left Kidney    Spleen         Muscle         All
        DL     LLR     DL     LLR     DL     LLR     DL     LLR     DL     LLR     DL     LLR
192     5.22   6.84    4.98   5.13    3.62   5.51    5.56   7.15    6.98   4.43    5.35   5.83
160     5.14   8.50    3.74   6.56    4.27   6.65    7.56   11.03   8.68   7.80    6.06   8.18
128     5.62   15.27   3.55   5.32    5.98   10.85   7.36   11.78   7.88   13.18   6.25   11.78

3.2 |. Anatomical image error

A representative TE image corresponding to 89.1 ms, which is in the range commonly used by radiologists for the task of detecting focal liver lesions,3 is shown in Figure 4. l1 error, l2 error, and PSNR for all slices in the testing set are reported in Table 2. These metrics are calculated on the entire time series of 32 TE images, and the l1 and l2 errors are normalized by the respective norm of the reference image. The TE images reconstructed by the proposed DL method have lower error with respect to the reference than the images reconstructed by either NUFFT or LLR at all three acceleration rates. In fact, the DL TE images using just 128 views have lower error than the NUFFT or LLR TE images using 192 views.

FIGURE 4.

Sample TE image retrospectively undersampled and reconstructed by NUFFT, LLR, and DL methods. Here the 11th of 32 TE images corresponding to 89.1 ms is shown on top. The absolute error maps, calculated by using TE images from the full 8192-view dataset reconstructed by adjoint NUFFT as reference, are displayed below.

TABLE 2.

Voxel-wise l1 error, l2 error, and PSNR (dB) on the retrospectively undersampled TE and composite images.

Image type   Views   l1 Error               l2 Error               PSNR (dB)
                     NUFFT   LLR    DL      NUFFT   LLR    DL      NUFFT   LLR     DL
TE           192     0.37    0.25   0.16    0.31    0.22   0.16    29.30   32.18   35.25
             160     0.40    0.26   0.16    0.35    0.23   0.16    28.48   31.87   35.16
             128     0.49    0.30   0.17    0.40    0.26   0.17    27.19   30.76   34.57
Composite    192     0.11    –      0.10    0.10    –      0.09    39.40   –       40.19
             160     0.16    –      0.13    0.15    –      0.13    36.19   –       37.54
             128     0.17    –      0.12    0.15    –      0.12    35.83   –       37.76

Note: l1 and l2 errors were normalized by the respective norm of the reference image. TE metrics were averaged across all slices of the T2 quantification testing set. Composite metrics were averaged across all slices of the original testing set. Bold indicates the lowest error or highest PSNR of the different reconstruction techniques for a given acceleration.

Figure 5 shows a representative composite image, and Table 2 summarizes the calculated errors for all slices in the testing dataset. Here it is observed that the composite images reconstructed with DL have lower l1 and l2 error and higher PSNR than the images reconstructed with NUFFT when compared to the 384-view reference at all three acceleration rates.

FIGURE 5.

Representative composite image retrospectively undersampled and reconstructed by adjoint NUFFT and the proposed DL method.

3.3 |. Prospective data

The prospectively accelerated data acquired on 15 test subjects were used to test the model on real data and to gather statistics on the true acquisition times of accelerated PACE datasets. Figure 6 shows representative composite images, 3 of the 32 TE images, and T2 maps reconstructed with the proposed DL method using prospectively acquired data with 384, 192, 160, or 128 radial views. All images are co-registered and are reconstructed from a single RADTSE dataset, serving as an example of the versatility of RADTSE in providing multicontrast qualitative images and corresponding quantitative maps. The average acquisition times over the 15 test subjects for the 384-, 192-, 160-, and 128-view scans were 8:19, 3:33, 2:57, and 2:37 min, respectively. On average, there is a 64–68% reduction in scan time when only 160–128 radial views are acquired compared to 384 radial views, which is approximately equal to the expected reduction due to collecting 58–67% less k-space data. As mentioned before, Figure 3 suggests that 160 views is the ideal acceleration rate for the DL reconstruction.

FIGURE 6.

Composite image, 3 of the 32 TE images, and corresponding T2 map for prospectively undersampled datasets reconstructed with the proposed DL method. The mean acquisition time for each acceleration rate is also reported to demonstrate the amount of time saved by accelerating acquisitions.

Figures 7 and S1–S4 demonstrate the full coverage achieved in a single free-breathing RADTSE scan with 160 radial views. Figures 7, S1, and S2 show the composite images and T2 maps for all slices of a single acquisition, while Figures S3 and S4 show the 5th and 11th of the 32 total TE images from the same acquisition. The proposed DL reconstruction uses data acquired with an average scan time of just 2:57 min to reconstruct 28 composite images, 28 T2 maps, and 896 (32 TEs × 28 slices) TE images with reconstruction times of approximately 1 s per slice.

FIGURE 7.

Composite images and T2 maps for all slices from an acquisition with 160 radial views reconstructed with the DL method. Larger versions of these images are provided in Figures S1 and S2. A sample of the TE images is shown in Figures S3 and S4. Scans with 160 views took, on average, 2:57 min to acquire 28 slices of 6 mm thickness with a 0.79 mm (13%) gap between slices, giving a total coverage of 189.06 mm. Note that regions such as subcutaneous fat that are nulled by fat suppression have high apparent T2 values due to low, nearly constant signal across all TE images.

3.4 |. Test subjects with pathology

The training set consisted of healthy volunteers, although benign lesions such as cysts and hemangiomas were identified in some subjects. In addition to healthy volunteers, the model was evaluated on two patients with pathology. Figure 8 demonstrates the performance of the model on both healthy volunteers and patients identified by a radiologist as having cholangiohepatitis, a hemangioma, a hepatic cyst, a splenic cyst, and a gastrointestinal stromal tumor of the stomach. The healthy volunteer with a hemangioma was one of the subjects imaged at 8192 views, which provides a high-quality reference. The other subjects were imaged only with 384 views, which should not be considered a gold standard but still provides insight into how robust the model is to undersampling. All scans were retrospectively undersampled to 160 views to show the performance of the model on accelerated scans.

FIGURE 8.

Composite images and T2 maps of subjects with (A) cholangiohepatitis (384/160 view T2 = 38.9/38.5 ms), (B) a hemangioma (8192/160 view T2 = 139.5/144.2 ms), (C) a hepatic cyst (384/160 view T2 = 346.7/343.9 ms) and a splenic cyst (384/160 view T2 = 279.0/284.3 ms), and (D) a gastrointestinal stromal tumor of the stomach (384/160 view T2 = 133.2/133.0 ms). The top of each grouping shows the images reconstructed with all acquired radial views, and the bottom shows those reconstructed with only 160 views (undersampled retrospectively). The subject with a hemangioma was imaged at 8192 views, which is used as the reference for that case. The other subjects were imaged only with 384 views, so that is used as the comparison for those cases. Note that the T2 display range changes for each subject to highlight the pathology of interest, which is also indicated with arrows.

4 |. DISCUSSION

We present an accelerated RADTSE technique with DL reconstruction to provide T2-weighted images and T2 maps of the full abdomen in a short free-breathing scan. PACE RADTSE provides a composite image, TE images, and a T2 map from a single accelerated acquisition. This is a distinct advantage over other T2-weighted sequences used in the abdomen, which typically provide a single TE image per slice. RADTSE provides high spatial and temporal resolution for T2 mapping, but the acquisition of PACE-triggered data poses challenges in terms of scan time depending on the subject's breathing pattern. Acceleration can be achieved by collecting fewer radial views, but radial undersampling affects image quality and T2 mapping accuracy. To overcome the effects of undersampling, reconstruction of accelerated RADTSE TE images and T2 maps has in the past relied on echo-sharing or CS methods.21,22 Echo-sharing reconstruction may lead to spurious T2 values due to sharing data across TEs,21 and CS has the downside of long reconstruction times. The DL reconstruction technique proposed here proved more robust than NUFFT and CS, producing images and accurate T2 maps at high acceleration with fewer artifacts. Compared to CS, the DL technique has a clear advantage in reconstruction time: a single slice of RADTSE data takes approximately 1 s using the DL technique but can take more than 30 min using the LLR method on the same system, depending on the choice of reconstruction parameters.

RADTSE with 160 radial views was able to achieve full coverage of the liver with an average scan time of 2:57 min in freely breathing subjects. Beyond increasing scanning efficiency, reducing the acquisition time also makes it easier for patients to maintain a regular respiratory pattern and to remain largely static within the scanner, which should reduce bulk patient motion artifacts and image deterioration from poor respiratory triggering. The results of this work suggest that a 160-view acquisition is an ideal trade-off between acquisition time, anatomical image quality, and T2 accuracy and precision.

RADTSE provides advantages over other T2 mapping approaches due to its robustness to motion, ability to be accelerated by acquiring fewer radial views, and high spatial and temporal resolution. The robustness to motion also extends to T2-weighted imaging, but drawbacks to the radial approach compared to Cartesian approaches for T2-weighted imaging include increased susceptibility to off-resonance and gradient delay effects. The fat suppression utilized in the protocol presented ameliorates off-resonance sensitivity and gradient delays can be corrected during pulse sequence design.

Although the proposed model is able to mitigate streaks caused by undersampling, we still observe residual streaking artifacts in some cases. These streaks emanate from hyper-intense regions that exhibit motion such as the stomach due to peristalsis (Figure 6) or cerebrospinal fluid due to pulsation (Figure 3B, vii). When sampled sufficiently, such as the case with 8192 views, the motion averages out over time, so these artifacts could potentially be removed if the model were trained on a large training set of 8192-view data. This would require long scan times to yield a few slices (acquisition time ≈ 25 min for 5 slices).

The protocol used to collect training data was designed specifically for the application of free-breathing T2-mapping of the abdomen, and the model was trained accordingly with these data. For other applications, new protocols would need to be designed, and the model would likely require fine tuning to achieve the performance that we observed for this application. However, the fully supervised and self-supervised training schemes used in this study were presented in a flexible manner to be able to be easily applied to other datasets. The dataset used in this study consists predominantly of healthy volunteers. The proposed method was able to reliably reconstruct images for several subjects with different pathologies, highlighted in Figure 8, but should be evaluated on a larger cohort of patients with a wider variety of pathological conditions.

The reconstruction model consisted of five cascades of CNN blocks, each consisting of five convolutional layers, a relatively small and simple CNN architecture. Due to the multiple cascades, the size and complexity of each individual CNN block was limited by the memory of the GPU used in training for this study. With rapid recent advances in GPU technology, newer GPU models may allow for larger and more complex architectures, which could potentially improve performance.

Scanning efficiency can, in some subjects, be limited by specific absorption rate (SAR) safety limits. The protocol used for this study utilized a constant flip angle of 150°, but variable flip angle schemes have been shown to reduce SAR when using RADTSE for T2-mapping of the abdomen,17 and could be used to further increase scanning efficiency.

5 |. CONCLUSION

We propose a solution for quantitative MRI of the liver and abdomen, which has historically been limited by technical challenges that include long scan times and deterioration from physiological motion. Quantitative T2 mapping of the liver and abdomen has the potential for robust differentiation of normal from abnormal tissues, including cancer, and could standardize the evaluation of disease across vendor platforms. This paper presents an efficient free-breathing technique for accelerated full abdominal T2 mapping that is enabled by a novel self-supervised deep learning reconstruction model. RADTSE allows for reconstruction of a series of anatomical T2-weighted images and a T2 parameter map from a single acquisition. Respiratory motion is a concern in the abdomen, and respiratory triggering is a potential solution for subjects who cannot hold their breath; however, triggering is slow and depends on the subject's respiratory rate. The DL reconstruction model was trained to produce both high-quality T2-weighted anatomical images and accurate T2 maps from data acquired in a single accelerated respiratory-triggered acquisition.

The model was evaluated on its ability to produce accurate and precise quantitative maps as well as its ability to produce high-quality anatomical images from accelerated datasets. The DL model achieves both these goals, and in doing so can dramatically decrease both acquisition times compared to well sampled datasets and reconstruction times compared to iterative techniques. Using this technique can accelerate respiratory triggered free-breathing T2 mapping of the entire abdomen to a mean scan time of under 3min without sacrificing image quality or T2 accuracy.

Supplementary Material


SUPPORTING INFORMATION

Additional supporting information may be found in the online version of the article at the publisher’s website.

Data S1: Supporting Information.

ACKNOWLEDGEMENTS

Part of this work was presented as a conference abstract.48 The authors would like to acknowledge grant support from the National Institutes of Health (CA245920 and EB031894), the Arizona Biomedical Research Centre (CTR056039), the Technology and Research Initiative Fund (TRIF) Improving Health Initiative, and the Research Training Group in Data Driven Discovery at the University of Arizona (NSF grant DMS-1937229). The concepts and information presented in this paper are based on research results that are not commercially available. Future commercial availability cannot be guaranteed.

FUNDING INFORMATION

Simon Arberet, Fei Han, Ute Goerke, Vibhas Deshpande, and Mariappan Nadar are employees of Siemens Healthineers. National Institutes of Health: grants CA245920 and EB031894, Technology and Research Initiative Fund (TRIF) Improving Health Initiative, and National Science Foundation Research Training Group in Data Driven Discovery at the University of Arizona: grant DMS-1937229.

Funding information

Technology and Research Initiative Fund (TRIF) Improving Health Initiative; Arizona Biomedical Research Center, Grant/Award Number: CTR056039; Research Training Group in Data Driven Discovery at the University of Arizona - NSF, Grant/Award Number: DMS-1937229; National Institute of Biomedical Imaging and Bioengineering (NIBIB) - NIH, Grant/Award Number: EB031894; National Cancer Institute (NCI) - NIH, Grant/Award Number: CA245920

Footnotes

CONFLICT OF INTEREST STATEMENT

Co-authors Simon Arberet, Fei Han, Ute Goerke, Vibhas Deshpande, and Mariappan Nadar are employees of Siemens Healthineers.

DATA AVAILABILITY STATEMENT

Although the human MRI data are not approved to be shared publicly, the code is publicly available at https://github.com/UA-MRI/radtse-dl-recon.git.

REFERENCES

1. Semelka RC, Kelekis NL, Thomasson D, Brown MA, Laub GA. HASTE MR imaging: description of technique and preliminary results in the abdomen. J Magn Reson Imaging. 1996;6:698–699.
2. Klessen C, Asbach P, Kroencke TJ, et al. Magnetic resonance imaging of the upper abdomen using a free-breathing T2-weighted turbo spin echo sequence with navigator triggered prospective acquisition correction. J Magn Reson Imaging. 2005;21:576–582.
3. Kim YH, Saini S, Blake MA, et al. Distinguishing hepatic metastases from hemangiomas: qualitative and quantitative diagnostic performance through dual echo respiratory-triggered fast spin echo magnetic resonance imaging. J Comput Assist Tomogr. 2005;29:571–579.
4. Farraher SW, Jara H, Chang KJ, Ozonoff A, Soto JA. Differentiation of hepatocellular carcinoma and hepatic metastasis from cysts and hemangiomas with calculated T2 relaxation times and the T1/T2 relaxation times ratio. J Magn Reson Imaging. 2006;24:1333–1341.
5. Cieszanowski A, Anysz-Grodzicka A, Szeszkowski W, et al. Characterization of focal liver lesions using quantitative techniques: comparison of apparent diffusion coefficient values and T2 relaxation times. Eur Radiol. 2012;22:2514–2524.
6. Goldberg MA, Hahn PF, Saini S, et al. Value of T1 and T2 relaxation times from echoplanar MR imaging in the characterization of focal hepatic lesions. Am J Roentgenol. 1993;160:1011–1017.
7. Abe Y, Yamashita Y, Tang Y, Namimoto T, Takahashi M. Calculation of T2 relaxation time from ultrafast single shot sequences for differentiation of liver tumors: comparison of echo-planar, HASTE, and spin-echo sequences. Radiat Med. 2000;18:7–14.
8. Lin X, Dai L, Yang Q, et al. Free-breathing and instantaneous abdominal T2 mapping via single-shot multiple overlapping-echo acquisition and deep learning reconstruction. Eur Radiol. 2023;33:4938–4948.
9. Meloni A, Carnevale A, Gaio P, et al. Liver T1 and T2 mapping in a large cohort of healthy subjects: normal ranges and correlation with age and sex. Magn Reson Mater Phys Biol Med. 2024;37:93–100.
10. Hilbert T, Sumpf TJ, Weiland E, et al. Accelerated T2 mapping combining parallel MRI and model-based reconstruction: GRAPPATINI. J Magn Reson Imaging. 2018;48:359–368.
11. Vietti-Violi N, Hilbert T, Bastiaansen JA, et al. Patient respiratory-triggered quantitative T2 mapping in the pancreas. J Magn Reson Imaging. 2019;50:410–416.
12. Chen Y, Jiang Y, Pahwa S, et al. MR fingerprinting for rapid quantitative abdominal imaging. Radiology. 2016;279:278–286.
13. Huang SS, Boyacioglu R, Bolding R, MacAskill C, Chen Y, Griswold MA. Free-breathing abdominal magnetic resonance fingerprinting using a pilot tone navigator. J Magn Reson Imaging. 2021;54:1138–1151.
14. Jaubert O, Arrieta C, Cruz G, et al. Multi-parametric liver tissue characterization using MR fingerprinting: simultaneous T1, T2, T2*, and fat fraction mapping. Magn Reson Med. 2020;84:2625–2635.
15. Ostenson J, Damon BM, Welch EB. MR fingerprinting with simultaneous T1, T2, and fat signal fraction estimation with integrated B0 correction reduces bias in water T1 and T2 estimates. Magn Reson Imaging. 2019;60:7–19.
16. Deng J, Larson AC. Modified PROPELLER approach for T2-mapping of the abdomen. Magn Reson Med. 2009;61:1269–1278.
17. Keerthivasan MB, Galons JP, Johnson K, et al. Abdominal T2-weighted imaging and T2 mapping using a variable flip angle radial turbo spin-echo technique. J Magn Reson Imaging. 2022;55:289–300.
18. Huang C, Bilgin A, Barr T, Altbach MI. T2 relaxometry with indirect echo compensation from highly undersampled data. Magn Reson Med. 2013;70:1026–1037.
19. Huang C, Altbach MI, Fakhri GE. Pattern recognition for rapid T2 mapping with stimulated echo compensation. Magn Reson Imaging. 2014;32:969–974.
20. Morita S, Ueno E, Suzuki K, et al. Navigator-triggered prospective acquisition correction (PACE) technique vs. conventional respiratory-triggered technique for free-breathing 3D MRCP: an initial prospective comparative study using healthy volunteers. J Magn Reson Imaging. 2008;28:673–677.
21. Altbach MI, Bilgin A, Li Z, Clarkson EW, Trouard TP, Gmitro AF. Processing of radial fast spin-echo data for obtaining T2 estimates from a single k-space data set. Magn Reson Med. 2005;54:549–559.
22. Huang C, Graff CG, Clarkson EW, Bilgin A, Altbach MI. T2 mapping from highly undersampled data by reconstruction of principal component coefficient maps using compressed sensing. Magn Reson Med. 2012;67:1355–1366.
23. Mandava S, Keerthivasan MB, Martin DR, Altbach MI, Bilgin A. Improving subspace constrained radial fast spin echo MRI using block matching driven non-local low rank regularization. Phys Med Biol. 2021;66:04NT03.
24. Lebel RM, Wilman AH. Transverse relaxometry with stimulated echo compensation. Magn Reson Med. 2010;64:1005–1014.
25. Hammernik K, Küstner T, Yaman B, et al. Physics-driven deep learning for computational magnetic resonance imaging: combining physics and machine learning for improved medical imaging. IEEE Signal Process Mag. 2023;40:98–114.
26. Heckel R, Jacob M, Chaudhari A, Perlman O, Shimron E. Deep learning for accelerated and robust MRI reconstruction. Magn Reson Mater Phys Biol Med. 2024;37:335–368.
27. Fu Z, Mandava S, Keerthivasan MB, et al. A multi-scale residual network for accelerated radial MR parameter mapping. Magn Reson Imaging. 2020;73:152–162.
28. Schlemper J, Caballero J, Hajnal JV, Price A, Rueckert D. A deep cascade of convolutional neural networks for MR image reconstruction. International Conference on Information Processing in Medical Imaging. Springer; 2017:647–658.
29. Schlemper J, Salehi SSM, Kundu P, et al. Nonuniform variational network: deep learning for accelerated nonuniform MR image reconstruction. International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer; 2019:57–64.
30. Hammernik K, Klatzer T, Kobler E, et al. Learning a variational network for reconstruction of accelerated MRI data. Magn Reson Med. 2018;79:3055–3071.
31. Liu F, Feng L, Kijowski R. MANTIS: model-augmented neural network with incoherent k-space sampling for efficient MR parameter mapping. Magn Reson Med. 2019;82:174–188.
32. Liu F, Kijowski R, El Fakhri G, Feng L. Magnetic resonance parameter mapping using model-guided self-supervised deep learning. Magn Reson Med. 2021;85:3211–3226.
33. Bian W, Jang A, Liu F. Improving quantitative MRI using self-supervised deep learning with model reinforcement: demonstration for rapid T1 mapping. Magn Reson Med. 2024;92:98–111.
34. Li H, Yang M, Kim JH, et al. SuperMAP: deep ultrafast MR relaxometry with joint spatiotemporal undersampling. Magn Reson Med. 2023;89:64–76.
35. Pei H, Shepherd TM, Wang Y, et al. DeepEMC-T2 mapping: deep learning-enabled T2 mapping based on echo modulation curve modeling. Magn Reson Med. 2024;92:2707–2722.
36. Yang Q, Lin Y, Wang J, et al. MOdel-based SyntheTic data-driven learning (MOST-DL): application in single-shot T2 mapping with severe head motion using overlapping-echo acquisition. IEEE Trans Med Imaging. 2022;41:3167–3181.
37. Zhang Z, Cho J, Wang L, et al. Blip up-down acquisition for spin- and gradient-echo imaging (BUDA-SAGE) with self-supervised denoising enables efficient T2, T2*, para- and dia-magnetic susceptibility mapping. Magn Reson Med. 2022;88:633–650.
38. Jun Y, Arefeen Y, Cho J, et al. Zero-DeepSub: zero-shot deep subspace reconstruction for rapid multiparametric quantitative MRI using 3D-QALAS. Magn Reson Med. 2024;91:2459–2482.
39. Blumenthal M, Fantinato C, Unterberg-Buchwald C, Haltmeier M, Wang X, Uecker M. Self-supervised learning for improved calibrationless radial MRI with NLINV-Net. Magn Reson Med. 2024;92:2447–2463.
40. Natsuaki Y, Keerthivasan M, Bilgin A, et al. Flexible and efficient 2D radial TSE T2 mapping with tiered echo sharing and with pseudo golden angle ratio reordering. Proceedings of the 2017 Annual Meeting of the ISMRM. International Society for Magnetic Resonance in Medicine; 2017.
41. Shih Y, Wright G, Andén J, Blaschke J, Barnett AH. cuFINUFFT: a load-balanced GPU library for general-purpose nonuniform FFTs. 2021.
42. Pipe JG, Menon P. Sampling density compensation in MRI: rationale and an iterative numerical solution. Magn Reson Med. 1999;41:179–186.
43. Yaman B, Hosseini SAH, Moeller S, Ellermann J, Uğurbil K, Akçakaya M. Self-supervised learning of physics-guided reconstruction neural networks without fully sampled reference data. Magn Reson Med. 2020;84:3172–3191.
44. Hammernik K, Küstner T. Machine Enhanced Reconstruction Learning and Interpretation Networks (MERLIN). Proceedings of the 2022 Annual Meeting of the ISMRM. International Society for Magnetic Resonance in Medicine; 2022.
45. Paszke A, Gross S, Massa F, et al. PyTorch: an imperative style, high-performance deep learning library. Curran Associates, Inc; 2019:8024–8035.
46. Fu Z, Johnson K, Altbach M, Bilgin A. Cancellation of streak artifacts in radial abdominal imaging using interference null space projection. Magn Reson Med. 2022;88:1355–1369.
47. Toner B, Fu Z, Philip R, Martin D, Altbach M, Bilgin A. The impact of streak removal on deep learning reconstruction of radial datasets. Proceedings of the 2022 Annual Meeting of the ISMRM. International Society for Magnetic Resonance in Medicine; 2022.
48. Toner B, Arberet S, Ahanonu E, et al. Free-breathing T2 mapping of the abdomen in half the scan time using RADTSE with deep learning reconstruction. Proceedings of the 2024 Annual Meeting of the ISMRM. International Society for Magnetic Resonance in Medicine; 2024.
