Author manuscript; available in PMC: 2026 Jan 1.
Published in final edited form as: Magn Reson Imaging. 2024 Nov 19;115:110277. doi: 10.1016/j.mri.2024.110277

Blip-Up Blip-Down Circular EPI (BUDA-cEPI) for Distortion-Free dMRI with Rapid Unrolled Deep Learning Reconstruction

Uten Yarach 1,#, Itthi Chatnuntawech 2,#, Congyu Liao 3,4, Surat Teerapittayanon 2, Siddharth Srinivasan Iyer 5, Tae Hyung Kim 6, Justin Haldar 7, Jaejin Cho 8, Berkin Bilgic 8,9, Yuxin Hu 10, Brian Hargreaves 3,4,10, Kawin Setsompop 3,4,*
PMCID: PMC12124459  NIHMSID: NIHMS2037765  PMID: 39566835

Abstract

Purpose:

BUDA-cEPI has been shown to achieve high-quality, high-resolution diffusion magnetic resonance imaging (dMRI) with fast acquisition time, particularly when used in conjunction with S-LORAKS reconstruction. However, this comes at the cost of a more complex reconstruction that is computationally prohibitive. In this work, we develop a rapid reconstruction pipeline for BUDA-cEPI to pave the way for its deployment in routine clinical and neuroscientific applications. The proposed pipeline includes an ML-based unrolled reconstruction as well as the rapid ML-based B0 and eddy-current estimation it requires. The architecture of the unrolled network was designed to mimic S-LORAKS regularization well, with the addition of virtual coil channels.

Methods:

BUDA-cEPI RUN-UP, a model-based framework that incorporates off-resonance and eddy-current effects, was unrolled through an artificial neural network with only six gradient updates. The unrolled network alternates between data-consistency steps (i.e., the forward BUDA-cEPI model and its adjoint) and regularization steps in which a U-Net plays the role of the regularizer. To handle the partial Fourier effect, the virtual coil concept was also introduced into the reconstruction to effectively exploit the smooth phase prior. The network was trained to predict the ground-truth images obtained by BUDA-cEPI with S-LORAKS.

Results:

The introduction of the Virtual Coil concept into the unrolled network was shown to be key to achieving high-quality reconstruction for BUDA-cEPI. With the inclusion of an additional non-diffusion image (b-value = 0 s/mm2), a slight improvement was observed, with the normalized root mean square error further reduced by approximately 5%. The reconstruction times for S-LORAKS and the proposed unrolled networks were approximately 225 and 3 seconds per slice, respectively.

Conclusion:

BUDA-cEPI RUN-UP was shown to reduce the reconstruction time by ~88x when compared to the state-of-the-art technique, while preserving imaging details as demonstrated through DTI application.

Keywords: BUDA-cEPI, geometric distortion, diffusion MRI, off-resonance, eddy current, unrolled network, deep learning

1. INTRODUCTION

Diffusion reflects the random movement of water molecules in tissues, revealing their microarchitecture and microstructural abnormalities in many neurological conditions. Diffusion magnetic resonance imaging (dMRI) provides useful information, increasing the sensitivity of MRI as a diagnostic tool, narrowing the differential diagnosis, and providing prognostic information for treatment planning [1,2]. Single-shot Echo-Planar Imaging (ssEPI) is widely used for clinical dMRI since it is one of the fastest imaging techniques [3]. However, for high-resolution dMRI, ssEPI is often compromised by off-resonance and eddy-current effects, which cause geometric distortion, and by T2* decay, which causes blurring, due to the lengthy echo spacing (ESP) and echo train length (ETL).

Numerous techniques have been developed to mitigate the aforementioned issues [4–13]. Post-processing techniques that require gradient echo or EPI-based field maps attempt to correct geometric distortion through image-domain interpolation [8,9]. However, due to finite data sampling and image discretization, interpolative resampling invariably causes image blurring and loss of spatial resolution. Moreover, because these non-idealities do not manifest independently during data acquisition, traditional post-processing techniques that manage them independently can leave (potentially subtle) residual errors or degradations in the resulting images. Model-based reconstructions [11–13], which consider off-resonance, odd-even phase shifts, gradient nonlinearity, and/or ramp sampling in the signal forward model and reconstruct images via an iterative least-squares solver, have been shown to effectively mitigate some of the residual artifacts that remain after standard post-processing corrections.

For high-resolution dMRI, where the ESP is extended, model-based reconstruction alone may be insufficient to mitigate distortions in ssEPI. To overcome this issue, interleaved multi-shot EPI (msEPI) acquisition can be used in conjunction with model-based reconstruction, where the effective ESP is shortened by a factor equal to the number of shots, at the expense of prolonged scan time [14–16]. Modified rapid 'two-shot' EPI acquisitions such as blip-up, blip-down EPI (BUDA) and related techniques have also been developed to improve on this, and have been shown to enable high-fidelity, high-resolution, distortion-free dMRI [17–22]. In BUDA, the off-resonance maps are often estimated from the individual blip-up and blip-down images, and include both B0 field inhomogeneity and eddy-current effects. With these maps incorporated, the two-shot data can be jointly reconstructed through a model-based framework with low-rank matrix modeling constrained algorithms [23]. These algorithms handle shot-to-shot background phase variations and the partial Fourier (pF) sampling effect, thereby providing high-fidelity dMRI without the need for additional calibration data [24–28]. Similarly, our recent work [Congyu’s paper] introduced BUDA circular-EPI (BUDA-cEPI) to further minimize echo time (TE) and ETL through pF acquisition applied in both the readout (RO) and phase-encoding (PE) directions. The k-space centers of the blip-up and blip-down shots were acquired with a constant echo spacing (ESP) to enable reconstruction of the individual blip-up and blip-down low-resolution images at a consistent distortion level for the purpose of generating off-resonance maps. However, the current reconstruction pipeline, both the S-LORAKS reconstruction and the eddy-current estimation (e.g., the TOP-UP toolbox), is very time-consuming, requiring overnight reconstruction on a server for a short 10–20 minute dMRI scan. This is a major hurdle to the wide adoption of such an efficient, high-fidelity dMRI approach in clinical and neuroscientific applications.

In the past few years, two types of deep learning-based strategies have been adopted to reduce MRI reconstruction time: data-driven [30–35] and model-driven approaches [36–43]. The data-driven approach typically trains a standard neural network, such as a multi-layer perceptron (MLP) or a convolutional neural network (CNN), on a collection of input-output pairs to approximate the unknown underlying input-output relationship. In the context of accelerated dMRI, the input and output of the model can simply be chosen to be the undersampled k-space and the ground-truth images, respectively. While this approach has demonstrated promising results with much shorter reconstruction times than conventional iterative reconstructions, it lacks a theoretical explanation of the relationship between network topology and performance. Furthermore, successful applications typically require a large amount of data, which can be clinically prohibitive. In the model-driven approach, an optimization problem that relates the input data to the output data is formulated based on MR physics, and an optimization algorithm for the formulated problem is selected and unrolled, resulting in a deep learning model whose architecture is tailored to the application at hand. By explicitly incorporating domain-specific knowledge this way, the model-driven approach can be more robust to scan- and/or subject-specific factors and rely less on massive training datasets. Recently, RUN-UP [38], a model-driven deep learning approach, was proposed to speed up multi-shot dMRI reconstruction. Specifically, the fast iterative shrinkage-thresholding algorithm (FISTA) [44] was unrolled for a fixed number of iterations. The unrolled network alternates between data consistency (i.e., forward SENSE and its adjoint) and regularization steps similar to the conventional algorithm, in which a U-Net plays the role of the regularizer. In addition, some studies [45, 46] use the blip-up/down scheme with deep learning to estimate field maps instead of TOP-UP, potentially improving time efficiency.

In this work, we extend the RUN-UP model [38] to enable fast reconstruction of data acquired with BUDA-cEPI. The main contributions of this work are as follows:

  • Design of an unrolled network with a specific architecture that achieves performance comparable to S-LORAKS, in particular through virtual coil (VC) and non-diffusion image (b0 image) channels and a cross-domain CNN concept in which the input space alternates between k-space and image space (KI-Net).

  • Validation of the feasibility of using off-resonance and eddy-current maps estimated with a fast technique (i.e., a 3D U-Net) to speed up the entire proposed reconstruction pipeline.

2. THEORY

2.1. BUDA Circular EPI (BUDA-cEPI) Sequence

The sequence diagram of BUDA-cEPI [47, 48] is illustrated in Fig. 1(A), where two interleaved EPI shots sample complementary subsets of k-space with opposing phase-encoding directions to create opposing distortions. As shown in Fig. 1(B), cEPI acquires only approximately 30% of k-space, using both RO- and PE-pF and ramp sampling, which significantly reduces ETL and TE. The pF is designed to sample complementary k-space regions across the blip-up and blip-down shots so that the missing pF data can be effectively recovered when a joint reconstruction is performed across shots. The regions near the k-space center are acquired with a constant ESP, allowing reconstruction of blip-up and blip-down low-resolution images at a constant distortion level that can later be used for off-resonance map estimation. These low-resolution images with symmetric k-space coverage are at 2 mm resolution, which is sufficient to represent the expected shot-to-shot background phase. Both shots are acquired in an interleaved fashion. The sequence was implemented using GE EPIC with KS Foundation (https://ksfoundationepic.org/).

Fig. 1.

(A) The sequence diagram of the BUDA-cEPI sequence. (B) The trajectory of the blip-up and blip-down cEPI with readout and phase-encoding partial Fourier acquisition.

2.2. Circular EPI Signal Model

Let TE, Δt, and T denote the echo time, dwell time, and echo spacing, respectively. Neglecting the T2 effect, the signal measured from the object field-of-view (Ω_xy) during readout sample m ∈ {1, …, M} of phase-encoding line n ∈ {1, …, N} can be modeled as:

g[m,n,c] = \int_{\Omega_{xy}} s_c(x,y)\, f(x,y)\, e^{-j 2\pi \Delta\omega_0(x,y)\,\tau[m,n]}\, e^{-j\,(k_x[m]\,x + k_y[n]\,y)}\, dx\, dy + \varepsilon[m,n,c] \quad (1)

where f is the target image; k_x and k_y are the k-space coordinates in the readout/frequency and phase-encoding dimensions, respectively; s_c is the sensitivity profile for coil c ∈ {1, …, C}; and Δω0(x,y) is the off-resonance caused by magnetic field inhomogeneity at location (x,y). τ[m,n] = TE + (m − (M−1)/2)Δt + (n − (N−1)/2)T[n] denotes the sampling time, where T[n] is the set of variable ESPs. ε is Gaussian noise. Eq. (1) assumes that time reversal and odd-even echo shifts have already been corrected.

For the discrete model: since cEPI uses high readout bandwidths, Δt ≪ T and off-resonance primarily manifests along the phase-encoding direction, so Δt → 0 can be assumed. Letting u(x,y) = f(x,y) e^{-j 2π Δω0(x,y) TE} and discretizing the data [49], Eq. (1) becomes

g[m,n,c] = \sum_{p=1}^{P} \sum_{q=1}^{Q} s[p,q,c]\, u[p,q]\, e^{-j 2\pi \Delta\omega_0[p,q]\left(n - \frac{N-1}{2}\right) T_n}\, e^{-j\,(k_x[m]\,p + k_y[n]\,q)} + \varepsilon[m,n,c] \quad (2)

where p and q are the pixel indices and u is the underlying image. When the time at any phase-encoding line is assumed to be constant, time segmentation [50] can be applied. Defining W_n = \mathrm{diag}\{ e^{-j 2\pi \Delta\omega_0[p,q] (n - \frac{N-1}{2}) T_n} \} and S = [\mathrm{diag}\{s_1\}, \ldots, \mathrm{diag}\{s_C\}]^T, Eq. (2) can be written as

G = \left( I \otimes \sum_{n=1}^{N} F\, W_n \right) S\, u + \varepsilon = A\, u + \varepsilon, \quad (3)

where I is the identity matrix, ⊗ is the Kronecker product, N is the number of time segments (i.e., the total number of phase-encoding lines), and F is the Fourier transform.
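As a concrete illustration, the time-segmented forward model of Eq. (3) can be applied with one FFT per phase-encoding segment. The following is a minimal numpy sketch under simplifying assumptions (single 2D slice, hypothetical sizes, coil maps, and off-resonance values), not the authors' implementation:

```python
import numpy as np

def buda_cepi_forward(u, sens, dw0, t_pe):
    """Time-segmented forward model in the spirit of Eq. (3): for each
    phase-encoding line n, weight the coil-modulated image by the
    off-resonance phase W_n, Fourier transform, and keep only row n."""
    N, M = u.shape
    C = sens.shape[0]
    g = np.zeros((C, N, M), dtype=complex)
    for n in range(N):
        w_n = np.exp(-2j * np.pi * dw0 * t_pe[n])  # W_n for segment n
        for c in range(C):
            k = np.fft.fft2(sens[c] * w_n * u)     # F W_n S u
            g[c, n, :] = k[n, :]                   # sample PE line n only
    return g

# Tiny demonstration with hypothetical numbers
rng = np.random.default_rng(0)
N = M = 16
u = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
sens = np.stack([np.ones((N, M)),
                 np.exp(1j * 0.1 * np.arange(M))[None, :] * np.ones((N, 1))])
dw0 = 20.0 * rng.standard_normal((N, M))   # off-resonance map (Hz)
esp = np.linspace(0.67e-3, 1.09e-3, N)     # variable ESPs (s)
t_pe = np.cumsum(esp) - esp[0]             # sampling time of each PE line
g = buda_cepi_forward(u, sens, dw0, t_pe)  # multi-coil k-space
```

With Δω0 = 0 and a single uniform coil, the model reduces to a plain 2D FFT, which is a convenient sanity check on the implementation.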

2.3. BUDA-cEPI Reconstruction with S-LORAKS

Low-rank modeling of local k-space neighborhoods (LORAKS) [51] is a constrained MRI framework that enables accurate image reconstruction from sparsely and unconventionally sampled k-space data. It relies on the fact that MR images typically have limited spatial support and/or slowly varying image phase, which can result in structured low-rank k-space properties, so low-rank matrix regularization can be used to produce high-quality reconstructions. In this work, the partial and parallel BUDA-cEPI acquisition was modeled using two matrices: A↑ and A↓ for the blip-up and blip-down acquisitions, respectively. To reconstruct the underlying images u↑ and u↓ with the S-LORAKS constraint, we minimize the following objective function:

\min_{u^{\uparrow},\, u^{\downarrow}} \frac{1}{2} \left\| \begin{bmatrix} A^{\uparrow} & 0 \\ 0 & A^{\downarrow} \end{bmatrix} \begin{bmatrix} u^{\uparrow} \\ u^{\downarrow} \end{bmatrix} - \begin{bmatrix} G^{\uparrow} \\ G^{\downarrow} \end{bmatrix} \right\|_2^2 + \lambda\, J_r\!\left( P_s(u^{\uparrow}, u^{\downarrow}) \right). \quad (4)

P_s(·) is the operator that constructs the high-dimensional structured LORAKS matrix (i.e., the S-matrix) of u↑ and u↓. The regularization term J_r(·) is a nonconvex penalty imposing a rank-r approximation of its input matrix, defined as

J_r(X) = \sum_{k > r} \sigma_k^2 = \min_{\operatorname{rank}(Y) \le r} \| X - Y \|_F^2

where σ_k are the singular values of X. λ is a user-selected regularization parameter that adjusts the strength of the penalty applied to the S-matrix, and r is a user-selected rank estimate for the S-matrix. J_r(·) is a nonconvex regularization that encourages its matrix argument to have rank less than or equal to r.
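The two expressions for J_r(·) above coincide by the Eckart-Young theorem, which a short numpy check makes explicit (all names and sizes here are illustrative, not from the paper):

```python
import numpy as np

def rank_r_penalty(X, r):
    """J_r(X): sum of squared singular values beyond the r-th."""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(s[r:] ** 2))

def best_rank_r(X, r):
    """Best rank-r approximation of X (truncated SVD, Eckart-Young)."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vh[:r]

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 30))
J = rank_r_penalty(X, 5)
Y = best_rank_r(X, 5)
# J equals the squared Frobenius distance ||X - Y||_F^2
```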

2.4. RUN-UP: The Unrolled Network with Deep Priors

RUN-UP [38] was introduced for multi-shot DWI, with a CNN regularization that utilizes the joint information between images from different shots as follows:

\min_{u_1, \ldots, u_{N_S}} \frac{1}{2} \sum_{s=1}^{N_S} \left\| A_s u_s - G_s \right\|_2^2 + R(u_1, \ldots, u_{N_S}), \quad (5)

where u_1, …, u_{N_S} are the images of the N_S different shots to be reconstructed; A_s is the encoding operator for the s-th shot, a combination of the sampling operator, Fourier transform, and sensitivity encoding operator; G_s is the acquired multi-coil data of the s-th shot; and R(·) is a regularization term modeled using U-Nets, trained to predict the ground truth obtained by magnitude-based spatial-angular locally low-rank regularization (SPA-LLR) [52]. In particular, the multi-shot images are updated using the following equations:

u_{1,t} = u_{1,t-1} - \tau \left( A_1^H A_1 u_{1,t-1} - A_1^H G_1 \right)
\;\;\vdots
u_{N_S,t} = u_{N_S,t-1} - \tau \left( A_{N_S}^H A_{N_S} u_{N_S,t-1} - A_{N_S}^H G_{N_S} \right) \quad (6)

\{ u_{1,t+1}, \ldots, u_{N_S,t+1} \} = R( u_{1,t}, \ldots, u_{N_S,t} ) \quad (7)

A^H is the adjoint of A, and τ is the step size. When t is odd, R(·) takes k-space data as input (F{u_{1,t}, …, u_{N_S,t}}); when t is even, R(·) takes image data as input (u_{1,t}, …, u_{N_S,t}). This implementation is called KI-Net, which was shown to improve reconstruction performance by taking advantage of the joint information in both k-space and image space.
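The alternation in Eqs. (6)-(7) can be sketched as a single loop. In the sketch below the trained U-Nets are replaced by a placeholder callable R, and A is an arbitrary linear operator passed in as a function; this is a schematic of the KI-Net idea under those assumptions, not the trained network:

```python
import numpy as np

def unrolled_kinet(g, A, AH, R, tau=0.9, T=6):
    """Unrolled reconstruction: a gradient update on 0.5 * ||A u - g||^2,
    followed by a regularizer R applied in k-space on odd steps and in
    image space on even steps (the KI-Net alternation)."""
    u = AH(g)                                # adjoint initialization
    for t in range(1, T + 1):
        u = u - tau * (AH(A(u)) - AH(g))     # data-consistency gradient step
        if t % 2 == 1:                       # odd t: regularize in k-space
            u = np.fft.ifft2(R(np.fft.fft2(u)))
        else:                                # even t: regularize in image space
            u = R(u)
    return u

# Sanity check: a fully sampled single-coil "acquisition" with an identity
# regularizer should return the true image exactly.
rng = np.random.default_rng(2)
u_true = rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))
A = lambda x: np.fft.fft2(x, norm="ortho")
AH = lambda y: np.fft.ifft2(y, norm="ortho")
u_rec = unrolled_kinet(A(u_true), A, AH, R=lambda x: x)
```

In RUN-UP the six R(·) calls are six separate trained U-Nets; the identity regularizer above only exercises the data-consistency structure of the loop.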

3. METHODS

3.1. Data Acquisitions

In-vivo experiments were performed on a 3T GE Premier scanner with a 48-channel receiver head coil (SVD-compressed to 12 channels) [53]. Nine healthy volunteers were scanned with informed consent under an IRB-approved protocol. The BUDA-cEPI sequence was used with the following parameters: resolution = 0.73×0.73×5.00 mm, TR/TE = 5000/55 ms, field of view (FOV) = 220×220 mm, matrix size = 300×300, number of slices = 16, number of excitations (NEX) = 1, variable echo spacing (ESP) = 0.67–1.09 ms, partial Fourier (pF) in both phase encoding and readout = 5/8, SENSE factor = 4, scan time = 300 s, and 50 diffusion directions with b-value = 1000 s/mm². Data from eight volunteers were used for training, while data from the remaining volunteer were used for testing the models.

3.2. Data Pre-Processing

Low-resolution gradient echo data were also acquired for coil sensitivity estimation using ESPIRiT [54]. Because of the variable echo spacing, 1D Nyquist ghost correction was applied to each line individually, using parameters estimated from an EPI calibration scan acquired with the phase-encoding gradient turned off. Since cEPI uses variable ESP and ramp sampling, re-gridding was performed along ky line by line. The central low-resolution k-space data (i.e., matrix size 128×128) of each BUDA shot were reconstructed using standard SENSE [55]. Cubic interpolation was applied to the low-resolution BUDA image pairs to create images of the same size as the high-resolution BUDA-cEPI data (i.e., matrix size 300×300). These interpolated images were used to estimate the field map via FSL TOP-UP for each diffusion-encoding direction, capturing both susceptibility and eddy-current effects [10]. This map is referred to as Δω0 (in hertz), as in Eqs. (1) and (2).
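The line-by-line re-gridding step can be illustrated with simple 1D interpolation from the non-uniform sample positions produced by ramp sampling onto a uniform grid. This is a numpy sketch with made-up coordinates and a smooth test signal, not the exact gridding kernel used in the pipeline:

```python
import numpy as np

def regrid_line(samples, k_acq, k_grid):
    """Re-grid one readout line from non-uniform acquired k-space
    positions (ramp sampling) onto a uniform grid, interpolating the
    real and imaginary parts separately."""
    re = np.interp(k_grid, k_acq, samples.real)
    im = np.interp(k_grid, k_acq, samples.imag)
    return re + 1j * im

# Hypothetical ramp-sampled trajectory: sample positions cluster near
# the k-space center (cubic spacing keeps them monotonic)
k_acq = np.linspace(-1.0, 1.0, 64) ** 3
signal = np.exp(-k_acq ** 2 + 1j * k_acq)   # smooth test signal
k_grid = np.linspace(-0.9, 0.9, 32)         # uniform target grid
regridded = regrid_line(signal, k_acq, k_grid)
```

For a smooth signal and sufficiently dense sampling, the linearly interpolated line agrees with the signal evaluated directly on the uniform grid.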

3.3. Deep Learning Based Field Map Estimation

To reduce the processing time of field map estimation, a rapid estimation was also developed using an end-to-end 3D U-Net [56] with 103,668,041 trainable parameters (3×3×3 convolution kernels, 64 filters, depth of 2, and dropout of 0.05). The model was implemented in TensorFlow [57] and trained with the Adam optimizer [58] (learning rate 1×10−4, batch size of two) on a 32 GB NVIDIA Quadro GV100 graphics processing unit (GPU). Data from eight volunteers were used to train this off-resonance map estimation U-Net: the inputs were pairs of low-resolution blip-up and blip-down cEPI images obtained by SENSE, and the ground truths were field maps estimated by FSL TOP-UP [10].

3.4. BUDA-cEPI RUN-UP

S-LORAKS forms the S-matrix from conjugate-symmetric regions of k-space to exploit the smooth phase prior. This motivated us to modify the recent RUN-UP [38] by introducing a virtual coil, allowing the CNN to access and process k-space data from conjugate regions and thereby better exploit k-space symmetry for more accurate image reconstruction. We propose BUDA-cEPI RUN-UP, which implements the BUDA-cEPI operators (A↑ and A↓) and the virtual coil concept to jointly reconstruct the blip-up and blip-down images from data acquired with the parallel and partial Fourier BUDA-cEPI sequence (i.e., G↑ and G↓), by minimizing the following objective function:

\min_{u^{\uparrow},\, u^{\downarrow}} \frac{1}{2} \left\| \begin{bmatrix} A^{\uparrow} & 0 \\ 0 & A^{\downarrow} \end{bmatrix} \begin{bmatrix} u^{\uparrow} \\ u^{\downarrow} \end{bmatrix} - \begin{bmatrix} G^{\uparrow} \\ G^{\downarrow} \end{bmatrix} \right\|_2^2 + R(u^{\uparrow}, u^{\downarrow}). \quad (8)

In particular, the blip-up and blip-down images are updated using the following equations

u_t^{\uparrow} = u_{t-1}^{\uparrow} - \tau \left( A^{\uparrow H} A^{\uparrow} u_{t-1}^{\uparrow} - A^{\uparrow H} G^{\uparrow} \right)
u_t^{\downarrow} = u_{t-1}^{\downarrow} - \tau \left( A^{\downarrow H} A^{\downarrow} u_{t-1}^{\downarrow} - A^{\downarrow H} G^{\downarrow} \right) \quad (9)

\text{Option 1: } \{ u_{t+1}^{\uparrow}, u_{t+1}^{\downarrow} \} = R( u_t^{\uparrow}, u_t^{\downarrow} )
\text{Option 2 (virtual coil): } \{ u_{t+1}^{\uparrow}, u_{t+1}^{\downarrow} \} = R( u_t^{\uparrow}, u_t^{\downarrow}, u_t^{\uparrow *}, u_t^{\downarrow *} )
\text{Option 3 (b0 images): } \{ u_{t+1}^{\uparrow}, u_{t+1}^{\downarrow} \} = R( u_t^{\uparrow}, u_t^{\downarrow}, b_0^{\uparrow}, b_0^{\downarrow} )
\text{Option 4 (virtual coil + b0 images): } \{ u_{t+1}^{\uparrow}, u_{t+1}^{\downarrow} \} = R( u_t^{\uparrow}, u_t^{\downarrow}, u_t^{\uparrow *}, u_t^{\downarrow *}, b_0^{\uparrow}, b_0^{\downarrow} ) \quad (10)

A^H is the adjoint of A, and τ is the step size, which was manually selected (τ = 0.9). '∗' denotes the complex conjugate, which provides the virtual conjugate-coil data. R(·) is a regularization term modeled using U-Nets [54]. The proposed model architecture has three processing blocks (T = 3 in Fig. 2), corresponding to six gradient updates, three U-Nets in image space, and three U-Nets in k-space. Four options were investigated in this study.

Fig. 2.

The proposed unrolled network reconstruction for BUDA-cEPI (BUDA-cEPI RUN-UP)

  • In option 1, the U-Net inputs are only the blip-up and blip-down images.

  • In option 2, the U-Net inputs are the blip-up and blip-down images together with their virtual coil images. This option is motivated by the difficulty the U-Net architecture has in matching the S-LORAKS results unless the opposite side of k-space is made easier to access.

  • Option 3 is similar to option 1, except that pre-computed b0 images obtained by S-LORAKS are added as extra input channels, while these channels are collapsed at the output. This follows the same principle used in autocalibrated structured low-rank EPI ghost correction [55], which was itself motivated by multi-contrast reconstruction [56].

  • In option 4, both the virtual coil and the pre-computed b0 images are added as extra input channels, while these channels are collapsed at the output.
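The four input configurations of Eq. (10) amount to stacking different channels before the U-Net. The illustrative numpy sketch below (function and variable names are ours, not the paper's) also checks the k-space identity that motivates the virtual coil: the spectrum of conj(u) is the conjugate of the point-reflected spectrum of u.

```python
import numpy as np

def build_unet_inputs(u_up, u_dn, b0_up=None, b0_dn=None, option=1):
    """Stack the U-Net input channels for the four options of Eq. (10).
    The virtual coil (VC) channels are the complex conjugates of the
    blip-up/down images, whose spectra are the conjugate-reflected
    (symmetric) k-space samples exploited by the smooth phase prior."""
    chans = [u_up, u_dn]
    if option in (2, 4):                 # add virtual coil channels
        chans += [np.conj(u_up), np.conj(u_dn)]
    if option in (3, 4):                 # add pre-computed b0 channels
        chans += [b0_up, b0_dn]
    return np.stack(chans, axis=-1)      # H x W x channels

rng = np.random.default_rng(3)
u = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
inp = build_unet_inputs(u, u, b0_up=np.abs(u), b0_dn=np.abs(u), option=4)

# k-space of the VC channel equals the conjugate of the point-reflected
# k-space of u
K, K_vc = np.fft.fft2(u), np.fft.fft2(np.conj(u))
K_reflected = np.roll(np.flip(K, axis=(0, 1)), 1, axis=(0, 1))
```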

To allow different regularization functions for different processing blocks and spaces, the six U-Nets do not share weights, resulting in a total of 12,708,984 trainable parameters. Each U-Net consists of convolutional layers with 3×3 kernels, 64 filters, depth of 3, and dropout of 0.05. We implemented the proposed model in TensorFlow and trained it by minimizing the normalized root-mean-squared error (NRMSE) loss between the reconstructed and ground-truth blip-up/blip-down images, using the Adam optimizer with a learning rate of 1×10−4 and batch size of two, on a 32 GB NVIDIA Quadro GV100 graphics processing unit (GPU). The ground-truth data were prepared using S-LORAKS (20 inner and 15 outer iterations, rank = 80, λ = 0.05, and Fourier radius = 3). 5,120 and 1,280 slices from eight volunteers (whole-brain coverage) were used as training and validation data, respectively, and 800 slices from the ninth volunteer were used for testing the trained model.

3.5. Performance Evaluation

This experiment compared the reconstruction quality achieved by conventional SENSE, S-LORAKS (Eq. 4), and BUDA-cEPI RUN-UP (Eq. 10). To evaluate the robustness and generalizability of the proposed BUDA-cEPI RUN-UP, a leave-one-subject-out test was performed four times: data from eight subjects were used for training, and data from the remaining subject for testing. NRMSE was computed jointly over all slices, while the structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) were computed for each slice, and their means and standard deviations (SD) were reported.
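For reference, the three metrics can be computed as follows. The windowed SSIM used in practice (e.g., scikit-image's implementation) is replaced here by a single-window global SSIM for self-containedness, so the SSIM numbers are only indicative; this is a hedged numpy sketch, not the paper's evaluation code:

```python
import numpy as np

def nrmse(x, ref):
    """Normalized root-mean-square error relative to the reference."""
    return np.linalg.norm(x - ref) / np.linalg.norm(ref)

def psnr(x, ref):
    """Peak signal-to-noise ratio (dB); peak taken from the reference."""
    mse = np.mean(np.abs(x - ref) ** 2)
    return 10.0 * np.log10(np.max(np.abs(ref)) ** 2 / mse)

def ssim_global(x, ref, L=1.0):
    """Single-window (global) SSIM: a simplification of the usual
    sliding-window SSIM, adequate only as a quick sanity check."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, mr = x.mean(), ref.mean()
    vx, vr = x.var(), ref.var()
    cov = ((x - mx) * (ref - mr)).mean()
    return ((2 * mx * mr + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + mr ** 2 + C1) * (vx + vr + C2))

rng = np.random.default_rng(4)
ref = rng.random((64, 64))
noisy = ref + 0.01 * rng.standard_normal((64, 64))
```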

4. RESULTS

4.1. BUDA-cEPI with S-LORAKS

Fig. 3 shows that the individual blip-up and blip-down cEPI images with conventional SENSE reconstruction exhibit blurring at the brain's boundaries (enlarged views), residual aliasing artifacts, relatively high noise, and geometric distortions (1st row). In contrast, with the joint reconstruction of the blip-up and blip-down cEPI using the BUDA operators and S-LORAKS (2nd row), image boundaries appear sharper, noise is lower, and no aliasing artifacts are visually detected. Moreover, the geometries of the blip-up and blip-down images are well aligned, as shown in the overlaid images. However, the joint reconstruction takes much longer than conventional SENSE (for both polarities), with reconstruction times of 225 and 3.12 seconds, respectively.

Fig. 3.

(1st row) Images obtained by standard SENSE. (2nd row) Images obtained by BUDA S-LORAKS. Enlarged views in white and yellow boxes highlight the sharpness at image boundaries. Overlay of the EPI blip-up (green channel) and EPI blip-down (red channel) displayed to demonstrate the geometry alignment.

4.2. BUDA-cEPI RUN-UP

4.2.1. BUDA-cEPI RUN-UP with Prior Information

Fig. 4 presents the results of four reconstructions using different sets of prior information, as described in Eq. 10, all trained for the same number of epochs (250). The difference maps and NRMSE values are displayed in the rightmost column of Fig. 4. The plain BUDA-cEPI RUN-UP appears blurry and exhibits the highest NRMSE (17.4%). In contrast, using either the virtual coil (VC) or the b0 images produces sharper results, with NRMSE values of 13.8% and 15.3%, respectively. Combining both VC and b0 images slightly improves the outcome further, achieving an NRMSE of 13.1%. Note that the NRMSE values likely also reflect differences in noise between the reconstruction techniques, as shown by the difference maps.

Fig. 4.

(A and B) images obtained by BUDA S-LORAKS. (C) Images obtained by BUDA-cEPI RUN-UP. (D-F) Images obtained by BUDA-cEPI RUN-UP with additional virtual coil, b0 image, and both virtual coil and b0 image. (Rightmost column) image differences between image obtained by BUDA S-LORAKS (B) and images obtained by techniques in C-F. Numbers represent the percentages of NRMSE of images obtained by each reconstruction computed with the image obtained by BUDA S-LORAKS (B).

4.2.2. Technical Evaluation

In Table 1, the NRMSE values from all models under leave-one-subject-out testing are below 14%. The mean±SD of the structural similarity index measure (SSIM) ranges from 0.96±0.01 to 0.97±0.01, and the mean±SD of the peak signal-to-noise ratio (PSNR) ranges from 37.29±0.57 to 37.94±0.54. All three metrics reflect the proposed reconstruction's accuracy, robustness, and generalizability, even with a small training set (only eight subjects).

Table 1.

The results of leave-one-subject-out test. Single value of normalized root-mean-squares-error (NRMSE) was reported. It was computed simultaneously for all slices and diffusion directions. Structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) were computed slice-by-slice. Mean and SD values of SSIM and PSNR across all slices and diffusion directions were reported.

TRAIN (8 subjects)    TEST (1 subject)    %NRMSE    SSIM (Mean±SD)    PSNR (Mean±SD)
exclude subject 1     subject 1           13.57     0.96±0.01         37.36±0.45
exclude subject 2     subject 2           13.32     0.97±0.01         37.29±0.57
exclude subject 3     subject 3           13.26     0.96±0.01         37.34±0.54
exclude subject 4     subject 4           13.35     0.97±0.01         37.67±0.42

4.2.3. DTI Application

Fig. 5a shows the estimated eddy-current displacement map for a representative slice and diffusion direction, obtained using FSL-EDDY. The displacement is larger in areas farther from the scanner iso-center and changes with diffusion direction, as demonstrated in Fig. 5b. The minimum and maximum displacements in the red box are −1.5 mm and +1.9 mm, respectively. These variations are large enough to affect the DTI application (5g and 5h). The geometric inconsistencies are clearly visible after conventional SENSE reconstruction, resulting in blurring of the FA map (5h) and poor alignment of the primary eigenvectors in the colored FA map (5g). SENSE reconstruction followed by FSL-EDDY partially corrects the geometric distortion, thereby improving the FA maps (5i and 5j). However, the variable ESP in outer k-space, the partial Fourier acquisition, and the image-domain interpolation during post-processing cause blurring in the diffusion images and FA maps, and the large per-shot acceleration, when the shots are not jointly reconstructed, causes residual aliasing artifacts. BUDA-cEPI S-LORAKS and BUDA-cEPI RUN-UP performed comparably on visual inspection (5k vs. 5m and 5l vs. 5n) and outperformed conventional SENSE in reducing residual artifacts and enhancing small details, resulting in improved diffusion images and FA maps.

Fig. 5.

(a) one representative eddy displacement obtained by FSL-EDDY. (b) bar plots of maximum and minimum eddy displacement inside the red box area in (a) across 50 diffusion directions. (c-f) mean diffusion images. (g, i, k, and m) the primary eigenvectors at yellow box area corresponding to each reconstruction technique were color-encoded (red: left-right, green: anterior-posterior, blue: superior-inferior). (h, j, l, and n) FA maps without directional information at yellow box area corresponding to each reconstruction technique.

Fig. 6 demonstrates the capability of BUDA-cEPI S-LORAKS and BUDA-cEPI RUN-UP to recover imaging details lost to partial Fourier acquisition. Recovering this information allows more details of the fiber orientation distributions to be visualized in cortical areas, as shown in the enlarged views (6h and 6i).

Fig. 6.

(a-c) mean diffusion images. (d-f) colored FA maps (red: left-right, green: anterior-posterior, blue: superior-inferior) corresponding to the diffusion images in (a-c), respectively. (g-h) enlarged views of the color-encoded primary eigenvectors overlaid on FA maps. The green fibers in (g) show the overlapping cortical area across two gyri.

4.2.4. Time-efficient Reconstruction Pipeline

In Table 2, excluding the time used for field map estimation, the per-slice reconstruction times for BUDA-cEPI S-LORAKS and BUDA-cEPI RUN-UP are 225.32 and 2.54 seconds, respectively, indicating that BUDA-cEPI RUN-UP is approximately 88 times faster than BUDA-cEPI S-LORAKS. For field map estimation, we investigated two options: conventional TOP-UP and a 3D U-Net. The 3D U-Net required only 0.05 seconds per slice while producing field maps nearly identical to those obtained with TOP-UP (results not shown), which took 12.08 seconds per slice. The best time efficiency is achieved with the final option in Table 2, which combines BUDA-cEPI RUN-UP, low-resolution SENSE, and 3D U-Net-based field map estimation, taking a total of 3.03 seconds per slice for the full pipeline.

Table 2.

Processing times (second) per slice for different reconstruction pipelines

                        Field map estimation (matrix 128×128)
Pipeline                SENSE    TOP-UP    3D U-Net    Reconstruction    Total
BUDA-cEPI S-LORAKS      0.44     12.08     -           225.32            237.84
BUDA-cEPI S-LORAKS      0.44     -         0.05        225.32            225.81
BUDA-cEPI RUN-UP        0.44     12.08     -           2.54              15.06
BUDA-cEPI RUN-UP        0.44     -         0.05        2.54              3.03

5. DISCUSSION AND CONCLUSION

In this study, we developed a rapid ML-based reconstruction approach for distortion-free, high-resolution dMRI with BUDA-cEPI acquisition. A model-based framework that accounts for geometric distortions caused by off-resonance effects was unrolled through a tailored artificial neural network with only six gradient updates. The reconstruction was shown to significantly reduce reconstruction time while providing high-quality results comparable to those of the state-of-the-art technique, S-LORAKS [48, 51].

In this work, an unrolled supervised learning algorithm was chosen to accelerate the reconstruction process; the network was tailored both in terms of its unrolled structure and in the incorporation of prior information, such as virtual coils and b0 images, to perform well for BUDA-cEPI. This approach is inspired by classic variational optimization methods and iterates between data-consistency enforcement and a deep learning model that acts as a regularizer [35–37]. It allows flexibility in trading off the number of iterations (data-consistency blocks) against the number of trainable parameters. Recently, Hu Y. et al. [38] reported that RUN-UP enabled nearly real-time reconstruction and improved image quality for brain and breast DWI compared with conventional reconstruction; their network unrolled 6 iterations of FISTA with a total of 2,396,454 parameters. Aggarwal H. et al. [37] developed MoDL-MUSSELS, which also implemented standard SENSE for data consistency and unrolled 5 outer and 5 inner iterations of the IRLS algorithm. In this study, we implemented 6 gradient updates (6 U-Nets: 3 in image space and 3 in k-space) with 12,708,984 trainable parameters. A large number of trainable parameters typically improves accuracy on complex tasks, but it also brings risks, notably overfitting and vanishing gradients, both of which can lead to inadequate training of the network. In such cases, hyper-parameters must be carefully tuned; selecting a proper dropout rate [59] is often mentioned.

The proposed method, which incorporates virtual coil (VC) data, improves the results, as demonstrated in Fig. 4. The VC technique is an effective strategy for improving parallel MRI [60], and is particularly relevant for echo planar imaging (EPI) with partial Fourier acquisition. Virtual coils are generated from the conjugate-symmetric k-space signals of the physical coils, augmenting the available information to fill gaps in the k-space data, which is especially advantageous in conjunction with partial Fourier acquisition. In essence, reconstruction with VC consistently yields image quality on par with or better than reconstruction without VC. Recently, Cho J. et al. [61] presented a network that incorporates convolutional neural network (CNN) denoisers in both the k-space and image-space domains, harnessing virtual coils to improve the conditioning of the image reconstruction. Furthermore, our findings (Fig. 4) indicate that adding non-diffusion images as an additional channel can further enhance the network's performance. Previous studies have also shown that including supplementary contrasts, beyond the diffusion-weighted images, in the input data helps delineate anatomical boundaries and prevents blurring artifacts in the outputs [62, 63].

As shown in Table 1, BUDA-cEPI RUN-UP shows robustness and generalization across subjects, as demonstrated through NRMSE, SSIM, and PSNR. Model accuracy was further reflected in the DTI application, where the results obtained by BUDA-cEPI RUN-UP and BUDA-cEPI S-LORAKS appeared comparable (Figs. 5 and 6). Even though we have shown that BUDA-cEPI RUN-UP works well and is robust for the same protocol across subjects, its robustness could decrease when applied to acquisitions whose protocols have significantly different resolution and/or noise distribution. This is a general issue that has been discussed in detail in recent works [64, 65]. Fabian Z. et al. [64] introduced a physics-based data augmentation pipeline for accelerated MR imaging that showed robustness against overfitting and shifts in the test distribution. Knoll F. et al. [65] demonstrated that by increasing the heterogeneity of the training data set, trained networks can be obtained that generalize across a wide range of acquisition settings, including contrast, SNR, and particular k-space sampling patterns. Their study also points to the potential of transfer learning for fine-tuning our network to a particular target application using only a small number of training cases.
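For reference, NRMSE and PSNR can be sketched as below; these follow common conventions (RMSE normalized by the reference norm, peak taken as the maximum reference magnitude), which may differ in detail from the exact conventions used for Table 1.

```python
import numpy as np

def nrmse(ref, img):
    """RMSE normalized by the norm of the reference (one common
    convention; normalization by the reference range is also used)."""
    return np.linalg.norm(img - ref) / np.linalg.norm(ref)

def psnr(ref, img):
    """Peak signal-to-noise ratio in dB, with the peak taken as the
    maximum magnitude of the reference image."""
    mse = np.mean(np.abs(img - ref) ** 2)
    return 10 * np.log10(np.max(np.abs(ref)) ** 2 / mse)

# Toy check: a uniform error of 0.1 on a unit-magnitude reference
ref = np.ones((4, 4))
img = ref + 0.1
```

SSIM additionally requires local windowed statistics and is typically computed with a library implementation rather than written by hand.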

The generalizability of deep learning models to new datasets remains a critical concern, particularly when model testing is constrained to a single subject. To address this, we implemented a leave-one-subject-out cross-validation strategy, enabling a comprehensive evaluation of model performance across multiple individuals. In this approach, the model was iteratively trained on data from eight subjects while being tested on the left-out subject, yielding consistent accuracy across metrics (NRMSE, SSIM, and PSNR), as shown in Table 1. The low variability in these metrics across subjects underscores the model's robustness and reliability. In the future, we aim to expand our dataset to further validate and enhance the model's generalizability on a larger scale.
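The splitting scheme described above can be sketched as follows (subject IDs are hypothetical): each subject is held out once for testing while the remaining subjects form the training set.

```python
def leave_one_subject_out(subject_ids):
    """Generate leave-one-subject-out splits: yields (train, held_out)
    pairs, one per subject."""
    for held_out in subject_ids:
        train = [s for s in subject_ids if s != held_out]
        yield train, held_out

# Nine subjects, as in the study: nine folds of eight training subjects each
subjects = [f"sub-{i:02d}" for i in range(1, 10)]
splits = list(leave_one_subject_out(subjects))
```

Splitting at the subject level (rather than the slice or volume level) ensures that no data from the test individual leaks into training.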

The proposed BUDA-cEPI RUN-UP integrates off-resonance effects through a time-segmentation strategy [50]. Reconstruction time scales with the number of time segments, the number of coils, and the resolution of the acquired data. Our technique took longer (3.03 seconds) than RUN-UP [38] and MoDL-MUSSELS [37] (0.1 and 0.16 seconds, respectively), neither of which models off-resonance. It is worth noting that extending the input channels with virtual coil data had only a slight impact on reconstruction time, as this step is performed after all coil data have been combined. Advanced coil compression and/or coil sketching techniques [66] could further reduce the number of coil channels, which may further improve the speed of BUDA-cEPI RUN-UP.
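The cost structure of time segmentation can be illustrated with a toy 1-D model: instead of evaluating the off-resonance phase exp(-i·omega(r)·t) at every sample time (an O(n^2) matrix multiply), only L FFTs are computed at fixed segment times and interpolated in time. Linear interpolation is used here for simplicity; practical pipelines (e.g. [50]) use optimized interpolators.

```python
import numpy as np

def exact_offres_signal(x, omega, t):
    """Exact (slow) 1-D model with off-resonance omega(r) [rad/s] and
    per-sample acquisition times t[k]: a dense O(n^2) matrix multiply."""
    n = len(x)
    k = np.arange(n)[:, None]
    r = np.arange(n)[None, :]
    E = np.exp(-2j * np.pi * k * r / n) * np.exp(-1j * t[:, None] * omega[None, :])
    return E @ x / np.sqrt(n)

def time_segmented_signal(x, omega, t, L=8):
    """Time-segmented approximation: L FFTs at segment times t_l,
    linearly interpolated in time per k-space sample."""
    n = len(x)
    seg_t = np.linspace(t.min(), t.max(), L)
    Y = np.stack([np.fft.fft(x * np.exp(-1j * omega * tl), norm="ortho")
                  for tl in seg_t])
    y = np.empty(n, dtype=complex)
    for k in range(n):
        y[k] = (np.interp(t[k], seg_t, Y[:, k].real)
                + 1j * np.interp(t[k], seg_t, Y[:, k].imag))
    return y

# Toy acquisition: a box object, a +/-10 Hz field map, a 30 ms readout
n = 32
x = np.zeros(n); x[10:20] = 1.0
omega = 2 * np.pi * 10 * np.linspace(-1, 1, n)
t = np.linspace(0.0, 0.03, n)
y_exact = exact_offres_signal(x, omega, t)
y_seg = time_segmented_signal(x, omega, t, L=8)
rel_err = np.linalg.norm(y_seg - y_exact) / np.linalg.norm(y_exact)
```

The approximation error shrinks as L grows, which is the trade-off behind the reconstruction-time scaling noted above.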

While machine learning (ML) reconstructions have proven beneficial in reducing noise [67], they might compromise spatial resolution [68]. Future work will explore using high-SNR ground truth data obtained by averaging multiple acquisitions to train the network to reconstruct and denoise single-average data. In diffusion data, each reconstructed image exhibits different shot-to-shot phase variations, necessitating real-valued averaging to create accurate ground truth data free of magnitude noise bias [69]. Additionally, because an image reconstructed from a single average has a distinct background phase relative to the ground-truth data, the training cost function must be modified: the background phase of the single-average reconstruction must be removed before comparison with the ground truth.
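The real-valued averaging idea can be sketched as follows; the low-pass phase estimator here is an assumed stand-in for the estimator of [69], and the parameters are illustrative.

```python
import numpy as np

def real_valued_average(images, frac=0.25):
    """Sketch of real-valued averaging in the spirit of [69]: estimate
    each repetition's slowly varying background phase from a low-pass
    filtered copy, remove it, then average the real parts so that
    magnitude noise bias is avoided."""
    ny, nx = images[0].shape
    cy, cx = ny // 2, nx // 2
    wy, wx = int(ny * frac), int(nx * frac)
    mask = np.zeros((ny, nx))
    mask[cy - wy:cy + wy + 1, cx - wx:cx + wx + 1] = 1.0  # central k-space
    acc = np.zeros((ny, nx))
    for img in images:
        low = np.fft.ifft2(np.fft.ifftshift(
            np.fft.fftshift(np.fft.fft2(img)) * mask))
        acc += np.real(img * np.exp(-1j * np.angle(low)))  # phase-corrected
    return acc / len(images)

# Two repetitions: identical magnitude, different constant background phase
mag = 2.0 + np.zeros((16, 16)); mag[8, 8] = 2.5
reps = [mag * np.exp(1j * 0.5), mag * np.exp(-1j * 1.0)]
avg = real_valued_average(reps)
```

Averaging magnitudes instead would add a positive noise floor (Rician bias); averaging phase-corrected real parts lets the noise cancel.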

To enable the proposed reconstruction on highly undersampled data, a specialized loss function, such as the complex-valued contrast-weighted SSIM loss [70], may be considered, as it helps optimize image sharpness and contrast. This function emphasizes critical image regions and reduces blurring, which is essential for producing diagnostically reliable images at higher undersampling rates. Additionally, other specialized loss functions, such as perceptual loss [71], which captures high-level image details, and adversarial loss [72], leveraging Generative Adversarial Networks (GANs) to produce more realistic images, may further enhance reconstruction quality. Beyond loss functions, Physics-Informed Neural Networks (PINNs) [73] and Reinforcement Learning (RL) [74] provide robust approaches for optimizing data acquisition prior to reconstruction. These approaches optimize sampling strategies, with reported scan-time reductions of roughly 4x to 10x compared to conventional methods. PINNs incorporate MRI physics to ensure realistic sampling, while RL adapts to patient-specific needs with dynamic, tailored trajectories, improving both acquisition and reconstruction quality.

In conclusion, we developed a new reconstruction pipeline, termed BUDA-cEPI RUN-UP, for parallel-imaging and partial Fourier BUDA-cEPI acquisitions. The proposed technique uses a deep-learning architecture combining an MR-physics model (the BUDA-cEPI operators) with U-Nets in both k-space and image space as trainable priors, and also incorporates the virtual coil concept. This technique was shown to reduce reconstruction time by ~88x compared to the state-of-the-art technique, while preserving imaging detail as demonstrated in a DTI application.

ACKNOWLEDGEMENT

This study was supported in part by GE Healthcare research funds and NIH grants R01EB020613, R01MH116173, R01EB019437, U01EB025162, and P41EB030006. It was also supported by the Thailand Research Fund (RGNS 64-084, Uten Yarach) and the Faculty of Associated Medical Sciences, Chiang Mai University.

Funding

This research was supported in part by GE Healthcare, NIH grants (R01EB020613, R01MH116173, R01EB019437, U01EB025162, P41EB030006), the Thailand Research Fund (RGNS 64-084), and the Faculty of Associated Medical Sciences at Chiang Mai University.

Footnotes

Ethical Approval

In-vivo experiments were conducted following protocols approved by the relevant institutional review board (IRB). All participants provided informed consent prior to the study.

Conflict of Interest

The authors declare no conflicts of interest associated with this publication.


DATA AVAILABILITY

Python scripts and some data sets for model testing are available at the following link: https://github.com/uten178/Unrolled-BUDA-cEPI-Reconstruction.git

REFERENCES

  • [1].Hagmann P, Jonasson L, Maeder P, Thiran JP, Wedeen VJ, Meuli R. Understanding diffusion MR imaging techniques: from scalar diffusion-weighted imaging to diffusion tensor imaging and beyond. Radiographics. 2006; 26(Suppl 1): S205–S223. [DOI] [PubMed] [Google Scholar]
  • [2].Lazar M, Weinstein DM, Tsuruda JS, et al. White matter tractography using diffusion tensor deflection. Human brain mapping. 2003; 18(4): 306–321. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [3].Mansfield P. Multi-planar image formation using NMR spin echoes. J Phys C: Solid State Phys. 1977; 10: L55–L58. [Google Scholar]
  • [4].Sutton BP, Noll DC, Fessler JA. Fast, iterative image reconstruction for MRI in the presence of field inhomogeneities. IEEE transactions on medical imaging. 2003; 22(2): 178–88. [DOI] [PubMed] [Google Scholar]
  • [5].Fessler JA, Lee S, Olafsson VT, Shi HR, Noll DC. Toeplitz-based iterative image reconstruction for MRI with correction for magnetic field inhomogeneity. IEEE Transactions on Signal Processing. 2005; 53(9): 3393–402. [Google Scholar]
  • [6].Noll DC, Meyer CH, Pauly JM, Nishimura DG, Macovski A. A homogeneity correction method for magnetic resonance imaging with time-varying gradients. IEEE Trans. Med. Imag 1991; 10(4): 629–637. [DOI] [PubMed] [Google Scholar]
  • [7].Irarrazabal P, Meyer CH, Nishimura DG, Macovski A. Inhomogeneity correction using an estimated linear field map. Magn. Reson. Med 1996; 35: 278–282. [DOI] [PubMed] [Google Scholar]
  • [8].Man LC, Pauly JM, Macovski A. Multifrequency interpolation for fast off-resonance correction. Magn. Reson. Med 1997; 37: 785–792. [DOI] [PubMed] [Google Scholar]
  • [9].Jezzard P, Balaban R. Correction for geometrical distortion in echo planar images from B0 field variations. Magn Reson Med. 1995; 1(34): 65–73. [DOI] [PubMed] [Google Scholar]
  • [10].Andersson J, Skare S, Ashburner J. How to correct susceptibility distortions in spin-echo echo-planar images: application to diffusion tensor imaging. NeuroImage. 2003; 20(2): 870–888. [DOI] [PubMed] [Google Scholar]
  • [11].Yarach U, In MH, Chatnuntawech I, et al. Model-based iterative reconstruction for single-shot EPI at 7T. Magn. Reson. Med 2017; 76(6): 2250–2264. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [12].Zahneisen B, Aksoy M, Maclaren J, Wuerslin C, Bammer R. Extended hybrid-space SENSE for EPI: Off-resonance and eddy current corrected joint interleaved blip-up/down reconstruction. NeuroImage. 2017; 153: 97–108. [DOI] [PubMed] [Google Scholar]
  • [13].Tao S, Trzasko JD, Shu Y, Huston J III, Bernstein MA. Integrated image reconstruction and gradient nonlinearity correction. Magn. Reson. Med 2015: 74(4): 1019–1031. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [14].Holdsworth SJ, Skare S, Newbould RD, Guzmann R, Blevins NH, Bammer R. Readout-segmented EPI for rapid high resolution diffusion imaging at 3 T. European Journal of Radiology. 2008; 65(1): 36–46. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [15].Porter DA, Heidemann RM. High resolution diffusion-weighted imaging using readout-segmented echo-planar imaging, parallel imaging and a two-dimensional navigator-based reacquisition. Magn Reson Med. 2009; 62(2): 468–475. [DOI] [PubMed] [Google Scholar]
  • [16].Chen NK, Guidon A, Chang HC, Song AW. A robust multi-shot scan strategy for high-resolution diffusion weighted MRI enabled by multiplexed sensitivity-encoding (MUSE). NeuroImage 2013; 72: 41–47. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [17].Liao C, Bilgic B, Tian Q, et al. Distortion-free, high-isotropic-resolution diffusion MRI with gSlider BUDA-EPI and multicoil dynamic B0 shimming. Magn. Reson. Med 2021; 86: 791–803. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [18].Bilgic B, Chatnuntawech I, Manhard MK, et al. Highly accelerated multishot echo planar imaging through synergistic machine learning and joint reconstruction. Magn Reson Med. 2019; 82(4): 1343–1358. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [19].Usman M, Kakkar L, Kirkham A, Arridge S, Atkinson D. Model-based reconstruction framework for correction of signal pile-up and geometric distortions in prostate diffusion MRI. Magn Reson Med. 2019; 81(3): 1979–1992. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [20].Bhushan C, Joshi AA, Leahy RM, Haldar JP. Improved B0-distortion correction in diffusion MRI using interlaced q-space sampling and constrained reconstruction. Magn. Reson. Med 2014; 72: 1218–1232. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [21].Chang H, Fitzpatrick J. A technique for accurate magnetic resonance imaging in the presence of field inhomogeneities. IEEE Trans Med Imaging. 1992; 11: 319–329. [DOI] [PubMed] [Google Scholar]
  • [22].Morgan PS, Bowtell RW, McIntyre DJO, Worthington BS. Correction of spatial distortion in EPI due to inhomogeneous static magnetic fields using the reversed gradient method. J Magn Reson Imaging. 2004; 19: 499–507. [DOI] [PubMed] [Google Scholar]
  • [23].Haldar JP, Setsompop K. Linear Predictability in Magnetic Resonance Imaging Reconstruction: Leveraging Shift-Invariant Fourier Structure for Faster and Better Imaging. IEEE Signal Processing Magazine. 2020; 37: 69–82. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [24].Haldar JP, Zhuo J. P-LORAKS: Low-Rank Modeling of Local k-Space Neighborhoods with Parallel Imaging Data. Magn. Reson. Med 2016; 75: 1499–1514. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [25].Lee J, Jin KH, Ye JC. Reference-free EPI Nyquist ghost correction using annihilating filter-based low rank Hankel matrix for K-space interpolation. Magn Reson Med. 2016; 76:1775–1789. [DOI] [PubMed] [Google Scholar]
  • [26].Kim TH, Setsompop K, Haldar JP. LORAKS Makes Better SENSE: Phase-Constrained Partial Fourier SENSE Reconstruction without Phase Calibration. Magn. Reson. Med 2017; 77: 1021–1035. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [27].Lobos RA, Kim TH, Hoge WS, Haldar JP. Navigator-free EPI Ghost Correction with Structured Low-Rank Matrix Models: New Theory and Methods. IEEE Transactions on Medical Imaging. 2018; 37: 2390–2402. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [28].Lobos RA, Hoge WS, Javed A, Liao C, Setsompop K, Nayak KS, Haldar JP. Robust Autocalibrated Structured Low-Rank EPI Ghost Correction. Magn. Reson. Med 2021; 85: 3404–3419. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [29].Kim TH, Haldar JP. LORAKS Software Version 2.0: Faster Implementation and Enhanced Capabilities. University of Southern California, Los Angeles, CA, Technical Report USC-SIPI-443, May 2018. [Google Scholar]
  • [30].Lundervold AS, Lundervold A. An overview of deep learning in medical imaging focusing on MRI. Zeitschrift fur Medizinische Physik 2019; 29(2): 102–127. [DOI] [PubMed] [Google Scholar]
  • [31].Liu W, Wang Z, Liu X, Zeng N, Liu Y, Alsaadi FE. A survey of deep neural network architectures and their applications. Neurocomputing 2017; 234: 11–26. [Google Scholar]
  • [32].Alom MZ, Tarek MT, Yakopcic C, et al. A state-of-the-art survey on deep learning theory and architectures. Electronics 2019; 8(3): 1:67. [Google Scholar]
  • [33].Zhu B, Liu JZ, Cauley SF, Rosen BR, Rosen MS. Image reconstruction by domain-transform manifold learning. Nature 2018; 555(7697): 487–492. [DOI] [PubMed] [Google Scholar]
  • [34].Wang S, Su Z, Ying L, et al. Accelerating magnetic resonance imaging via deep learning. In Proceeding of IEEE Int Symp Biomed Imaging, Prague, Czech Republic, 2016. pp. 514–517. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [35].Kwon K, Kim D, Park H. A parallel MR imaging method using multilayer perceptron. Med Phys. 2017; 44(12): 6209–6224. [DOI] [PubMed] [Google Scholar]
  • [36].Quan TM, Nguyen-Duc T, Jeong WK. Compressed Sensing MRI Reconstruction Using a Generative Adversarial Network with a Cyclic Loss. IEEE Transactions on Medical Imaging 2018; 37(6):1488–1497. [DOI] [PubMed] [Google Scholar]
  • [37].Aggarwal HK, Mani MP, Jacob M. Multi-Shot Sensitivity-Encoded Diffusion MRI Using Model-Based Deep Learning (Modl-Mussels). In Proceeding of IEEE Int Symp Biomed Imaging, Venice, Italy, 2019. pp.1541–1544. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [38].Hu Y, Xu Y, Tian Q, et al. RUN-UP: Accelerated multishot diffusion-weighted MRI reconstruction using an unrolled network with U-Net as priors. Magn. Reson. Med 2021; 85(2): 709–720. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [39].Yang Y, Sun J, Li H, Xu Z. Deep ADMM-Net for Compressive Sensing MRI. Neural Information Processing Systems, Barcelona, Spain, 2016. pp. 10–18. [Google Scholar]
  • [40].Hammernik K, Klatzer T, Kobler E, et al. Learning a variational network for reconstruction of accelerated MRI data. Magn. Reson. Med 2017; 79(6): 3055–3071. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [41].Zhang J, Ghanem B. ISTA-Net: Interpretable Optimization-Inspired Deep Network for Image Compressive Sensing. In Proceeding of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA 2018. pp. 1828–1837. [Google Scholar]
  • [42].Akcakaya M, Moeller S, Weingartner S, Ugurbil K. Scan-specific robust artificial-neural-networks for k-space interpolation (RAKI) reconstruction: Database-free deep learning for fast imaging. Magn. Reson. Med 2019; 81:439–453. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [43].Kim TH, Garg P, Haldar JP. LORAKI: Autocalibrated Recurrent Neural Networks for Autoregressive Reconstruction in k-Space. arXiv 2019; 1904.09390 [Google Scholar]
  • [44].Beck A, Teboulle M. A fast Iterative Shrinkage-Thresholding Algorithm with application to wavelet-based image deblurring. IEEE International Conference on Acoustics, Speech and Signal Processing, Taipei, Taiwan, 2009; 693–696 [Google Scholar]
  • [45].Zahneisen B, Baeumler K, Zaharchuk G, Fleischmann D, Zeineh M. Deep flow-net for EPI distortion estimation. Neuroimage. 2020; 217: 116886. [DOI] [PubMed] [Google Scholar]
  • [46].Duong STM, Phung SL, Bouzerdoum A, Schira MM. An unsupervised deep learning technique for susceptibility artifact correction in reversed phase-encoding EPI images. Magn Reson Imaging. 2020; 71: 1–10. [DOI] [PubMed] [Google Scholar]
  • [47].Rettenmeier C, Maziero D, Qian Y, Stenger VA. A circular echo planar sequence for fast volumetric fMRI. Magn. Reson. Med 2019; 81(3): 1685–1698. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [48].Liao C, Yarach U, Cao X, et al. High-fidelity mesoscale in-vivo diffusion MRI through gSlider-BUDA and circular EPI with S-LORAKS reconstruction. Neuroimage. 2023; 275: 120168. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [49].Fessler J Model-based image reconstruction for MRI. IEEE Signal Processing Magazine. 2010: 27(4): 81–89. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [50].Sutton B, Noll D, Fessler J. Fast, iterative image reconstruction for MRI in the presence of field inhomogeneities. IEEE Transactions on Medical Imaging. 2003; 22(2): 178–188. [DOI] [PubMed] [Google Scholar]
  • [51].Haldar JP. Low-rank modeling of local k-space neighborhoods (LORAKS) for constrained MRI. IEEE transactions on medical imaging 2014; 33(3): 668–681. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [52].Hu Y, Wang X, Tian Q, et al. Multi-shot diffusion-weighted MRI reconstruction with magnitude-based spatial-angular locally low-rank regularization (SPA-LLR). Magn. Reson. Med 2020; 83(5) :1596–1607. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [53].Zhang T, Pauly JM, Vasanawala SS, Lustig M. Coil compression for accelerated imaging with Cartesian sampling. Magn. Reson. Med 2013; 69(2): 571–582. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [54].Uecker M, Lai P, Murphy MJ, et al. ESPIRiT--an eigenvalue approach to autocalibrating parallel MRI: where SENSE meets GRAPPA. Magn. Reson. Med 2014; 71(3): 990–1001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [55].Pruessmann KP, Weiger M, Scheidegger MB, Boesiger P. SENSE: sensitivity encoding for fast MRI. Magn. Reson. Med 1999; 42(5): 952–962. [PubMed] [Google Scholar]
  • [56].Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, Munich, Germany, 2015. pp. 234–241. [Google Scholar]
  • [57].Abadi M, Barham P, Chen J, et al. TensorFlow: a system for large-scale machine learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation, Savannah, GA, USA, 2016. pp. 265–283. [Google Scholar]
  • [58].Kingma DP, Ba J. Adam: a method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015), San Diego, CA, USA, 2015. pp. 1–15. [Google Scholar]
  • [59].Srivastava N, Hinton GE, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res 2014; 15:1929–1958. [Google Scholar]
  • [60].Blaimer M, Gutberlet M, Kellman P, Breuer FA, Köstler H, Griswold MA. Virtual coil concept for improved parallel MRI employing conjugate symmetric signals. Magn. Reson. Med 2009; 61(1): 93–102. [DOI] [PubMed] [Google Scholar]
  • [61].Cho J, Jun Y, Wang X, Kobayashi C, Bilgic B (2023). Improved Multi-shot Diffusion-Weighted MRI with Zero-Shot Self-supervised Learning Reconstruction. MICCAI 2023. doi. 10.1007/978-3-031-43907-0_44. [DOI] [Google Scholar]
  • [62].Tian Q, Bilgic B, Fan Q, Liao C, Ngamsombat C, Hu Y, et al. DeepDTI: high-fidelity six-direction diffusion tensor imaging using deep learning. NeuroImage. 2020. doi: 10.1016/j.neuroimage.2020.117017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [63].Golkov V, Dosovitskiy A, Sperl JI, et al. q-Space Deep Learning: Twelve-Fold Shorter and Model-Free Diffusion MRI Scans. IEEE Trans Med Imaging. 2015; 35(5): 1344–1351. [DOI] [PubMed] [Google Scholar]
  • [64].Fabian Z, Heckel R, Soltanolkotabi M. Data augmentation for deep learning based accelerated MRI reconstruction with limited data. International Conference on Machine Learning. 2021. doi: 10.48550/arXiv.2106.14947 [DOI] [Google Scholar]
  • [65].Knoll F, Hammernik K, Kobler E, Pock T, Recht MP, Sodickson DK. Assessment of the generalization of learned image reconstruction and the potential for transfer learning. Magn Reson Med. 2019; 81(1): 116–128. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [66].Oscanoa JA, Ong F, Iyer SS, et al. Coil Sketching for fast and memory-efficient iterative reconstruction. In Proceedings of the 29th Annual Meeting of the International Society for Magnetic Resonance in Medicine, Toronto, Canada, 2021. pp. 0066. [Google Scholar]
  • [67].Tian Q, Li Z, Fan Q, et al. SDnDTI: Self-supervised deep learning-based denoising for diffusion tensor MRI. Neuroimage. 2022. doi: 10.1016/j.neuroimage.2022.119033. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [68].Chan CC, Haldar JP. Local Perturbation Responses and Checkerboard Tests: Characterization tools for nonlinear MRI methods. Magn. Reson. Med 2021; 86: 1873–1887. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [69].Eichner C, Cauley SF, Cohen-Adad J, et al. Real diffusion-weighted MRI enabling true signal averaging and increased diffusion contrast. Neuroimage. 2015. doi: 10.1016/j.neuroimage.2015.07.074. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [70].Ahn S, Menini A, McKinnon G, et al. Contrast-weighted SSIM loss function for deep learning-based undersampled MRI reconstruction. In Proceedings of the 28th Annual Meeting of the International Society for Magnetic Resonance in Medicine, Virtual Conference, 2020. pp. 1295. [Google Scholar]
  • [71].Ghodrati V, Shao J, Bydder M, Zhou Z, Yin W, Nguyen KL, Yang Y, Hu P. MR image reconstruction using deep learning: evaluation of network structure and loss functions. Quant Imaging Med Surg. 2019; 9(9): 1516–1527. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [72].Yang G, Yu S, Dong H, et al. DAGAN: Deep De-Aliasing Generative Adversarial Networks for Fast Compressed Sensing MRI Reconstruction. IEEE Trans Med Imaging. 2018; 37(6): 1310–1321. [DOI] [PubMed] [Google Scholar]
  • [73].Peng W, Feng L, Zhao G, and Liu F. Learning Optimal K-space Acquisition and Reconstruction using Physics-Informed Neural Networks, 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 2022, pp. 20762–20771 [Google Scholar]
  • [74].Pineda L, Basu S, Romero A, Calandra R, and Drozdzal M. Active MR k-space Sampling with Reinforcement Learning. Lect. Notes Comput. Sci 12262 LNCS, 23–33 (2020). arXiv:2007.10469. [Google Scholar]
