Author manuscript; available in PMC: 2024 Mar 19.
Published in final edited form as: Med Image Comput Comput Assist Interv. 2020 Sep 29;12262:158–166. doi: 10.1007/978-3-030-59713-9_16

Acceleration of High-Resolution 3D MR Fingerprinting via a Graph Convolutional Network

Feng Cheng 1, Yong Chen 3, Xiaopeng Zong 2, Weili Lin 2, Dinggang Shen 1,2, Pew-Thian Yap 1,2
PMCID: PMC10950303  NIHMSID: NIHMS1929161  PMID: 38504822

Abstract

Magnetic resonance fingerprinting (MRF) is a novel imaging framework for fast and simultaneous quantification of multiple tissue properties. Recently, 3D MRF methods have been developed, but the acquisition speed needs to be improved before they can be adopted for clinical use. The purpose of this study is to develop a novel deep learning approach to accelerate 3D MRF acquisition along the slice-encoding direction in k-space. We introduce a graph-based convolutional neural network that caters to the non-Cartesian spiral trajectories commonly used for MRF acquisition. Our method improves tissue quantification accuracy compared with the state of the art and enables fast 3D MRF with high spatial resolution, allowing whole-brain coverage within 5 min and making MRF more feasible in clinical settings.

Keywords: 3D MR fingerprinting, K-space interpolation, Graph convolution, GRAPPA

1. Introduction

Quantitative MR imaging is experiencing rapid expansion in the field of medical imaging. Compared to qualitative imaging approaches, quantitative imaging methods have been shown to yield improved consistency across scanners, allowing more objective tissue characterization, disease diagnosis, and treatment assessment [3]. However, most quantitative MRI methods [12, 14] are relatively slow and generally provide only a single tissue property at a time, limiting adoption in routine clinical settings. Magnetic resonance fingerprinting (MRF) is a novel quantitative imaging method that is rapid and efficient for simultaneous quantification of multiple tissue properties in a single acquisition [10]. Due to its superior performance over other quantitative imaging approaches, MRF has attracted a lot of interest and has been successfully applied to quantitative imaging of various human organs, including the brain, abdomen, heart, and breast [2, 5, 15].

Significant efforts have recently been devoted to extending the original 2D MRF to 3D using stack-of-spirals acquisition [5, 11]. 3D MRF potentially improves signal-to-noise ratio, spatial resolution, and coverage. However, 3D MRF with high spatial resolution and volumetric coverage prolongs the acquisition time. Various methods have been developed to accelerate 3D MRF. Chen et al. [4] proposed a method that combines non-Cartesian parallel imaging with machine learning. While an acceleration factor of 4 was achieved along the temporal domain using a deep learning network, only a factor of 2 was achieved along the slice-encoding direction using non-Cartesian parallel imaging. The total acquisition time with 1 mm isotropic resolution and whole-brain coverage is still 7 min. In addition, image reconstruction for non-Cartesian parallel imaging is time-consuming, hindering wider adoption of 3D MRF in the clinic.

Recently, graph theory has been integrated with deep learning [8, 9] for tackling tasks in non-Euclidean space. In this work, we propose to use graph-based deep learning to replace GRAPPA in non-Cartesian parallel imaging to accelerate 3D MRF along the slice-encoding direction. This is done in combination with a U-Net that accelerates MRF acquisition along the temporal domain for efficient tissue quantification. We develop a graph neural network to accelerate spiral-based 3D MRF acquisition, resulting in higher acceleration than conventional methods and reduced post-acquisition processing time. Our method, although applied here to 3D MRF, holds great potential for accelerating general non-Cartesian acquisitions for both qualitative and quantitative imaging.

2. Problem Formulation

Similar to conventional 2D MRF, a typical 3D MRF dataset consists of hundreds of time frames for tissue characterization, and each time frame is acquired with a stack-of-spirals trajectory. Only one spiral arm is acquired for each slice, so the acquisition is highly accelerated in-plane. For further speed improvement, we explore the possibility of acceleration along the slice-encoding direction in addition to the temporal dimension. For 3D MRF, the acquired data in k-space can be expressed as Y ∈ ℂ^(T×S×Q×C), where T is the number of temporal frames, S is the number of slices, Q is the number of points along a spiral arm, and C is the number of coils.

2.1. Acceleration Along Slice-Encoding Direction in k-space

To accelerate along the slice-encoding direction, an interleaved undersampling pattern as described in [11] was adopted. Specifically, given an acceleration factor of r_slice, each slice with index s satisfying s ≡ f (mod r_slice) is acquired and the other slices are skipped, where f is the time frame index. Therefore, the acquired k-space data is X_k-space ∈ ℂ^(T×((S−p)/r_slice+p)×Q×C), where p is the number of auto-calibration signal (ACS) slices for parallel imaging reconstruction. Conventional methods, such as GRAPPA, aim to obtain Y by interpolating the accelerated acquisition X_k-space. A GRAPPA kernel of size 3×2 (spiral readout direction × slice-encoding direction) is typically used. Due to the nature of non-Cartesian acquisition with spiral trajectories, Q GRAPPA kernels need to be calculated from the ACS data and applied to fill in the missing data along the slice-encoding direction.
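The interleaved pattern described above can be made concrete with a minimal numpy sketch (illustrative only; the function name and the example sizes are ours, not from the paper):

```python
import numpy as np

def acquired_slices(n_slices, frame, r_slice):
    """Indices of slices acquired for a given time frame under the
    interleaved pattern: slice s is acquired iff s ≡ frame (mod r_slice)."""
    s = np.arange(n_slices)
    return s[s % r_slice == frame % r_slice]

# Example: with r_slice = 2, even slices are acquired for frame 0
# and odd slices for frame 1; the pattern rotates across frames.
print(acquired_slices(8, 0, 2))  # [0 2 4 6]
print(acquired_slices(8, 1, 2))  # [1 3 5 7]
```

The rotation of the pattern across frames is what allows the skipped slices of one frame to be informed by neighboring frames and the ACS slices.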

2.2. Acceleration Along Temporal Dimension in Image Space

Acceleration along the temporal domain using deep learning has been described in the literature [6, 7]. In brief, the acceleration in image space is achieved by acquiring fewer time frames, i.e., acquiring only the first T_a = T/r_frame time frames, where r_frame is the acceleration factor. The images after the non-uniform fast Fourier transform (NUFFT) can be expressed as X_image-space ∈ ℂ^(T_a×S×H×W), where H×W is the in-plane image matrix size. The total acceleration factor is r = r_slice × r_frame.
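The combined acceleration is a simple product, which can be checked with the paper's own numbers (T = 768 frames; r_frame = 4 keeps T_a = 192 frames; this helper is illustrative, not the authors' code):

```python
def total_acceleration(T, T_a, r_slice):
    """Total acceleration r = r_slice * r_frame, with r_frame = T / T_a
    (only the first T_a of T time frames are acquired)."""
    return r_slice * (T / T_a)

# r_frame = 768 / 192 = 4; combined with r_slice = 3, the total is 12.
print(total_acceleration(768, 192, 3))  # 12.0
```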

3. Approach

In this study, we propose to use a graph convolutional network (GCN) to interpolate the undersampled data along the slice-encoding direction. NUFFT was then applied to transform the k-space data to the image space. A network was further applied to generate the tissue property maps. Figure 1 shows the workflow of the proposed method.

Fig. 1. Method overview.

3.1. Graph Convolutional Network

We aim to reconstruct the skipped slices from two adjacent slices, as shown in Fig. 1. The data points of an in-plane spiral trajectory are non-uniformly distributed in k-space, with more data points at the center of k-space and fewer at the edge. This non-uniformity requires different kernels to be applied to different data points, which poses challenges for conventional convolution. We adopt graph convolution to solve this problem. The spiral arm is first represented as a graph, and convolution is then performed on the graph with the help of the graph adjacency matrix. In essence, this applies a different convolutional kernel to each data point.

Figure 2 shows the structure of the proposed network. It is constructed as a cascade of n_b blocks. Each block consists of one graph convolution layer followed by two 1D convolution layers with kernel size 1. Residual connections are added to all blocks except the first. The activation function of every layer except the last is ReLU. n_b was set to 3, since a receptive field that is too small or too large degrades performance.

Fig. 2. The proposed graph convolutional network.

To build the graph convolutional layer, we first represent the in-plane non-Cartesian trajectory as a graph 𝓖 = (𝒱, 𝓔, A), where 𝒱 is the set of vertices representing the data points on a trajectory, 𝓔 is the set of edges, and A ∈ ℝ^(Q×Q) is the weighted adjacency matrix. The weights of A are defined as the negative exponential of the squared normalized Euclidean distance between points in the spiral-readout direction:

A_{i,j} = exp(−‖v_i − v_j‖² / d̄²),  (1)

where v_i and v_j are the spatial coordinates of points i and j on the spiral trajectory, and d̄ is the mean distance over all vertex pairs. To maintain locality, we retain only the q nearest neighbors of each vertex by keeping the q largest values in each row of A and setting the others to 0. The kernel size k for graph convolution is thus q + 1, i.e., the vertex itself plus its q neighbors.
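The graph construction above can be sketched compactly in numpy (illustrative code under our reading of Eq. (1); `build_adjacency` is our name, and the 2D coordinate input is an assumption for the k-space trajectory points):

```python
import numpy as np

def build_adjacency(coords, q):
    """Weighted adjacency for trajectory points, following Eq. (1):
    A[i, j] = exp(-||v_i - v_j||^2 / d_bar^2), then keep only the q
    largest weights (q nearest neighbors) in each row.
    coords: (N, 2) array of point coordinates."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist2 = (diff ** 2).sum(-1)                       # squared pairwise distances
    # d_bar: mean distance over all distinct vertex pairs (our reading).
    n = len(coords)
    d_bar = np.sqrt(dist2)[~np.eye(n, dtype=bool)].mean()
    A = np.exp(-dist2 / d_bar ** 2)
    np.fill_diagonal(A, 0.0)                          # self-loops added later as A + I
    # Zero out all but the q largest weights in each row.
    smallest = np.argsort(A, axis=1)[:, :-q]
    np.put_along_axis(A, smallest, 0.0, axis=1)
    return A
```

Note that pruning each row independently can make A asymmetric; the text only specifies row-wise pruning, so symmetrization is left as a design choice.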

We follow [9] to construct the graph convolutional layer:

H^(l+1) = σ(D̂^(−1/2) Â D̂^(−1/2) H^(l) W^(l)),  (2)

where Â = A + I is the adjacency matrix of 𝓖 with self-connections, I is the identity matrix, D̂ is the diagonal degree matrix with D̂_ii = Σ_j Â_ij, W^(l) is a learnable weight matrix, H^(l) and H^(l+1) are the input and output of the layer, l is the layer index, and σ(·) is the ReLU activation function.
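The propagation rule of Eq. (2) can be written in a few lines of numpy (a minimal sketch; a real implementation would use a deep learning framework with W as a learnable parameter):

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph convolution layer, Eq. (2):
    H' = ReLU(D_hat^(-1/2) (A + I) D_hat^(-1/2) H W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-connections
    d = A_hat.sum(axis=1)                   # degrees of A_hat
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D_hat^(-1/2)
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)
```

With an empty adjacency matrix the layer reduces to ReLU(H W), since Â = I and D̂ = I; with nonzero weights each output row mixes degree-normalized neighbor features.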

3.2. Implementation

We trained the GCN on the ACS slices with stride 1 using a training dataset acquired from 5 subjects, and evaluated the network on all acquired slices of a test subject. A total of 12 ACS slices from each subject were used. Each sample consists of two slices separated by an interval of r_slice as input (Q×(C×2)) and the r_slice − 1 slices in between as target (Q×(C×(r_slice−1))). A total of n × T_a × (12 − r_slice) training samples were used, where n is the number of training subjects. Complex numbers are converted to real numbers by stacking the real and imaginary parts. In each block, the numbers of filters of the graph convolution layer and the two subsequent convolution layers are 1024, 512, and 512, respectively. Mean square error (MSE) is used as the objective function. The network is trained with a batch size of 1 and optimized using Adam with an initial learning rate of 0.0005, which is reduced by 99% after each epoch. For tissue property quantification, we use the same network structure and settings as described in [4].
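The sample construction described above can be sketched as follows (illustrative numpy code; the shapes follow the text, but the function name and the real/imaginary stacking along the coil axis are our assumptions):

```python
import numpy as np

def make_training_samples(acs, r_slice):
    """Build (input, target) pairs from ACS slices with stride 1.
    acs: complex array of shape (n_acs, Q, C).
    Input: two slices r_slice apart, stacked along the coil axis.
    Target: the r_slice - 1 slices in between, stacked the same way.
    Real and imaginary parts are stacked to give real-valued arrays."""
    def to_real(x):  # (..., C) complex -> (..., 2C) real
        return np.concatenate([x.real, x.imag], axis=-1)
    samples = []
    for s in range(acs.shape[0] - r_slice):   # stride-1 sliding window
        inp = to_real(np.concatenate([acs[s], acs[s + r_slice]], axis=-1))
        tgt = to_real(np.concatenate(list(acs[s + 1:s + r_slice]), axis=-1))
        samples.append((inp, tgt))
    return samples
```

With 12 ACS slices this yields 12 − r_slice samples per time frame, matching the sample count stated above.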

4. Experiments

MRI experiments were performed on a Siemens 3T Prisma scanner. A 3D MRF dataset was acquired from 6 subjects (M/F: 3/3; age: 34 ± 10 years). Each MRF time frame was highly undersampled in-plane with only one spiral arm (reduction factor: 48). The slice direction was linearly encoded and fully sampled. With a constant TR of 9.2 ms and a waiting time of 2 s between partitions, the total scan time was ∼40 min for each subject. FOV: 25×25 cm²; matrix size: 256×256; effective in-plane resolution: 1 mm; slice thickness: 1 mm; number of slices: 144; variable flip angles: 5°–12°; number of MRF time frames: 768.

Various acceleration factors (2–4) along the slice-encoding direction were evaluated. We considered acceleration of factor 4 along the temporal direction, similar to [4]. Quantitative T1 and T2 maps generated using all 768 time frames with no acceleration along the slice-encoding direction were used as the ground truth. Leave-one-out cross validation was employed.

4.1. Comparison with State-of-the-Art Methods

We evaluated the performance of the proposed method with respect to two state-of-the-art (SOTA) methods: 1) non-Cartesian GRAPPA [4] and 2) RAKI [1]. RAKI was originally proposed for Cartesian MRI, and we extended it to non-Cartesian 3D MRF. Performance in both k-space interpolation and tissue quantification was assessed.

k-Space Interpolation:

Following [1], normalized mean square error (NMSE) was used to evaluate the accuracy of k-space interpolation. As shown in Table 1, the proposed GCN-1b (n_b = 1), with only one block, shows improved accuracy over both GRAPPA and RAKI for all three acceleration factors. GCN-3b (n_b = 3), with a cascade of multiple blocks, achieves the best NMSE of all methods. The performance difference between GCN-1b and GCN-3b is examined further in Sect. 4.2. Compared with non-Cartesian GRAPPA, the deep learning methods (RAKI and GCN) reduce the reconstruction time by a factor of 15–60.
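The NMSE metric is assumed here to follow the standard definition used in work such as [1], i.e., the squared error energy normalized by the reference energy (our sketch, not the authors' code):

```python
import numpy as np

def nmse(pred, ref):
    """Normalized mean square error between interpolated and fully
    sampled k-space: ||pred - ref||^2 / ||ref||^2 (complex-safe)."""
    return np.sum(np.abs(pred - ref) ** 2) / np.sum(np.abs(ref) ** 2)
```

Under this definition, NMSE is 0 for a perfect reconstruction and 1 for an all-zero prediction.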

Table 1.

Accuracy of k-space interpolation, evaluated via NMSE. Reconstruction time per subject is reported in minutes.

Method       NMSE (r_slice = 2 / 3 / 4)   Time, min (r_slice = 2 / 3 / 4)   Params (million)
GRAPPA [4]   0.93 / 1.42 / 1.56           97.6 / 132.0 / 301.5              1.97
RAKI [1]     0.401 / 0.568 / 0.731        7.2 / 5.3 / 4.8                   1.36
GCN-1b       0.384 / 0.531 / 0.705        3.6 / 3.3 / 3.1                   1.31
GCN-3b       0.377 / 0.500 / 0.663        6.1 / 5.1 / 4.7                   3.41

Tissue Quantification Maps:

Following [7, 13], relative L1 error, SNR, and PSNR were used to evaluate tissue quantification accuracy (Table 2). Representative quantitative T1 and T2 maps are shown in Fig. 3. Compared to GRAPPA and RAKI, the proposed GCN yields the lowest relative L1 error and the highest SNR and PSNR for all three acceleration factors. Improved accuracy in tissue quantification and sharper edges can be observed in both the T1 and T2 maps obtained with the proposed method (Fig. 3). These results suggest that higher acceleration factors along the slice-encoding direction can be achieved using the proposed GCN. This reduces the total acquisition time for 3D MRF with whole-brain coverage to 5 min (r_slice = 3) or 4 min (r_slice = 4), comparable to the time needed to acquire T1-weighted or T2-weighted images with similar resolution and coverage.

Table 2.

Comparison of tissue quantification errors.

Map  Method      Relative L1 (r_slice = 2 / 3 / 4)   SNR (2 / 3 / 4)          PSNR (2 / 3 / 4)
T1   GRAPPA [4]  0.077 / 0.107 / 0.152               15.96 / 13.72 / 10.53    26.26 / 24.09 / 21.14
T1   RAKI [1]    0.069 / 0.097 / 0.118               16.03 / 13.77 / 12.83    26.16 / 24.01 / 23.15
T1   GCN         0.061 / 0.085 / 0.103               17.48 / 15.23 / 13.50    27.70 / 25.51 / 23.95
T2   GRAPPA [4]  0.101 / 0.132 / 0.173               11.32 / 8.45 / 5.17      26.59 / 24.09 / 21.02
T2   RAKI [1]    0.093 / 0.118 / 0.166               9.98 / 8.70 / 6.72       26.72 / 24.32 / 21.98
T2   GCN         0.088 / 0.110 / 0.132               12.16 / 10.42 / 8.75     27.66 / 25.90 / 24.26
Fig. 3. Representative T1 and T2 maps generated from MRF data acquired with various acceleration factors.

4.2. Ablation Study

We investigated the effects of two key network parameters: 1) the kernel size for graph convolution; 2) the number of cascaded blocks.

Kernel Size:

The kernel size k for graph convolution determines the neighborhood extent from which each point on a spiral can draw information. No neighborhood information is used when k = 1. As shown in Table 3a, a kernel size of 5 yields the best k-space interpolation performance for all three acceleration factors.

Table 3.

Effects of kernel size and number of cascaded blocks on the accuracy of k-space interpolation.

(a) Different kernel sizes with n_b = 3:

Method   NMSE (r_slice = 2 / 3 / 4)
GCN-k1   0.397 / 0.530 / 0.686
GCN-k3   0.379 / 0.514 / 0.671
GCN-k5   0.377 / 0.500 / 0.663
GCN-k7   0.395 / 0.529 / 0.680
GCN-k9   0.432 / 0.558 / 0.699

(b) Different numbers of blocks n_b with k = 5:

Method   NMSE (r_slice = 2 / 3 / 4)
GCN-1b   0.384 / 0.531 / 0.705
GCN-3b   0.377 / 0.500 / 0.663
GCN-5b   0.377 / 0.505 / 0.660
GCN-7b   0.383 / 0.513 / 0.682

Number of Cascaded Blocks:

Table 3b indicates that a cascade of 3 or 5 blocks yields the best performance. Increasing the number of blocks is in general expected to improve performance, but it comes at the cost of a significantly larger number of network parameters, reducing network trainability with a limited sample size.

5. Conclusion

In this paper, we presented a graph convolutional network for accelerating high-resolution 3D MRF along the slice-encoding direction. Our results suggest that improved tissue quantification accuracy can be achieved with reduced post-processing time. Our method therefore improves the feasibility of 3D MRF in clinical settings.

Acknowledgments

This work was supported in part by NIH grant EB006733.

References

1. Akçakaya M, Moeller S, Weingärtner S, Uğurbil K: Scan-specific robust artificial-neural-networks for k-space interpolation (RAKI) reconstruction: database-free deep learning for fast imaging. Magn. Reson. Med. 81(1), 439–453 (2019)
2. Badve C, et al.: MR fingerprinting of adult brain tumors: initial experience. Am. J. Neuroradiol. 38(3), 492–499 (2017)
3. Mehta BB, et al.: Magnetic resonance fingerprinting: a technical review. Magn. Reson. Med. 81(1), 25–46 (2019)
4. Chen Y, Fang Z, Hung SC, Chang WT, Shen D, Lin W: High-resolution 3D MR fingerprinting using parallel imaging and deep learning. NeuroImage 206, 116329 (2020)
5. Chen Y, et al.: Three-dimensional MR fingerprinting for quantitative breast imaging. Radiology 290(1), 33–40 (2019)
6. Fang Z, et al.: Deep learning for fast and spatially constrained tissue quantification from highly accelerated data in magnetic resonance fingerprinting. IEEE Trans. Med. Imaging 38(10), 2364–2374 (2019)
7. Fang Z, Chen Y, Nie D, Lin W, Shen D: RCA-U-Net: residual channel attention U-Net for fast tissue quantification in magnetic resonance fingerprinting. In: Shen D, et al. (eds.) MICCAI 2019. LNCS, vol. 11766, pp. 101–109. Springer, Cham (2019). doi: 10.1007/978-3-030-32248-9_12
8. Hong Y, Kim J, Chen G, Lin W, Yap PT, Shen D: Longitudinal prediction of infant diffusion MRI data via graph convolutional adversarial networks. IEEE Trans. Med. Imaging 38(12), 2717–2725 (2019)
9. Kipf TN, Welling M: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016)
10. Ma D, et al.: Magnetic resonance fingerprinting. Nature 495(7440), 187–192 (2013)
11. Ma D, et al.: Fast 3D magnetic resonance fingerprinting for a whole-brain coverage. Magn. Reson. Med. 79(4), 2190–2197 (2018)
12. Majumdar S, Orphanoudakis S, Gmitro A, O'Donnell M, Gore J: Errors in the measurements of T2 using multiple-echo MRI techniques. I. Effects of radiofrequency pulse imperfections. Magn. Reson. Med. 3(3), 397–417 (1986)
13. Song P, Eldar YC, Mazor G, Rodrigues MR: Magnetic resonance fingerprinting using a residual convolutional neural network. In: ICASSP 2019, pp. 1040–1044. IEEE (2019)
14. Stikov N, Boudreau M, Levesque IR, Tardif CL, Barral JK, Pike GB: On the accuracy of T1 mapping: searching for common ground. Magn. Reson. Med. 73(2), 514–522 (2015)
15. Yu AC, et al.: Development of a combined MR fingerprinting and diffusion examination for prostate cancer. Radiology 283(3), 729–738 (2017)
