Abstract
Recently, low-field magnetic resonance imaging (MRI) has gained renewed interest for its potential to improve MRI accessibility and affordability worldwide. The presented M4Raw dataset aims to facilitate methodology development and reproducible research in this field. The dataset comprises multi-channel brain k-space data collected from 183 healthy volunteers using a 0.3 Tesla whole-body MRI system, and includes T1-weighted, T2-weighted, and fluid attenuated inversion recovery (FLAIR) images with an in-plane resolution of ~1.2 mm and a through-plane resolution of 5 mm. Importantly, each contrast contains multiple repetitions, which can be used individually or combined to form multi-repetition averaged images. After excluding motion-corrupted data, the partitioned training and validation subsets contain 1024 and 240 volumes, respectively. To demonstrate the potential utility of this dataset, we trained deep learning models for image denoising and parallel imaging tasks and compared their performance with traditional reconstruction methods. This M4Raw dataset will be valuable for the development of advanced data-driven methods specifically for low-field MRI. It can also serve as a benchmark dataset for general MRI reconstruction algorithms.
Subject terms: Brain, Biomedical engineering
Background & Summary
Magnetic resonance imaging (MRI) is a powerful medical imaging technology for the clinical diagnosis of various diseases. However, MRI accessibility is low and highly uneven around the world, with the majority of MRI scanners concentrated in high-income countries, leaving approximately 70% of the world’s population with little or no access to MRI1–6. Low-field MRI under 1 Tesla (T) has gained renewed interest6–14 as a potential solution to this problem due to its significantly lower cost for purchase, installation, and maintenance compared to high-field MRI systems. Beyond these economic considerations, low-field MRI has a number of intrinsic advantages over high-field MRI, including improved patient comfort, lower sensitivity to metallic implants, fewer susceptibility artifacts, and an extremely low radiofrequency specific absorption rate (SAR)1–6. However, despite improvements in MRI hardware since the 1980s, the key limiting factor at low field remains the signal-to-noise ratio (SNR) per unit time. This often results in long scan times and compromised image quality, hindering the adoption of low-field MRI in applications that require fast imaging and high SNR.
Recently, data-driven methods, particularly deep learning-based approaches, have emerged as a potential solution to the SNR problem at low field. In the field of computer vision, data-driven methods have rapidly evolved to outperform traditional methods by a wide margin in many low-level tasks, such as denoising, deblurring, and super-resolution15–17. These methods have been deployed in digital cameras and mobile phones for daily use with robust performance. Similar trends have been seen in the field of MRI reconstruction18–20. For example, variational neural network (VarNet) based methods21 have been proposed and evaluated in a number of MRI applications21–24, showing superior performance in accelerating scans and boosting SNR.
Training data are critical for the development of data-driven methods. In comparison to natural images, MRI data are scarce, and most public MRI datasets25–28 only include magnitude images, which lack the phase and multi-channel information necessary for realistic MRI reconstruction tasks29. Furthermore, the few existing multi-channel k-space datasets30–33 were all acquired using high-field MRI systems, which have different signal and noise characteristics from low-field systems. The lack of publicly available low-field MRI data has become a barrier for researchers entering this field and for reproducing the approaches of previous studies.
To address this gap, we present a new multi-channel k-space dataset acquired using low-field MRI. It contains brain data from 183 subjects, each with 18 axial slices and 3 contrasts: T1-weighted (T1w), T2-weighted (T2w), and fluid attenuated inversion recovery (FLAIR). Importantly, each contrast includes two or three repetitions, resulting in more than 1,000 volumes in total that can be used in various ways by the MRI community. We name this multi-contrast, multi-repetition, multi-channel MRI raw k-space dataset M4Raw, for low-field MRI research. In this paper, we describe the method for producing this dataset and demonstrate its potential uses in denoising and parallel imaging reconstruction.
Methods
The general workflow to produce the M4Raw dataset is illustrated in Fig. 1. Multi-contrast, multi-repetition, multi-channel MRI k-space data were collected from 183 healthy volunteers using a 0.3 T MRI system with a four-channel head coil. Single-repetition images were generated by applying the inverse Fourier transform to the k-space data and combining the coil signals using the root sum of squares. Multi-repetition averaged images were then obtained by calculating the magnitude average of individual repetitions for each contrast. The final dataset comprises a training subset of 128 subjects, a validation subset of 30 subjects, and a motion-corrupted subset of 25 subjects.
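For readers who wish to reproduce these steps from the raw k-space, the following is a minimal Python/NumPy sketch, assuming the (slice, coil, frequency, phase) dimension order described under Data Records; the exact FFT shift conventions used to generate the released images may differ slightly.

```python
import numpy as np

def rss_reconstruct(kspace):
    """Single-repetition reconstruction: centered 2D inverse FFT per coil,
    then root-sum-of-squares coil combination.
    kspace: complex array of shape (slices, coils, nx, ny).
    Returns magnitude images of shape (slices, nx, ny)."""
    imgs = np.fft.fftshift(
        np.fft.ifft2(np.fft.ifftshift(kspace, axes=(-2, -1)), axes=(-2, -1)),
        axes=(-2, -1))
    return np.sqrt((np.abs(imgs) ** 2).sum(axis=1))

def average_repetitions(single_rep_images):
    """Multi-repetition averaged image: magnitude average over repetitions."""
    return np.mean(np.stack(single_rep_images, axis=0), axis=0)
```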
Imaging protocol
A total of 183 healthy volunteers were enrolled in the study with written informed consent, following approval by the Institutional Review Board of Shenzhen Technology University (reference number: SZTULL012021005). All participants were cognizant of the nature of the study and, as part of the written consent process, agreed to have their materials made publicly available in anonymized form. The majority of the participants were college students (aged 18 to 32, mean = 20.1, standard deviation (std) = 1.5; 116 males, 67 females). Axial brain MRI data were obtained from each subject using a clinical 0.3 T scanner (Oper-0.3, Ningbo Xingaoyi), a classical open-type, permanent-magnet whole-body system equipped with a four-channel head coil. Three common sequences were used: T1w, T2w, and FLAIR, each acquiring 18 slices with a thickness of 5 mm and an in-plane resolution of 0.94 × 1.23 mm². To facilitate flexible research applications, T1w and T2w data were acquired with three individual repetitions and FLAIR with two. Only two repetitions were acquired for FLAIR because of its long scan time per repetition; a third repetition would push the total scan time beyond 6.5 min and increase vulnerability to motion artifacts. T1w scans were performed first, followed by FLAIR scans, and lastly T2w scans, all with identical multi-slice geometry planning. The complete imaging parameters are summarized in Table 1.
Table 1. Imaging parameters of the three sequences.

| Contrast | Sequence type | Matrix size | TR/TE | Scan time per repetition | No. of repetitions | Echo train length | Echo spacing |
|---|---|---|---|---|---|---|---|
| T1w | Spin echo | 256 × 195 | 500/18.4 ms | 1 min 38 s | 3 | 1 | — |
| T2w | Fast spin echo | 256 × 195 | 5500/128 ms | 1 min 12 s | 3 | 15 | 16.0 ms |
| FLAIR | Inversion recovery fast spin echo | 256 × 198 | 7500/98 ms; TI = 1655 ms | 2 min 15 s | 2 | 11 | 16.3 ms |

Common parameters (all sequences): field-of-view 240 × 240 mm²; no. of slices 18; slice thickness/gap 5/1 mm; sampling bandwidth 31.25 kHz; phase encoding in the left-right direction.
Data processing
The k-space data from individual repetitions were exported from the scanner console without averaging. The corresponding raw images were in scanner coordinate space and could be off-center due to patient positioning. To correct this, an off-center distance along the left-right direction was estimated for each subject from the vendor DICOM images, and the k-space data were multiplied by a corresponding linear phase modulation (illustrated in the sketch below). The k-space matrices were then converted to the Hierarchical Data Format Version 5 (H5) format34, with imaging parameters stored in the H5 file header in an ISMRMRD-compatible format35. Items from the DICOM headers were selectively transferred to the H5 headers, ensuring subject anonymization in accordance with the DICOM Basic Application Level Confidentiality Profile: we applied the Retain Device Identity, Retain UIDs, Retain Longitudinal Temporal Information, and Retain Institution Identity options while removing all items related to subject identity, such as subject name, personal ID, and date of birth. The k-space dimensions were arranged in the same manner as in the fastMRI dataset30, allowing existing fastMRI code to run on M4Raw with minimal modification. For each repetition, reference images were formed by applying a 2D inverse Fourier transform to the k-space data and taking the root sum of squares of the coil channels. These reference images were also stored in the H5 files as potential training targets for parallel imaging reconstruction.
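As an illustration of the off-center correction, the sketch below applies the Fourier shift theorem: multiplying k-space by a linear phase ramp shifts the reconstructed image. The sign and centering conventions here are assumptions and may need to be flipped to match the released data.

```python
import numpy as np

def recenter_kspace(kspace, shift_pixels, axis=-1):
    """Shift the reconstructed image by `shift_pixels` along `axis`
    by multiplying k-space with a linear phase ramp (Fourier shift theorem)."""
    n = kspace.shape[axis]
    k = np.arange(n) - n // 2                        # centered k-space indices
    phase = np.exp(-2j * np.pi * k * shift_pixels / n)
    shape = [1] * kspace.ndim
    shape[axis] = n                                  # broadcast along `axis`
    return kspace * phase.reshape(shape)
```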
It should be noted that, similar to other MRI k-space datasets30,31,33, the images were not defaced, in order to preserve the raw data characteristics. However, unlike 3D isotropic MRI data25–27,31, our 2D multi-slice data have relatively thick slices (5 mm thickness + 1 mm gap) covering only the upper half of the head (FOVz = 108 mm), which renders potential facial identification through 3D reconstruction highly improbable.
Subset partition
The dataset was divided into three subsets: training, validation, and motion-corrupted. The motion-corrupted subset was extracted first by identifying intra-scan and inter-scan motion. First, all data were visually inspected for severe intra-scan motion and apparent inter-scan motion. Then, the remaining data were examined quantitatively for inter-scan motion. A 3D translational motion model was employed, with the parameters estimated using the Python scikit-image package36 (a sketch of this estimation is given below). Inter-scan motion was further divided into inter-repetition motion (between different repetitions of the same contrast) and inter-contrast motion (between the multi-repetition averaged images of different contrasts). Inter-repetition motion was considered severe if the translation was more than 1.25 pixels in-plane or 0.2 slice thicknesses through-plane beyond the global means. Inter-contrast motion was considered severe if the translation was more than 5 pixels in-plane or 1 slice thickness through-plane beyond the global means. As a result, 25 subjects were placed in the motion-corrupted subset because at least one of their scans contained severe motion according to the above criteria. Finally, the remaining data were randomly split into a training subset of 128 subjects (1024 volumes) and a validation subset of 30 subjects (240 volumes).
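The translation estimation could be implemented, for example, with scikit-image’s phase cross-correlation; the study states only that a 3D translational model was estimated with scikit-image36, so the specific function choice below is an assumption.

```python
from skimage.registration import phase_cross_correlation

def estimate_translation(reference, moving, upsample_factor=100):
    """Estimate a 3D translation (in voxels) between two magnitude volumes
    of shape (slice, x, y), with subvoxel precision via upsample_factor."""
    shift, _error, _phasediff = phase_cross_correlation(
        reference, moving, upsample_factor=upsample_factor)
    return shift  # one component per axis
```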
Note that in the above process, we only estimated the motion without performing actual correction, for several reasons. First, as shown in Table 2, the motion in the training and validation data was minor. Second, not all studies require motion correction; for example, inter-repetition motion has little impact on the reconstruction of a single repetition, and inter-contrast motion should not interfere with most reconstruction algorithms unless multi-contrast strategies are employed24,37–39. Last, the optimal motion correction approach, including the choice of motion model and interpolation method, can vary depending on the specific application, and M4Raw users can readily implement their own based on their needs.
Table 2. Estimated inter-scan translational motion in the training and validation data (mean ± standard deviation; in-plane values in pixels, through-plane values in slice thicknesses).

| Direction | Inter-repetition: T1w | Inter-repetition: T2w | Inter-repetition: FLAIR | Inter-contrast: T1w–T2w | Inter-contrast: T1w–FLAIR | Inter-contrast: FLAIR–T2w |
|---|---|---|---|---|---|---|
| Slice encoding (superior-inferior) | −0.01 ± 0.03 | 0.00 ± 0.02 | −0.02 ± 0.04 | −0.08 ± 0.20 | −0.09 ± 0.16 | 0.01 ± 0.09 |
| Frequency encoding (anterior-posterior) | −0.39 ± 0.26 | −0.31 ± 0.18 | −0.31 ± 0.17 | −1.50 ± 0.66 | −0.83 ± 0.40 | −0.64 ± 0.30 |
| Phase encoding (left-right) | 0.01 ± 0.27 | −0.01 ± 0.17 | 0.04 ± 0.21 | 0.14 ± 0.77 | 0.08 ± 0.61 | 0.04 ± 0.35 |
Data Records
The multi-channel k-space and single-repetition images from the 183 participants, including T1-weighted, T2-weighted, and FLAIR contrasts, have been made publicly available through the Zenodo repository40. The training, validation, and motion-corrupted subsets are separately compressed into three zip files, containing 1024, 240, and 200 H5 files, respectively. Among the 200 files in the motion-corrupted subset, 64 files are placed in the “inter-scan_motion” sub-directory and 136 files in the “intra-scan_motion” sub-directory.
All the H5 files are named in the format “{study-id}_{contrast}{repetition-id}.h5” (e.g., “2022061003_FLAIR01.h5”). In each file, the imaging parameters, multi-channel k-space, and single-repetition images can be accessed via the dictionary keys “ismrmrd_header”, “kspace”, and “reconstruction_rss”, respectively. The k-space dimensions are arranged in the order of slice, coil channel, frequency encoding, and phase encoding, following the convention of the fastMRI dataset30.
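A minimal h5py reading example (the file name is one of the released files, used here purely for illustration):

```python
import h5py

with h5py.File("2022061003_FLAIR01.h5", "r") as f:
    header = f["ismrmrd_header"][()]     # ISMRMRD-compatible XML header
    kspace = f["kspace"][()]             # (slice, coil, frequency, phase), complex
    rss = f["reconstruction_rss"][()]    # single-repetition RSS images

print(kspace.shape, kspace.dtype, rss.shape)
```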
Technical Validation
M4Raw dataset quality assessment
The quality of the data in the training and validation subsets was evaluated with regard to the image SNR, head motion, and coil sensitivity quality.
The SNRs of both the single-repetition images and the multi-repetition averaged images were measured by dividing the mean signal by the standard deviation of the noise. As shown in Fig. 2, a signal region-of-interest (ROI) of size 100 × 100 was selected at the center of each image, while four noise ROIs of size 30 × 30 were selected at the corners. The resulting SNR distribution for each contrast is also plotted in Fig. 2. Overall, the SNRs of single-repetition images were 14.97 ± 0.94 for T1w, 12.32 ± 0.60 for T2w, and 14.73 ± 0.81 for FLAIR; the SNRs of multi-repetition averaged images were 24.39 ± 1.49 for T1w, 20.10 ± 0.97 for T2w, and 20.20 ± 1.06 for FLAIR. The ratios of the SNRs of the multi-repetition averaged images to those of the single-repetition images were 1.63 ± 0.02 for T1w, 1.63 ± 0.01 for T2w, and 1.37 ± 0.01 for FLAIR, close to their theoretical values, i.e., the square root of the number of repetitions.
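The SNR measurement can be reproduced with a few lines of NumPy; the exact ROI placement below (a centered signal ROI and four corner noise ROIs) follows the description above, though it may not match the original implementation pixel-for-pixel.

```python
import numpy as np

def measure_snr(image):
    """Mean signal in a central 100 x 100 ROI divided by the noise standard
    deviation pooled from four 30 x 30 corner ROIs."""
    nx, ny = image.shape
    cx, cy = nx // 2, ny // 2
    signal = image[cx - 50:cx + 50, cy - 50:cy + 50].mean()
    noise = np.concatenate([
        image[:30, :30].ravel(), image[:30, -30:].ravel(),
        image[-30:, :30].ravel(), image[-30:, -30:].ravel()])
    return signal / noise.std()
```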
The inter-scan head motion in the training and validation data was evaluated by 3D translation estimation using the scikit-image package36. The results, presented in Table 2, indicate that the inter-repetition motion was minor, with standard deviations less than 0.3 pixels in-plane and 0.05 slice thicknesses through-plane. The inter-contrast motion was slightly larger, yet still small, with standard deviations less than 0.8 pixels in-plane and 0.2 slice thicknesses through-plane. All estimates had near-zero means except those in the frequency encoding direction. This was expected, because the main field drift commonly observed in low-field systems with permanent magnets3 can slightly shift the image along this direction over time.
The coil sensitivity quality was quantitatively evaluated using g-factors41, i.e., the pixel-wise noise amplification ratios in traditional SENSE reconstruction. The coil sensitivity maps were estimated with the ESPIRiT42 method in the Berkeley Advanced Reconstruction Toolbox (BART, https://mrirecon.github.io/bart), and the g-factor maps were subsequently derived using the pygrappa package (https://github.com/mckib2/pygrappa). Figure 3 illustrates typical coil channel images, coil sensitivity maps, and g-factor maps at acceleration factor (R) = 2 for upper, middle, and lower slices. Figure 4 presents a statistical analysis of the 99th percentile g-factor values for different slices, which serves as a measure of maximum noise amplification. Overall, the 99th percentile g-factors had a global mean ± std of 1.21 ± 0.29 at R = 2 and 1.78 ± 0.50 at R = 3, while the slice-wise means ranged from 1.07 to 1.67 at R = 2 and from 1.53 to 2.28 at R = 3. These results demonstrate the feasibility of applying parallel imaging, but also indicate that traditional reconstruction methods41,43 may be seriously challenged at high acceleration factors.
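For reference, the SENSE g-factor can also be computed directly from estimated sensitivity maps using the closed-form expression of Pruessmann et al.41. The sketch below assumes uniform undersampling along the last axis, an identity noise covariance, and a matrix size divisible by R; it is a didactic alternative to the BART/pygrappa pipeline used above.

```python
import numpy as np

def sense_gfactor(sens, R):
    """Pixel-wise SENSE g-factor from coil sensitivities `sens` of shape
    (coils, nx, ny), for R-fold uniform undersampling along y:
    g = sqrt([(S^H S)^-1]_jj * [S^H S]_jj)."""
    ncoil, nx, ny = sens.shape
    g = np.zeros((nx, ny))
    for x in range(nx):
        for y in range(ny // R):
            idx = y + np.arange(R) * (ny // R)   # pixels that alias together
            S = sens[:, x, idx]                  # (coils, R)
            SHS = S.conj().T @ S
            SHS_inv = np.linalg.pinv(SHS)
            g[x, idx] = np.sqrt(np.abs(np.diag(SHS_inv) * np.diag(SHS)))
    return g
```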
Demonstration of M4Raw dataset for parallel imaging reconstruction
To demonstrate parallel imaging on the M4Raw dataset, an end-to-end variational network (VarNet) model22 was trained using code from the fastMRI repository. Cartesian undersampling was retrospectively applied in the phase encoding direction at acceleration factors (R) of 2 and 3, while the central 256 × 30 region of k-space was kept fully sampled for coil sensitivity calibration. The Adam optimizer was used with a learning rate of 1 × 10⁻³ over 50 training epochs. The resulting model was evaluated on the validation subset and compared with the classical GRAPPA algorithm43 in terms of peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). The results, shown in Fig. 5, demonstrate that VarNet achieved superior PSNR and produced high-quality images even at R = 3, whereas the noise amplification in GRAPPA was too severe to provide usable images.
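The undersampling pattern described above can be generated as follows; this is a simplified stand-in for the mask functions in the fastMRI repository, which may differ in details such as line-spacing adjustments.

```python
import numpy as np

def equispaced_mask(ny, R, num_low_freqs=30):
    """1D Cartesian mask along the phase-encoding axis: every R-th line,
    plus a fully sampled central band of num_low_freqs lines for
    coil sensitivity calibration."""
    mask = np.zeros(ny, dtype=bool)
    mask[::R] = True
    center = ny // 2
    mask[center - num_low_freqs // 2:center + num_low_freqs // 2] = True
    return mask

# Applied to k-space of shape (slice, coil, frequency, phase):
# masked_kspace = kspace * equispaced_mask(kspace.shape[-1], R=2)
```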
Demonstration of M4Raw dataset for denoising reconstruction
For the demonstration of image denoising, a simple U-Net model44 and NAFNet45, a state-of-the-art natural-image restoration model, were trained on the M4Raw dataset. During training, a single-repetition image was used as the input and the corresponding multi-repetition averaged image as the label. Both models were trained with the Adam optimizer at a learning rate of 1 × 10⁻⁴ for 50 epochs, which was then reduced to 1 × 10⁻⁵ for another 10 epochs. The trained models were compared with the classical BM3D algorithm46, whose sigma parameter was set to 0.025 based on noise estimation. PSNR/SSIM values were computed for quantitative evaluation. As shown in Fig. 6, both data-driven methods outperformed the traditional BM3D method in terms of PSNR and SSIM, and produced visually better images with less blurring than the BM3D results.
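Quantitative evaluation against the multi-repetition averaged targets can be done with scikit-image metrics; the normalization by the target’s data range below is a common choice, not necessarily the one used for the reported numbers.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(denoised, target):
    """PSNR and SSIM of a denoised image against the averaged target."""
    drange = float(target.max() - target.min())
    psnr = peak_signal_noise_ratio(target, denoised, data_range=drange)
    ssim = structural_similarity(target, denoised, data_range=drange)
    return psnr, ssim
```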
Usage Notes
The M4Raw dataset enables development of various data-driven methods for low-field MRI reconstruction. It can also serve as a benchmark dataset for comparing different methods specific to low-field MRI. The dataset encompasses characteristics of low-field MRI data, while also possessing similarities to existing high-field datasets30–33. As such, it can be used as a general test dataset for a wide range of MRI reconstruction algorithms, including those originally proposed for high-field MRI. Apart from parallel imaging and denoising, potential research applications for this dataset include super-resolution, motion correction, and image style transfer from low-field to high-field. Additionally, the dataset’s inclusion of multiple contrasts allows for joint multi-contrast image reconstruction to further advance image quality24,37–39.
For simplicity, researchers may opt to exclude the motion-corrupted subset from model training and evaluation. However, since it represents a common real-world MRI data imperfection, this subset can be valuable for developing motion correction techniques or motion-resistant reconstruction algorithms. Note that not all data in this subset exhibit significant motion: if any scan from a subject was deemed motion-corrupted, all data from that subject were placed in this subset.
As can be observed from Fig. 3, the coil sensitivity exhibits large variations along the slice direction. Thus, another potential research application of this dataset is simultaneous multi-slice (SMS) reconstruction47. The k-space data can be multiplied by the CAIPIRINHA phase48 and summed along the slice direction to simulate SMS acquisition, as sketched below. SMS acceleration can effectively reduce the minimum TR and achieve higher SNR than in-plane acceleration. This technique is particularly suitable for low-field MRI, which is free of the SAR constraints faced at high field1.
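A possible simulation is given below, assuming a FOV/2 inter-slice shift and slice groups separated by half the slab; both choices are illustrative rather than prescribed by the dataset.

```python
import numpy as np

def simulate_sms(kspace, mb=2, fov_shift=2):
    """Simulate SMS acquisition from single-slice k-space of shape
    (slice, coil, frequency, phase): apply CAIPIRINHA phase ramps that
    shift slice s by s * FOV / fov_shift in image space, then sum the
    mb slices of each group. Assumes nslices % mb == 0."""
    nsl, nc, nx, ny = kspace.shape
    ky = np.arange(ny)
    groups = []
    for g in range(nsl // mb):
        acc = np.zeros((nc, nx, ny), dtype=kspace.dtype)
        for s in range(mb):
            phase = np.exp(2j * np.pi * s * ky / fov_shift)  # linear phase along ky
            acc += kspace[g + s * (nsl // mb)] * phase
        groups.append(acc)
    return np.stack(groups)
```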
It should be noted that all M4Raw data were acquired using one MRI system, while other low-field systems may have different designs of magnets and coils1–3. Expanding the dataset to include data from a variety of low-field systems as well as paired data from high-field systems is an area of ongoing work. We welcome any collaborations to expand the dataset’s scope.
Acknowledgements
This work was supported by the National Natural Science Foundation of China (No. 62101348), Shenzhen Higher Education Stable Support Program (No. 20220716111838002), and Natural Science Foundation of Top Talent of Shenzhen Technology University (No. 20200208).
Author contributions
Project design: M.L.; data acquisition: L.M., S.L., M.L., Y. Li and K.Y.; data review: Y. Liu, Y.D. and Z.D.; code development: L.M., S.H. and M.L.; manuscript preparation: M.L., Y. Liu and E.X.W.
Code availability
To facilitate use of this dataset, we have released a GitHub repository: https://github.com/mylyu/M4Raw. The repository contains Python examples for data reading and deep learning model training, together with trained model weights to reproduce the results in Figs. 2–6.
Competing interests
The authors declare no competing interests.
Footnotes
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1. Marques JP, Simonis FFJ, Webb AG. Low-field MRI: An MR physics perspective. J. Magn. Reson. Imaging. 2019;49:1528–1542. doi: 10.1002/jmri.26637.
- 2. Arnold TC, Freeman CW, Litt B, Stein JM. Low-field MRI: Clinical promise and challenges. J. Magn. Reson. Imaging. 2023;57:25–44. doi: 10.1002/jmri.28408.
- 3. Sarracanie M, Salameh N. Low-field MRI: how low can we go? A fresh view on an old debate. Front. Phys. 2020;8:172. doi: 10.3389/fphy.2020.00172.
- 4. Lang M, et al. Emerging Techniques and Future Directions: Fast and Portable Magnetic Resonance Imaging. Magn. Reson. Imaging Clin. 2022;30:565–582. doi: 10.1016/j.mric.2022.05.005.
- 5. Geethanath S, Vaughan JT Jr. Accessible magnetic resonance imaging: A review. J. Magn. Reson. Imaging. 2019;49:e65–e77. doi: 10.1002/jmri.26638.
- 6. Liu Y, et al. A low-cost and shielding-free ultra-low-field brain MRI scanner. Nat. Commun. 2021;12:1–14. doi: 10.1038/s41467-021-27317-1.
- 7. Sheth KN, et al. Assessment of brain injury using portable, low-field magnetic resonance imaging at the bedside of critically ill patients. JAMA Neurol. 2021;78:41–47. doi: 10.1001/jamaneurol.2020.3263.
- 8. Koonjoo N, Zhu B, Bagnall GC, Bhutto D, Rosen MS. Boosting the signal-to-noise of low-field MRI with deep learning image reconstruction. Sci. Rep. 2021;11:1–16. doi: 10.1038/s41598-021-87482-7.
- 9. Hömmen P, Storm J-H, Höfner N, Körber R. Demonstration of full tensor current density imaging using ultra-low field MRI. Magn. Reson. Imaging. 2019;60:137–144. doi: 10.1016/j.mri.2019.03.010.
- 10. Campbell-Washburn AE, Suffredini AF, Chen MY. High-performance 0.55-T lung MRI in patient with COVID-19 infection. Radiology. 2021;299:E246–E247. doi: 10.1148/radiol.2021204155.
- 11. Campbell-Washburn AE, et al. T2-weighted lung imaging using a 0.55-T MRI system. Radiol. Cardiothorac. Imaging. 2021;3.
- 12. Bhattacharya I, et al. Oxygen-enhanced functional lung imaging using a contemporary 0.55 T MRI system. NMR Biomed. 2021;34:e4562. doi: 10.1002/nbm.4562.
- 13. Mazurek MH, et al. Portable, bedside, low-field magnetic resonance imaging for evaluation of intracerebral hemorrhage. Nat. Commun. 2021;12:5119. doi: 10.1038/s41467-021-25441-6.
- 14. Yuen MM, et al. Portable, low-field magnetic resonance imaging enables highly accessible and dynamic bedside evaluation of ischemic stroke. Sci. Adv. 2022;8:eabm3952. doi: 10.1126/sciadv.abm3952.
- 15. Nah S, Son S, Lee S, Timofte R, Lee KM. NTIRE 2021 Challenge on Image Deblurring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 149–165 (2021).
- 16. Wang Z, Chen J, Hoi SCH. Deep Learning for Image Super-Resolution: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021;43:3365–3387. doi: 10.1109/TPAMI.2020.2982166.
- 17. Tian C, et al. Deep learning on image denoising: An overview. Neural Netw. 2020;131:251–275. doi: 10.1016/j.neunet.2020.07.025.
- 18. Hyun CM, Kim HP, Lee SM, Lee S, Seo JK. Deep learning for undersampled MRI reconstruction. Phys. Med. Biol. 2018;63:135007. doi: 10.1088/1361-6560/aac71a.
- 19. Liang D, Cheng J, Ke Z, Ying L. Deep Magnetic Resonance Image Reconstruction: Inverse Problems Meet Neural Networks. IEEE Signal Process. Mag. 2020;37:141–151. doi: 10.1109/MSP.2019.2950557.
- 20. Knoll F, et al. Deep-learning methods for parallel magnetic resonance imaging reconstruction: A survey of the current approaches, trends, and issues. IEEE Signal Process. Mag. 2020;37:128–140. doi: 10.1109/MSP.2019.2950640.
- 21. Hammernik K, et al. Learning a variational network for reconstruction of accelerated MRI data. Magn. Reson. Med. 2018;79:3055–3071. doi: 10.1002/mrm.26977.
- 22. Sriram A, et al. End-to-end variational networks for accelerated MRI reconstruction. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 64–73 (Springer, 2020).
- 23. Vishnevskiy V, Walheim J, Kozerke S. Deep variational network for rapid 4D flow MRI reconstruction. Nat. Mach. Intell. 2020;2:228–235. doi: 10.1038/s42256-020-0165-6.
- 24. Polak D, et al. Joint multi-contrast variational network reconstruction (jVN) with application to rapid 2D and 3D imaging. Magn. Reson. Med. 2020;84:1456–1469. doi: 10.1002/mrm.28219.
- 25. Van Essen DC, et al. The WU-Minn Human Connectome Project: An overview. NeuroImage. 2013;80:62–79. doi: 10.1016/j.neuroimage.2013.05.041.
- 26. Marcus DS, et al. Open Access Series of Imaging Studies (OASIS): Cross-sectional MRI Data in Young, Middle Aged, Nondemented, and Demented Older Adults. J. Cogn. Neurosci. 2007;19:1498–1507. doi: 10.1162/jocn.2007.19.9.1498.
- 27. Jack CR Jr, et al. The Alzheimer’s disease neuroimaging initiative (ADNI): MRI methods. J. Magn. Reson. Imaging. 2008;27:685–691. doi: 10.1002/jmri.21049.
- 28. IXI Dataset – Brain Development. http://brain-development.org/ixi-dataset/ (2010).
- 29. Shimron E, Tamir JI, Wang K, Lustig M. Implicit data crimes: Machine learning bias arising from misuse of public data. Proc. Natl. Acad. Sci. 2022;119:e2117203119. doi: 10.1073/pnas.2117203119.
- 30. Zbontar J, et al. fastMRI: An Open Dataset and Benchmarks for Accelerated MRI. Preprint at http://arxiv.org/abs/1811.08839 (2019).
- 31. Souza R, et al. An open, multi-vendor, multi-field-strength brain MR dataset and analysis of publicly available skull stripping methods agreement. NeuroImage. 2018;170:482–494. doi: 10.1016/j.neuroimage.2017.08.021.
- 32. Desai A, et al. SKM-TEA: A Dataset for Accelerated MRI Reconstruction with Dense Image Labels for Quantitative Clinical Evaluation. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks (eds. Vanschoren J & Yeung S), vol. 1 (2021).
- 33. Lim Y, et al. A multispeaker dataset of raw and reconstructed speech production real-time MRI video and 3D volumetric images. Sci. Data. 2021;8:1–14. doi: 10.1038/s41597-021-00976-x.
- 34. Koranne S. Hierarchical data format 5: HDF5. In Handbook of Open Source Tools, 191–200 (Springer, 2011).
- 35. Inati SJ, et al. ISMRM Raw data format: A proposed standard for MRI raw datasets. Magn. Reson. Med. 2017;77:411–421. doi: 10.1002/mrm.26089.
- 36. Van der Walt S, et al. scikit-image: image processing in Python. PeerJ. 2014;2:e453. doi: 10.7717/peerj.453.
- 37. Bilgic B, Goyal VK, Adalsteinsson E. Multi-contrast reconstruction with Bayesian compressed sensing. Magn. Reson. Med. 2011;66:1601–1615. doi: 10.1002/mrm.22956.
- 38. Li G, et al. Transformer-empowered Multi-scale Contextual Matching and Aggregation for Multi-contrast MRI Super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 20636–20645 (2022).
- 39. Yi Z, et al. Joint calibrationless reconstruction of highly undersampled multicontrast MR datasets using a low-rank Hankel tensor completion framework. Magn. Reson. Med. 2021;85:3256–3271. doi: 10.1002/mrm.28674.
- 40. Lyu M. M4Raw: A multi-contrast, multi-repetition, multi-channel MRI k-space dataset for low-field MRI research. Zenodo (2023).
- 41. Pruessmann KP, Weiger M, Scheidegger MB, Boesiger P. SENSE: sensitivity encoding for fast MRI. Magn. Reson. Med. 1999;42:952–962. doi: 10.1002/(SICI)1522-2594(199911)42:5<952::AID-MRM16>3.0.CO;2-S.
- 42. Uecker M, et al. ESPIRiT—an eigenvalue approach to autocalibrating parallel MRI: where SENSE meets GRAPPA. Magn. Reson. Med. 2014;71:990–1001. doi: 10.1002/mrm.24751.
- 43. Griswold MA, et al. Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magn. Reson. Med. 2002;47:1202–1210. doi: 10.1002/mrm.10171.
- 44. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015 (eds. Navab N, Hornegger J, Wells WM & Frangi AF), 234–241 (Springer International Publishing, 2015).
- 45. Chu X, Chen L, Yu W. NAFSSR: Stereo Image Super-Resolution Using NAFNet. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1239–1248 (2022).
- 46. Dabov K, Foi A, Katkovnik V, Egiazarian K. Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering. IEEE Trans. Image Process. 2007;16:2080–2095. doi: 10.1109/TIP.2007.901238.
- 47. Barth M, Breuer F, Koopmans PJ, Norris DG, Poser BA. Simultaneous multislice (SMS) imaging techniques. Magn. Reson. Med. 2016;75:63–81. doi: 10.1002/mrm.25897.
- 48. Breuer FA, et al. Controlled aliasing in parallel imaging results in higher acceleration (CAIPIRINHA) for multi-slice imaging. Magn. Reson. Med. 2005;53:684–691. doi: 10.1002/mrm.20401.