Author manuscript; available in PMC: 2022 Jan 6.
Published in final edited form as: Med Image Comput Comput Assist Interv. 2020 Sep 29;12267:730–739. doi: 10.1007/978-3-030-59728-3_71

Automated Acquisition Planning for Magnetic Resonance Spectroscopy in Brain Cancer

Patrick J Bolan 1, Francesca Branzoli 2,3, Anna Luisa Di Stefano 4,5, Lucia Nichelli 2,3, Romain Valabregue 2,3, Sara L Saunders 1, Mehmet Akçakaya 1,6, Marc Sanson 3,4,7, Stéphane Lehéricy 2,3, Małgorzata Marjańska 1
PMCID: PMC8735854  NIHMSID: NIHMS1766346  PMID: 35005744

Abstract

In vivo magnetic resonance spectroscopy (MRS) can provide clinically valuable metabolic information from brain tumors that can be used for prognosis and monitoring response to treatment. Unfortunately, this technique has not been widely adopted in clinical practice or even clinical trials due to the difficulty in acquiring and analyzing the data. In this work we propose a computational approach to solve one of the most critical technical challenges: the problem of quickly and accurately positioning an MRS volume of interest (a cuboid voxel) inside a tumor using MR images for guidance. The proposed automated method comprises a convolutional neural network to segment the lesion, followed by a discrete optimization to position an MRS voxel optimally within the lesion. In a retrospective comparison, the novel automated method is shown to provide improved lesion coverage compared to manual voxel placement.

Keywords: Brain Cancer, Medical Image Segmentation, Image Guided Intervention

1. Introduction

In vivo magnetic resonance spectroscopy (MRS) can measure the concentration of >20 brain metabolites non-invasively. Several of these metabolites provide important clinical information. For example, the presence of the metabolite D-2-hydroxyglutarate (2HG) indicates that a tumor has a mutation in the isocitrate dehydrogenase enzyme, which is associated with favorable prognosis [1, 2]; high levels of choline-containing compounds (tCho) reflect aberrations in phospholipid metabolism and can be used for monitoring progression and treatment [3–6]. While single-voxel MRS offers clear clinical value, it is not widely performed outside academic medical centers because it requires specialized expertise to perform the acquisition and analyze the data. In this paper we address the problem of preparing the MRS acquisition, which requires a high level of skill to produce quality results. The operator must review and interpret clinical MR images, identify pathology and normal anatomy, weigh competing goals to maximize lesion coverage while minimizing contributions from normal tissues and artifacts, and position a 3D cuboid (a voxel) in the lesion, all in real-time while the patient lies in the scanner. The need for technical expertise to perform accurate and consistent MRS voxel placement has been a critical barrier to wider use of this valuable technique [7].

We propose a computational approach to automate and optimize MRS voxel placement in lesions. This consists of 1) automatic segmentation to accurately delineate the full extent of the tumor, followed by 2) optimization of the voxel geometry to maximize tumor coverage and avoid non-involved tissues. The tumor segmentation is performed using the clinical T2-weighted MR images as input to a convolutional neural network, with transfer learning from a pre-trained network on the publicly available BraTS dataset. The voxel is then positioned within the segmented region by maximizing an objective function that codifies the tradeoffs considered by an expert spectroscopist. We demonstrate this approach on an institutional dataset with 60 glioma cases, and retrospectively compare the performance of the automated approach with manually performed expert voxel placements.

2. Methods

2.1. System Design

The proposed automated voxel placement system has two distinct steps, lesion segmentation and voxel geometric optimization, as shown in Fig. 1. This division was chosen because the problem of brain tumor segmentation has been studied and there are well-established approaches, whereas the problem of how to place an optimal voxel inside a lesion has not been previously addressed. The use of an analytical objective function provides flexibility to tune performance based on the MR spectroscopist’s expertise or preference for a specific application.

Fig. 1 –

Schema for automated voxel placement.

2.2. Data

Our primary dataset consisted of 60 cases with MR imaging and spectroscopy acquired in patients with low-grade glioma. Acquisitions were performed using a 3 T whole-body system (MAGNETOM Verio, Siemens, Erlangen, Germany) equipped with a 32-channel receive-only head coil. The primary anatomical imaging was performed using a multi-slice 2D T2-weighted (T2w) sagittal FLAIR (resolution: 1.0 × 1.0 mm, 155 slices with thickness 1.1 mm, TR/TE = 5000/399 ms, scan time = 5.02 minutes). Using these images for guidance, a spectroscopic voxel was manually positioned in the lesion by an expert spectroscopist while the patient remained in the scanner. MR spectra were acquired using a single-voxel MEGA-PRESS sequence [8, 9] (TR/TE = 2000/68 ms) optimized to measure the 2HG signal at 4.02 ppm with editing applied at 1.9 ppm for the edit-on condition and at 7.5 ppm for the edit-off condition, in an interleaved fashion (128 pairs of scans, scan time = 8.5 minutes).

At the time of the scan, the MRS expert does not have a precise lesion segmentation for guidance; the operator must interpret the images directly to determine the extent of the tumor. To judge the quality of the MRS placement, we retrospectively performed slice-by-slice segmentation of tumor and necrotic regions. These segmentations were performed by a neuro-oncologist with 8 years of experience using the software package ITK-SNAP [10], with the T2w-FLAIR images for guidance. These manual lesion masks were used as the gold-standard definition of the tumor extent.

For model development, 52 of the 60 cases were divided into 36/8/8 cases for training/validation/testing, where each case included the MR images, the MRS voxel placement, and the manually segmented lesion mask; the remaining 8 cases were reserved for a future test set.

Model training also used data from the 2018 BraTS tumor segmentation challenge [11–13].

2.3. Segmentation with a CNN

We trained a convolutional neural network (CNN) to replicate the manual lesion segmentation in our dataset. Our CNN was based on a 2D U-Net architecture [14], but with the encoding arm replaced with the ResNet-50 model [15], a widely used convolutional network with 50 layers that has been pre-trained on the ImageNet dataset [16]. The use of a pre-trained encoding network is a form of transfer learning, which can provide better performance when training on limited datasets [17], as the encoder is already tuned to represent low-level features (edges, textures, shapes, etc.). The model was built on the dynamic U-Net provided in the open-source fastai library [18], which is implemented in PyTorch [19], and was trained on a server with an NVIDIA Titan RTX GPU. The model used fastai’s default batch normalization layers and ReLU activations.

To further exploit transfer learning, we first trained our full model using data from the BraTS 2018 brain tumor segmentation challenge. Only the T2w-FLAIR images and manual segmentations from this dataset were used. From the publicly available training data of 285 cases, we divided the data into training/validation/testing (200/43/42) cases. The T2w-FLAIR images were reformatted as axial 2D slices with 120×120 resolution and trained using the whole-tumor ROI masks as the target label (excluding necrosis, but including both enhancing and non-enhancing regions, as in our data). The model was trained for 20 epochs using standard data augmentation methods (flip, rotate, zoom, affine warp, and scaling), a batch size of 64, a binary cross-entropy loss function, an Adam optimizer [20], and cyclic learning rates (one-cycle policy [21]) with a base learning rate of 4e-6.

The T2w-FLAIR images in the BraTS dataset differed from our internal images: they were originally acquired in different orientations and at different resolutions, and they were “skull-stripped” to remove the skull and skin. Skull-stripping was not performed on our data because it was found to cut off some peripheral tumors. Therefore, after training on BraTS data the model was fine-tuned by training on 36/8 (training/validation) cases from our internal dataset for an additional 20 epochs. After all training, a final test set of 8 cases not seen during training was evaluated.

2.4. Geometric Objective Function

For lesions of irregular shape, there is no objectively “correct” voxel placement. Those with expertise in MRS plan voxels based on a tradeoff between voxel size, partial volume effects, and spectral quality, which may depend on the specific disease or on the MR methods used. We developed an objective function to encode two primary considerations: the size of the MRS voxel and the portion of the voxel that contains lesion. We defined Vtarget as the volume of the intersection between the gold-standard lesion mask and the MRS voxel, and ftarget as the fraction of the voxel that contains lesion. If Vtarget is too small, the measurement will not have sufficient signal-to-noise ratio; if too large, the signal will be inhomogeneous and spectral quality will decrease. In contrast, MRS performance is maximized for ftarget ~1. Thus we propose an objective function consisting of the product of two Gaussian functions:

$$F_{\mathrm{obj}}(\theta) = \exp\left(-\frac{1}{2}\left(\frac{V_{\mathrm{target}} - \mu_V}{\sigma_V}\right)^{2}\right) \exp\left(-\frac{1}{2}\left(\frac{f_{\mathrm{target}} - \mu_f}{\sigma_f}\right)^{2}\right), \tag{1}$$

where (μV, σV) are the mean and standard deviation of the Vtarget objective distribution, (μf, σf) are the mean and standard deviation for ftarget, and θ are the nine geometric parameters defining the voxel placement (position, size, and rotation angles of the cuboid). Selection of the distribution parameters is described below in section 3.1.
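As a concrete illustration, Eq. (1) can be evaluated directly. The minimal Python sketch below uses the parameter values later chosen in Sect. 3.1 (μV = 8.5 mL, σV = 2 mL, μf = 1, σf = 0.25); the function name and argument defaults are ours, not from the original implementation:

```python
import math

def objective(v_target, f_target, mu_v=8.5, sigma_v=2.0, mu_f=1.0, sigma_f=0.25):
    """Score a candidate voxel from its lesion volume V_target (mL) and
    lesion fraction f_target, per Eq. (1): a product of two Gaussians."""
    term_v = math.exp(-0.5 * ((v_target - mu_v) / sigma_v) ** 2)
    term_f = math.exp(-0.5 * ((f_target - mu_f) / sigma_f) ** 2)
    return term_v * term_f

objective(8.5, 1.0)  # maximal score of 1.0 at the target volume and fraction
objective(4.0, 0.6)  # penalized for small lesion volume and low lesion fraction
```

The product form means a placement must do well on both criteria at once: excellence in volume cannot compensate for poor lesion fraction, and vice versa.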

The objective function was maximized using numerical discrete optimization in Matlab. For each iteration, the voxel coordinates were used to generate a 3D raster mask of the voxel location, which was compared with the raster lesion segmentation to calculate ftarget, Vtarget, and the objective function value. Starting with a small voxel placed in the lesion centroid, a single-parameter discrete 1D search was performed over each of the size, position, and angle parameters, and then repeated for 3 iterations.
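The search itself is a simple coordinate-wise procedure. The sketch below mimics its structure in Python (the original was implemented in Matlab); `score` stands in for the raster-mask objective, and the toy parameters and grids are ours for illustration:

```python
def coordinate_search(score, params, grids, n_iters=3):
    """Maximize `score` by repeated 1D discrete searches: each parameter is
    scanned over its grid while the others are held fixed, and the full sweep
    is repeated n_iters times (3 iterations, as in the text)."""
    params = list(params)
    for _ in range(n_iters):
        for i, grid in enumerate(grids):
            params[i] = max(grid, key=lambda v: score(params[:i] + [v] + params[i + 1:]))
    return params

# Toy usage: a separable quadratic peaked at (2, -1).
best = coordinate_search(lambda p: -(p[0] - 2) ** 2 - (p[1] + 1) ** 2,
                         [0, 0], [range(-5, 6), range(-5, 6)])
# best == [2, -1]
```

Coordinate-wise search is not guaranteed to find the global maximum of a non-separable objective, but starting from the lesion centroid gives it a sensible initialization, as described above.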

2.5. Performance Comparison

The manual and fully-automated voxel placements were compared by measuring ftarget, Vtarget, and Vvoxel (total voxel volume) for both methods on a per-case basis. Each method used the retrospectively drawn lesion mask as the gold-standard lesion definition for calculating the performance metrics. Statistical comparisons were performed with a 2-sided t-test.
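Given boolean raster masks for the lesion and a candidate voxel, the three metrics follow directly from mask intersection. A NumPy sketch (the function name and the per-element volume argument are our assumptions):

```python
import numpy as np

def placement_metrics(lesion_mask, voxel_mask, ml_per_element):
    """Return (f_target, V_target, V_voxel) for one case, given boolean
    raster masks and the volume of one raster element in mL."""
    v_voxel = voxel_mask.sum() * ml_per_element                   # total voxel volume
    v_target = (lesion_mask & voxel_mask).sum() * ml_per_element  # lesion volume inside the voxel
    f_target = v_target / v_voxel if v_voxel > 0 else 0.0         # fraction of voxel that is lesion
    return f_target, v_target, v_voxel
```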

3. Results

3.1. Objective Function Tuning

To select the objective function parameters, we calculated ftarget and Vtarget from each of the 60 manual MRS placements in our dataset. These are shown in Fig. 2a. From these, we inferred that the expert was seeking a Vtarget of ~8–8.5 mL. We therefore selected parameters for the prospective objective function to model this intent, setting μV = 8.5 mL and μf = 1. The σ values for the distributions were heuristically selected to produce an objective function that reflected the spectroscopist’s intent and provided steep gradients near the desired solution: σV = 2 mL and σf = 0.25. With these parameters, the prospective objective function is plotted in Fig. 2b.

Fig. 2 –

Voxel placement properties a) retrospectively observed from the manual MRS placements, and b) proposed for the geometric objective function.

3.2. Tumor Segmentation

With initial training of 20 epochs on the BraTS dataset (requiring ~4 h), the model produced a Sørensen–Dice [22] score of 0.81 on the validation dataset. Twenty additional epochs of training on our internal dataset (~1 h) gave a final Dice score of 0.88. Evaluating the trained model on our test dataset gave a mean Dice of 0.87, with inference requiring 5.2 s/slice. Examples of cases with representative segmentation performance are given in Fig. 3.
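The Sørensen–Dice score is a standard overlap measure between a predicted and a ground-truth mask. For reference, a minimal NumPy implementation (ours, not the evaluation code used in the study):

```python
import numpy as np

def dice(pred, truth):
    """Sørensen–Dice overlap between two binary masks:
    2 * |A ∩ B| / (|A| + |B|); 1.0 for identical masks, 0.0 for disjoint."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * (pred & truth).sum() / denom if denom else 1.0
```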

Fig. 3 –

Examples of lesion segmentation with our CNN. The top row shows T2-weighted FLAIR images; the bottom row shows correctly predicted regions in magenta, with overestimation in red and underestimation in blue.

3.3. Overall Performance

The overall system performs lesion segmentation and voxel optimization one volume at a time, with a stack of 155 2D T2w-FLAIR images as input. This was performed for all 44 cases in the training/validation set, and separately for the 8 test cases. The full calculation (not including file I/O) required an average of 21.7 s/case (range 16–35 s, stdev 3.7 s).

A plot of the ftarget and Vtarget for all 52 cases is given in Fig. 4. Compared to the manual voxel placement performance (Fig. 2a), the automatic method gives more consistent results for the majority of voxel placements.

Fig. 4 –

Voxel placement parameters for fully automatic placements in the training/validation and test datasets. Plotted on the same axes as Fig. 2a for comparison.

Mean values and standard deviations for ftarget, Vtarget, and Vvoxel are given in Table 1 for both the fully automated processing and the manually placed voxels. In the combined training + validation dataset, ftarget is larger (p=0.003) with the automatic method than with the manual method, indicating that those voxels are more precisely placed in the lesions and thus their spectra will be more representative of tumor metabolites. There were no significant differences in the other parameters. The trend of higher ftarget is also seen in the test dataset but was not significant with only n=8. The standard deviations are smaller for all parameters in both datasets with the automated method, suggesting greater consistency than manual voxel placement.

Table 1.

Comparison of performance metrics (mean and standard deviation) for manual vs. fully-automated voxel placement in the combined training + validation (n=44) and test (n=8) datasets.

                         ftarget mean [std]           Vtarget (mL) mean [std]    Vvoxel (mL) mean [std]
                         manual        automatic      manual       automatic     manual        automatic
Training + validation    0.813 [0.23]  0.926 [0.075]  7.47 [4.25]  7.85 [1.43]    9.14 [4.81]  8.47 [1.40]
Test                     0.690 [0.23]  0.740 [0.20]   7.30 [4.09]  6.45 [2.15]   11.32 [8.42]  8.75 [2.26]

Two examples comparing manual and automatic voxel placements are provided in Fig. 5.

Fig. 5 –

Two examples of manual (blue) and automatic (red) voxel placements. a) Example from the test set, where the manual voxel (ftarget=0.973, Vtarget=7.78 mL) includes a necrotic region that is avoided by the automatic voxel (ftarget=0.995, Vtarget=8.43 mL). b) Example from the CNN training set showing a manual voxel (ftarget=0.83, Vtarget=6.46 mL) that was unnecessarily rotated and placed too low in the tumor, compared to the automated voxel (ftarget=0.90, Vtarget=7.43 mL).

4. Discussion

The automated MRS voxel placement method presented here gives lesion coverage that is superior to manual placement by an expert. The time required to calculate a voxel placement is short enough for use in clinical trials or practice, and could be further reduced by software optimization or by replacing the discrete search with a separately trained CNN. We chose in this work to keep the objective function as a separate, analytical component so that it remains interpretable by MRS experts and can be more easily tailored to different voxel placement strategies.

While automatic voxel placement has been previously reported for normal brain regions [2327], to our knowledge this is the first system designed for placing MRS voxels inside pathologic lesions. This approach could readily be adapted for cancers in other organs (e.g., breast, liver), or for assessing other brain lesions (e.g., multiple sclerosis, abscesses).

This study had several limitations. Firstly, our institutional glioma dataset was small, particularly after partitioning out subsets for training and validating the CNN. Secondly, our results only assessed lesion coverage and not MRS performance, which may be impacted by other factors not considered in our model. Thirdly, all of the manual MRS voxel placements were performed by a single expert operator. All three of these limitations should be addressed with a larger study using manual voxel placements from multiple experts and using MRS metrics for assessing relative performance. Finally, while the 2D CNN showed acceptable performance, other deep networks (e.g., 3D U-Nets, DenseNets, etc.) may provide better performance and should be investigated.

5. Conclusion

In this work, we have demonstrated an automatic MRS voxel placement system that gives superior lesion coverage compared to traditional manual placement. This approach can help reduce the need for live MRS expertise during a scan, and may provide more consistent MRS measurements for clinical trials and routine practice.

References

1. Yan H, Parsons DW, Jin G, et al. (2009) IDH1 and IDH2 mutations in gliomas. N Engl J Med 360:765–773. doi:10.1056/NEJMoa0808710
2. Tanaka K, Sasayama T, Mizukawa K, et al. (2015) Combined IDH1 mutation and MGMT methylation status on long-term survival of patients with cerebral low-grade glioma. Clin Neurol Neurosurg 138:37–44. doi:10.1016/j.clineuro.2015.07.019
3. Nelson MT, Everson LI, Garwood M, et al. (2008) MR spectroscopy in the diagnosis and treatment of breast cancer. Semin Breast Dis 11:100–105. doi:10.1053/j.sembd.2008.03.004
4. Muruganandham M, Clerkin PP, Smith BJ, et al. (2014) 3-Dimensional magnetic resonance spectroscopic imaging at 3 Tesla for early response assessment of glioblastoma patients during external beam radiation therapy. Int J Radiat Oncol Biol Phys 90:181–189. doi:10.1016/j.ijrobp.2014.05.014
5. Laprie A, Catalaa I, Cassol E, et al. (2008) Proton magnetic resonance spectroscopic imaging in newly diagnosed glioblastoma: predictive value for the site of postradiotherapy relapse in a prospective longitudinal study. Int J Radiat Oncol Biol Phys 70:773–781. doi:10.1016/j.ijrobp.2007.10.039
6. Shim H, Wei L, Holder CA, et al. (2014) Use of high-resolution volumetric MR spectroscopic imaging in assessing treatment response of glioblastoma to an HDAC inhibitor. AJR Am J Roentgenol 203:W158–165. doi:10.2214/AJR.14.12518
7. Tietze A, Choi C, Mickey B, et al. (2018) Noninvasive assessment of isocitrate dehydrogenase mutation status in cerebral gliomas by magnetic resonance spectroscopy in a clinical setting. J Neurosurg 128:391–398. doi:10.3171/2016.10.JNS161793
8. Mescher M, Merkle H, Kirsch J, et al. (1998) Simultaneous in vivo spectral editing and water suppression. NMR Biomed 11:266–272.
9. Marjańska M, Lehéricy S, Valabrègue R, et al. (2013) Brain dynamic neurochemical changes in dystonic patients: a magnetic resonance spectroscopy study. Mov Disord 28:201–209. doi:10.1002/mds.25279
10. Yushkevich PA, Piven J, Hazlett HC, et al. (2006) User-guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability. Neuroimage 31:1116–1128. doi:10.1016/j.neuroimage.2006.01.015
11. Menze BH, Jakab A, Bauer S, et al. (2015) The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS). IEEE Trans Med Imaging 34:1993–2024. doi:10.1109/TMI.2014.2377694
12. Bakas S, Akbari H, Sotiras A, et al. (2017) Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features. Sci Data 4:1–13. doi:10.1038/sdata.2017.117
13. Bakas S, Reyes M, Jakab A, et al. (2019) Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge. arXiv:1811.02629
14. Ronneberger O, Fischer P, Brox T (2015) U-Net: Convolutional Networks for Biomedical Image Segmentation. In: Navab N, Hornegger J, Wells WM, Frangi AF (eds) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Springer International Publishing, pp 234–241
15. He K, Zhang X, Ren S, Sun J (2015) Deep Residual Learning for Image Recognition. arXiv:1512.03385
16. Russakovsky O, Deng J, Su H, et al. (2015) ImageNet Large Scale Visual Recognition Challenge. arXiv:1409.0575
17. Romero M, Interian Y, Solberg T, Valdes G (2019) Training Deep Learning models with small datasets. arXiv:1912.06761
18. Howard J, Gugger S (2020) Fastai: A Layered API for Deep Learning. Information 11:108. doi:10.3390/info11020108
19. Paszke A, Gross S, Massa F, et al. (2019) PyTorch: An Imperative Style, High-Performance Deep Learning Library. In: Wallach H, Larochelle H, Beygelzimer A, et al. (eds) Advances in Neural Information Processing Systems 32. Curran Associates, Inc., pp 8026–8037
20. Kingma DP, Ba J (2017) Adam: A Method for Stochastic Optimization. arXiv:1412.6980
21. Smith LN (2018) A disciplined approach to neural network hyper-parameters: Part 1 – learning rate, batch size, momentum, and weight decay. arXiv:1803.09820
22. Taha AA, Hanbury A (2015) Metrics for evaluating 3D medical image segmentation: analysis, selection, and tool. BMC Med Imaging 15:29. doi:10.1186/s12880-015-0068-x
23. Park YW, Deelchand DK, Joers JM, et al. (2018) AutoVOI: real-time automatic prescription of volume-of-interest for single voxel spectroscopy. Magn Reson Med 80:1787–1798. doi:10.1002/mrm.27203
24. Bian W, Li Y, Crane JC, Nelson SJ (2018) Fully automated atlas-based method for prescribing 3D PRESS MR spectroscopic imaging: Toward robust and reproducible metabolite measurements in human brain. Magn Reson Med 79:636–642. doi:10.1002/mrm.26718
25. Martínez-Ramón M, Gallardo-Antolín A, Cid-Sueiro J, et al. (2010) Automatic placement of outer volume suppression slices in MR spectroscopic imaging of the human brain. Magn Reson Med 63:592–600. doi:10.1002/mrm.22275
26. Ozhinsky E, Vigneron DB, Chang SM, Nelson SJ (2013) Automated prescription of oblique brain 3D magnetic resonance spectroscopic imaging. Magn Reson Med 69:920–930. doi:10.1002/mrm.24339
27. Yung K-T, Zheng W, Zhao C, et al. (2011) Atlas-based automated positioning of outer volume suppression slices in short-echo time 3D MR spectroscopic imaging of the human brain. Magn Reson Med 66:911–922. doi:10.1002/mrm.22887