. Author manuscript; available in PMC: 2024 Jul 18.
Published in final edited form as: Simul Synth Med Imaging. 2022 Sep 21;13570:101–111. doi: 10.1007/978-3-031-16980-9_10

Brain Lesion Synthesis via Progressive Adversarial Variational Auto-Encoder

Jiayu Huo 1,, Vejay Vakharia 2, Chengyuan Wu 3, Ashwini Sharan 3, Andrew Ko 4, Sébastien Ourselin 1, Rachel Sparks 1
PMCID: PMC7616255  EMSID: EMS197478  PMID: 39026926

Abstract

Laser interstitial thermal therapy (LITT) is a novel minimally invasive treatment used to ablate intracranial structures to treat mesial temporal lobe epilepsy (MTLE). Region of interest (ROI) segmentation before and after LITT would enable automated lesion quantification to objectively assess treatment efficacy. Deep learning techniques, such as convolutional neural networks (CNNs), are state-of-the-art solutions for ROI segmentation but require large amounts of annotated data during training. However, collecting large datasets from emerging treatments such as LITT is impractical. In this paper, we propose a progressive brain lesion synthesis framework (PAVAE) to expand both the quantity and diversity of the training dataset. Concretely, our framework consists of two sequential networks: a mask synthesis network and a mask-guided lesion synthesis network. To better employ extrinsic information to provide additional supervision during network training, we design a condition embedding block (CEB) and a mask embedding block (MEB) to encode the inherent conditions of masks into the feature space. Finally, a segmentation network is trained using raw and synthetic lesion images to evaluate the effectiveness of the proposed framework. Experimental results show that our method achieves realistic synthetic results and boosts the performance of down-stream segmentation tasks beyond traditional data augmentation techniques.

Keywords: Laser interstitial thermal therapy, Adversarial variational auto-encoder, Progressive lesion synthesis

1. Introduction

Mesial temporal lobe epilepsy (MTLE) is one of the most common brain diseases and affects millions of people worldwide [11]. First-line treatment for MTLE is anti-seizure medication, but up to 30% of patients do not achieve seizure control; in these patients, resective neurosurgery may be curative [15]. As a minimally invasive treatment, laser interstitial thermal therapy (LITT) can accurately locate and ablate target lesion structures within the brain [17]. LITT has been shown to be an effective treatment for MTLE, and ablation of specific structures can be predictive of seizure freedom [16]. Region of interest (ROI) segmentation needs to be performed to enable quantitative analyses of LITT [18] (e.g., lesion volume quantification and ablation volume estimation). However, manual delineation is inevitably time-consuming and requires domain expertise. Automated lesion segmentation could improve the speed and reliability of lesion segmentation for this task.

In the literature, some segmentation methods for the post-ablation area have already been explored. Ermiş et al. [2] developed Dense-UNet to segment resection cavities in glioblastoma multiforme patients. Pérez-García et al. [14] proposed an algorithm to simulate resections from preoperative MRI and utilized synthetic images to assist brain resection cavity segmentation. Although the segmentation performance appears satisfactory, it can be constrained by a small-scale dataset. Also, images generated by the rule-based resection simulation method can be less diverse, and imperfect synthetic results may compromise the performance of the segmentation model.

To mitigate the large demand for images when training CNNs, methods utilizing generative adversarial networks (GANs) [4] have been presented. Han et al. [6] generated 2D brain images with tumours from random noise to create more training samples. Kwon et al. [10] implemented a 3D α-GAN for brain MRI generation, including tumour and stroke lesion simulation. While these methods demonstrate the potential of GANs, they have some limitations. First, not all synthetic brain images have corresponding lesion masks, which means these methods may not be suitable for some down-stream tasks such as lesion segmentation. Additionally, these methods need extensive training samples to generate realistic results, which implies that the generalizability and robustness of these networks cannot be ensured when the number of training samples is limited. Recently, Zhang et al. [19] designed a lesion-aware data augmentation strategy to improve brain lesion segmentation performance. However, its effectiveness can still be limited by the number of training samples.

To address the aforementioned issues, we develop a novel progressive brain lesion synthesis framework based on an adversarial variational auto-encoder, and refer to it as PAVAE. Instead of simulating lesions directly, we decompose the task into two smaller sub-tasks (i.e., mask generation and lesion synthesis) to reduce the task difficulty. For mask generation, we utilize a 3D variational auto-encoder as the generator to avoid mode collapse. We adopt a WGAN [1] discriminator with the gradient penalty [5] to encourage the generator to give more distinct results. We also design a condition embedding block (CEB) to encode semantic information (i.e., lesion size) to guide mask simulation. For lesion generation, we utilize a similar structure but replace the CEB with a mask embedding block (MEB) to encode the shape information provided by masks to guide lesion synthesis. In the inference stage, we first sample from a Gaussian distribution to form the shape latent vector for mask simulation. Next, we combine the generated mask with an intensity latent vector sampled from a Gaussian distribution, and feed them into the lesion synthesis network to create the lesion image. Finally, we create new post-LITT ablation MR images from the generated lesion images. We train a lesion segmentation model using the framework nnUNet [7] to show the effectiveness of our method in synthesizing training images.

2. Methodology

Overall, the brain lesion synthesis task is decomposed into two smaller sub-tasks as described in Section 2.1. First, we design an adversarial variational auto-encoder to generate binary masks. To assist mask generation, we present a CEB to help encode mask conditions into the feature space, so that mask simulation can be guided by high-level semantic information. We adopt a similar architecture to generate lesions guided by binary lesion masks. Lesion masks are embedded into the feature space using an MEB. These additional blocks are described in Section 2.2. Finally, all models are trained using a four-term loss function as described in Section 2.3 to ensure reconstructions are accurate, latent spaces are approximately Gaussian, and the real and simulated distributions are similar.

2.1. Model Architecture

Fig. 1 illustrates our progressive adversarial variational auto-encoder for brain lesion synthesis. We design a progressive 3D variational auto-encoder to approximate both shape and intensity information of post-LITT ablation lesions as Gaussian distributions. In addition, a discriminator following the generator encourages generated images to be more realistic. The kernel size of all convolutional layers is set to 3 × 3 × 3, and Instance Normalization (InstanceNorm) and Leaky Rectified Linear Unit (LeakyReLU) are used after each convolutional layer. For the last convolutional layer, we use a Sigmoid function to scale output values between 0 and 1.
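The repeated convolutional unit described above can be sketched in PyTorch; channel counts and input sizes below are illustrative, not the paper's actual configuration:

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # One repeated unit of the encoder/decoder: 3x3x3 conv,
    # InstanceNorm, then LeakyReLU (channel counts are illustrative).
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

class DecoderHead(nn.Module):
    # Final layer: a conv followed by Sigmoid to scale outputs to [0, 1].
    def __init__(self, in_ch):
        super().__init__()
        self.out = nn.Sequential(
            nn.Conv3d(in_ch, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.out(x)

x = torch.randn(1, 1, 16, 16, 16)   # toy 3D input
h = conv_block(1, 8)(x)
y = DecoderHead(8)(h)
```

Padding of 1 with a 3 × 3 × 3 kernel preserves spatial resolution, so the Sigmoid head emits a volume the same size as its input.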

Fig. 1. The pipeline of our proposed framework for progressive brain lesion synthesis.

Fig. 1

Our method contains two separate networks with similar structures for (a) mask simulation and (b) lesion synthesis, respectively. For the inference (c), we sample shape latent vectors and intensity latent vectors from Gaussian distributions to generate new lesions.

New lesion synthesis is performed as shown in Fig. 1 (c). We first randomly sample from a Gaussian distribution to build shape latent vectors, which are input into D_S to generate new masks. Next, new masks and intensity latent vectors sampled from a Gaussian distribution are used as input to D_I to generate new lesions. Here, new masks control the lesion shapes, while intensity latent vectors control the intensity patterns.
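The two-stage inference above can be sketched as below, with `mask_decoder` and `lesion_decoder` as hypothetical stand-ins for the trained decoders D_S and D_I, and the 0.5 binarization threshold as an assumption:

```python
import torch

def synthesize_lesion(mask_decoder, lesion_decoder, latent_dim=128):
    # Stage 1: sample a shape latent vector and decode it into a mask;
    # the decoder's Sigmoid output is binarized (threshold is assumed).
    z_shape = torch.randn(1, latent_dim)
    mask = (mask_decoder(z_shape) > 0.5).float()
    # Stage 2: sample an intensity latent vector; the lesion decoder is
    # conditioned on the generated mask (via the MEB in the paper).
    z_int = torch.randn(1, latent_dim)
    lesion = lesion_decoder(z_int, mask)
    return mask, lesion
```

Because the two latent vectors are sampled independently, new shape/intensity combinations not seen in training can be generated.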

2.2. Condition and Mask Embedding Blocks

To provide additional guidance to the models and generate better results, we propose two separate modules, shown in Fig. 2: a CEB and an MEB. For the MEB, we follow the approach presented in SPADE [12]. First, masks are resized to the feature map resolution using nearest-neighbor downsampling. Next, learned scale and bias parameters are produced by three 3D convolutional layers. Finally, the normalized feature maps are modulated by the learned scale and bias parameters. For the CEB, the structure is similar to the MEB, but all 3D convolutional layers are replaced with linear layers since the input conditions are vectors.
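A minimal 3D SPADE-style mask embedding block following the steps above — resize mask, predict scale and bias with three convolutions, modulate the normalized features. The hidden width is an illustrative choice, not the paper's:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskEmbeddingBlock(nn.Module):
    # SPADE-style modulation: the mask is resized to the feature-map
    # resolution, then three small convs predict per-voxel scale (gamma)
    # and bias (beta) that modulate the normalized features.
    def __init__(self, feat_ch, hidden=16):
        super().__init__()
        self.norm = nn.InstanceNorm3d(feat_ch, affine=False)
        self.shared = nn.Sequential(
            nn.Conv3d(1, hidden, 3, padding=1), nn.ReLU())
        self.to_gamma = nn.Conv3d(hidden, feat_ch, 3, padding=1)
        self.to_beta = nn.Conv3d(hidden, feat_ch, 3, padding=1)

    def forward(self, feat, mask):
        # Nearest-neighbor downsampling to the feature resolution.
        mask = F.interpolate(mask, size=feat.shape[2:], mode='nearest')
        h = self.shared(mask)
        return self.norm(feat) * (1 + self.to_gamma(h)) + self.to_beta(h)
```

The CEB would replace the `Conv3d` layers with `nn.Linear` layers acting on a condition vector, as described in the text.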

Fig. 2. The structure of (a) condition embedding block (CEB) and (b) mask embedding block (MEB).

Fig. 2

2.3. Loss Functions

To optimize the encoder, decoder, and discriminator so that reasonable masks and realistic lesions are generated, four loss functions are utilized in our work: the reconstruction loss ℒRec, the KL divergence loss ℒKL, and the GAN-specific losses ℒG and ℒD. First, the reconstruction loss ℒRec is used to ensure outputs have high fidelity to the ground truth images. The Kullback-Leibler (KL) loss ℒKL is imposed on the model to minimize the KL divergence between the intractable posterior distribution and the prior distribution (i.e., a Gaussian distribution in latent space). Furthermore, we add Wasserstein loss functions (ℒG and ℒD) to the GAN in order to prevent results generated by the decoder from being fuzzy.

The reconstruction loss implemented in our framework is mean squared error (MSE) defined as:

\mathcal{L}_{Rec} = \sum_i \left\| x_g^{(i)} - x_r^{(i)} \right\|^2, \tag{1}

where x_r^(i) refers to the i-th real image within a mini-batch, and x_g^(i) denotes the i-th generated image obtained from the decoder. MSE encourages real and synthetic images to be globally similar. However, synthetic images may lose some detailed information, which can make them appear indistinct.

The KL loss is defined as the KL divergence 𝒟KL between the posterior distribution q (z|·) and the prior distribution p (z), which is formulated as:

\mathcal{L}_{KL} = \sum_i D_{KL}\!\left[ q\big(z \mid x_r^{(i)}\big) \,\big\|\, p(z) \right], \tag{2}

where q(z|x_r^(i)) is the posterior latent distribution conditioned on x_r^(i), and p(z) is a normal distribution for the latent vector z. By minimizing the KL divergence between the two distributions, the conditional distribution of the latent vector z approximates a Gaussian distribution.
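When the posterior is a diagonal Gaussian N(μ, σ²) and the prior is standard normal, this KL term has the usual closed form from the VAE literature, which can be computed directly from the encoder's outputs:

```python
import torch

def kl_to_standard_normal(mu, logvar):
    # Closed-form KL( N(mu, sigma^2 I) || N(0, I) ) for a diagonal
    # Gaussian posterior, summed over latent dimensions:
    #   -0.5 * sum(1 + log sigma^2 - mu^2 - sigma^2)
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
```

The term vanishes exactly when μ = 0 and σ = 1, i.e., when the posterior already matches the prior.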

To avoid generating images with blurriness and instability during training, we deploy the loss functions from WGAN [1] instead of the original GAN. The Wasserstein loss can be defined as:

\mathcal{L}_D = \mathbb{E}_{x_g \sim \mathbb{P}_g}\!\left[ D(x_g) \right] - \mathbb{E}_{x_r \sim \mathbb{P}_r}\!\left[ D(x_r) \right] + \lambda\, \mathbb{E}_{\hat{x} \sim \mathbb{P}_{\hat{x}}}\!\left[ \big( \left\| \nabla_{\hat{x}} D(\hat{x}) \right\|_2 - 1 \big)^2 \right], \tag{3}
\mathcal{L}_G = -\mathbb{E}_{x_g \sim \mathbb{P}_g}\!\left[ D(x_g) \right]. \tag{4}

Compared with the original GAN, which uses a discriminator to differentiate whether images are real or fake, WGAN uses the Wasserstein distance to directly estimate the difference between the two distributions ℙr and ℙg. This Wasserstein distance can be formulated as D_W = 𝔼_{x_r∼ℙ_r}[D(x_r)] − 𝔼_{x_g∼ℙ_g}[D(x_g)], where 𝔼 denotes the expectation and D(·) denotes the discriminator. In addition, we further include a gradient penalty regularization [5] to constrain the discriminator D to satisfy the 1-Lipschitz condition, so that training remains stable. The gradient penalty regularization is formalized as λ𝔼_{x̂∼ℙ_{x̂}}[(‖∇_{x̂}D(x̂)‖₂ − 1)²], where x̂ denotes a random interpolation between real samples and generated samples, and λ is a weighting factor. In our experiments, we fix the value of λ to 10.
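The gradient penalty term of Eq. (3) can be sketched as below — a generic WGAN-GP implementation under the stated interpolation scheme, not the authors' code:

```python
import torch

def gradient_penalty(D, x_real, x_fake, lam=10.0):
    # Random interpolation x_hat between real and generated samples,
    # then penalize deviation of ||grad_{x_hat} D(x_hat)||_2 from 1.
    eps = torch.rand(x_real.size(0), *([1] * (x_real.dim() - 1)))
    x_hat = (eps * x_real + (1 - eps) * x_fake).requires_grad_(True)
    d_out = D(x_hat)
    grads = torch.autograd.grad(outputs=d_out.sum(), inputs=x_hat,
                                create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lam * ((grad_norm - 1) ** 2).mean()
```

`create_graph=True` keeps the gradient computation differentiable so the penalty itself can be backpropagated through the discriminator's parameters.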

3. Experiments

3.1. Dataset

In this study, 47 T1-weighted MRI scans of 47 patients were collected from a high-volume epilepsy surgery center with established expertise in using LITT for MTLE. Consecutive patients were included if they received LITT for MTLE and had concordant semiology, scalp electroencephalography (EEG), and structural MRI features of mesial temporal sclerosis, or had seizure onset confirmed within the hippocampus following stereo-EEG (SEEG) investigation. Ethical approval for the study was provided by the institutional review board for the retrospective use of anonymized imaging. All T1-weighted images are first aligned to the MNI152 brain template [3]. A random split of the dataset is performed, keeping 20% (10 cases) of the whole dataset as the test set, with the remaining 80% (37 cases) used as the training set.

3.2. Implementation Details

Our framework is implemented in PyTorch 1.10.0 [13]. For network training, the encoder and decoder layers are treated together as the generator and optimized jointly. To optimize the generator and discriminator networks, we use two Adam optimizers [8]. The initial learning rate is set to 5e-5, and the batch size is set to 13 due to GPU memory limitations. Each model is trained for 1000 epochs individually using only data in the training set. For input images, we extract a 64 × 64 × 64 cube from raw MRI scans corresponding to an ROI containing the mask, to ensure the entire lesion area is included within the image.
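The 64³ cube extraction can be sketched as follows; the exact cropping rule (centering on the mask's bounding box, clamping at volume borders) is an assumption, since the paper only states that the cube contains the entire lesion:

```python
import numpy as np

def extract_roi_cube(image, mask, size=64):
    # Crop a size^3 cube centered on the lesion mask's bounding box,
    # clamped so the cube stays fully inside the volume.
    coords = np.argwhere(mask > 0)
    center = (coords.min(0) + coords.max(0)) // 2
    half = size // 2
    starts = [int(np.clip(c - half, 0, s - size))
              for c, s in zip(center, image.shape)]
    sl = tuple(slice(st, st + size) for st in starts)
    return image[sl]
```

Applying the same slices to both image and mask yields paired training cubes for the synthesis networks.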

3.3. Evaluation Metrics

All metrics are evaluated and reported on the test set. First, we evaluate the lesion synthesis performance using three metrics, i.e., peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and normalized mean square error (NMSE). Moreover, to prove that generated brain lesions can further boost the performance of down-stream tasks, we use four metrics to measure brain lesion segmentation performance: Dice coefficient, Jaccard index, the average surface distance (ASD), and the 95% Hausdorff Distance (95HD).
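PSNR and NMSE can be computed directly as below; SSIM needs a windowed implementation (e.g., scikit-image's `structural_similarity`) and is omitted. The `data_range=1.0` choice assumes images scaled to [0, 1], matching the Sigmoid output:

```python
import numpy as np

def psnr(x_real, x_gen, data_range=1.0):
    # Peak signal-to-noise ratio in dB for images in [0, data_range].
    mse = np.mean((x_real - x_gen) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def nmse(x_real, x_gen):
    # Normalized MSE: squared error relative to the real image's energy.
    return np.sum((x_real - x_gen) ** 2) / np.sum(x_real ** 2)
```

Higher PSNR and lower NMSE indicate synthetic images closer to the ground truth.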

3.4. Experimental Results

Comparison of Lesion Synthesis Performance

We first qualitatively compare the synthetic results of our framework with existing methods. Specifically, we employ 3D VAE and 3D VAE w/ WGAN-GP as baseline models. For 3D VAE, only ℒRec and ℒKL are utilized for model training. For 3D VAE w/ WGAN-GP, the structure is similar to the lesion synthesis network and all loss functions are utilized for training, but the MEB is not included. Note that all models are implemented with 3D convolution and 3D InstanceNorm layers. Triplanar views of synthetic lesions are shown in Fig. 3. Here, 3D VAE w/ WGAN-GP refers to a 3D VAE followed by a WGAN-GP discriminator; PAVAE (Syn Mask) indicates that lesion synthesis uses generated masks derived from the mask synthesis network; and PAVAE (Real Mask) indicates that real masks are utilized to guide lesion synthesis. As can be seen in Fig. 3, 3D VAE always generates fuzzy lesion images. Also, small lesions appear diffuse, indicating the model has trouble simulating small lesions. When a WGAN-GP discriminator is added to 3D VAE, results are clearer for large lesions; however, it still cannot simulate small lesions. For our model, when synthetic masks guide lesion generation, i.e., PAVAE (Syn Mask), we observe that even small lesions are successfully generated. Utilizing real masks for guidance results in synthetic lesions that are closest to the ground truth among all compared methods (the rightmost column in Fig. 3). This highlights that the lesion synthesis network can generate realistic image intensity when provided a realistic lesion mask.

Fig. 3. Qualitative comparison for different generative models.

Fig. 3

Quantitative results shown in Table 1 indicate that neither 3D VAE nor 3D VAE w/ WGAN-GP can achieve high SSIM and low NMSE simultaneously. Our model obtains relatively good synthetic results using only generated masks. If we replace generated masks with real masks, the final results achieve the highest PSNR and SSIM and the lowest NMSE among all methods.

Table 1. Quantitative comparison of synthetic results for different generative models.

Method                   PSNR [dB]   SSIM [%]   NMSE
3D VAE [9]               21.40       10.18      76.34
3D VAE w/ WGAN-GP [5]    22.05       92.15      152.87
PAVAE (Syn Mask)         23.67       94.90      76.98
PAVAE (Real Mask)        32.74       99.29      15.68

Comparison of Lesion Segmentation Performance

To demonstrate the effectiveness of the synthetic results generated by our framework, we train a brain lesion segmentation model based on nnUNet. For the training and test set split, we follow the same split as the lesion synthesis task. We generate 100 synthetic lesion images using CarveMix [19] and PAVAE individually, and combine them with raw images to create new training datasets. We use Dice loss and cross-entropy loss to train nnUNet for 200 epochs. Quantitative and qualitative results are shown in Table 2 and Fig. 4, respectively. Here, NoDA means no data augmentation strategy is employed and only the real training dataset (37 samples) is used to train the model. TDA means the traditional data augmentation strategies implemented in nnUNet, including random flips, rotations, etc. As shown in Table 2, our method has the best performance on three metrics (Dice, Jaccard, 95HD); on the final metric, ASD, only CarveMix has a slightly lower value. Furthermore, in Fig. 4, we observe that all competing methods over-segment the LITT ablation volume. Our method yields accurate segmentation results, which most closely resemble the expert annotation.
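The Dice + cross-entropy objective mentioned above can be sketched as below; `dice_loss` is a generic soft-Dice formulation, not nnUNet's exact implementation, and the smoothing term `eps` is an assumed detail:

```python
import torch

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss over a binary probability map: 1 - Dice overlap.
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def seg_loss(pred, target):
    # Combined Dice + binary cross-entropy segmentation objective.
    bce = torch.nn.functional.binary_cross_entropy(pred, target)
    return dice_loss(pred, target) + bce
```

The Dice term handles the strong class imbalance of small lesions, while cross-entropy provides dense per-voxel gradients.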

Table 2. Comparison of segmentation results using different data augmentation techniques during training.

Method             Dice [%]   Jaccard [%]   ASD [voxel]   95HD [voxel]
NoDA [7]           66.69      51.19         1.17          3.52
TDA [7]            72.25      57.67         1.08          3.15
CarveMix [19]      73.29      58.77         0.97          3.24
PAVAE (Syn Mask)   74.18      59.95         1.00          2.77
Fig. 4. Segmentation results from different training datasets for nnUNet models.

Fig. 4

4. Discussion and Conclusion

Building a 3D generative model faces several problems, the biggest being mode collapse due to limited training samples and increased computational complexity compared with 2D generative models. To tackle these issues, we have presented a progressive adversarial variational auto-encoder for brain lesion synthesis, which can generate reasonable masks and realistic brain lesions in a step-wise fashion. We further develop two types of blocks (i.e., the CEB and MEB) to utilize both semantic and shape information to facilitate lesion synthesis.

Experimental results show that our framework can create high-fidelity brain lesions and boost down-stream segmentation model training compared with existing methods. However, as can be seen in the quantitative results (Table 1), ground truth masks still yield more realistic synthetic lesion images, indicating room for improvement in mask generation. In addition, all data used in this study came from a single center; further validation is required to evaluate the method's effectiveness on multi-center data, especially data acquired on different MRI scanners.

Acknowledgement

This work was supported by Centre for Doctoral Training in Surgical and Interventional Engineering at King’s College London. This research was funded in whole, or in part, by the Wellcome Trust [218380/Z/19/Z, WT203148/Z/16/Z]. For the purpose of open access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. This research was supported by the UK Research and Innovation London Medical Imaging & Artificial Intelligence Centre for Value Based Healthcare. The research was funded/supported by the National Institute for Health Research (NIHR) Biomedical Research Centre based at Guy’s and St Thomas’ NHS Foundation Trust and King’s College London and supported by the NIHR Clinical Research Facility (CRF) at Guy’s and St Thomas’. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.

References

  • 1.Arjovsky M, Chintala S, Bottou L. Wasserstein generative adversarial networks; International conference on machine learning; 2017. pp. 214–223. [Google Scholar]
  • 2.Ermiş E, Jungo A, Poel R, Blatti-Moreno M, Meier R, Knecht U, Aebersold DM, Fix MK, Manser P, Reyes M, et al. Fully automated brain resection cavity delineation for radiation target volume definition in glioblastoma patients using deep learning. Radiation oncology. 2020;15(1):1–10. doi: 10.1186/s13014-020-01553-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Fonov V, Evans A, McKinstry R, Almli CR, Collins D. Unbiased nonlinear average age-appropriate brain templates from birth to adulthood. NeuroImage. 2009;47 [Google Scholar]
  • 4.Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y. Generative adversarial nets. Advances in neural information processing systems. 2014;27 [Google Scholar]
  • 5.Gulrajani I, Ahmed F, Arjovsky M, Dumoulin V, Courville AC. Improved training of wasserstein gans. Advances in neural information processing systems. 2017;30 [Google Scholar]
  • 6.Han C, Hayashi H, Rundo L, Araki R, Shimoda W, Muramatsu S, Furukawa Y, Mauri G, Nakayama H. Gan-based synthetic brain mr image generation; 2018 IEEE 15th international symposium on biomedical imaging (ISBI 2018); 2018. pp. 734–738. [Google Scholar]
  • 7.Isensee F, Jaeger PF, Kohl SA, Petersen J, Maier-Hein KH. nnu-net: a self-configuring method for deep learning-based biomedical image segmentation. Nature methods. 2021;18(2):203–211. doi: 10.1038/s41592-020-01008-z. [DOI] [PubMed] [Google Scholar]
  • 8.Kingma DP, Ba J. Adam: A method for stochastic optimization. arXiv preprint. 2014:arxiv:1412.6980 [Google Scholar]
  • 9.Kingma DP, Welling M. Auto-encoding variational bayes. arXiv preprint. 2013:arXiv:1312.6114 [Google Scholar]
  • 10.Kwon G, Han C, Kim Ds. Generation of 3d brain mri using auto-encoding generative adversarial networks; International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer; 2019. pp. 118–126. [Google Scholar]
  • 11.Nevalainen O, Ansakorpi H, Simola M, Raitanen J, Isojärvi J, Artama M, Auvinen A. Epilepsy-related clinical characteristics and mortality: a systematic review and meta-analysis. Neurology. 2014;83(21):1968–1977. doi: 10.1212/WNL.0000000000001005. [DOI] [PubMed] [Google Scholar]
  • 12.Park T, Liu MY, Wang TC, Zhu JY. Semantic image synthesis with spatially-adaptive normalization; Proceedings of the IEEE/CVF conference on computer vision and pattern recognition; 2019. pp. 2337–2346. [Google Scholar]
  • 13.Paszke A, Gross S, Massa F, Lerer A, Bradbury J, Chanan G, Killeen T, Lin Z, Gimelshein N, Antiga L, et al. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems. 2019;32 [Google Scholar]
  • 14.Pérez-García F, Dorent R, Rizzi M, Cardinale F, Frazzini V, Navarro V, Essert C, Ollivier I, Vercauteren T, Sparks R, et al. A self-supervised learning strategy for postoperative brain cavity segmentation simulating resections. International Journal of Computer Assisted Radiology and Surgery. 2021;16(10):1653–1661. doi: 10.1007/s11548-021-02420-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Rosenow F, Lüders H. Presurgical evaluation of epilepsy. Brain. 2001;124(9):1683–1700. doi: 10.1093/brain/124.9.1683. [DOI] [PubMed] [Google Scholar]
  • 16.Satzer D, Tao JX, Warnke PC. Extent of parahippocampal ablation is associated with seizure freedom after laser amygdalohippocampotomy. Journal of Neurosurgery. 2021;135(6):1742–1751. doi: 10.3171/2020.11.JNS203261. [DOI] [PubMed] [Google Scholar]
  • 17.Sun XR, Patel NV, Danish SF. Tissue ablation dynamics during magnetic resonance–guided, laser-induced thermal therapy. Neurosurgery. 2015;77(1):51–58. doi: 10.1227/NEU.0000000000000732. [DOI] [PubMed] [Google Scholar]
  • 18.Vakharia VN, Sparks R, Li K, O’Keeffe AG, Miserocchi A, McEvoy AW, Sperling MR, Sharan A, Ourselin S, Duncan JS, et al. Automated trajectory planning for laser interstitial thermal therapy in mesial temporal lobe epilepsy. Epilepsia. 2018;59(4):814–824. doi: 10.1111/epi.14034. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Zhang X, Liu C, Ou N, Zeng X, Xiong X, Yu Y, Liu Z, Ye C. Carvemix: A simple data augmentation method for brain lesion segmentation; International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer; 2021. pp. 196–205. [Google Scholar]
