Abstract
Mueller matrix polarimetry (MMP) provides valuable structural insights into tissue and holds promise for medical diagnostics. However, its clinical adoption is hindered by labor-intensive data collection and annotation. This study examines the use of MMP data collected in reflection from ex vivo human brain tissue to identify neoplastic regions. Using a custom-built single-wavelength MMP imaging system, we compare deep learning models trained on Mueller matrix measurements against Lu-Chipman feature maps. Our networks achieve segmentation accuracy comparable to multi-spectral polarimetry, highlighting the potential of real-time MMP for brain tumor differentiation. We further provide a qualitative analysis discussing challenges and opportunities for neurosurgical MMP applications.
1. Introduction
Mueller matrix polarimetry (MMP) is a powerful optical technique that enables precise characterization of how a sample interacts with polarized light. By providing detailed insights into the structural properties of materials and tissues, MMP has found applications across a wide range of fields, including materials science [1,2], remote sensing [3,4] and biomedical imaging [5,6].
Recent advancements in MMP have demonstrated its potential in medical applications, particularly in differentiating tissue characteristics [7]. For instance, MMP has been successfully employed in diagnosing cervical pre-cancer [8–10], detecting pancreatic diseases [11], monitoring the risk of preterm birth [12], and analyzing brain fiber tracts [13]. Building on these advances, our group began exploring the potential of MMP to differentiate brain tumor from healthy tissue, enabling more accurate resection of neoplastic tissue.
Traditional approaches to MMP data analysis often rely on decomposition methods, such as the Lu-Chipman algorithm [14], which is widely adopted to extract physically interpretable features from Mueller matrices [15–17]. More recently, machine learning has been leveraged to enhance polarimetric data analysis by bypassing manual feature extraction [18]. Notably, combining multispectral imaging with polarimetry has improved binary tumor classification, demonstrating the potential of multimodal approaches for tissue differentiation [11]. At the same time, hyperspectral imaging methods have also been investigated for brain tumor identification [19–21]. However, many hyperspectral studies primarily emphasize spectroscopic analysis or focus on tissue types beyond the brain, leaving open questions about the suitability of polarimetry for intraoperative neurosurgical applications.
Despite these advancements, translating MMP to intraoperative use faces several challenges. In real-time surgical settings, the efficient acquisition and high-quality processing of MMP data are crucial for ensuring reliable and accurate imaging outcomes [22]. Current polarimetric data acquisition methods often struggle to balance rapid data collection with the computational efficiency needed for large-scale processing and accurate, interpretable classifications. The absence of fast and reliable classification pipelines remains a key barrier to integrating MMP into routine clinical workflows.
Furthermore, a significant gap exists in applying MMP for brain tumor image segmentation, particularly when dealing with ex-vivo, unfixed brain specimens. Unlike other tissues, brain tissue presents unique imaging challenges, including its intricate anatomical structure and two distinct tissue types, gray and white matter, the latter exhibiting birefringence due to its microstructure, which adds to the labor-intensive process of acquiring annotated data [17]. While MMP generates rich information about a tissue's optical properties, it also produces large amounts of data, requiring intensive processing for Mueller matrix decomposition, feature extraction, and real-time inference, posing computational bottlenecks that complicate its adoption for real-time applications. Addressing these limitations is critical for advancing polarimetric imaging in neurosurgery and supporting surgeons with precise, intraoperative decision-making tools.
This study investigates the potential of MMP as a tool for brain tumor detection by leveraging machine learning models applied to different MMP input data representations. Using our custom-built polarimetric imaging device and annotations from histology, we evaluate the ability of different MMP input data representations to differentiate normal and tumor regions in ex vivo, fresh human brain tissue. Specifically, we investigate the trade-off between image segmentation accuracy and computational efficiency by contrasting hand-crafted Lu-Chipman (LC) features with Mueller matrix (MM) input data. Our results show that the standard U-Net architecture [23] enables rapid and accurate distinction between normal and brain tumor tissue using monochromatic MMP data. These insights lay the groundwork for integrating polarimetry into real-time medical imaging systems, potentially supporting advanced neurosurgical decision-making. To facilitate reproducibility and further research, we make our codebase for semantic segmentation publicly available [33]. Additionally, our code for Mueller matrix and Lu-Chipman decomposition can be accessed [35].
The remainder of this paper is organized as follows: Section 2 outlines the methodology, including data pre-processing and machine learning models. Section 3 presents experimental results, including segmentation performance and computational efficiency across different augmentation strategies, and discusses the implications of the findings, emphasizing the potential of polarimetry in real-time medical imaging applications. Finally, Section 4 concludes the paper and highlights directions for future research.
2. Methods
Our methodology is designed to maximize the potential of MMP for brain tumor tissue identification. The overall pipeline consists of three key stages (see Fig. 1): (a) data acquisition and augmentation, (b) Mueller matrix computation and LC feature extraction, as well as (c) training a segmentation network for brain tumor tissue classification.
Fig. 1. Flow chart for segmentation of polarimetric images.
The processing pipeline is split into blocks taking care of the polarimetric augmentations [24], LC decompositions [25], and semantic segmentation [26], respectively. Based on the raw intensity images and calibration data we calculate the MMs. Then we either pass Mueller matrix feed-forward (MMFF) data or LC decomposed features as input to the segmentation network.
First, we describe the dataset and the calculation of Mueller matrix images of ex vivo brain tissue using data captured with a custom-built polarimetric imaging instrument [15,17,27]. Next, we explain how the data is processed and compare two approaches: using precomputed LC features versus using raw Mueller matrix data as input. Finally, we outline the training process, including model selection, loss design, hyper-parameter tuning, and the role of isometric transformations [28] in enhancing dataset diversity and mitigating overfitting.
2.1. Dataset
The dataset consists of polarimetric images, each with a resolution of 388 × 516 pixels, acquired at a wavelength of 550 nm using a custom-built polarimetric imaging system [15]. The dataset comprises Mueller matrix (MM) images of 17 tumor specimens collected after neurosurgical removal and 12 samples of tumor-free brain tissue, obtained from autopsies of individuals without neuropathological findings. For brevity, we use the term healthy to refer to non-tumor tissue throughout this manuscript. Ground truth (GT) annotations were derived from hematoxylin and eosin (H&E) stained histological sections and manually labeled by expert neuropathologists (co-authors of this work), following clinical annotation protocols consistent with prior work [17]. Regions were categorized as healthy, tumor, white matter (WM), or gray matter (GM), based on established neuroanatomical landmarks and tissue architecture visible in histology and reflected intensity images. Histological annotations were co-registered to polarimetric images using anatomical landmarks to ensure spatial alignment, and were collaboratively reviewed to ensure labeling consistency. Non-tissue areas were excluded and labeled as background. Table 1 provides the pixel-wise distribution of the annotated tissue classes.
Table 1. Number of pixels per class for white matter (WM) and gray matter (GM).
| | Background | Healthy WM | Tumor WM | Healthy GM | Tumor GM |
|---|---|---|---|---|---|
| Total | 2,310,655 | 670,433 | 1,192,790 | 715,452 | 511,102 |
To ensure robust model evaluation, we employ 4-fold cross-validation, with a fixed validation set across all folds for consistency across experiments. Each fold comprises polarimetric images of four tumor samples and three healthy samples, while the polarimetric images of remaining samples are reserved exclusively for validation. This approach ensures a balanced representation of tumor and healthy samples during training and evaluation.
For all dataset splits, we focus exclusively on labeled pixels, removing labels that meet specific criteria to enhance the quality of the data. Background pixels are discarded to direct the network’s attention toward distinguishing between relevant tissue types, such as tumor and healthy white matter regions. We apply a time-efficient physical realizability probing and filtering method [29], which uses the coefficients of the characteristic polynomial of the covariance matrix to identify invalid Mueller matrix measurements. Specifically, pixels associated with negative values of those coefficients indicate non-physical matrices, which are masked and excluded from the loss during training and testing. This filtering step removes approximately 5% of pixels per image and ensures that only physically meaningful data are passed to the network. Additionally, labels for pixels corresponding to saturated intensity values are also excluded to minimize the influence of artifacts and ensure reliable predictions.
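The coefficient-based realizability probe can be illustrated with a short sketch. This is an illustration of the principle rather than the exact implementation of [29]: we build the Hermitian coherency matrix H from each Mueller matrix, obtain the characteristic-polynomial coefficients from traces of powers of H via Newton's identities, and accept the measurement only if all coefficients are non-negative (which, for a Hermitian matrix, holds exactly when all eigenvalues are non-negative). The Pauli-basis convention used here is one common choice and is an assumption on our part.

```python
import numpy as np

# Pauli-basis matrices (sigma_0 .. sigma_3), one common convention.
SIGMA = np.array([
    [[1, 0], [0, 1]],
    [[1, 0], [0, -1]],
    [[0, 1], [1, 0]],
    [[0, -1j], [1j, 0]],
], dtype=complex)


def coherency_matrix(M):
    """Hermitian coherency matrix H of a 4x4 Mueller matrix M."""
    H = np.zeros((4, 4), dtype=complex)
    for i in range(4):
        for j in range(4):
            H += M[i, j] * np.kron(SIGMA[i], SIGMA[j].conj())
    return H / 4.0


def is_physical(M, tol=1e-9):
    """Probe physical realizability without an eigendecomposition.

    The elementary symmetric polynomials e1..e4 of the (real) eigenvalues
    of H are recovered from traces of powers of H via Newton's identities;
    all of them are non-negative iff all eigenvalues are non-negative.
    """
    H = coherency_matrix(M)
    p = [np.trace(np.linalg.matrix_power(H, k)).real for k in range(1, 5)]
    e1 = p[0]
    e2 = (e1 * p[0] - p[1]) / 2.0
    e3 = (e2 * p[0] - e1 * p[1] + p[2]) / 3.0
    e4 = (e3 * p[0] - e2 * p[1] + e1 * p[2] - p[3]) / 4.0
    return all(e >= -tol for e in (e1, e2, e3, e4))
```

In the pipeline, pixels whose Mueller matrix fails this probe would be masked out before the loss computation, as described above.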
The initial tissue labels in our dataset comprise four classes: healthy, tumor, white matter (WM), and gray matter (GM). For more anatomically and pathologically specific modeling, we reorganize these into four refined classes: healthy WM, tumor WM, healthy GM, and tumor GM. This restructuring provides greater flexibility for downstream analysis. Depending on the application, these classes can be merged to reflect broader categories (e.g., healthy vs. tumor). More importantly, the reorganization supports the fusion of healthy and tumor GM into a single GM class after training and inference. This reduction from four to three classes (i.e., healthy WM, tumor WM, and GM) is critical for our segmentation prototype, as the primary medical focus lies in identifying tumors within WM brain tissue. The label fusion is performed by defining the final GM ground truth mask as y_gm = y_hgm ∨ y_tgm, where y_hgm and y_tgm are the binary per-pixel labels for healthy and tumor gray matter, respectively. The ∨ operator denotes the logical OR operation. Likewise, the predicted probability map for GM is obtained by aggregating model outputs as x_gm = max(x_hgm, x_tgm), where x_hgm and x_tgm denote the per-pixel model predictions for healthy and tumor GM. This approach allows the network to learn finer distinctions during training while maintaining a simplified and medically relevant output space at inference time.
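The label-fusion rule is straightforward to express in array form. The following sketch uses small hypothetical example arrays for the healthy/tumor GM masks and prediction maps; only the OR/max fusion itself reflects the procedure described above.

```python
import numpy as np

# Hypothetical per-pixel binary GT masks for healthy / tumor gray matter.
y_hgm = np.array([[1, 0], [0, 0]], dtype=bool)
y_tgm = np.array([[0, 1], [0, 0]], dtype=bool)

# Hypothetical per-pixel predicted probability maps for the two GM classes.
x_hgm = np.array([[0.9, 0.2], [0.1, 0.3]])
x_tgm = np.array([[0.1, 0.8], [0.2, 0.1]])

# Fusion after training/inference: logical OR for labels, max for predictions.
y_gm = y_hgm | y_tgm
x_gm = np.maximum(x_hgm, x_tgm)
```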
Although the primary clinical focus is on detecting tumors within white matter, we retain gray matter as an explicit label class to enhance anatomical specificity during training. Rather than inferring white matter boundaries indirectly (e.g., by thresholding intensity or treating all non-WM regions as background), we enable the network to learn both tissue types explicitly. This improves boundary resolution, avoids heuristic thresholds, and reduces ambiguity in transitional regions, where tumor infiltration may cross anatomical interfaces.
2.2. Image data pre-processing
To compute Mueller matrices from raw data, we process pixel-wise recorded intensity data B_i ∈ ℝ^{4×4} using calibration matrices A_i ∈ ℝ^{4×4} and W_i ∈ ℝ^{4×4} [30]. Each pixel index i ∈ {0, 1, …, H × W − 1} corresponds to a spatial coordinate in the image, where H and W denote the image height and width, respectively. The Mueller matrix M_i for each pixel is computed as:

M_i = A_i^{−1} B_i W_i^{−1}. (1)
These computed Mueller matrices serve as the foundation for further pre-processing and feature extraction, with subsequent methods leveraging either LC decomposition or direct MM data network input.
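In practice, Eq. (1) is evaluated per pixel over the whole image; a minimal vectorized sketch (the array layout is an assumption for illustration) is:

```python
import numpy as np

def compute_mueller(B, A, W):
    """Per-pixel Mueller matrices M_i = A_i^{-1} B_i W_i^{-1} (Eq. (1)).

    B, A, W: arrays of shape (N, 4, 4), one 4x4 matrix per pixel.
    np.linalg.inv and @ broadcast over the leading pixel axis.
    """
    return np.linalg.inv(A) @ B @ np.linalg.inv(W)
```

With identity calibration matrices the result reduces to the raw intensity data, which serves as a quick sanity check.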
2.2.1. Lu-Chipman features
To extract human-interpretable polarimetric features from the Mueller matrix Mi, we apply the LC decomposition [14], which factorizes Mi into:
M_i = Δ_i R_i D_i, (2)

where Δ_i ∈ ℝ^{4×4}, R_i ∈ ℝ^{4×4}, and D_i ∈ ℝ^{4×4} denote the Mueller matrices of the depolarizer, retarder, and diattenuator, respectively, each representing distinct polarization properties of the imaged sample. This decomposition enables the derivation of physically meaningful features that can serve as inputs for machine learning models. It is worth mentioning that no circular retardance was observed in bulk biological tissues measured in reflection at normal incidence [31]. This implies that R_i represents the Mueller matrix of a linear retarder. Guided by the findings of earlier studies [17,32], we focus on four key polarimetric properties: total depolarization, linear retardance, azimuth of the optical axis, and diattenuation. These properties comprehensively represent the sample's polarimetric characteristics:
1. Total depolarization: depolarization quantifies how much polarization is lost due to scattering within the tissue. This metric provides insights into the structural and scattering properties of a sample. Mathematically, the total depolarization P_i ∈ ℝ is calculated using
P_i = 1 − |tr(Δ_i) − 1| / 3, (3)
where Δi represents the depolarization matrix obtained from (2). A value of Pi = 1 corresponds to an ideal depolarizer, while Pi = 0 indicates a non-depolarizing sample.
2. Linear retardance: linear retardance measures the phase shift introduced between orthogonal polarization components as light propagates through a birefringent material [32]. This parameter is crucial for identifying tissue structures, such as aligned fiber tracts, which show a high degree of birefringence. The scalar linear retardance δ_i ∈ ℝ is computed as
δ_i = cos^{−1} [ √((r_{22} + r_{33})² + (r_{32} − r_{23})²) − 1 ], (4)

where r_{uv} ∈ ℝ denotes the element of R_i at the u-th row and v-th column.
3. Diattenuation: the diattenuation d_i ∈ ℝ quantifies the differential attenuation of polarized light and is calculated as
d_i = √(d_{12}² + d_{13}² + d_{14}²) / d_{11}, (5)

where d_{uv} ∈ ℝ represents the element of D_i at the u-th row and v-th column.
4. Azimuth angle: φ_i ∈ ℝ represents the orientation of the optical axis of a linear birefringent medium. It provides directional information about the anisotropy of the refractive index and can reveal subtle organizational patterns in tissue. The azimuth angle is calculated as
φ_i = (1/2) tan^{−1} [ (r_{42} − r_{24}) / (r_{34} − r_{43}) ], (6)

with r_{uv} ∈ ℝ being the elements of R_i at the u-th row and v-th column. The orientation angle of the optical axis φ_i ∈ ℝ is confined to the range [0, π) by wrapping negative values appropriately. Tumor tissues often exhibit disorganized azimuth regions compared to healthy tissues, making this parameter a strong biomarker for tissue differentiation. In conjunction with other metrics, the azimuth angle enhances the network's ability to discern spatial variations in polarization properties.
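The four scalar parameters can be extracted from the decomposed matrices in a few lines. The following sketch assumes the per-pixel matrices Δ_i, R_i, D_i are already available from an LC decomposition routine and uses the standard Lu-Chipman parameter formulas; the 0-based indexing and the azimuth sign convention are assumptions of this illustration.

```python
import numpy as np

def lc_scalars(Delta, R, D):
    """Scalar Lu-Chipman parameters from 4x4 decomposition factors.

    Delta, R, D: depolarizer, retarder, and diattenuator Mueller matrices.
    Indices below are 0-based, i.e. R[1, 1] is the element r_22.
    """
    # Total depolarization: P = 1 - |tr(Delta) - 1| / 3
    P = 1.0 - abs(np.trace(Delta) - 1.0) / 3.0

    # Linear retardance from the retarder submatrix elements
    delta = np.arccos(
        np.sqrt((R[1, 1] + R[2, 2]) ** 2 + (R[2, 1] - R[1, 2]) ** 2) - 1.0
    )

    # Diattenuation from the first row of D
    d = np.sqrt(D[0, 1] ** 2 + D[0, 2] ** 2 + D[0, 3] ** 2) / D[0, 0]

    # Azimuth of the optical axis, wrapped into [0, pi)
    phi = 0.5 * np.arctan2(R[3, 1] - R[1, 3], R[2, 3] - R[3, 2])
    phi = phi % np.pi

    return P, delta, d, phi
```

As a sanity check, a pure linear retarder with its fast axis at 45° and retardance 0.5 rad yields P = 0, δ = 0.5, d = 0, and φ = π/4 under this convention.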
In addition to the LC parameters, we include an intensity-based feature map to provide the network with spatial intensity distribution. While not strictly part of the LC decomposition, this grayscale feature enhances the network’s ability to differentiate tissue types based on local intensity variations. Specifically, we compute a normalized intensity ni at pixel index i given by
n_i = (m_{11,i} − min_k m_{11,k}) / (max_k m_{11,k} − min_k m_{11,k}), (7)

where m_{11,i} denotes the first element (total intensity) of M_i, and the min_k and max_k operations are computed over all pixels k ∈ {0, 1, …, H × W − 1}. This normalization ensures that n_i ∈ [0, 1] across the image. The resulting grayscale intensity map n_i is concatenated with the LC-derived polarimetric features to form the full neural network input.
2.2.2. Mueller matrix feed-forward
For rapid MMP interpretation in neurosurgical guidance, we investigate using Mueller matrix images directly as network input. Unlike LC decomposition, which incurs computational overhead due to its reliance on singular value decomposition (SVD), this approach bypasses manual feature extraction, offering a more efficient alternative for real-time imaging. We pre-process the Mueller matrix M_i by extracting a normalized submatrix M̃_i ∈ ℝ^{3×3}:

M̃_i = (1/m_{11,i}) [ m_{22,i} m_{23,i} m_{24,i} ; m_{32,i} m_{33,i} m_{34,i} ; m_{42,i} m_{43,i} m_{44,i} ], (8)

and flatten it to a feature channel vector m̃_i ∈ ℝ^9. This approach excludes the first row and column of the matrix M_i, which primarily encode the diattenuation and polarizance properties of a sample (almost non-existent at normal incidence [33]) and the total reflected/scattered intensity. By focusing on the remaining 3 × 3 submatrix, we emphasize the nuanced polarization state interactions within a sample, which are more informative for tissue classification. To make use of the relative spatial intensity distribution, we concatenate the normalized intensity n_i from (7). This step ensures that spatial intensity variations are preserved, which is useful for distinguishing WM and GM tissue types.
This pre-processing pipeline is designed to allow neural network models to learn essential features directly from Mueller matrix data, thus, minimizing pre-processing complexity and facilitating real-time segmentation.
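The MMFF feature construction amounts to a normalization, a reshape, and a concatenation. A minimal sketch, assuming per-pixel Mueller matrices stacked as an (H, W, 4, 4) array:

```python
import numpy as np

def mmff_features(M):
    """Build the MMFF input stack from per-pixel Mueller matrices.

    M: array of shape (H, W, 4, 4).
    Returns an (H, W, 10) array: the 3x3 submatrix normalized by m11
    (9 channels) plus the min-max normalized intensity map (1 channel).
    """
    m11 = M[..., 0, 0]
    # Drop the first row/column and normalize by the total intensity m11.
    sub = M[..., 1:, 1:] / m11[..., None, None]
    feats = sub.reshape(*M.shape[:-2], 9)
    # Min-max normalized intensity map, Eq. (7)-style.
    n = (m11 - m11.min()) / (m11.max() - m11.min())
    return np.concatenate([feats, n[..., None]], axis=-1)
```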
2.3. Training
With the pre-processed features in place, we now detail the training strategy, including data augmentation techniques, model selection criteria, and optimization methods, designed to leverage the full potential of the extracted features.
Data augmentation
To enhance the robustness of our model and mitigate overfitting, we applied data augmentation techniques consisting of horizontal and vertical flips as well as rotations, derived from the group of special orthogonal matrices [28]. These augmentations ensure that MM image segmentation becomes equivariant to isometric transformations, a critical requirement for improving the generalization of convolutional neural networks (CNNs). Moreover, the availability of these validated augmentations served as a key motivation for incorporating a CNN into the analysis of our MMP images, as they align with the geometric characteristics of the data and support more effective learning.
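Note that for polarimetric data a purely spatial flip is not sufficient: orientation-dependent quantities such as the azimuth map must be remapped as well. The sketch below illustrates this for a horizontal flip of an azimuth map; it is a simplified illustration of the idea, and the complete set of transformation rules for all Mueller matrix channels is the subject of [28].

```python
import numpy as np

def hflip_with_azimuth(features, azimuth):
    """Horizontally flip a feature stack together with its azimuth map.

    features: (H, W, C) feature stack, flipped spatially only.
    azimuth:  (H, W) optical-axis orientation in [0, pi); under a
              horizontal flip the angle maps to (pi - phi) mod pi.
    """
    feats = features[:, ::-1]
    phi = (np.pi - azimuth[:, ::-1]) % np.pi
    return feats, phi
```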
Models
We employ a standard U-Net architecture for semantic segmentation of brain tissue images using polarimetric data. With its encoder–decoder structure and skip connections, the U-Net is well-suited to capture spatial context at multiple scale levels, which is essential for semantic analysis of the tissue structure. For example, azimuth angle reflects the local orientation of the fiber tracts, and regions with disordered azimuth angles can indicate neoplastic tissue growth. Such spatial disorganization cannot be detected at the single-pixel level; rather, an adequate receptive field is necessary to capture these regional variations. In our experiments, the U-Net is applied to two distinct input data representations: one based on parameters extracted from the LC decomposition and the other on raw MMFF data. Using identical datasets, data augmentations, loss functions, and hyperparameters for both input types allows us to directly compare their segmentation performance and computational efficiency.
Loss
To address class imbalance in our dataset (see Table 1), we employ the sigmoid focal loss FL(·) [34], a variant of the cross-entropy loss designed to mitigate bias toward dominant classes. The mathematical formulation of focal loss is given by:
FL(p_t) = −α_t (1 − p_t)^γ log(p_t), with p_t = p for a positive ground truth label (y = 1) and p_t = 1 − p otherwise, (9)
where αt = |y − 0.25| is a weighting factor for the class, which balances the importance of positive vs. negative examples and γ = 2 is the focusing parameter that reduces the relative loss for well-classified examples (those with large pt values) and focuses training on hard negatives. The accumulated multi-class loss is then formulated as follows:
L = (1 / (H · W)) Σ_i Σ_{c=1}^{C} FL(p_{i,c}, y_{i,c}), (10)
where C is the number of classes for one-hot encoded input pairs (pi, yi). For the unlabeled, nonphysical, and saturated pixels (see Section 2.1), the predictions and corresponding labels are set to zero before the loss computation, ensuring that they do not affect the model’s training process.
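The per-element focal loss is short enough to state explicitly. A minimal NumPy sketch following the weighting described above (α_t = |y − 0.25|, γ = 2); production training would instead use a GPU implementation such as torchvision's sigmoid focal loss:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0):
    """Element-wise sigmoid focal loss, Eq. (9).

    p: predicted probabilities, y: binary ground truth labels.
    alpha_t = |y - 0.25| gives 0.75 for positives and 0.25 for negatives.
    """
    p = np.clip(p, 1e-7, 1 - 1e-7)          # numerical stability
    p_t = np.where(y == 1, p, 1 - p)        # probability of the true class
    alpha_t = np.abs(y - 0.25)
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)
```

Zeroing both prediction and label for a masked pixel drives its loss contribution to essentially zero, which is how the unlabeled, non-physical, and saturated pixels are excluded.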
Hyper-parameters
Model training and evaluation were performed on a compute node equipped with a single NVIDIA GeForce GTX 1080 Ti GPU and 20 CPU cores, of which 4 were used for data loading. The training batch size was set to 4, and the model was trained for up to 200 epochs, with the checkpoint exhibiting the lowest validation loss retained. The learning rate was initialized at 1e−4 and lowered dynamically using a cosine annealing scheduler to promote stable convergence and prevent overfitting. Since the computationally intensive image processing and model training are GPU-bound, with only data loading and augmentation relying on CPU resources, the CPU specification has negligible impact on model performance in our setup.
3. Experimental work
Metrics
To evaluate a network's ability to detect brain tumors, we compute the area under the curve (AUC) of the receiver operating characteristic (ROC) for the healthy and tumor tissue classes across k-fold test batches. Additionally, we assess the overall segmentation quality of the proposed pipeline by calculating the dice similarity coefficient (DSC) for all three ground truth classes, including GM. Since real-time image processing is a core requirement, we additionally measure computational performance using two metrics: t_mm, the time required by the polarimetric data processing module [25], and t_s, the time required by the segmentation module [24]. The segmentation scores are computed exclusively for labeled pixels, thereby excluding unannotated regions from the analysis.
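For reference, the per-class DSC used above can be computed as follows (a generic sketch, not the exact evaluation code of this study; unannotated pixels are assumed to be masked out beforehand):

```python
import numpy as np

def dice_coefficient(pred, gt, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    pred, gt: boolean (or 0/1) arrays of identical shape, restricted
    to labeled pixels only.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)
```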
Results discussion
Fig. 2 showcases segmentation results across data input choices, visually demonstrating their ability to differentiate tumor and healthy tissue. Table 2 provides an overview of segmentation metrics for the test sets, while Table 3 breaks down the performance metrics per tissue class for each model.

Our study on the semantic segmentation of brain tumors demonstrates that the standard U-Net architecture can effectively distinguish tumor tissue, even when trained and evaluated on a relatively small set of Mueller matrix images. This finding is supported quantitatively by the scores presented in Tables 2 and 3, as well as by the qualitative results in Fig. 2. Notably, while LC-based inputs show a minor advantage over MMFF in correctly identifying tumors within WM, LC slightly underperforms in the accurate segmentation of non-tumor regions. Despite our data augmentation efforts [28], the limited size of our dataset remains a significant potential source of misclassification. Another source of segmentation inaccuracy arises from occasional misalignments between the ground truth annotations and the corresponding polarimetric tissue images, particularly near tissue boundaries. These discrepancies are primarily due to tissue deformations introduced during histological processing, which can impede precise spatial registration and labeling.
Fig. 2. Qualitative comparison of semantic segmentation results using LC and MMFF inputs.
Each column represents a distinct human brain tissue sample, with ground truth labels (GT, bottom rows) and per-pixel predictions overlaid at 30% opacity for three segmentation trials. Rows alternate between LC-derived inputs and MMFF feature stacks, allowing direct comparison of the two input modalities. Tumor types include astrocytoma (A), oligodendroglioma (O), and glioblastoma (GBM), with WHO grades indicated numerically [35].
Table 2. Segmentation performance using binary AUC and multi-class DSC.
| Input | H/T AUC | DSC | ts [s] | tmm [s] |
|---|---|---|---|---|
| MMFF | 0.891 ± 0.098 | 0.803 ± 0.044 | 0.050 ± 0.001 | 0.020 ± 0.000 |
| LC | 0.931 ± 0.019 | 0.799 ± 0.009 | 0.048 ± 0.001 | 0.255 ± 0.014 |
Table 3. Dice similarity coefficient per tissue class.
| Model | Input | Healthy WM | Tumor WM | GM |
|---|---|---|---|---|
| U-Net | LC | 0.7725 ± 0.0460 | 0.8125 ± 0.0302 | 0.8154 ± 0.0312 |
| U-Net | MMFF | 0.7934 ± 0.1021 | 0.8037 ± 0.0204 | 0.8321 ± 0.0360 |
LC decomposition applies the SVD to factorize the Mueller matrix. As indicated by the AUC results in Table 2, this SVD-based filtering retains the discriminative information needed to distinguish between tumor and healthy tissue, albeit at a significant computational cost. In contrast, our analysis shows that using raw MMFF data achieves segmentation performance comparable to LC features with substantially lower computational overhead. Specifically, MMFF requires less than 25 ms per frame in native PyTorch, with potential for further optimization through hardware improvements. LC-based features, on the other hand, offer an advantage in prediction accuracy, but incur over ten times the computation time due to the SVD required for feature extraction during LC decomposition. These findings suggest that bypassing LC enables the network to autonomously extract critical polarimetric features, avoiding the filtering effects introduced by LC transformations.
With the results presented in Table 3, we provide the DSC for each image class. This overview reveals a balanced segmentation performance across WM tissue types.
The ROC curves in Fig. 3 compare the performance of U-Net models trained and tested with LC and MMFF inputs. Each curve represents the mean performance across k-fold test sets, with shaded regions denoting the standard deviation. Both inputs demonstrate robustness, with LC features tending to achieve higher true positive rates on average. The narrower standard deviation for LC features suggests greater stability. Overall, LC inputs provide a slight advantage, making them a preferable choice for reliable performance on our dataset.
Fig. 3. Receiver operating characteristic (ROC) curves for the U-Net models based on LC and MMFF inputs.
Each curve represents the mean of the k-fold test sets while the surrounding colored area highlights the standard deviation.
Feature importance
To assess the relevance of each LC feature map for tumor segmentation, we conducted an ablation study by systematically removing individual features when training the U-Net model. This approach allowed us to quantify the contribution of each feature to overall segmentation performance and identify potential features that may hinder prediction accuracy. The results, summarized in Table 4, indicate that the segmentation model achieves optimal DSC performance when all LC features are included. Notably, the azimuth angle emerges as a particularly strong biomarker for tumor classification in white matter brain tissue. This finding is consistent with prior research [17], which highlights that azimuth angles capture the orientation of white matter fiber tracts, a property that becomes disrupted in malignant regions. These insights underline the critical role of azimuth information in polarimetric imaging for accurate tumor segmentation.
Table 4. Ablation study for Lu-Chipman features.
| Absent LC Feature | DSC | H/T AUC |
|---|---|---|
| None | 0.799 ± 0.009 | 0.931 ± 0.019 |
| Intensity ni | 0.786 ± 0.018 | 0.890 ± 0.045 |
| Azimuth φi | 0.787 ± 0.017 | 0.890 ± 0.040 |
| Total Depolarization Pi | 0.788 ± 0.020 | 0.914 ± 0.034 |
| Linear Retardance δi | 0.796 ± 0.034 | 0.915 ± 0.041 |
| Diattenuation di | 0.792 ± 0.028 | 0.942 ± 0.014 |
Qualitative feature analysis
From the magnified portions in Fig. 4, we observe that the LC U-Net associates consistently oriented azimuth angles φi with healthy tissue, while disordered φi correlate with tumor predictions. As a result, network predictions may deviate from the provided GT, as illustrated in Fig. 4(c-d). These findings demonstrate that the U-Net successfully learned the hypothesized training objective, namely that fiber tract organization, as represented by φi, serves as a distinguishing feature for tumor tissue [17]. While φi provides a strong predictive signal, we acknowledge that it is not the sole determinant in classification. As evidenced in Table 4, other LC features contribute to the decision process. In some cases, such as in Fig. 4(a) where partially aligned φi is classified as tumor, relying on azimuth alone may lead to ambiguities. These cases highlight the importance of integrating multiple LC features for more reliable predictions. Moreover, misclassifications likely stem in large part from tissue heterogeneity and the limitations of binary label assignments, which do not take tumor infiltration patterns into consideration. These findings suggest that refining tumor tissue labels to account for spatial tumor infiltration could enhance model performance. Future work should explore probabilistic labeling or multi-class segmentation strategies to better differentiate healthy and neoplastic brain tissues in MMP.
Fig. 4. Qualitative analysis of the prediction.
The columns (a)-(d) show a true positive, true negative, false positive, and false negative classification case. From top to bottom, we present intensities, GT, LC network predictions, and azimuth φi in the remaining rows. The frames highlighted in orange depict magnified portions of φi for detailed analysis. For an intuitive inspection, the bottom row contains plots where tilted bars represent the azimuth angle φi averaged over 1-step sliding window of 8x8 size. The azimuth angle φi is chosen as the most influential LC feature in Table 4, which correlates with the orientation of brain fiber tracts in WM tissue [17].
To contextualize our findings, we compare our results in Table 5 with a multispectral MMP approach [11]. The multispectral MMP method utilized five polarimetric wavelengths for pancreatic tumor identification and reported an AUC of approximately 0.91. However, acquisition and processing for multispectral MMP take significantly longer, with image capture time tc alone requiring around 15 minutes [11]. In contrast, our approach achieves comparable AUC scores while using only a single wavelength (550 nm) and benefiting from substantially lower acquisition time and computation time of less than a second. This underscores the efficiency of our method, demonstrating that effective brain tumor segmentation can be achieved with reduced spectral complexity. However, direct comparisons remain challenging due to differences in tissue properties, spectral illumination ranges, and variations in neural network architectures and training procedures.
Table 5. Comparison of our method with multispectral MMP tumor identification.
| Method | Specimens | Wavelengths λ [nm] | H/T AUC ↑ | tc [s] ↓ |
|---|---|---|---|---|
| λ-MMP[11] | Pancreatic tissue | [450, 470, 500, 540, 625] | 0.910 ± 0.030 | ≈ 900 |
| LC (ours) | Brain tissue | 550 | 0.942 ± 0.014 | ≈ 60 |
4. Conclusion
Mueller matrix polarimetry holds significant promise for advancing tumor identification in neurosurgical applications, offering unique insights into tissue structure. This study demonstrates the potential of MMP for accurately segmenting ex vivo brain tumor tissue.
Our findings demonstrate that while the segmentation model achieves slightly higher prediction accuracy when using Lu-Chipman features as input, the neural network with MMFF input offers nearly comparable performance while significantly reducing computational demands. This efficiency underscores the practicality of using MMFF data for real-time applications, as neural networks can effectively learn polarimetric features without relying on decomposition methods like Lu-Chipman.
While our presented results show great promise for ex vivo data, future work should focus on translating our proposed framework to real-world scenarios in the operating room. This involves further training and analysis of larger datasets comprising in vivo polarimetric images of the brain taken during neurosurgery. Building larger annotated datasets with diverse tissue samples and refined tumor infiltration labeling will be crucial for developing robust, generalizable models. This will enable the analysis of physiological patterns underlying false positive and false negative predictions to refine feature selection and improve model accuracy. These advancements are vital for translating MMP into clinical practice, enabling real-time and accurate tumor segmentation to support surgical decision-making.
Acknowledgments
This work was supported by the Swiss National Science Foundation (SNSF) Sinergia Grant No. CRSII5_205904, “HORAO - Polarimetric visualization of brain fiber tracts for tumor delineation in neurosurgery.”
Funding
Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung (Sinergia grant CRSII5_205904).
Disclosures
The authors declare no conflicts of interest.
Data availability
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the co-authors upon reasonable request. A curated version of the dataset, incorporating additional quality-control steps by the data owners, is planned for release at a later stage of the project. As a result of these updates, the released dataset may differ from the version used in this study, and metric values obtained from it may not be directly comparable to those reported here. Any correspondence in that regard should be addressed to theoni.maragkou@unibe.ch.
References
- 1. Arteaga O, Kahr B. Mueller matrix polarimetry of bianisotropic materials. J Opt Soc Am B. 2019;36(8):F72–F83. doi: 10.1364/JOSAB.36.000F72.
- 2. Kaplan B, Novikova T, De Martino A, et al. Characterization of bidimensional gratings by spectroscopic ellipsometry and angle-resolved Mueller polarimetry. Appl Opt. 2004;43(6):1233–1240. doi: 10.1364/ao.43.001233.
- 3. Tyo JS, Goldstein DL, Chenault DB, et al. Review of passive imaging polarimetry for remote sensing applications. Appl Opt. 2006;45(22):5453–5469. doi: 10.1364/ao.45.005453.
- 4. Novikova T, Bénière A, Goudail F, et al. Sources of possible artefacts in the contrast evaluation for the backscattering polarimetric images of different targets in turbid medium. Opt Express. 2009;17(26):23851–23860. doi: 10.1364/OE.17.023851.
- 5. Le Gratiet A, Mohebi A, Callegari F, et al. Review on complete Mueller matrix optical scanning microscopy imaging. Appl Sci. 2021;11(4):1632. doi: 10.1016/j.bpj.2021.06.008.
- 6. Ramella-Roman JC, Novikova T. Polarized Light in Biomedical Imaging and Sensing. Cham: Springer; 2023.
- 7. Qi J, Elson DS. Mueller polarimetric imaging for surgical and diagnostic applications: a review. J Biophotonics. 2017;10(8):950–982. doi: 10.1002/jbio.201600152.
- 8. Pierangelo A, Nazac A, Benali A, et al. Polarimetric imaging of uterine cervix: a case study. Opt Express. 2013;21(12):14120–14130. doi: 10.1364/OE.21.014120.
- 9. Robinson D, Klejn W, Hoong K, et al. Polarimetric imaging for cervical pre-cancer screening aided by machine learning: ex vivo studies. J Biomed Opt. 2023;28(10):102904. doi: 10.1117/1.JBO.28.10.102904.
- 10. Khan S, Qadir M, Khalid A, et al. Characterization of cervical tissue using Mueller matrix polarimetry. Lasers Med Sci. 2023;38(1):46. doi: 10.1007/s10103-023-03712-6.
- 11. Sampaio P, Lopez-Antuna M, Storni F, et al. Müller matrix polarimetry for pancreatic tissue characterization. Sci Rep. 2023;13(1):16417. doi: 10.1038/s41598-023-43195-7.
- 12. Boonya-Ananta T, Gonzalez M, Ajmal A, et al. Speculum-free portable preterm imaging system. J Biomed Opt. 2024;29(5):052918. doi: 10.1117/1.JBO.29.5.052918.
- 13. Schucht P, Lee HR, Mezouar HM, et al. Visualization of white matter fiber tracts of brain tissue sections with wide-field imaging Mueller polarimetry. IEEE Trans Med Imaging. 2020;39(12):4376–4382. doi: 10.1109/TMI.2020.3018439.
- 14. Lu S-Y, Chipman RA. Interpretation of Mueller matrices based on polar decomposition. J Opt Soc Am A. 1996;13(5):1106–1113. doi: 10.1364/JOSAA.13.001106.
- 15. Rodríguez-Núñez O, Schucht P, Hewer E, et al. Polarimetric visualization of healthy brain fiber tracts under adverse conditions: ex vivo studies. Biomed Opt Express. 2021;12(10):6674–6685. doi: 10.1364/BOE.439754.
- 16. Felger L, Rodríguez-Núñez O, Gros R, et al. Robustness of the wide-field imaging Mueller polarimetry for brain tissue differentiation and white matter fiber tract identification in a surgery-like environment: an ex vivo study. Biomed Opt Express. 2023;14(5):2400–2415. doi: 10.1364/BOE.486438.
- 17. Gros R, Rodríguez-Núñez O, Felger L, et al. Characterization of polarimetric properties in various brain tumor types using wide-field imaging Mueller polarimetry. IEEE Trans Med Imaging. 2024;43(12):4120–4132. doi: 10.1109/TMI.2024.3413288.
- 18. Mirsanaye K, Uribe Castaño L, Kamaliddin Y, et al. Machine learning-enabled cancer diagnostics with widefield polarimetric second-harmonic generation microscopy. Sci Rep. 2022;12(1):10290. doi: 10.1038/s41598-022-13623-1.
- 19. Giannoni L, Marradi M, Marchetti M, et al. Hyperprobe consortium: innovate tumour neurosurgery with innovative photonic solutions. In: Diffuse Optical Spectroscopy and Imaging IX. Optica Publishing Group; 2023. Paper 126281C.
- 20. Giannoni L, Bonaudo C, Marradi M, et al. Optical characterisation and study of ex vivo glioma tissue for hyperspectral imaging during neurosurgery. In: Diffuse Optical Spectroscopy and Imaging IX. Optica Publishing Group; 2023. Paper 1262829.
- 21. Anichini G, Leiloglou M, Hu Z, et al. Hyperspectral and multispectral imaging in neurosurgery: a systematic literature review and meta-analysis. Eur J Surg Oncol. 2025;51(1):108293. doi: 10.1016/j.ejso.2024.108293.
- 22. Moriconi S, Rodríguez-Núñez O, Gros R, et al. Near-real-time Mueller polarimetric image processing for neurosurgical intervention. Int J Comput Assist Radiol Surg. 2024;19(6):1033–1043. doi: 10.1007/s11548-024-03090-6.
- 23. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells WM, Frangi AF, editors. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Cham: Springer International Publishing; 2015. pp. 234–241.
- 24. Hahne C. polar_segment. GitHub; 2025. github.com/hahnec/polar_segment.
- 25. Hahne C. mm_torch. GitHub; 2025. github.com/hahnec/mm_torch.
- 26. Hahne C. polar_augment. GitHub; 2025. github.com/hahnec/polar_augment.
- 27. Gros R, Rodríguez-Núñez O, Felger L, et al. Effects of formalin fixation on polarimetric properties of brain tissue: fresh or fixed? Neurophotonics. 2023;10(2):025009. doi: 10.1117/1.NPh.10.2.025009.
- 28. Hahne C, Rodríguez-Núñez O, Gros R, et al. Physically consistent image augmentation for deep learning in Mueller matrix polarimetry. IEEE Trans Image Process. 2024:1. doi: 10.1109/TIP.2025.3618390. Under review.
- 29. Novikova T, Ovchinnikov A, Pogudin G, et al. Time-efficient filtering of imaging polarimetric data by checking physical realizability of experimental Mueller matrices. Bioinformatics. 2024;40(7):btae348. doi: 10.1093/bioinformatics/btae348.
- 30. Compain E, Poirier S, Drevillon B. General and self-consistent method for the calibration of polarization modulators, polarimeters, and Mueller-matrix ellipsometers. Appl Opt. 1999;38(16):3490–3502. doi: 10.1364/ao.38.003490.
- 31. Vitkin A, Ghosh N, De Martino A. Tissue polarimetry. In: Andrews DL, editor. Photonics: Biomedical Photonics, Spectroscopy, and Microscopy, IV. Hoboken, NJ: John Wiley & Sons; 2015. pp. 239–321.
- 32. Sheng W, Li W, Qi J, et al. Quantitative analysis of 4×4 Mueller matrix transformation parameters for biomedical imaging. Photonics. 2019;6(1):34. doi: 10.3390/photonics6010034.
- 33. Novikova T, Ramella-Roman JC. Is a complete Mueller matrix necessary in biomedical imaging? Opt Lett. 2022;47(21):5549–5552. doi: 10.1364/OL.471239.
- 34. Lin TY, Goyal P, Girshick R, et al. Focal loss for dense object detection. In: Proc IEEE Int Conf on Computer Vision; 2017. pp. 2980–2988.
- 35. Louis DN, Perry A, Wesseling P, et al. The 2021 WHO Classification of Tumors of the Central Nervous System: a summary. Neuro-Oncology. 2021;23(8):1231–1251. doi: 10.1093/neuonc/noab106.