Abstract
Polarization second harmonic generation (P-SHG) imaging is a powerful technique for studying the structure and properties of biological and material samples. However, conventional whole-sample P-SHG imaging is time consuming and requires expensive equipment. This paper introduces a novel approach that recovers high-resolution image quality from images acquired at reduced resolution and imaging time, utilizing enhanced super-resolution generative adversarial networks (ESRGAN) to upscale low-resolution images. We demonstrate that this approach maintains high image quality and analytical accuracy while reducing the imaging time by more than 95%. We also discuss the benefits of the proposed method for reducing laser-induced photodamage, lowering the cost of optical components, and increasing the accessibility and applicability of P-SHG imaging in various fields. Our work significantly advances whole-sample mammary gland P-SHG imaging and opens new possibilities for scientific discovery and innovation.
1. Introduction
The mammary gland undergoes hormonal remodeling post-childbirth [1], comprising the well-studied mammary epithelium and the less-understood stroma [2], which includes adipocytes, fibroblasts, immune cells, and an extracellular matrix (ECM) of collagen, laminins, and other proteins [3,4]. The ECM plays a crucial role in gland development, especially during puberty, when stromal expansion and collagen orientation precede epithelial morphogenesis [5,6]. However, the effect of dysregulated lipid metabolism on this process remains underexplored, highlighting a gap in the understanding of mammary gland development.
SHG microscopy is the preferred method for imaging collagen in tissues because of its superior spatial resolution, reduced phototoxicity and photobleaching, focal plane selectivity, and straightforward sample preparation [7]. This label-free imaging technique enables the detection of changes in fibrillar collagen within the mammary gland, a capability that is unmatched by other imaging methods [7,8]. SHG microscopy has played a vital role in collagen research; however, relying solely on SHG intensity for orientation studies can introduce interference [9], hindering fibril orientation imaging [10]. To address these limitations, polarization-resolved SHG microscopy (P-SHG) has emerged, offering the combined benefits of SHG microscopy and polarimetry [10–14]. P-SHG is extensively used in collagen-related investigations, providing precise information about fibril structures within the imaging plane, which is a valuable asset in mammary gland research [15,16]. In conventional P-SHG imaging, smaller sample areas are imaged and studied. However, this approach risks overlooking essential spatial information, especially in developmental studies, where the macroenvironment plays a crucial role. As the process shifts to imaging larger areas, the coherent nature of the SHG signal may result in the cancellation of some variations, a limitation acknowledged in the context of our research. In cancer-boundary research, the broad orientation of the collagen barrier is informative [8,17]. The same applies to understanding macroenvironmental effects on mammary gland development, where whole-sample P-SHG imaging is essential. While this approach may come with the caveat of missing some finer variations and fibers, the holistic view it offers on collagen orientation across the entire gland is essential for a comprehensive understanding of the developmental processes at play.
Acknowledging the challenges associated with the cost and time of whole-sample P-SHG imaging, our study leveraged the capabilities of deep learning to overcome these barriers.
Deep learning (DL) significantly enhances SHG microscopy and image analysis by automating the interpretation and quantification of SHG signals [18–21]. DL has become a transformative force, significantly advancing tasks such as classification, segmentation, and image restoration in SHG imaging. Highlighted studies have demonstrated its broad utility: one successfully applied a convolutional neural network (CNN) to differentiate ovarian tissue types with nearly perfect accuracy using SHG imaging [22], while another showed the effectiveness of a U-Net CNN in segmenting collagen fibers, surpassing traditional techniques in handling the challenges of variable image intensity in SHG microscopy [23]. Despite the diverse applications explored, from cancer diagnosis to collagen fiber segmentation, a critical gap remains: the tailored application of deep-learning image super-resolution enhancement for P-SHG imaging. This presents an exciting avenue for future research, focusing on the development of bespoke deep-learning solutions that cater to the intricacies of P-SHG imaging. Our approach recovers high imaging resolution from images acquired at reduced resolution and imaging time, addressing the challenges of prolonged imaging times and potential sample damage associated with conventional whole-sample P-SHG imaging by utilizing Generative Adversarial Networks (GANs).
Advanced techniques and super-resolution imaging supported by DL not only overcome technical limitations but also reduce noise, as exemplified by Generative Adversarial Network-based approaches that effectively achieve image upsampling [24]. A Generative Adversarial Network (GAN) is an artificial intelligence framework for generating new data, particularly images, audio, and text [25]. The framework operates by pitting two neural networks, a generator and a discriminator, against each other in a competitive manner.
The generator network uses random noise as the input and generates data that resemble the actual data. For example, in image generation, the generator attempts to create images that visually resemble the actual images. The discriminator network then acts as a judge that attempts to distinguish between the actual data (e.g., real images) and fake data generated by the generator. It is a binary classifier that learns to identify genuine data from generated data [25]. Over time, the generator becomes better at creating data indistinguishable from the actual data, whereas the discriminator becomes better at distinguishing real data from fake data. Ideally, this process results in a generator that produces high-quality data that resembles actual data. GANs have been applied in various fields such as image synthesis, style transfer, super-resolution, image-to-image translation, and text-to-image synthesis [25].
Another advanced form of GAN is Enhanced Super-Resolution Generative Adversarial Network (ESRGAN), which is a deep learning-based approach for image super-resolution [26]. Image super-resolution is the process of increasing the image resolution while preserving or enhancing its quality. ESRGAN's architecture builds upon the idea of GANs but incorporates modifications to improve the super-resolution process [26]. One crucial aspect is the use of a perceptual loss function, which measures the difference between the high-resolution ground-truth image and the generated image in terms of perceptual features. The loss function of the discriminator measures how well the discriminator can classify real data as real and the generated data as fake. The generator loss function measures how well the generator can fool the discriminator to classify the generated data as real data. The generator aims to maximize the probability of the discriminator making a mistake.
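The two adversarial loss terms described above can be sketched with the standard binary cross-entropy formulation. The snippet below is an illustrative NumPy sketch with made-up discriminator outputs, not the training code of any particular ESRGAN implementation.

```python
import numpy as np

def bce(p, y):
    """Binary cross-entropy between predicted probabilities p and labels y."""
    eps = 1e-12  # avoids log(0)
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def discriminator_loss(d_real, d_fake):
    # Real samples should be classified as 1, generated samples as 0.
    return bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

def generator_loss(d_fake):
    # The generator succeeds when the discriminator labels its output as real.
    return bce(d_fake, np.ones_like(d_fake))

# Toy example: discriminator outputs (probabilities of "real").
d_real = np.array([0.9, 0.8])   # confident on real images
d_fake = np.array([0.1, 0.2])   # confident on generated images
print(discriminator_loss(d_real, d_fake))  # low: discriminator is winning
print(generator_loss(d_fake))              # high: generator must improve
```

As training progresses, the generator loss falls as the discriminator is increasingly fooled, which is exactly the "maximize the probability of the discriminator making a mistake" objective described above.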
The perceptual loss function allows ESRGAN to focus on capturing high-level features of an image, such as edges, textures, and structures, rather than relying solely on pixel-wise similarity [26]. ESRGAN generates images that appear visually plausible and realistic to human observers. The ESRGAN framework is trained using a combination of adversarial loss (to ensure realism) and perceptual loss (to maintain visual quality). This training process involves iteratively updating the generator and discriminator networks to improve the quality of the generated image over time [26].
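To make the idea of a perceptual loss concrete, the sketch below substitutes simple image gradients for the deep VGG feature maps that ESRGAN actually uses; it is a conceptual stand-in only, showing why a feature-space loss penalizes missing edges and textures that a pixel-wise mean could overlook.

```python
import numpy as np

def features(img):
    """Stand-in 'feature extractor': image gradients as crude edge features.
    ESRGAN compares deep network feature maps; gradients are illustrative only."""
    gy, gx = np.gradient(img.astype(float))
    return np.stack([gy, gx])

def perceptual_loss(generated, ground_truth):
    # Mean squared error in feature space rather than raw pixel space.
    return np.mean((features(generated) - features(ground_truth)) ** 2)

def total_generator_loss(adv_loss, generated, ground_truth, lam=1.0):
    # Weighted sum of the adversarial (realism) and perceptual (fidelity) terms.
    return adv_loss + lam * perceptual_loss(generated, ground_truth)

gt = np.zeros((8, 8)); gt[:, 4:] = 1.0   # a vertical edge
blurry = np.full((8, 8), 0.5)            # flat image: similar mean, no edge
print(perceptual_loss(gt, gt))      # 0.0: identical structures
print(perceptual_loss(blurry, gt))  # > 0: the edge structure is missing
```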
In this study, we acquired high-quality SHG images of the whole mammary gland. We then obtained low-quality P-SHG images of the entire sample and upscaled them using the ESRGAN model. Next, to test the accuracy of the method, we obtained high-quality P-SHG images of some areas of different samples and compared the results with the upscaled P-SHG image results. Quality metric assessments were performed to ensure that the integrity and structure of the original images were maintained. For simplicity, we introduce those that were implemented in this study, namely, the Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), Perceptual Image Quality Evaluator (PIQE), and Naturalness Image Quality Evaluator (NIQE). Multiple image quality metrics were used because no single metric captures every aspect of a generated image [12]. The PSNR measures the ratio of the maximum pixel value to the mean squared error of an image [27]. Higher PSNR values indicate better image quality and correlate well with perceived visual quality. SSIM evaluates the luminance, contrast, and structure between two images and considers human visual perception. The SSIM ranges from -1 to 1, where 1 indicates identical images, 0 indicates no similarity, and -1 indicates anticorrelation [28]. PIQE is designed to evaluate the visual quality of images in a manner that closely aligns with human perception [29]. It incorporates various visual features such as contrast, luminance, and texture to compute a quality score that reflects perceived image quality [29]. NIQE explicitly targets the assessment of naturalness in images [30]. It computes features related to the distribution of pixel values, luminance, contrast, and other statistical properties [30]. Unlike SSIM and PSNR, which require a reference image, NIQE and PIQE do not require a reference image [29,30].
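As a concrete reference, PSNR and a simplified SSIM can be computed as follows. Note that library implementations of SSIM average many local windows; the single-window (global) variant below is a sketch of the formula only.

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(data_range ** 2 / mse)

def ssim_global(x, y, data_range=1.0):
    """Single-window (global) SSIM combining luminance, contrast, structure."""
    C1, C2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                                   # stand-in SHG frame
noisy = np.clip(ref + 0.05 * rng.standard_normal(ref.shape), 0, 1)
print(psnr(ref, noisy))        # roughly 26 dB for this noise level
print(ssim_global(ref, ref))   # 1.0 for identical images
```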
In addition, we evaluated the intensity, texture, and contrast metrics to provide a comprehensive assessment of the models and upscaled images. The intensity metrics included mean intensity, standard deviation of intensity, median intensity, and minimum and maximum intensity values. The mean intensity reflects the average pixel intensity of the image, whereas the standard deviation of the intensity measures the variation in pixel intensities [31]. The median intensity provides the middle value of pixel intensities, and the minimum and maximum intensities indicate the range of pixel values in the image [31].
Contrast metrics included root mean square (RMS) contrast and Michelson contrast [32]. The RMS contrast measures the overall contrast of the image, indicating the level of contrast enhancement, while the Michelson contrast evaluates the contrast between the maximum and minimum pixel intensities [32].
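Assuming intensities normalized to [0, 1], one common definition of each contrast metric reduces to a few lines of NumPy (the RMS contrast here is the plain standard deviation; some variants additionally normalize by the mean intensity):

```python
import numpy as np

def rms_contrast(img):
    """RMS contrast: standard deviation of the pixel intensities."""
    return img.std()

def michelson_contrast(img):
    """Michelson contrast: (Imax - Imin) / (Imax + Imin)."""
    imax, imin = img.max(), img.min()
    return (imax - imin) / (imax + imin)

img = np.array([[0.1, 0.9],
                [0.5, 0.5]])
print(rms_contrast(img))        # ≈ 0.283
print(michelson_contrast(img))  # 0.8
```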
Texture analysis included gray-level co-occurrence matrix (GLCM) metrics such as dissimilarity, homogeneity, energy, and correlation [33]. Dissimilarity measures the difference between neighboring pixel values, with lower values indicating more uniform texture. Homogeneity reflects the closeness of the distribution of elements in the GLCM to the GLCM diagonal, indicating a uniform texture [33]. The energy, or angular second moment, measures textural uniformity, and the correlation measures the linear dependency of pixel values [33].
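A minimal GLCM for horizontally adjacent pixels, with the four derived metrics, can be sketched as follows. This is a didactic stand-in: production analyses typically use `skimage.feature.graycomatrix`/`graycoprops`, which handle multiple distances and angles (and report energy as the square root of the angular second moment computed below).

```python
import numpy as np

def glcm_horizontal(img, levels):
    """Minimal symmetric, normalized GLCM for horizontally adjacent pixels."""
    P = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        P[a, b] += 1
        P[b, a] += 1  # symmetric: count the pair in both directions
    return P / P.sum()

def glcm_metrics(P):
    i, j = np.indices(P.shape)
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * P).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * P).sum())
    return {
        "dissimilarity": (np.abs(i - j) * P).sum(),
        "homogeneity": (P / (1.0 + (i - j) ** 2)).sum(),
        "energy": (P ** 2).sum(),  # angular second moment
        "correlation": ((i - mu_i) * (j - mu_j) * P).sum() / (sd_i * sd_j),
    }

# Toy 4-level image with uniform patches (low dissimilarity, high homogeneity).
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)
m = glcm_metrics(glcm_horizontal(img, levels=4))
print(m)
```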
We also included advanced metrics such as the Feature Similarity Index (FSIM) for evaluating structural similarity [34], Visual Information Fidelity (VIF) for quantifying visual information preservation [35], Edge Preservation Ratio (EPR) for assessing edge retention [36], and local binary patterns (LBP) for texture analysis [37]. Histogram-based metrics such as histogram intersection, histogram correlation, and Kullback-Leibler divergence were used to compare the statistical properties of the images [38].
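The three histogram comparisons can be sketched in NumPy as follows; the bin count and the smoothing constant in the KL divergence are illustrative choices, not the values used in this study.

```python
import numpy as np

def hist(img, bins=32):
    """Normalized intensity histogram for an image in [0, 1]."""
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

def histogram_intersection(h1, h2):
    return np.minimum(h1, h2).sum()       # 1.0 for identical histograms

def histogram_correlation(h1, h2):
    return np.corrcoef(h1, h2)[0, 1]      # Pearson correlation of bin counts

def kl_divergence(h1, h2, eps=1e-10):
    # D_KL(h1 || h2); eps avoids log(0) for empty bins.
    p, q = h1 + eps, h2 + eps
    p, q = p / p.sum(), q / q.sum()
    return np.sum(p * np.log(p / q))

rng = np.random.default_rng(1)
a = rng.random((64, 64))                                   # stand-in image
b = np.clip(a + 0.1 * rng.standard_normal(a.shape), 0, 1)  # perturbed copy
ha, hb = hist(a), hist(b)
print(histogram_intersection(ha, ha))  # 1.0
print(kl_divergence(ha, ha))           # 0.0
```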
2. Methodology
2.1. Sample preparation
Sterol-CoA knockout, wild-type, and heterozygous mice were sacrificed at the following key stages of mammary gland development: prepubertal (week 4), pubertal (week 6), and adulthood (week 10). In adulthood, the estrous cycle of the female mice was staged using an impedance meter that measured the electrical resistance of the vaginal mucosa; a peak in resistance indicated proestrus. The mice were sacrificed via CO2 asphyxiation, followed by cervical dislocation. Each mouse was pinned down on a foam pedestal, the abdomen was opened, and the mammary glands were visualized. The left inguinal mammary glands were harvested immediately and placed on glass slides. The mammary gland was stretched using pliers to regain its original shape. A Parafilm sheet was placed on the gland, which was flattened for a few minutes under a heavy metal weight. The slides were immediately immersed in a bath of Carnoy's fixative (100% EtOH, chloroform, glacial acetic acid) for four hours at room temperature to fix the tissues. The slides were gradually rehydrated in water and alcohol baths (95%, 75%, 50%, and 25% EtOH). The slides were then stained in a carmine alum bath (2% carmine and 5% potassium aluminum sulfate dissolved in water) for three hours to dye the mammary epithelium with a violet hue. The tissues were then gradually dehydrated in alcohol baths (25%, 50%, 75%, and 95% EtOH) and incubated overnight in xylene. The colored mammary glands were then imaged using a lightbox, camera, and measurement key to compare the samples. Once digitized, the epithelial branches, number of terminal buds, and general architecture of the mammary gland were analyzed using ImageJ [39].
2.2. Imaging setup
SHG microscopy was performed using a custom-built inverted stage-scanning microscope. A mode-locked fiber Ytterbium (Yb) laser (MPB Communications Inc., Montréal, Canada) was used. This laser emits at 1040 nm and delivers 125 fs pulses at a repetition rate of 25 MHz with an average power of 3 W. A half-wave plate and a Glan-Thompson polarizer adjusted the average power from 20 to 125 mW (0.8 to 5 nJ pulse energy). Given the size of the samples for imaging, sample scanning was performed using a high-speed motorized XY scanning stage (MLS203; Newton, NJ, USA). Coarse and fine focus adjustments were made using mechanical and piezoelectric motors (PI Nano-Z, USA), respectively. An air objective (UplanSApo 10X, NA 0.3, Olympus, Japan) was used for the illumination. A condenser was used to collect the SHG signal of the sample, which was detected using a photomultiplier tube (R6357, Hamamatsu Photonics) set to 800 V. The SHG signal was isolated using two spectral filters placed before the photomultiplier: a short-pass filter (blocking wavelengths above 720 nm, i.e., the input fundamental laser light) and a bandpass filter centered at 515 nm were employed to filter out the residual input light. A multichannel I/O board (National Instruments) and a custom-written Python program were used for signal acquisition and synchronization. Given the sample size and the acceleration and deceleration times of the motorized scanning stage, each SHG image had an acquisition time of a few minutes. Raw data were visualized using Fiji-ImageJ software (NIH, USA). The imaging configuration is shown in Fig. 1.
Fig. 1.
Imaging configuration for SHG and P-SHG setups. The motorized half-wave plate was removed during SHG imaging and added during the P-SHG imaging.
For low-quality P-SHG, a motorized half-wave plate was used to rotate the linear polarization of the laser beam to acquire the images. Images were captured for 18 polarization states in 10-degree steps from 0° to 170°. The motorized half-wave plate and sample scanning were synchronized using a custom-built Python program. For high-quality P-SHG imaging, random regions of interest of 1000 × 1000 µm were imaged from different samples, and an air objective (UplanSApo 20X, NA 0.75, Olympus, Japan) was used for focusing.
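The P-SHG acquisition sequence above can be sketched as follows. Here `set_halfwave_plate` and `acquire_frame` are hypothetical stand-ins for the motor and stage/DAQ commands in the custom Python program, and the frame size is illustrative. The factor of two reflects standard waveplate physics: rotating a half-wave plate by θ rotates the linear polarization by 2θ.

```python
import numpy as np

# Hypothetical hardware stubs: the real setup drives a motorized half-wave
# plate and the scanning stage through a National Instruments I/O board.
def set_halfwave_plate(angle_deg):
    pass  # placeholder for the motor command

def acquire_frame():
    return np.zeros((100, 225))  # placeholder for one stage-scanned SHG frame

# 18 polarization states: 0° to 170° in 10° steps.
polarizations = np.arange(0, 180, 10)
stack = []
for pol in polarizations:
    set_halfwave_plate(pol / 2)  # half-wave plate angle for polarization `pol`
    stack.append(acquire_frame())
stack = np.stack(stack)          # shape (18, rows, cols): one frame per state
print(stack.shape)
```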
2.3. Upscaling images
Image upscaling was performed using multiple models: Ultrasharp_4X [40], ESRGAN_Nomos2K [41], NMKD [41], 4X-UniScaleV2_Sharp [42], and BSRGAN [43]. The upscaling was performed through the ChaiNNer program [50]. Additionally, we explored guided upscaling techniques via PixTransform [44], employing high-quality SHG images as references to inform the upscaling of 18 distinct P-SHG images across a spectrum of iterations (1,000–30,000) and channel-split modes. This process was run on a local computing setup with an RTX 3060Ti GPU.
3. Results and discussion
3.1. Model performance and selection criteria
Upon rigorous evaluation, it became apparent that not all models performed equally. Despite the potential of each method, only Ultrasharp_4X emerged as a viable solution that closely approximates the quality and fidelity of the original high-quality SHG images (GT). This finding was critical, as our primary goal was to ensure that the upscaled images retained as much of the original detail and structural integrity as possible without introducing artifacts or distortions that could compromise analytical accuracy.
To objectively assess the performance of each upscaling method, we compiled the key metrics listed in Table 1.
Table 1. Comprehensive performance comparison of upscaling models.
Method | mSSIM Ratio | NRMSE Ratio | PSNR Absolute Improvement | PSNR Percentage Improvement | Visual Inspection |
---|---|---|---|---|---|
UltraSharp | 0.939 | 1.036 | -0.92 | -5.02% | Most true to original |
BSRGAN | 0.953 | 1.047 | -0.70 | -3.84% | Introduced noticeable artifacts in complex patterns |
NMKD | 0.691 | 1.164 | -1.83 | -10.08% | Tended to oversmooth, losing fine details |
NOMOS | 0.866 | 1.127 | -1.44 | -7.90% | Better detail preservation, but tended to oversmooth and introduce artifacts |
PixTransform | 1.335 | 1.100 | -1.13 | -6.22% | Not suitable for P-SHG application |
UniScale | 0.627 | 1.209 | -2.20 | -12.12% | Significant loss of detail and increased blurring |
Table 1 provides a side-by-side comparison of each evaluated upscaling model against the key performance metrics. The mSSIM ratio reflects how well the upscaled image maintains structural similarities with the original image, with higher values indicating better preservation. Ultrasharp_4X (0.939) and BSRGAN (0.953) show excellent structural preservation, while UniScale (0.627) performs poorly. The NRMSE ratio evaluates the error level relative to the original image, where a value close to 1 indicates a minimal error. Ultrasharp_4X (1.036) and BSRGAN (1.047) perform best in this metric. The PSNR improvement quantifies the change in image quality, with values closer to zero indicating better preservation. While all models showed some degradation, BSRGAN (-0.70, -3.84%) and Ultrasharp_4X (-0.92, -5.02%) showed the least degradation.
Additionally, the Visual Inspection column assesses the ability of each model to preserve the essential details and integrity of the original image. Ultrasharp_4X demonstrated balanced performance across all metrics. Its mSSIM ratio of 0.939 indicates excellent structural preservation, whereas an NRMSE ratio of 1.036 suggests minimal error introduction. Although it shows a slight PSNR degradation (-0.92, -5.02%), this is less severe compared to the other models. Crucially, visual inspection confirmed that Ultrasharp_4X produced images most accurate to the original, preserving essential details and structural integrity without introducing noticeable artifacts.
To further evaluate the performance of each model, we conducted a detailed analysis of various image statistics and texture metrics, as presented in Table 2.
Table 2. Statistics, texture and contrast metrics comparison of upscaling models.
Metric | Original | Low quality | Ultrasharp | BSRGAN | NMKD | NOMOS | PixTransform | UniScale |
---|---|---|---|---|---|---|---|---|
Statistics - min | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Statistics - max | 1 | 1 | 1 | 1 | 1 | 1 | 0.816 | 1 |
Statistics - mean | 0.208 | 0.191 | 0.179 | 0.175 | 0.197 | 0.193 | 0.100 | 0.157 |
Statistics - std | 0.201 | 0.161 | 0.169 | 0.161 | 0.194 | 0.194 | 0.072 | 0.200 |
Statistics - median | 0.133 | 0.137 | 0.118 | 0.125 | 0.129 | 0.122 | 0.078 | 0.082 |
Texture - contrast | 1299.211 | 333.988 | 485.215 | 400.138 | 1228.485 | 995.677 | 127.484 | 1630.773 |
Texture - dissimilarity | 20.374 | 6.418 | 8.791 | 7.487 | 19.226 | 15.052 | 6.414 | 20.866 |
Texture - homogeneity | 0.110 | 0.646 | 0.357 | 0.410 | 0.131 | 0.203 | 0.239 | 0.174 |
Texture - energy | 0.022 | 0.078 | 0.052 | 0.056 | 0.023 | 0.035 | 0.045 | 0.039 |
Texture - correlation | 0.753 | 0.901 | 0.869 | 0.882 | 0.749 | 0.796 | 0.811 | 0.687 |
Contrast - RMS | 0.959 | 0.836 | 0.940 | 0.912 | 0.976 | 0.994 | 0.708 | 1.259 |
Contrast - Michelson | 1.000 | 1.000 | 0.994 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
Contrast - mean intensity | 0.209 | 0.192 | 0.179 | 0.176 | 0.198 | 0.194 | 0.102 | 0.159 |
Contrast - intensity variance | 0.040 | 0.026 | 0.028 | 0.026 | 0.037 | 0.037 | 0.005 | 0.040 |
Table 2 provides a detailed comparison of various metrics between the original high-quality image, low-quality image, and all the upscaled images. This comprehensive analysis allows us to evaluate how each model preserves or enhances different aspects of an image.
In terms of basic statistics, all models maintained the same minimum intensity (0.000) as the original and low-quality images. However, there were notable differences in the mean and median intensities across the models. NMKD (mean: 0.197, median: 0.129) and NOMOS (mean: 0.193, median: 0.122) maintained mean intensities closest to the original (0.208), best preserving the overall brightness. Ultrasharp_4X (mean: 0.179, median: 0.118) and BSRGAN (mean: 0.175, median: 0.125) show slightly lower values, while PixTransform (mean: 0.100, median: 0.078) and UniScale (mean: 0.157, median: 0.082) demonstrate more significant reductions in overall brightness.
The standard deviation of the pixel intensities provides insight into image contrast. UniScale (0.200) and NMKD/NOMOS (both 0.194) closely matched or slightly reduced the standard deviation of the original image (0.201), whereas PixTransform showed a marked reduction (0.072), indicating a significant loss of contrast.
For texture metrics, we observed varying performance across the models. NMKD (contrast: 1228.485, dissimilarity: 19.226) and UniScale (contrast: 1630.773, dissimilarity: 20.866) showed remarkably high contrast values, even exceeding those of the original image (contrast: 1299.211, dissimilarity: 20.374). This could indicate over-sharpening or noise amplification. Ultrasharp_4X (contrast: 485.215, dissimilarity: 8.791) and BSRGAN (contrast: 400.138, dissimilarity: 7.487) provide a more balanced improvement over the low-quality image (contrast: 333.988, dissimilarity: 6.418). NOMOS (contrast: 995.677, dissimilarity: 15.052) falls between these extremes, whereas PixTransform shows a significant reduction in contrast (127.484).
In terms of contrast metrics, NOMOS (0.994) and NMKD (0.976) achieved the highest RMS contrast, surpassing the original image (0.959). UniScale shows the highest value (1.259), which might indicate over-enhancement. Ultrasharp_4X (0.940) provided a more conservative enhancement, closely approximating the contrast of the original image. PixTransform showed the lowest RMS contrast (0.708), indicating a significant loss of overall contrast.
While Table 1 provides a comparison of upscaled models using fundamental image quality metrics (MSSIM, PSNR, and NRMSE), a more specialized analysis is necessary to fully understand how each upscaled image compares to the original high-quality image across various aspects of image quality. To this end, we employed a series of specialized metrics that focused on feature similarity, visual information fidelity, edge preservation, texture similarity, and intensity distribution. Table 3 presents the results of these analyses. These metrics offer complementary insights into how well each upscaling method preserves or enhances the different aspects of the original image quality.
Table 3. Specialized Image Quality Metrics for Upscaled vs. Original Image Comparison.
Metric | Original vs Low | Ultrasharp_4X | BSRGAN | NMKD | NOMOS | PixTransform | UniScale |
---|---|---|---|---|---|---|---|
FSIM | 0.884 | 0.910 | 0.904 | 0.914 | 0.919 | 0.918 | 0.896 |
VIF | 0.046 | 0.062 | 0.054 | 0.090 | 0.103 | 0.089 | 0.117 |
EPR | 0.042 | 0.019 | 0.017 | 0.013 | 0.016 | 0.673 | 0.014 |
LBP_Similarity | 0.471 | 0.758 | 0.836 | 0.886 | 0.693 | 0.960 | 0.891 |
Histogram_Intersection | 0.898 | 0.894 | 0.884 | 0.959 | 0.863 | 0.708 | 0.728 |
Histogram_Correlation | 0.964 | 0.962 | 0.937 | 0.990 | 0.935 | 0.839 | 0.722 |
KL_Divergence | 0.130 | 0.049 | 0.084 | 0.014 | 0.086 | 1.377 | 0.237 |
Table 3 provides additional insights based on the comparison metrics. FSIM scores were high across all models (ranging from 0.896 to 0.919), with NOMOS slightly outperforming the others. VIF scores show more variation, with UniScale (0.117) scoring the highest, followed by NOMOS (0.103) and NMKD (0.090). LBP_Similarity showed significant improvements for all models compared to the low-quality image (0.471), with PixTransform (0.960) and UniScale (0.891) scoring the highest.
Histogram-based metrics are particularly strong for NMKD, with high scores in Histogram_Intersection (0.959) and Histogram_Correlation (0.990), suggesting that it is highly effective at preserving the overall intensity distribution of the original image. The Kullback-Leibler Divergence shows NMKD (0.014) and Ultrasharp_4X (0.049) outperforming other models, indicating better preservation of the original image's intensity distribution. However, it is important to note that PixTransform showed a higher KL_Divergence (1.377), suggesting less similarity to the original distribution in this aspect.
In our exploration of various upscaling techniques, we initially considered the PixTransform-guided upscaling approach, which has the potential to leverage high-quality SHG images as references for improving the upscaling process. Theoretically, this method offers a promising avenue for enhancing the resolution and detail of P-SHG images, which is critical for accurately identifying and analyzing collagen fiber orientation and other microstructural details. However, the unique characteristics of P-SHG imaging, in which image properties such as signal intensity and fiber orientation dynamically change with varying laser input angles, present unforeseen challenges. During preliminary trials, we observed that while PixTransform effectively filled in missing details in regions of low signal-to-noise ratio (SNR) or where details were obscured owing to low resolution, it did so without accounting for the critical angle-dependent variation characteristics of P-SHG images. Specifically, the guided upscaling process, in its attempt to interpolate and enhance image details based on high-quality references, inadvertently introduced artifacts and inaccuracies by “filling in the gaps” in a manner inconsistent with actual, angle-dependent SHG signal variations. This discrepancy arises from the inherent design of the model to generalize from the reference images, leading to misrepresentations where P-SHG imaging relies on precise laser angle-specific signal variations to accurately delineate fiber orientations. The resultant images, although visually improved in terms of sharpness and resolution, misrepresented the underlying biological structures by overlaying or amplifying details that did not align with the actual orientation and distribution of collagen fibers, as dictated by varying the laser angles.
Furthermore, our exploration was extended to the BSRGAN, another sophisticated upscaling model known for its impressive enhancements in various imaging contexts. Despite its capabilities, BSRGAN failed to meet the stringent requirements of accuracy and detail preservation in P-SHG image upscaling. Similar to guided upscaling attempts, BSRGAN introduced alterations that were detrimental to the integrity of our imaging technique, rendering it an unsuitable option. A visual comparison of the upscaling methods elucidates the distinctions in the performance and outcome quality, as shown in Fig. 2.
Fig. 2.
Comparative analysis of upscaling models for P-SHG imaging. This figure illustrates the side-by-side comparison of a) original high-quality and b) low-quality SHG images against images upscaled using various models, including c) BSRGAN, d) Nomos2K, e) Ultrasharp_4X, f) NMKD, g) guided upscaling via PixTransform, and h) UniScale.
This comparative analysis highlighted the necessity of selecting an upscaling model that not only enhances image resolution but also has an acute sensitivity to the nuances of scientific imaging. The challenges encountered with guided upscaling and BSRGAN further reinforce the importance of a tailored approach, particularly for specialized imaging techniques such as P-SHG, where precision and detail fidelity are non-negotiable. Implementing "Ultrasharp_4X" through the ChaiNNer program marks a significant step toward democratizing advanced P-SHG imaging enhancement. Although such deep-learning tools are powerful, their accessibility has historically been limited; ChaiNNer, however, is optimized for ease of use and requires minimal deep-learning expertise from users. Hardware requirements were clearly documented, and the existing computational resources of most modern research laboratories were found to be sufficient for basic operations.
3.2. Histological images
The histological images and their corresponding SHG imaging counterparts are shown in Fig. 3.
Fig. 3.
Histological and SHG Images of both samples provide a comprehensive view of tissue microstructure.
Comparing histological images with their SHG imaging counterparts can be immensely helpful in providing a comprehensive view of tissue structure and organization. This combined approach offers a more holistic understanding of tissue architecture. This integration helps during the upscaling process by providing structural guidance from the histological images, so that the enhanced SHG images maintain the structural fidelity of the tissue.
3.3. Original vs. upscaled SHG images
The original image of the sample, along with the low-quality image and its upscaled counterpart, is shown in Fig. 4.
Fig. 4.
This figure includes three categories of images: original high quality (1a,2a), original low quality (1b,2b) and upscaled images from two different samples (1c,2c). The original high-quality images (1a,2a) had a resolution of 1800 × 800 pixels, low-quality images (1b,2b) had a resolution of 225 × 100 pixels, and upscaled images (1c,2c) had a resolution of 3600 × 1600 pixels.
The original high-quality images were characterized by a resolution of 1800 × 800 pixels, which provided a substantial amount of detail and clarity in each image. These images were captured with high precision and provided rich visual content. The imaging time for each image was approximately 18 min, given the speed of the scanning stage and the chosen pixel size of 10 µm. Although acceptable for a single-image application, P-SHG requires 18 images in our case; therefore, applying the same imaging scenario would take over 4 h of imaging per sample, which also translates to constant laser-sample interaction that can damage the sample. In contrast, the original low-quality images were acquired at a significantly reduced resolution of 225 × 100 pixels. This lower resolution implies a substantial loss of detail and sharpness compared with their high-quality counterparts.
However, capturing each image takes approximately 45 s, meaning that we can capture all 18 images for P-SHG in roughly the same time it takes to capture a single high-quality image for one polarization. Unfortunately, the P-SHG analysis method used does not perform well on low-resolution images; therefore, the loss of detail and sharpness must be addressed. We therefore applied image upscaling to the low-quality images using the Ultrasharp_4X model, which is based on ESRGAN. As mentioned, other upscaling models were also applied to the images; however, based on the results, the Ultrasharp_4X model provided the best upscaled images for our use case. We also tested using high-quality images as references to upscale the 18 low-quality images. This method did not work well because, in P-SHG, changing the laser input angle changes the SHG signal according to fiber alignment. We observed that the model attempted to fill in missing intensities and omit specific pixels to shape the image based on the reference image; therefore, individually upscaling each P-SHG image was optimal for our application. By applying Ultrasharp_4X twice, we enhanced the resolution of the images by 16× and obtained higher-resolution upscaled images of 3600 × 1600 pixels. While the upscaled images appear more detailed and visually larger than the original images, they can suffer from quality degradation owing to the interpolation and extrapolation involved in the upscaling process. Therefore, we performed the detailed quality-metric controls mentioned in the Introduction to ensure that the integrity of the information was intact.
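The two consecutive 4× passes change the image geometry as follows. Here `np.kron` (plain nearest-neighbor pixel repetition) is used only as a stand-in for the learned ESRGAN upscaler, to show the resulting dimensions rather than the image quality.

```python
import numpy as np

def upscale4x_nearest(img):
    """Stand-in for one Ultrasharp_4X pass: ESRGAN predicts the new pixels,
    whereas np.kron merely repeats them to illustrate the geometry."""
    return np.kron(img, np.ones((4, 4)))

low = np.zeros((100, 225))      # low-quality frame, (rows, cols)
once = upscale4x_nearest(low)   # (400, 900) after the first 4x pass
twice = upscale4x_nearest(once) # (1600, 3600): 16x the linear resolution
print(twice.shape)              # (1600, 3600)
```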
3.4. Quality control
In the quality control section of our study on image upscaling, we meticulously assessed the effectiveness of ESRGAN in improving the quality of the low-resolution P-SHG images. Our evaluation strategy encompassed a blend of no-reference and full-reference image quality metrics, supplemented by statistical analysis through analysis of variance (ANOVA), to provide a holistic understanding of the upscaled image quality in relation to their original high-quality counterparts.
3.4.1. No-reference quality metrics
We began with no-reference quality metrics, specifically the Naturalness Image Quality Evaluator (NIQE) and the Perceptual Image Quality Evaluator (PIQE), which assess image quality without the need for a reference image. These metrics are particularly useful for evaluating the perceptual quality of the upscaled images. The findings are summarized in Table 4.
Table 4. No-reference quality metrics.
| Sample | Method | Source | Prediction | Ground |
|---|---|---|---|---|
| a | NIQE | 8.940 | 3.186 | 6.707 |
| a | PIQE | 40.797 | 23.234 | 46.404 |
| b | NIQE | 9.908 | 2.749 | 7.532 |
| b | PIQE | 89.992 | 31.906 | 52.931 |
The lower scores for the predicted images across both the NIQE and PIQE metrics suggest an enhancement in image quality post-upscaling. This indicates that our method successfully improved the perceptual quality of the images, making them more natural and visually pleasing than the original high-quality (ground) images. These results confirm the effectiveness of our upscaling method, although it is important to note the potential difference between computational assessments of quality and human perception. Using PIQE and NIQE in this context is appropriate because they are no-reference image quality metrics, ideal for evaluating the quality of upscaled images when no high-quality original is available for comparison. Their application offers a method for quantitatively assessing improvements in image quality that may not be immediately apparent from visual inspection alone. Despite the concern that these metrics might be optimized for “computer perception,” the lower scores for the predicted images compared with the source images suggest a successful enhancement. However, the discrepancy between these scores and human perception highlights the importance of using a combination of metrics, including full-reference metrics such as MS-SSIM, PSNR, and NRMSE, to obtain a comprehensive evaluation of image quality post-upscaling.
3.4.2. Full-reference quality metrics
Next, we assessed image quality using the full-reference metrics MS-SSIM, PSNR, and NRMSE. These metrics require a reference image for comparison and offer different perspectives on image quality, focusing on structural similarity, signal fidelity, and error. The results are summarized in Table 5.
Table 5. Full-reference quality metrics.
| Sample | Method | Source | Prediction |
|---|---|---|---|
| a | MS-SSIM | 0.33 | 0.31 |
| a | PSNR (dB) | 18.31 | 17.39 |
| a | NRMSE | 0.28 | 0.29 |
| b | MS-SSIM | 0.01 | 0.01 |
| b | PSNR (dB) | 9.14 | 9.14 |
| b | NRMSE | 0.56 | 0.56 |
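Of the full-reference metrics above, PSNR and NRMSE are simple to reproduce; the sketch below assumes an 8-bit dynamic range and range-normalized RMSE, which are common conventions but not necessarily the exact settings used in our pipeline (MS-SSIM is omitted, as it requires multi-scale filtering).

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB, assuming an 8-bit dynamic range."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def nrmse(ref: np.ndarray, test: np.ndarray) -> float:
    """RMSE normalized by the dynamic range of the reference image."""
    err = np.sqrt(np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2))
    return float(err / (ref.max() - ref.min()))

# Toy example: an intensity ramp and the same ramp offset by 10 gray levels.
ref = np.tile(np.arange(256, dtype=np.float64), (16, 1))
noisy = ref + 10.0
```
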
The similar MS-SSIM, PSNR, and NRMSE values between the source and prediction images for both samples underscore the capability of our upscaling algorithm to maintain the structural integrity and signal fidelity of the images. Although there were slight variations in some metrics, the overall similarity in the scores suggests that our method is adept at enhancing the images without compromising the original quality. Building on a detailed examination of both the no-reference and full-reference quality metrics, we further enriched our analysis by conducting ANOVA to statistically ascertain the differences in image quality across the Source, Prediction, and Ground groups. This statistical approach allowed us to rigorously test for significant variations in the image quality resulting from our upscaling process. Below, we integrate the ANOVA findings with the previously discussed quality metric evaluations.
3.4.3. ANOVA results
After evaluating the image quality using both no-reference and full-reference metrics, we performed ANOVA to statistically compare these metrics across different image groups (source vs. prediction). ANOVA was used to identify any statistically significant differences in the image quality, thereby providing a quantitative basis for evaluating the efficacy of our upscaling methods. The results are presented in Table 6.
Table 6. ANOVA results.
| Metric Category | Metric Details | F-Value Range | P-Value Range | Highest Effect Size (η²) |
|---|---|---|---|---|
| No-reference quality metrics | NIQE and PIQE combined | 0.654 | 0.543 | 0.109 |
| Full-reference quality metrics | MS-SSIM, PSNR, and NRMSE combined | 0.003 to 0.006 | 0.948 to 0.982 | 0.001 |
| Texture | Various texture metrics | 0.442 to 1.651 | 0.216 to 0.811 | 0.248 (Homogeneity) |
| Contrast | Various contrast metrics | 0.233 to 1.199 | 0.359 to 0.941 | 0.194 (RMS Contrast) |
| Comparison | Various comparison metrics | 0.087 to 2.239 | 0.112 to 0.993 | 0.309 (Hist. Correlation) |
In Table 6, the F-value represents the ratio of the variance between groups to the variance within groups, with larger values indicating greater differences between groups [45]. A p-value indicates the probability of obtaining test results at least as extreme as the observed results, assuming that the null hypothesis is correct [45]. A p-value less than 0.05 is typically considered statistically significant. The effect size (η2) quantifies the magnitude of the difference between groups with values of 0.01, 0.06, and 0.14 typically considered small, medium, and large effects, respectively [46].
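These quantities follow from a standard one-way ANOVA; below is a minimal numpy-only sketch of the F statistic and η² (the p-value additionally requires the CDF of the F distribution, e.g., scipy.stats.f.sf), using hypothetical scores rather than the study's data.

```python
import numpy as np

def one_way_anova(groups):
    """Return (F, eta_squared) for a one-way ANOVA over the given groups.

    F is the ratio of between-group to within-group variance; eta^2 is
    SS_between / SS_total (0.01 / 0.06 / 0.14 ~ small / medium / large).
    """
    groups = [np.asarray(g, dtype=np.float64) for g in groups]
    all_vals = np.concatenate(groups)
    grand_mean = all_vals.mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    f_value = (ss_between / df_between) / (ss_within / df_within)
    eta_sq = ss_between / (ss_between + ss_within)
    return f_value, eta_sq

# Hypothetical metric scores for three image groups (not the study's data).
f_val, eta2 = one_way_anova([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
```
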
The ANOVA results indicated no statistically significant differences between the source and prediction groups or among upscaling methods for any set of metrics (p > 0.05). However, the variation in the F-values and effect sizes (η2) suggests practical differences that warrant consideration. The no-reference quality metrics (NIQE and PIQE) showed a medium effect size (η2 ≈ 0.109), indicating a noticeable impact on perceptual image quality. In contrast, the full-reference quality metrics (MS-SSIM, PSNR, and NRMSE) showed a very small effect size (η2 ≈ 0.001), suggesting high preservation of structural similarity and signal fidelity.
Among the specific metric categories, comparison metrics, particularly Histogram Correlation, showed the largest effect size (η2 = 0.309), followed by texture metrics (homogeneity, η2 = 0.248) and contrast metrics (RMS Contrast, η2 = 0.194). These moderate effect sizes suggest practical differences in these aspects of image quality across the upscaling methods, despite the lack of statistical significance.
It is important to note that a lack of statistical significance does not necessarily mean that there are no meaningful differences. This may be due to several factors: our relatively small sample size, which can limit the power of statistical tests; high variability within groups; and the nature of the improvements made by our upscaling method, which may be consistent but subtle.
Despite the lack of statistical significance, the moderate effect sizes observed for some metrics suggest practical differences that warrant consideration when selecting an upscaling method for specific P-SHG imaging applications. These findings highlight the importance of considering both statistical and practical significance in evaluating imaging enhancement techniques.
Combining the no-reference and full-reference quality metrics with the ANOVA results provided a comprehensive validation of our upscaling methods. This analysis demonstrates that our ESRGAN-based approach can enhance low-resolution images while preserving their quality. The lack of statistically significant differences, coupled with the moderate effect sizes in certain metrics, suggests that the upscaling process does not significantly alter perceived or structural image quality. This validation confirms the efficacy of the method and underscores its potential applicability in bioimaging and beyond, where maintaining the image integrity is paramount. The nuanced differences revealed by the effect size analysis provide valuable guidance for optimizing upscaling methods for specific imaging contexts, ensuring that the most critical aspects of image quality are preserved in each application.
3.5. P-SHG analysis results
Before conducting P-SHG analysis, we performed CurveAlign measurements to determine whether low-quality images could be analyzed using this method [8]. Figure 5 summarizes the results for the two samples.
Fig. 5.
Comparative analysis using CurveAlign on samples: original high-quality (1a, 2a), low-quality (1b, 2b), and GAN-upscaled images (1c, 2c). CurveAlign accurately identifies the collagen fiber orientation in high-quality images (1a, 2a). In low-quality images (1b, 2b), the performance diminishes, with only larger recognizable fibers. However, the upscaled images (1c, 2c) show significantly improved analysis, with fiber orientation discernibility comparable to that of the original high-quality images. This demonstrates the efficacy of GAN-based upscaling in enhancing image analysis for CurveAlign.
In Fig. 5, we present a comparative analysis using CurveAlign software on two sets of samples: original high-quality images (1a, 2a), their lower-quality versions (1b, 2b), and images enhanced via GAN-based upscaling (1c, 2c). CurveAlign proficiently identifies the orientation of collagen fibers in high-quality images (1a, 2a), demonstrating the effectiveness of the software with images of adequate resolution and clarity. However, acquiring such images required 15 min of continuous laser exposure per image, totaling 4.5 hours for the 18 images necessary for P-SHG analysis. This extended exposure can damage the samples, leading to degradation and affecting the repeatability of experiments. Additionally, fresh samples risk drying out and altering their morphology if removed from their chemical bath for more than a few minutes, potentially reducing SHG intensity or extinguishing harmonophores. Therefore, minimizing the laser exposure and expediting the imaging times are desirable.
Analysis of the lower-quality images (Fig. 5, 1(b), 2(b)) revealed significant limitations in both CurveAlign and our custom P-SHG algorithm. These tools struggled to accurately discern the collagen fiber orientation, identifying only a few larger fibers. This highlights the challenges that image analysis software faces with suboptimal image quality, where the loss of detail severely limits the accuracy and comprehensiveness of the P-SHG analysis. Many existing P-SHG analysis tools are optimized for higher-resolution inputs, often failing to detect finer structures or misinterpreting noise as significant features when applied to low-resolution images (see Supplement 1, Figs. S1 and S2).
Remarkably, the GAN-upscaled images (Fig. 5, 1(c), 2(c)) showed a significant improvement, with CurveAlign's performance on these images being comparable to that on the original high-quality images. This comparative analysis underscores the necessity of our upscaling approach, rather than performing P-SHG analysis directly on low-resolution images. Our analysis demonstrates that the upscaled images provide a superior approximation of the original high-quality images across multiple metrics. For instance, the FSIM improved from 0.884 (low quality) to 0.910 (Ultrasharp_4X), and the LBP similarity increased from 0.471 to 0.758. Although upscaling does not recover all fine details, it strikes a balance between detail preservation and noise reduction.
The significant improvements in fiber orientation discernibility, as seen in Fig. 5(1c, 2c), clearly demonstrate the value of this upscaling approach. These enhancements are crucial for accurate P-SHG analysis, allowing for better differentiation of collagen structures and more reliable orientation measurements. Our approach leverages the speed of low-resolution imaging while obtaining analysis results that closely resemble those from high-resolution images, offering a pragmatic solution to the trade-off between imaging speed and analysis accuracy in P-SHG studies. In conclusion, this GAN-based upscaling approach markedly enhances the utility of lower-quality images for detailed analysis, extending the applicability of CurveAlign software and other tools to a broader spectrum of image qualities. It provides an efficient solution that combines shorter laser exposure times with image upscaling, overcoming the limitations posed by lower-quality images and technical constraints of sample preparation, and potentially opens new avenues for rapid, nondestructive P-SHG imaging in various biological applications.
Next, images were captured at 18 polarization states, spanning 0°–170° in 10° increments, with synchronization achieved using a custom Python program. The initial phase of our study attempted to conduct the analysis directly on low-quality images; however, this approach encountered substantial obstacles owing to the significant loss of detail, which compromised the accuracy of our analyses. To circumvent this issue, each image was individually upscaled using the Ultrasharp_4X model, thereby providing the resolution and clarity essential for accurate P-SHG analysis. A custom MATLAB script, inspired by the foundational work referenced in [8,47,48], was pivotal for processing the upscaled P-SHG images. This script employs a spatial FFT algorithm to execute a Fourier transform on intensity measurements across the different angles. For further details, consult [47,48]. Figure 6 summarizes the results of the analysis.
Fig. 6.
P-SHG imaging of collagen fiber orientation in mammary glands. Panels (a) and (b) display the SHG signals of two distinct tissues, visualized in a range of colors corresponding to the collagen fiber orientations relative to the polarization angle of the incident light. The color wheel insets map these orientations, with each color representing a specific angle of polarization, illustrating the complex and heterogeneous arrangement of the fibers within the samples. Notably, both images contained dark regions inside the fibers, which were attributed to areas where the intensity of the SHG signal remained static, indicating a uniform orientation of collagen fibers over the polarization states captured. Owing to this uniformity, the spatial fast Fourier transform algorithm cannot discern variations, resulting in no color assignment in these specific regions.
Our P-SHG analysis protocol, detailed in Fig. 6, encompasses 18 SHG images (32-bit TIFF) taken in 10° steps from 0° to 170°. Each orientation angle (0–360°) is denoted by a distinct color, providing a visually intuitive depiction of the fiber orientation across the sample. In addition, a fibrillar histogram accompanies the images, offering a quantitative analysis of the fiber orientations. Some areas in the analyzed images appear darker than those in the original images. Dark regions within the fiber network arise because of the uniform fiber orientation across polarization states. This results from the smoothing effect of the upscaling algorithm and impedes the ability of the FFT to detect internal variations within fibers. However, it is noteworthy that the FFT remains adept at discerning the periphery of fibers and accurately identifying their borders. Importantly, the fiber borders were aligned with the interior, providing a coherent overall fiber direction. This consistency between the border and interior orientations ensures that, despite the limitations in detecting internal variations, the method still effectively conveys the general directionality of the fibers. For analyses in which specific internal areas of a fiber are of interest, a targeted focus on these regions is required to overcome the limitations of these smoother, homogeneous sections (see Fig. 7).
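The angular Fourier step at the heart of this analysis can be sketched in a few lines of Python; this is an illustrative stand-in for the MATLAB script, assuming the cos(2θ) modulation of the SHG intensity falls in the first harmonic of the 18-sample angular series.

```python
import numpy as np

def pshg_orientation(stack: np.ndarray) -> np.ndarray:
    """Per-pixel fiber orientation from a P-SHG stack.

    stack: (18, H, W) intensities at polarization angles 0-170 deg in
    10 deg steps. The SHG intensity is modeled here as varying with
    cos(2*(theta - theta0)), so the phase of the first FFT harmonic
    along the angle axis encodes the dominant orientation theta0.
    """
    spectrum = np.fft.fft(stack, axis=0)
    phase = np.angle(spectrum[1])            # phase of the first harmonic
    return (-np.degrees(phase) / 2.0) % 180.0

# Synthetic sanity check: every pixel modulated around theta0 = 30 deg.
angles = np.arange(18) * 10.0
signal = 0.5 + 0.5 * np.cos(np.deg2rad(2.0 * (angles - 30.0)))
stack = signal[:, None, None] * np.ones((1, 4, 4))
orientation = pshg_orientation(stack)
```
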
Fig. 7.
Comparative P-SHG Analysis Across Three ROIs. Each row represents a distinct region of interest (ROI) from different samples, showcasing original high-quality images (20X objective), low-quality images initially captured with a 10X objective then digitally zoomed and cropped, and their GAN-upscaled counterparts. Despite the initial lower resolution, upscaling restores detail and smoothness, yielding a fiber orientation analysis comparable to the original high-quality images. Normalized intensity vs. laser input angle graphs for each set illustrate the consistency of P-SHG responses across all imaging modalities, affirming the accuracy of collagen fiber orientation details in the upscaled images.
Furthermore, dark areas around the sample resulted from the deliberate removal of background elements and non-essential muscle structures surrounding the fibers, a step taken to enhance the clarity and focus of the collagen fiber analysis. Figure 6(a) shows a network of collagen fibers with varying orientations, as indicated by the spectrum of colors present in the tissue, where each color corresponds to a different fiber orientation relative to the polarization angle of the incident light. The color wheel inset serves as a reference for interpreting these orientations. The vibrant colors suggest a diverse and complex arrangement of fibers, with pink hues indicating fibers oriented in one direction and other colors representing different angles. Figure 6(b) displays a collage of colors indicating the orientation of collagen fibers. The presence of bright green and yellow hues suggests that the fibers have orientations different from those in the first image. The color intensity and distribution indicate that this sample may have a denser or more aligned collagen network than the first sample. The results, including the color wheel, orientation map, anisotropy parameter map, and histogram data, were meticulously compiled for each sample.
In Fig. 7, we focus on the analysis of regions of interest (ROIs) extracted from different samples and their counterparts enhanced through the upscaling process. This examination is pivotal for assessing the fidelity with which upscaling preserves the structural and optical properties essential for accurate P-SHG analysis. For our analysis, images of the selected P-SHG ROIs were captured using a 20X objective, which is optimal for resolving the intricate patterns of collagen fiber orientation while ensuring adequate field coverage. Notably, the images earmarked for upscaling were initially obtained using a 10X objective before being digitally zoomed and cropped. This approach was strategically employed for low-quality images to simulate conditions in which high-resolution data are not readily available or feasible to obtain, thus mimicking a real-world scenario in which upscaling could be particularly beneficial. Upon comparing the original and upscaled (yet zoomed and cropped) P-SHG images, a key observation was the smoothness of the upscaled images. This smoothness did not detract from the structural details within the images, but rather enhanced the visual clarity, making the interpretation of collagen fiber orientations more straightforward. More importantly, when we quantified the P-SHG response by plotting the normalized intensity against the laser input angle for both the original and upscaled images, we observed remarkably consistent responses. The graph corresponding to the upscaled P-SHG images exhibited a smoother curve, an effect attributable to the upscaling process, which tends to reduce noise and interpolate between data points to create a more continuous representation of the intensity response.
Crucially, despite the smoother appearance of the graphs in the upscaled images, the overall shape and trend of the P-SHG intensity responses remained unchanged. This congruence indicates that the upscaling process, while enhancing the visual quality of the images, did not alter the fundamental biophysical properties captured by P-SHG imaging. Thus, the fidelity of fiber orientation details in the upscaled images was validated, underscoring the utility of upscaling as a viable method for improving image quality in P-SHG analysis without compromising the accuracy of collagen fiber orientation information.
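This consistency check, extracting a normalized intensity-versus-angle curve from matching ROIs in the original and upscaled stacks, can be sketched as follows; the 4× nearest-neighbor resize is a stand-in for the GAN upscaler, and all data are synthetic.

```python
import numpy as np

def roi_response(stack: np.ndarray, roi: tuple) -> np.ndarray:
    """Normalized mean ROI intensity vs. laser input angle.

    stack: (18, H, W); roi: (row_slice, col_slice). Each of the 18
    polarization frames contributes one point on the response curve,
    normalized to [0, 1] for comparison across imaging modalities.
    """
    rows, cols = roi
    curve = stack[:, rows, cols].mean(axis=(1, 2))
    lo, hi = curve.min(), curve.max()
    return (curve - lo) / (hi - lo)

# Synthetic original vs. "upscaled" stacks sharing the same modulation.
angles = np.arange(18) * 10.0
modulation = 0.5 + 0.5 * np.cos(np.deg2rad(2.0 * (angles - 45.0)))
original = modulation[:, None, None] * np.ones((1, 8, 8))
upscaled = np.kron(original, np.ones((1, 4, 4)))  # 4x resize stand-in

resp_orig = roi_response(original, (slice(0, 8), slice(0, 8)))
resp_up = roi_response(upscaled, (slice(0, 32), slice(0, 32)))
```

With matching ROIs, the two normalized curves coincide, mirroring the agreement observed between the original and upscaled P-SHG responses.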
4. Conclusion
In conclusion, our research has demonstrated significant advancements in whole-sample mammary gland P-SHG imaging, reducing the imaging time from a time-consuming 4.5 hours to a mere 13.5 minutes (a reduction of more than 95%). Acquiring 18 high-quality images suitable for P-SHG analysis is a time-intensive process that poses the risk of damage to samples, particularly those of considerable size. To mitigate these challenges, we propose an innovative method that involves capturing 18 low-quality images and subsequently enhancing their resolution using a GAN-based approach. This technique not only substantially reduces the required imaging time but also ensures the preservation of sample integrity during the imaging process. By leveraging the capabilities of GANs to generate high-resolution images from their lower-quality counterparts, this approach offers a promising alternative that balances the need for high-quality imaging with the imperative of minimizing potential harm to delicate samples. In our pursuit of image upscaling, we explored various models, ultimately selecting “Ultrasharp_4X,” based on ESRGAN, owing to its remarkable similarity to the original images. Although we initially considered using high-quality images as references for upscaling, this approach led to undesirable alterations, making it unsuitable for our specific application. This method saves substantial amounts of time and offers several advantages.
One of the most noteworthy advantages of our accelerated P-SHG imaging process is the substantial reduction in the laser exposure of the sample. Laser-induced photodamage is a concern when working with delicate biological specimens, and minimizing this risk is crucial for preserving the integrity and quality of the sample. Our faster imaging method minimizes the exposure time, reduces potential harm to the sample, and allows for extended observation without compromising the biological or material properties under investigation. Using this technique, we achieved fiber orientation analysis on par with that of high-quality images captured with a 20X objective. This accelerated process was complemented by a meticulous image analysis protocol, in which each angle of polarization was represented by a specific color on a wheel, translating into an intuitive visual depiction of the fiber orientation throughout the sample. Accompanying fibrillar histograms provide a quantitative analysis that enhances the interpretive depth of the study. Our results demonstrate the robustness of P-SHG responses and fidelity of collagen fiber orientation data within upscaled images. These findings were reinforced by a comparative analysis across three distinct ROIs, which confirmed that the GAN-based upscaling process preserved the integrity of the sample while enhancing the detail and smoothness of fiber alignment.
Furthermore, the expedited P-SHG imaging process allows us to reconsider the optical components of the imaging system. Because high-resolution imaging is not required in many of our applications, we can opt for more cost-effective objectives and imaging systems. This optimization translates to significant cost savings and lowers barriers to entry for researchers and institutions interested in utilizing P-SHG imaging technology. This affordability and accessibility expand the potential applications of P-SHG imaging in diverse fields and communities. Our analysis confirmed the accuracy of the results obtained using accelerated imaging. By comparing the P-SHG images generated using our streamlined approach with those produced using the traditional method, we found that the results were consistent with the characteristics of the sample. Reducing the laser exposure and equipment costs ensures that P-SHG imaging can be adopted more widely, thereby advancing scientific understanding and innovation across disciplines. Our work paves the way for discoveries and breakthroughs fueled by the efficiency and accessibility of P-SHG imaging. Therefore, there are promising directions for future research. New or emerging GAN architectures may offer more precise upscaling capabilities, particularly for images with unique challenges that are not fully addressed by current models. The development of automated analysis tools tailored for upscaled images would ensure that the upscaling process enhances data interpretation. The incorporation of AI-driven methods for identifying and quantifying specific features in upscaled images could streamline the analysis of complex biological structures. In addition, the effectiveness of our method was demonstrated through mammary gland tissue imaging.
Extending this approach to other tissues or conditions, such as fibrotic changes in liver disease or collagen alterations in cardiovascular health, could significantly broaden its applicability. This expansion would not only validate the versatility of the proposed method but also contribute valuable insights into the structural dynamics of various diseases. Moreover, establishing guidelines for the ethical use of AI in scientific imaging will ensure the integrity of data. Developing quality standards for upscaled images will facilitate their acceptance and use in critical research endeavors.
Supporting information
Acknowledgment
Ethics statement. Animal studies were conducted according to the procedures provided by the Canadian Council on Animal Care. The protocol (2005-02) was reviewed and approved by the Institutional Committee for Animal Protection of the Laboratoire National de Biologie Expérimentale (LNBE), the animal facilities based at the Institut National de Recherche Scientifique (INRS).
Funding
Canada Foundation for Innovation (10.13039/501100000196); Fonds de recherche du Québec – Nature et technologies (10.13039/501100003151); Natural Sciences and Engineering Research Council of Canada (10.13039/501100000038); New Frontiers Research Fund; NSERC CREATE.
Disclosures
The authors declare that they have no conflicts of interest.
Data availability
The data, codes, and materials underlying the results presented in this paper are available for full transparency and reproducibility at [49]. The ChaiNNer program can be downloaded from [50]. All the models used in this study can be downloaded from [51], except for PixTransform which can be downloaded from [52].
Supplemental document
See Supplement 1 for supporting content.
References
- 1.Macias H., Hinck L., “Mammary gland development,” Wiley Interdiscip. Rev. Dev. Biol. 1(4), 533–557 (2012). 10.1002/wdev.35 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2.Biswas S. K., Banerjee S., Baker G. W., et al. , “The Mammary gland: basic structure and molecular signaling during development,” Int. J. Mol. Sci. 23(7), 3883 (2022). 10.3390/ijms23073883 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3.Campbell J. J., Watson C. J., “Three-dimensional culture models of mammary gland,” Organogenesis 5(2), 43–49 (2009). 10.4161/org.5.2.8321 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Griffith L. G., Swartz M. A., “Capturing complex 3D tissue physiology in vitro,” Nat. Rev. Mol. Cell Biol. 7(3), 211–224 (2006). 10.1038/nrm1858 [DOI] [PubMed] [Google Scholar]
- 5.Schedin P., Hovey R. C., “Editorial: the mammary stroma in normal development and function,” J. Mammary Gland Biol. Neoplasia 15(3), 275–277 (2010). 10.1007/s10911-010-9191-z [DOI] [PMC free article] [PubMed] [Google Scholar]
- 6.Ingman W. V., Wyckoff J., Gouon-Evans V., et al. , “Macrophages promote collagen fibrillogenesis around terminal end buds of the developing mammary gland,” Dev. Dyn. 235(12), 3222–3229 (2006). 10.1002/dvdy.20972 [DOI] [PubMed] [Google Scholar]
- 7.Aghigh A., Bancelin S., Rivard M., et al. , “Second harmonic generation microscopy: a powerful tool for bio-imaging,” Biophys. Rev. 15(1), 43–70 (2023). 10.1007/s12551-022-01041-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Aghigh A., Preston S. E. J., Jargot G., et al. , “Nonlinear microscopy and deep learning classification for mammary gland microenvironment studies,” Biomed. Opt. Express 14(5), 2181–2195 (2023). 10.1364/BOE.487087 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9.Rivard M., Laliberté M., Bertrand-Grenier A., et al. , “The structural origin of second harmonic generation in fascia,” Biomed. Opt. Express 2(1), 26–36 (2011). 10.1364/BOE.2.000026 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Stoller P., Reiser K. M., Celliers P. M., et al. , “Polarization-modulated second harmonic generation in collagen,” Biophys. J. 82(6), 3330–3342 (2002). 10.1016/S0006-3495(02)75673-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Campagnola P. J., Loew L. M., “Second-harmonic imaging microscopy for visualizing biomolecular arrays in cells, tissues and organisms,” Nat. Biotechnol. 21(11), 1356–1360 (2003). 10.1038/nbt894 [DOI] [PubMed] [Google Scholar]
- 12.Stanciu S. G., Ávila F. J., Hristu R., et al. , “A study on image quality in polarization-resolved second harmonic generation microscopy,” Sci. Rep. 7(1), 15476 (2017). 10.1038/s41598-017-15257-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13.Latour G., Gusachenko I., Kowalczuk L., et al. , “In vivo structural imaging of the cornea by polarization-resolved second harmonic microscopy,” Biomed. Opt. Express 3(1), 1–15 (2012). 10.1364/BOE.3.000001 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Cisek R., Joseph A., Harvey M., et al. , “Polarization-sensitive second harmonic generation microscopy for investigations of diseased collagenous tissues,” Front. Phys. 9, 1 (2021). 10.3389/fphy.2021.726996 [DOI] [Google Scholar]
- 15. Lloyd-Lewis B., “Multidimensional imaging of mammary gland development: a window into breast form and function,” Front. Cell Dev. Biol. 8, 203 (2020). 10.3389/fcell.2020.00203
- 16. Katsuno-Kambe H., Teo J. L., Ju R. J., et al., “Collagen polarization promotes epithelial elongation by stimulating locoregional cell proliferation,” eLife 10, e67915 (2021). 10.7554/eLife.67915
- 17. Ouellette J. N., Drifka C. R., Pointer K. B., et al., “Navigating the collagen jungle: the biomedical potential of fiber organization in cancer,” Bioengineering 8(2), 17 (2021). 10.3390/bioengineering8020017
- 18. Kistenev Y. V., Nikolaev V. V., Kurochkina O. S., et al., “Application of multiphoton imaging and machine learning to lymphedema tissue analysis,” Biomed. Opt. Express 10(7), 3353–3368 (2019). 10.1364/BOE.10.003353
- 19. Huttunen M. J., Hassan A., McCloskey C. W., et al., “Automated classification of multiphoton microscopy images of ovarian tissue using deep learning,” J. Biomed. Opt. 23(06), 1–7 (2018). 10.1117/1.JBO.23.6.066002
- 20. Hall G., Liang W., Li X., “Fitting-free algorithm for efficient quantification of collagen fiber alignment in SHG imaging applications,” Biomed. Opt. Express 8(10), 4609–4620 (2017). 10.1364/BOE.8.004609
- 21. Lee S., Negishi M., Urakubo H., et al., “Mu-net: Multi-scale U-net for two-photon microscopy image denoising and restoration,” Neural Netw. 125, 92–103 (2020). 10.1016/j.neunet.2020.01.026
- 22. Wang G., Zhan H., Luo T., et al., “Automated ovarian cancer identification using end-to-end deep learning and second harmonic generation imaging,” IEEE J. Sel. Top. Quantum Electron. 29, 1–9 (2023). 10.1109/JSTQE.2022.3228567
- 23. Woessner A. E., Quinn K. P., “Improved segmentation of collagen second harmonic generation images with a deep learning convolutional neural network,” J. Biophotonics 15(12), e202200191 (2022). 10.1002/jbio.202200191
- 24. Pradhan P., Guo S., Ryabchykov O., et al., “Deep learning a boon for biophotonics?” J. Biophotonics 13(6), e201960186 (2020). 10.1002/jbio.201960186
- 25. Goodfellow I., Pouget-Abadie J., Mirza M., et al., “Generative adversarial nets,” in Advances in Neural Information Processing Systems 27 (NIPS, 2014).
- 26. Wang X., Yu K., Wu S., et al., “ESRGAN: enhanced super-resolution generative adversarial networks,” in Computer Vision – ECCV 2018 Workshops (2018).
- 27. Horé A., Ziou D., “Image quality metrics: PSNR vs. SSIM,” in 2010 20th International Conference on Pattern Recognition (2010), pp. 2366–2369.
- 28. Rouse D., Hemami S., “Understanding and simplifying the structural similarity metric,” in 2008 15th IEEE International Conference on Image Processing (2008), pp. 1188–1191.
- 29. Venkatanath N., Praneeth D., Bh M. C., et al., “Blind image quality evaluation using perception based features,” in 2015 Twenty First National Conference on Communications (NCC) (IEEE, 2015), pp. 1–6.
- 30. Mittal A., Soundararajan R., Bovik A. C., “Making a ‘completely blind’ image quality analyzer,” IEEE Signal Process. Lett. 20(3), 209–212 (2013). 10.1109/LSP.2012.2227726
- 31. “Intensity measurements — Microscopy for Beginners reference guide,” https://www.bioimagingguide.org/03_Image_analysis/Intensity.html.
- 32. Peli E., “Contrast in complex images,” J. Opt. Soc. Am. A 7(10), 2032–2040 (1990). 10.1364/JOSAA.7.002032
- 33. Iqbal N., Mumtaz R., Shafi U., et al., “Gray level co-occurrence matrix (GLCM) texture based crop classification using low altitude remote sensing platforms,” PeerJ Comput. Sci. 7, e536 (2021). 10.7717/peerj-cs.536
- 34. Zhang L., Zhang L., Mou X., et al., “FSIM: a feature similarity index for image quality assessment,” IEEE Trans. Image Process. 20(8), 2378–2386 (2011). 10.1109/TIP.2011.2109730
- 35. Sheikh H. R., Bovik A. C., “Image information and visual quality,” IEEE Trans. Image Process. 15(2), 430–444 (2006). 10.1109/TIP.2005.859378
- 36. Chen L., Jiang F., Zhang H., “Edge preservation ratio for image sharpness assessment,” in World Congress on Intelligent Control and Automation (WCICA) (IEEE, 2016). 10.1109/WCICA.2016.7578241
- 37. Pan Z., Hu S., Wu X., et al., “Adaptive center pixel selection strategy in Local Binary Pattern for texture classification,” Expert Syst. Appl. 180, 115123 (2021). 10.1016/j.eswa.2021.115123
- 38. Zhao W., Chellappa R., eds., “Chapter 19: Near real-time robust face and facial-feature detection with information-based maximum discrimination,” in Face Processing (Academic Press, 2006), pp. 619–646.
- 39. Plante I., Stewart M. K. G., Laird D. W., “Evaluation of mammary gland development and function in mouse models,” J. Vis. Exp. 53, 2828 (2011). 10.3791/2828
- 40. “4x UltraSharp,” https://openmodeldb.info/models/4x-UltraSharp.
- 41. “Model Database - Upscale Wiki,” https://upscale.wiki/w/index.php?title=Model_Database&oldid=1571.
- 42. “4x UniScaleV2 Sharp,” https://openmodeldb.info/models/4x-UniScaleV2-Sharp.
- 43. “Official Research Models - Upscale Wiki,” https://upscale.wiki/wiki/Official_Research_Models.
- 44. de Lutio R., D’Aronco S., Wegner J. D., et al., “Guided super-resolution as pixel-to-pixel transformation,” in 2019 IEEE/CVF International Conference on Computer Vision (ICCV) (2019). 10.1109/ICCV.2019.00892
- 45. Kim T. K., “Understanding one-way ANOVA using conceptual figures,” Korean J. Anesthesiol. 70(1), 22–26 (2017). 10.4097/kjae.2017.70.1.22
- 46. Sullivan G. M., Feinn R., “Using Effect Size—or Why the P Value Is Not Enough,” J. Grad. Med. Educ. 4(3), 279–282 (2012). 10.4300/JGME-D-12-00156.1
- 47. Teulon C., Gusachenko I., Latour G., et al., “Theoretical, numerical and experimental study of geometrical parameters that affect anisotropy measurements in polarization-resolved SHG microscopy,” Opt. Express 23(7), 9313–9328 (2015). 10.1364/OE.23.009313
- 48. Ducourthial G., Affagard J.-S., Schmeltz M., et al., “Monitoring dynamic collagen reorganization during skin stretching with fast polarization-resolved second harmonic generation imaging,” J. Biophotonics 12(5), e201800336 (2019). 10.1002/jbio.201800336
- 49. Aghigh A., Cardot J., Mohammadi M., et al., “Accelerating whole-sample polarization-resolved second harmonic generation imaging in mammary gland tissue via generative adversarial networks,” Zenodo (2024). 10.5281/zenodo.12788764
- 50. The chaiNNer Organization, “chaiNNer,” GitHub (2024), https://github.com/chaiNNer-org/chaiNNer.
- 51. OpenModelDB, “Open Model Database” (2024), https://openmodeldb.info/.
- 52. de Lutio R., D’Aronco S., Wegner J., et al., “Guided Super-Resolution as a Learned Pixel-to-Pixel Transformation,” GitHub (2020), https://github.com/prs-eth/PixTransform.
Data Availability Statement
The data, code, and materials underlying the results presented in this paper are available, for full transparency and reproducibility, at [49]. The chaiNNer program can be downloaded from [50]. All models used in this study can be downloaded from [51], except for PixTransform, which can be downloaded from [52].