Abstract
We report quantitative label-free imaging with phase and polarization (QLIPP) for simultaneous measurement of density, anisotropy, and orientation of structures in unlabeled live cells and tissue slices. We combine QLIPP with deep neural networks to predict fluorescence images of diverse cell and tissue structures. QLIPP images reveal anatomical regions and axon tract orientation in prenatal human brain tissue sections that are not visible with brightfield imaging. We report a variant of the U-Net architecture, the multi-channel 2.5D U-Net, for computationally efficient prediction of fluorescence images in three dimensions and over large fields of view. Further, we develop data normalization methods for accurate prediction of myelin distribution over large brain regions. We show that experimental defects in labeling the human tissue can be rescued with quantitative label-free imaging and a trained neural network model. We anticipate that the proposed method will enable new studies of architectural order at spatial scales ranging from organelles to tissue.
Research organism: Human, Mouse
eLife digest
Microscopy is central to biological research and has enabled scientists to study the structure and dynamics of cells and the components within them. Often, fluorescent dyes or trackers are used that can be detected under the microscope. However, this procedure can sometimes interfere with the biological processes being studied.
Now, Guo, Yeh, Folkesson et al. have developed a new approach to examine structures within tissues and cells without the need for a fluorescent label. The technique, called QLIPP, uses the phase and polarization of the light passing through the sample to get information about its makeup.
A computational model was used to decode the characteristics of the light and to provide information about the density and orientation of molecules in live cells and in brain tissue samples from mice and humans. This way, Guo et al. were able to reveal details that conventional microscopy would have missed. Then, a type of machine learning known as ‘deep learning’ was used to translate the density and orientation images into fluorescence images, which enabled the researchers to predict specific structures in human brain tissue sections.
QLIPP can be added as a module to a microscope and its software is available open source. Guo et al. hope that this approach can be used across many fields of biology, for example, to map the connectivity of nerve cells in the human brain or to identify how cells respond to infection. However, further work in automating other aspects, such as sample preparation and analysis, will be needed to realize the full benefits.
Introduction
The function of living systems emerges from the interaction of their components over spatial and temporal scales that span many orders of magnitude. Light microscopy is uniquely useful for recording the dynamic arrangement of molecules within the context of organelles, of organelles within the context of cells, and of cells within the context of tissues. The combination of fluorescence imaging and automated analysis of image content with deep learning (Moen et al., 2019; Belthangady and Royer, 2019; Van Valen et al., 2016) has opened new avenues for understanding complex biological processes. However, characterizing architecture and dynamics with fluorescence remains challenging in many important biological systems. The choice of label can introduce observation bias in the experiment and may perturb the biological process being studied. For example, labeling cytoskeletal polymers often perturbs their native assembly kinetics (Belin et al., 2014). Genetic labeling of human tissue and non-model organisms is not straightforward, and the labeling efficiency is often low. Labeling with antibodies or dyes can lead to artifacts and requires careful optimization of the labeling protocols. The difficulty of labeling impedes biological discoveries using these systems. By contrast, label-free imaging requires minimal sample preparation because it measures the sample’s intrinsic properties. Label-free imaging can visualize many biological structures simultaneously with minimal photo-toxicity and no photo-bleaching, making it particularly suitable for live-cell imaging. Measurements made without labels are often more robust since experimental errors associated with labeling are avoided. Multiplexed imaging with fluorescence and label-free contrasts enables characterization of the dynamics of labeled molecules in the context of organelles or cells. Thus, label-free imaging provides measurements complementary to fluorescence imaging for a broad range of biological studies, from analyzing the architecture of archival human tissue to characterizing organelle dynamics in live cells.
Classical label-free microscopy techniques such as phase contrast (Zernike, 1955), differential interference contrast (DIC) (Nomarski, 1955), and polarized light microscopy (Schmidt, 1926; Inoue, 1953) are qualitative. They turn specimen-induced changes in phase (shape of the wavefront) and polarization (the plane of oscillation of the electric field) of light into intensity modulations that are detectable by a camera. These intensity modulations are related to the specimen’s properties via a complex non-linear transformation, which makes them difficult to interpret. Computational imaging turns the qualitative intensity modulations into quantitative measurements of specimen properties with inverse algorithms based on models of image formation. Quantitative phase imaging (Popescu et al., 2006; Waller et al., 2010; Tian and Waller, 2015) measures optical path length, that is, specimen phase, which reports the density of the dry mass (Barer, 1952). Quantitative polarization microscopy in transmission mode reports the angular anisotropy of the optical path length, that is, retardance (Inoue, 1953; Oldenbourg and Mei, 1995; Mehta et al., 2013), and the axis of anisotropy, that is, orientation, without label.
Quantitative label-free imaging measures intrinsic properties of the specimen and provides insights into biological processes that may not be obtained with fluorescence imaging. For example, quantitative phase microscopy (Park et al., 2018) has been used to analyze membrane mechanics, density of organelles (Imai et al., 2017), cell migration, and, recently, fast propagation of action potentials (Ling et al., 2019). Similarly, quantitative polarization microscopy has enabled discovery of the dynamic microtubule spindle (Inoue, 1953; Keefe et al., 2003), analysis of retrograde flow of the F-actin network (Oldenbourg et al., 2000), imaging of white matter in adult human brain tissue slices (Axer et al., 2011a; Axer et al., 2011b; Menzel et al., 2017; Mollink et al., 2017; Zeineh et al., 2017; Henssen et al., 2019), and imaging of activity-dependent structural changes in brain tissue (Koike-Tani et al., 2019). Given the complementary information provided by specimen density and anisotropy, joint imaging of phase and retardance has also been attempted (Shribak et al., 2008; Ferrand et al., 2018; Baroni et al., 2020). However, current methods for joint imaging of density and anisotropy are limited in throughput due to the complexity of acquisition, or can only be used for 2D imaging due to the lack of accurate 3D image formation models. We sought to develop a computational imaging method for joint measurements of phase and retardance of live 3D specimens with a simpler light path and higher throughput.
In comparison to fluorescence measurements that provide molecular specificity, label-free measurements provide physical specificity. Obtaining biological insights from label-free images often requires identifying specific molecular structures. Recently, deep learning has enabled translation of qualitative and quantitative phase images into fluorescence images (Ounkomol et al., 2018; Christiansen et al., 2018; Rivenson et al., 2018a; Rivenson et al., 2019; Lee et al., 2019; Petersen et al., 2017). Among different neural network architectures, U-Net has been widely applied to image segmentation and translation tasks (Ronneberger et al., 2015; Milletari et al., 2016; Ounkomol et al., 2018; Lee et al., 2019). U-Net’s success arises primarily from its ability to exploit image features at multiple spatial scales and its use of skip connections between the encoding and decoding blocks. The skip connections give decoding blocks access to low-complexity, high-resolution features in the encoding blocks. In image translation, images from different modalities (label-free vs. fluorescence in our case) of the same specimen are presented to the neural network model. The neural network model learns the complex transformation from label-free to fluorescence images through the training process. The trained neural network model can predict fluorescence images from label-free images to enable analysis of the distribution of a specific molecule. The accuracy with which a molecular structure can be predicted depends not just on the model, but also on the dynamic range and the consistency of the contrast with which the structure is seen in the label-free data. Anisotropic structures that are not visible in phase imaging data cannot be learned from phase imaging data alone. Reported methods of image translation have not utilized optical anisotropy, which reports important structures such as cell membranes and axon bundles. Furthermore, previous work has mostly demonstrated prediction of single 2D fields of view. Volumetric prediction using 3D U-Net has been reported, but it is computationally expensive, such that downsampling the data at the expense of spatial resolution is required (Ounkomol et al., 2018). We sought to improve the accuracy of prediction of fluorescence images by using information contained in complementary measurements of density and anisotropy.
In this work, we report a combination of quantitative label-free imaging and deep learning models to identify biological structures from their density and anisotropy. First, we introduce quantitative label-free imaging with phase and polarization (QLIPP) that visualizes diverse structures by their phase, retardance, and orientation. QLIPP combines quantitative polarization microscopy (Oldenbourg and Mei, 1995; Shribak and Oldenbourg, 2003; Mehta et al., 2013) with the concept of phase from defocus (Streibl, 1984; Waller et al., 2010; Streibl, 1985; Noda et al., 1990; Claus et al., 2015; Jenkins and Gaylord, 2015a; Jenkins and Gaylord, 2015b; Soto et al., 2017), to establish a novel method for volumetric measurement of phase, retardance, and orientation (Figure 1A). Data generated with QLIPP can distinguish biological structures at multiple spatial and temporal scales, making it valuable for revealing the architecture of postmortem archival tissue and organelle dynamics in live cells. QLIPP’s optical path is simpler relative to earlier methods (Shribak et al., 2008), its reconstruction algorithms are more accurate, and its reconstruction software is open-source. QLIPP can be implemented on existing microscopes as a module and can be easily multiplexed with fluorescence. To translate the 3D distribution of phase, retardance, and orientation to fluorescence intensities, we implement a computationally efficient multi-channel 2.5D U-Net architecture (Figure 1B) based on a previously reported single-channel 2.5D U-Net (Han, 2017). We use QLIPP for imaging axon tracts and myelination in archival brain tissue sections at two developmental stages. Label-free measurement of anisotropy allowed us to visualize axon orientations across whole sections. We demonstrate that QLIPP data increases the accuracy of prediction of myelination in the developing human brain as compared to brightfield data. Finally, we demonstrate robustness of the label-free measurements to experimental variations in labeling, which leads to more consistent prediction of myelination than possible with the experimental staining. Collectively, we propose a novel approach for imaging architectural order across multiple biological systems and analyzing it with a judicious combination of physics-driven and data-driven modeling approaches.
Results
QLIPP provides joint measurement of specimen density and anisotropy
The light path of QLIPP is shown in Figure 1A. It is a transmission polarization microscope based on a computer-controlled liquid-crystal universal polarizer (Oldenbourg and Mei, 1995; Shribak and Oldenbourg, 2003; Mehta et al., 2013). QLIPP provides an accurate image formation model and a corresponding inverse algorithm for simultaneous reconstruction of specimen phase, retardance, and slow-axis orientation.
In QLIPP, specimens are illuminated with five elliptical polarization states for sensitive detection of the specimen’s retardance (Shribak and Oldenbourg, 2003; Mehta et al., 2013). For each illumination, we collect a Z-stack of intensities to capture the specimen’s phase information. Variations in the density of the specimen, for example the lower density of nuclei relative to the cytoplasm, cause changes in refractive index and distort the wavefront of the incident light. The wavefront distortions lead to detectable intensity modulations through interference in 3D space as the light propagates along the optical axis. Intensity modulations caused by isotropic density variations (specimen phase) can be captured by acquiring a stack of intensities along the optical (Z) axis (Streibl, 1984; Waller et al., 2010). Anisotropic variations in the specimen’s density result from alignment of molecules along a preferential axis, for example the lipid membrane has higher anisotropy relative to the cytoplasm due to the alignment of lipid molecules. This anisotropic density variation (specimen retardance) induces a polarization-dependent phase difference. Specimen retardance is often characterized by the axis along which the anisotropic material is densest (slow axis) or by the axis perpendicular to it (fast axis) (de Campos Vidal et al., 1980; Salamon and Tollin, 2001), and by the difference in specimen phase between these two axes. In addition, multiple scattering by the specimen can reduce the degree of polarization of light. The specimen retardance, slow-axis orientation, and degree of polarization can be measured by probing the specimen with light in different polarization states. We develop a forward model of this transformation using the formalism of partial polarization and the phase transfer function to describe the relation between specimen physical properties and detected intensities. We then leverage the above forward model to design an inverse algorithm that reconstructs quantitative specimen physical properties in 3D from the detected intensity modulations, as illustrated in Figure 1A.
First, we utilize the Stokes vector representation of partially polarized light (Born and Wolf, 2013; Bass et al., 2009; Azzam, 2016) to model the transformation from the specimen’s optical properties to acquired intensities (Equation 7). By inverting this transformation, we reconstruct 3D volumes of retardance, slow-axis orientation, brightfield, and degree of polarization. Proper background correction is crucial for detecting the low retardance of biological structures in the presence of a high, non-uniform background resulting from the optics or imaging chamber. We use a two-step background correction method (Materials and methods) to correct the non-uniform background polarization (Figure 2—figure supplement 2). In addition to retardance and slow-axis orientation, our use of the Stokes formalism enables reconstruction of brightfield and degree of polarization, in contrast to previous work that reconstructs just retardance and slow-axis orientation (Shribak and Oldenbourg, 2003; Mehta et al., 2013). The degree of polarization measures the fitness of our model to the experiment, as explained later, and the brightfield images enable reconstruction of specimen phase.
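To make the reconstruction concrete, the Stokes inversion can be sketched as a per-pixel least-squares problem. The following is a minimal sketch, not the reconstruct-order implementation: the instrument matrix `A` stands in for the calibrated mapping of Equation 7, and the closing expressions show one common parameterization of retardance and slow-axis orientation, which may differ from the exact expressions (and background correction) used in the paper.

```python
import numpy as np

def reconstruct_stokes(intensities, A):
    """Invert the instrument matrix per pixel by least squares.

    intensities: (5, Y, X) images acquired under the five elliptical
                 illumination polarization states.
    A:           (5, 4) calibrated instrument matrix (a stand-in for the
                 mapping in Equation 7) from Stokes vector to intensities.
    Returns (4, Y, X) Stokes parameters S0..S3.
    """
    n_states, ny, nx = intensities.shape
    I = intensities.reshape(n_states, -1)       # flatten XY for a batched solve
    S, *_ = np.linalg.lstsq(A, I, rcond=None)   # pseudo-inverse per pixel
    return S.reshape(4, ny, nx)

def stokes_to_physical(S):
    """Map Stokes parameters to brightfield, retardance, orientation, and
    degree of polarization (schematic expressions, not the paper's exact ones)."""
    S0, S1, S2, S3 = S
    brightfield = S0
    dop = np.sqrt(S1**2 + S2**2 + S3**2) / np.clip(S0, 1e-8, None)
    retardance = np.arctan2(np.sqrt(S1**2 + S2**2), S3)   # radians
    orientation = 0.5 * np.arctan2(S1, S2)                # slow axis, radians
    return brightfield, retardance, orientation, dop
```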
Second, we utilize the phase transfer function formalism (Streibl, 1985; Noda et al., 1990; Claus et al., 2015; Jenkins and Gaylord, 2015a; Jenkins and Gaylord, 2015b; Soto et al., 2017) to model how 3D phase information is transformed into brightfield contrast (Equation 17). Specimen phase information is encoded in the brightfield images, but in a complex fashion. In brightfield images, optically dense structures appear brighter than the background on one side of focus, show almost no contrast in focus, and appear darker than the background on the other side of focus. This is illustrated by 3D brightfield images of nucleoli, the dense sub-nuclear domains inside nuclei (Figure 2—video 1). We invert our forward model to estimate specimen phase from the 3D brightfield stack (Equation 19). Phase reconstruction from the brightfield volume shows nucleoli in consistently positive contrast relative to the background as the nucleoli move through focus (Figure 2—video 1). We note that the two-step background correction is essential for background-free retardance and orientation images, but not for the phase image (Figure 2—figure supplement 2).
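The phase recovery can similarly be sketched as a one-step regularized inverse filter. Here `H` is a precomputed 3D phase transfer function standing in for Equation 17 (its form depends on the illumination and detection apertures and the wavelength), and the Tikhonov inverse below is a generic approximation of the closed-form inverse in Equation 19:

```python
import numpy as np

def phase_from_brightfield(bf_stack, H, reg=1e-3):
    """Tikhonov-regularized deconvolution of a brightfield Z-stack.

    bf_stack: (Z, Y, X) background-normalized brightfield intensities.
    H:        (Z, Y, X) complex 3D phase transfer function (assumed given).
    Returns a (Z, Y, X) estimate of the specimen phase.
    """
    I = np.fft.fftn(bf_stack - bf_stack.mean())          # remove the DC term
    phase_ft = np.conj(H) * I / (np.abs(H) ** 2 + reg)   # inverse filter
    return np.real(np.fft.ifftn(phase_ft))
```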
We illustrate the wide applicability of QLIPP with images of human bone osteosarcoma epithelial (U2OS) cells, a tissue section from adult mouse brain, and a tissue section from developing human brain. In the dividing U2OS cell (Figure 2—video 2, Figure 2—video 3), the phase image shows the three-dimensional dynamics of dense cellular organelles, such as lipid vesicles, nucleoli, and chromosomes. The retardance and slow-axis orientation in U2OS cells (Figure 2—video 2, Figure 2—video 3) show the dynamics of membrane boundaries, spindle, and lipid droplets. Figure 2—video 3 shows that specific organelles can be discerned simply by color-coding the measured phase and retardance, illustrating that quantitative label-free imaging provides specificity to physical properties.
At a larger spatial scale, the phase image identifies cell bodies and axon tracts in mouse and developing human brain tissue sections because of variations in their density. These density variations are more visible and interpretable in the phase image as compared to the brightfield image (Figure 2—figure supplement 3). Axon tracts appear with noticeably high contrast in retardance and orientation images of mouse and human brain slices (Figure 2). The high retardance of the axons arises primarily from the myelin sheath, which has higher density perpendicular to the axon axis (de Campos Vidal et al., 1980; Menzel et al., 2015). Therefore, the slow axis of the axon tracts is perpendicular to the orientation of the tracts. Figure 2—figure supplement 4 and Figure 5 show stitched retardance and orientation images of a whole mouse brain slice, in which the orientation of axons is visible not only in the white matter tracts but also in cortical regions. Note that the fine wavy structure in the right hemisphere of the slice is caused by sample preparation artifacts (Figure 2—figure supplement 3).
We show degree of polarization measurements in Figure 2—figure supplement 1. The difference between retardance and degree of polarization is that retardance measures single scattering events within the specimen that alter the polarization of the light but do not reduce its degree of polarization. A low degree of polarization, on the other hand, indicates multiple scattering events that reduce the polarization of light, and thus a mismatch between the specimen’s optical properties and the model assumptions. In the future, we plan to pursue models that account for diffraction and scattering effects in polarized light microscopy, which would enable more precise retrieval of specimen properties.
Data reported above illustrate simultaneous and quantitative measurements of density, structural anisotropy, and orientation in 3D biological specimens, for the first time to our knowledge. The Python software for QLIPP reconstruction is available at https://github.com/mehta-lab/reconstruct-order. In the next sections, we discuss how these complementary label-free measurements enable prediction of fluorescence images and analysis of architecture.
2.5D U-Net allows efficient prediction of fluorescent structures from multi-channel label-free images
In contrast to fluorescence imaging, label-free measurements of density and anisotropy visualize several structures simultaneously, but individual structures can be difficult to identify. Label-free measurements are affected by the expression of specific molecules, but do not report the expression directly. To obtain images of specific molecular structures from QLIPP data, we optimized convolutional neural network models to translate 3D label-free stacks into 3D fluorescence stacks.
Proper prediction of fluorescent structures with deep learning requires joint optimization of image content, the architecture of the neural network, and the training process. This optimization led us to a residual 2.5D U-Net that translates a small stack (5–7 slices) of label-free channels to the central slice of the fluorescent channel throughout a 3D volume. We use images of the mouse kidney tissue section as a test dataset for optimizing the model architecture and training strategies. We chose the mouse kidney tissue section because it has both anisotropic and isotropic structures (F-actin and nuclei). Additionally, both structures are robustly labeled with no noticeable artifacts. Later, we demonstrate prediction of fluorescent labels in specimens where labeling is not robust (Figure 6).
Optimization of 2.5D model for prediction of fluorescence images
Our work builds upon earlier work (Ounkomol et al., 2018) on predicting fluorescence stacks from brightfield stacks using 3D U-Net. Ounkomol et al., 2018 showed that fluorescence predicted by a 3D U-Net is superior to that predicted by a 2D U-Net. However, applying 3D U-Net to microscopy images poses a few limitations. Typical microscopy stacks are large in their extent in the focal plane (∼2000 × 2000 pixels) but small in extent along the optical axis (usually <40 Z slices). Since the input is isotropically downsampled in the encoding path of the 3D U-Net, a sufficiently large number of Z slices is required to propagate the data through the encoding and decoding blocks. As an example, for a minimum of 3 downsampling layers in the U-Net and 8 pixels at the end of the encoder path, one needs at least 64 Z slices (8 × 2³ = 64) (Figure 3—figure supplement 1). Therefore, the use of 3D translation models often requires upsampling of the data in Z, which increases data size and makes training a 3D translation model computationally expensive.
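For reference, this constraint can be expressed as a one-line helper; a minimal sketch, assuming each encoder level halves Z and convolutions are ‘same’-padded (the function name is ours):

```python
def min_z_extent(n_down: int, z_at_bottleneck: int) -> int:
    """Minimum input Z size for a 3D U-Net that halves Z at each of
    n_down encoder levels and keeps z_at_bottleneck slices at the bottleneck."""
    return z_at_bottleneck * 2 ** n_down

# Three downsampling levels with 8 slices at the bottleneck:
assert min_z_extent(3, 8) == 64   # typical stacks have < 40 slices, hence upsampling
```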
To reduce the computational cost without losing prediction accuracy, we evaluated the prediction accuracy as a function of model dimensions for a highly ordered, anisotropic structure (F-actin) and for a less ordered, isotropic structure (nuclei) in mouse kidney tissue. In mouse kidney tissue, the retardance image highlights capillaries within glomeruli and brush borders in convoluted tubules, among other components of the tissue. The nuclei appear in darker contrast in the retardance image because of the isotropic architecture of chromatin. We evaluated three model architectures to predict fluorescence volumes: slice→slice (2D in short) models that predict 2D fluorescence slices from corresponding 2D label-free slices, stack→slice (2.5D in short) models that predict the central 2D fluorescence slice from a stack of adjacent label-free slices, and stack→stack (3D in short) models that predict 3D fluorescent stacks from label-free stacks. For 2.5D models, 3D translation is achieved by predicting one 2D fluorescence plane per stack (z = 3, 5, 7) of label-free inputs. We added a residual connection between the input and output of each block to speed up model training (Milletari et al., 2016; Drozdzal et al., 2016).
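The stack→slice idea can be sketched in a few lines; the sketch below (in PyTorch, for illustration) omits the U-Net encoder–decoder and residual blocks of the actual architecture and only shows how the Z slices of each label-free channel fold into the channels of a 2D regressor that predicts the central fluorescence slice:

```python
import torch
import torch.nn as nn

class Stack2Slice(nn.Module):
    """Minimal stack→slice (2.5D) translator: Z folds into input channels."""

    def __init__(self, n_channels: int = 4, n_z: int = 5, width: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_channels * n_z, width, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, 1, 1),   # linear head: regression, not classification
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, z, y, x) -> prediction of the central slice
        n, c, z, h, w = x.shape
        return self.net(x.reshape(n, c * z, h, w))   # (batch, 1, y, x)
```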
In order to fit 3D models on the GPU, we needed to predict overlapping sub-stacks, which were stitched together to obtain the whole 3D stack (see Materials and methods and Figure 3—figure supplement 1 for the description of the network architecture and training process). We used the Pearson correlation coefficient and the structural similarity index (SSIM) (Wang and Bovik, 2009) between predicted and target fluorescent stacks to evaluate the performance of the models (Materials and methods). We report these metrics on the test set (Table 1, Table 2, Table 3), which was not used during training.
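The evaluation can be sketched as follows, assuming (Z, Y, X) stacks; the xz and xyz variants in the tables follow the same pattern after transposing the stacks, and the exact preprocessing in the paper may differ:

```python
import numpy as np
from skimage.metrics import structural_similarity

def evaluate_stack(pred: np.ndarray, target: np.ndarray):
    """Pearson r over the whole volume and mean slice-wise SSIM in XY."""
    r_xyz = np.corrcoef(pred.ravel(), target.ravel())[0, 1]
    data_range = float(target.max() - target.min())
    ssim_xy = np.mean([
        structural_similarity(p, t, data_range=data_range)
        for p, t in zip(pred, target)   # iterate over Z slices
    ])
    return r_xyz, ssim_xy
```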
Table 1. Accuracy of 3D prediction of F-actin from retardance stack using different neural networks.
| Translation model | Input(s) | rxy | rxz | rxyz | SSIMxy | SSIMxz | SSIMxyz |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Slice→Slice (2D) | ρ | 0.82 | 0.79 | 0.83 | 0.78 | 0.71 | 0.78 |
| Stack→Slice (2.5D, z=3) | ρ | 0.85 | 0.83 | 0.86 | 0.80 | 0.75 | 0.81 |
| Stack→Slice (2.5D, z=5) | ρ | 0.86 | 0.84 | 0.87 | 0.81 | 0.76 | 0.82 |
| Stack→Slice (2.5D, z=7) | ρ | 0.87 | 0.85 | 0.87 | 0.82 | 0.77 | 0.83 |
| Stack→Stack (3D) | ρ | 0.86 | 0.84 | 0.86 | 0.82 | 0.76 | 0.85 |
Table 2. Accuracy of prediction of F-actin in mouse kidney tissue as a function of input channels.
| Translation model | Input(s) | rxy | rxz | rxyz | SSIMxy | SSIMxz | SSIMxyz |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Stack→Slice (2.5D, z=5) | ρ | 0.86 | 0.84 | 0.87 | 0.81 | 0.76 | 0.82 |
| | BF | 0.86 | 0.84 | 0.86 | 0.82 | 0.77 | 0.83 |
| | Φ | 0.87 | 0.85 | 0.88 | 0.83 | 0.78 | 0.84 |
| | Φ, ρ, ωx, ωy | 0.88 | 0.87 | 0.89 | 0.83 | 0.80 | 0.85 |
| | BF, ρ, ωx, ωy | 0.88 | 0.87 | 0.89 | 0.83 | 0.79 | 0.85 |
Table 3. Accuracy of prediction of nuclei in mouse kidney tissue.
| Translation model | Input(s) | rxy | rxz | rxyz | SSIMxy | SSIMxz | SSIMxyz |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Stack→Slice (2.5D, z=5) | ρ | 0.84 | 0.85 | 0.85 | 0.81 | 0.76 | 0.82 |
| | BF | 0.87 | 0.88 | 0.87 | 0.82 | 0.77 | 0.84 |
| | Φ | 0.88 | 0.88 | 0.88 | 0.83 | 0.78 | 0.85 |
| | Φ, ρ, ωx, ωy | 0.89 | 0.89 | 0.89 | 0.84 | 0.80 | 0.86 |
| | BF, ρ, ωx, ωy | 0.89 | 0.90 | 0.89 | 0.84 | 0.80 | 0.86 |
The predictions with 2D models show discontinuity artifacts along the depth (Figure 3, Figure 3—video 2), as also observed in prior work (Ounkomol et al., 2018). The 3D model predicts smoother structures along the Z dimension with improved prediction in the XY plane. The 2.5D model shows prediction accuracy comparable to the 3D model, with higher prediction accuracy as the number of z-slices in the 2.5D model input increases (Figure 3C and D; Table 1; Figure 3—video 2). While the 2.5D model shows performance similar to the 3D model, we note that we could train the 2.5D model with ∼3× more parameters than the 3D model (Materials and methods) in a shorter time. In our experiments, training a 3D model with 1.5M parameters required 3.2 days, training a 2D model with 2M parameters required 6 hr, and training a 2.5D model with 4.8M parameters and five input z-slices required 2 days, using ∼100 training volumes. This is because the large memory usage of the 3D model significantly limits its training batch size and thus the training speed.
Figure 3. Accuracy of 3D prediction with 2D, 2.5D, and 3D U-Nets.
Orthogonal sections (XY - top, XZ - bottom, YZ - right) of a glomerulus and its surrounding tissue from the test set are shown depicting (A) retardance (input image), (B) experimental fluorescence of F-actin stain (target image), and (C) predictions of F-actin (output images) using the retardance image as input with different U-Net architectures. (D) Violin plots of the structural similarity metric (SSIM) between images of predicted and target stain in XY and XZ planes. The horizontal dashed lines in the violin plots indicate the 25th percentile, median, and 75th percentile of SSIM. The yellow triangle in C highlights a tubule structure, whose prediction can be seen to improve as the model has access to more information along Z. The same field of view is shown in Figure 3—video 1, Figure 3—video 2, and Figure 4—video 1.
Figure 3—figure supplement 1. Schematic illustrating U-Net architectures.
Figure 3—video 1. Z-stacks of brightfield, phase, retardance, and orientation images of mouse kidney tissue.
Figure 3—video 2. Through focus series showing 3D F-actin distribution in the test field of view shown in Figure 3.
The Python code for training our variants of image translation models is available at https://github.com/czbiohub/microDL.
Predicting structures from multiple label-free contrasts improves accuracy
Considering the trade-off between computation speed and model performance, we adopt 2.5D models with five input Z-slices to explore how combinations of label-free inputs affect the accuracy of prediction of fluorescent structures.
We found that when multiple label-free measurements are jointly used as inputs, both F-actin and nuclei are predicted with higher fidelity compared to when only a single label-free measurement is used as the input (Table 2 and Table 3). Figure 4C–D shows representative structural differences in the predictions of the same glomerulus as Figure 3. The continuity of prediction along the Z-axis improves as more label-free contrasts are used for prediction (Figure 4—video 1). These results indicate that our model leverages information in complementary physical properties to predict target structures. We note that using complementary label-free contrasts boosts the performance of 2.5D models to exceed the performance of 3D single-channel models without significantly increasing the computation cost (compare Table 1 and Table 2). Notably, fine F-actin bundles are challenging to predict from a single label-free input. We found that fine F-actin bundles can be predicted from multiple label-free inputs when the model is trained to minimize the difference between the fluorescence target and prediction over only the foreground pixels in the image (Figure 4—figure supplement 2).
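The foreground loss can be sketched as a masked mean absolute error; the threshold (e.g. an Otsu threshold on the fluorescence target) and the MAE form are illustrative assumptions, not necessarily the loss used in microDL:

```python
import torch

def foreground_mae(pred: torch.Tensor, target: torch.Tensor,
                   threshold: float) -> torch.Tensor:
    """Mean absolute error restricted to foreground pixels, so dim fine
    structures such as thin F-actin bundles weigh as much as bright regions."""
    mask = (target > threshold).float()
    return (mask * (pred - target).abs()).sum() / mask.sum().clamp(min=1)
```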
Figure 4. Prediction accuracy improves with multiple label-free contrasts as inputs.
3D predictions of ordered F-actin and nuclei from different combinations of label-free contrasts using the 2.5D U-Net model. (A) Label-free measurements used as inputs for model training: retardance (ρ), phase (Φ), and slow axis orientation (ω). (B) The corresponding 3D volume showing the target fluorescent stains. Phalloidin-labeled F-actin is shown in green and DAPI-labeled nuclei are shown in magenta. (C) F-actin and nuclei predicted with single-channel models trained on retardance (ρ) and phase (Φ) alone are shown. (D) F-actin and nuclei predicted with multi-channel models trained with the combined input of retardance, orientation, and phase. The yellow triangle and white triangle point out structures missing in predicted F-actin and nuclei distributions when only one channel is used as an input, but predicted when all channels are used. (E) Violin plots of the structural similarity metric (SSIM) between images of predicted and experimental stain in XY and XZ planes. The horizontal dashed lines indicate the 25th percentile, median, and 75th percentile of SSIM. The 3D label-free inputs used for prediction are shown in Figure 3—video 1.
Figure 4—figure supplement 1. Pearson correlation and SSIM are insensitive to small structural differences in the images.
Figure 4—figure supplement 2. Fine structural features are better predicted with foreground loss.
Figure 4—video 1. Through focus series showing 3D F-actin and nuclei distribution in the test field of view shown in Figure 4.
Interestingly, when only a single contrast is provided as the input, a model trained on phase images has higher prediction accuracy than the model trained on brightfield images. This is possibly because the phase image has consistent, quantitative contrast along the Z-axis, while the depth-dependent contrast in brightfield images makes the translation task more challenging. This improvement of phase over brightfield images, however, is not observed when the retardance and orientation are also included as inputs. This is possibly because the quantitative retardance and orientation complement the qualitative brightfield input and simplify the translation task.
In conclusion, the above results show that 2.5D multi-contrast models predict 3D structures with accuracy superior to single-channel 3D U-Net models, and have multiple practical advantages that facilitate scaling of the approach. In addition, the results show that structures of varying density and order can be learned with higher accuracy when complementary physical properties are combined as inputs.
Imaging architecture of mouse and human brain tissue with QLIPP
Among electron microscopy, light microscopy, and magnetic resonance-based imaging of brain architecture, the resolution and throughput of light microscopy provide the ability to image whole brain slices at single-axon resolution in a reasonable time (Kleinfeld et al., 2011; Axer et al., 2011a; Axer et al., 2011b; Menzel et al., 2017; Mollink et al., 2017; Zeineh et al., 2017; Henssen et al., 2019). Light microscopy is also suitable for imaging biological processes while brain tissue is kept alive (Ohki et al., 2005; Koike-Tani et al., 2019). With quantitative imaging of brain architecture and activity at optical resolution, one can envision the possibility of building probabilistic models that relate connectivity and function. QLIPP’s high-resolution, quantitative nature, sensitivity to the low anisotropy of gray matter (Figure 2), and throughput make it attractive for imaging architecture and activity in brain slices. Here, we explore how QLIPP can be used to visualize the architecture of sections of adult mouse brain and archival sections of prenatal human brain.
Adult mouse brain tissue
We first imaged an adult mouse brain tissue section located at bregma −1.355 mm (level 68 in the Allen brain reference atlas [Lein et al., 2007]) with QLIPP and rendered retardance and slow-axis orientation in two ways, as shown in Figure 5. The left panel renders the measured retardance as brightness and the slow-axis orientation as color, highlighting anatomical features of all sizes. The right panel renders the fast-axis orientation of the mouse brain section (orthogonal to the slow-axis orientation) as colored lines. It has been shown (de Campos Vidal et al., 1980; Menzel et al., 2015) that when axons are myelinated, the slow axis is perpendicular to the axon axis, while the fast axis is parallel to it. The visualization in the right panel highlights meso-scale axon orientation in the mouse brain tissue with a spatial resolution of ∼100 μm, that is, each line represents the net orientation of the tissue over an area of ∼100 μm × 100 μm. The full section rendered with both approaches is shown in Figure 5—figure supplement 1.
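The left-panel rendering maps orientation to hue and retardance to brightness, which can be sketched with a standard HSV-to-RGB conversion (the exact color map and scaling used for Figure 5 may differ):

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def render_orientation(retardance, orientation, ret_max=None):
    """Render slow-axis orientation as hue and retardance as brightness.

    retardance: (Y, X) retardance map; orientation: (Y, X) slow-axis angles
    in radians on [0, pi). Returns a (Y, X, 3) RGB image.
    """
    if ret_max is None:
        ret_max = np.percentile(retardance, 99)   # robust brightness scale
    hsv = np.stack([
        (orientation % np.pi) / np.pi,            # hue encodes orientation
        np.ones_like(retardance),                 # full saturation
        np.clip(retardance / ret_max, 0, 1),      # value encodes retardance
    ], axis=-1)
    return hsv_to_rgb(hsv)
```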
Figure 5. Analysis of anatomy and axon orientation of an adult mouse brain tissue with QLIPP.
The retardance and orientation measurements are rendered with two approaches in opposing hemispheres of the mouse brain. In the left panel, the slow-axis orientation is displayed with color (hue) and the retardance is displayed with brightness, as shown by the color legend in the bottom-left. In the right panel, the colored lines represent the fast axis and the direction of the axon bundles in the brain. The color of each line still represents the slow-axis orientation, as shown by the color legend in the bottom-right. Different cortical layers and anatomical structures are visible through these measurements. This mouse brain section is a coronal section at around bregma −1.355 mm and is labeled according to the Allen brain reference atlas (level 68) (Lein et al., 2007). cc: corpus callosum, cing: cingulum bundle, CTX: cortex, CP: caudoputamen, fi: fimbria, HPF: hippocampal formation, HY: hypothalamus, int: internal capsule, MOp: primary motor cortex, MOs: secondary motor cortex, opt: optic tract, SSp: primary somatosensory area, SSs: supplemental somatosensory area, TH: thalamus, VL: lateral ventricle.
Figure 5—figure supplement 1. The full-size mouse brain images of two rendering approaches shown in Figure 5.
By comparing the size and optical measurements in our label-free images against the Allen brain reference atlas, we are able to recognize many anatomical landmarks. For example, the corpus callosum (cc) traversing the left and right hemispheres of the brain is a highly anisotropic bundle of axons. The cortex (CTX) is the outermost region of the brain, with axons projecting down towards the corpus callosum and other sub-cortical structures. Within the inner periphery of the corpus callosum, we can identify several more structures, such as the hippocampal formation (HPF), lateral ventricle (VL), and caudoputamen (CP). With these evident anatomical landmarks, we are able to refer to the Allen brain reference atlas (Lein et al., 2007) and label more anatomical areas of the brain, such as the sensory (SSp, SSs) and motor (MOp, MOs) cortical areas.
We also found that the six cortical layers are distinguishable in terms of the strength of the retardance signal and the orientational pattern. These data are consistent with reports that layer I contains axon bundles parallel to the cortical layer (Zilles et al., 2016). Layer VI contains axon bundles that feed to and from the corpus callosum, so the orientation of these axons is not as orthogonal to the cortical layers as the axons in the other layers. The retardance signal arises from the collective anisotropy of the myelin sheath wrapping around axons. Layers IV and V contain a higher density of cell bodies and correspondingly a lower density of axons, leading to a lower retardance signal.
Tissue from developing human brain
We next imaged brain sections from developing human samples of two different ages, gestational week 24 (GW24) (Figure 6A–C, Figure 6—figure supplement 1A) and GW20 (Figure 6D–F, Figure 6—figure supplement 1A), which correspond to the earliest stages of oligodendrocyte maturation and early myelination in the cerebral cortex (Jakovcevski et al., 2009; Miller et al., 2012; Snaidero and Simons, 2014). Similar to the observations in the mouse brain section (Figure 5, Figure 2—figure supplement 4), the stitched retardance and orientation images show both the morphology and the orientation of the axon tracts, which are not accessible with brightfield or phase imaging, with the fast-axis orientation parallel to the axon axis. The retardance in the subplate is higher than in the cortical plate at both time points, which is consistent with the reduced myelin density in the cortical plate relative to the white matter. Importantly, with our calibration and background correction procedures (Materials and methods), our imaging approach has the sensitivity to detect axon orientation in the developing cortical plate, despite the lower retardance of the developing brain compared to the adult brain due to the low myelination in early brain development (Miller et al., 2012; Snaidero and Simons, 2014). Different cortical layers are visible in the retardance and orientation images at both time points. With this approach, we could identify different anatomical structures in the developing human brain without additional stains by referencing the developing human brain atlas (Bayer and Altman, 2003, Figure 6). Individual axon tracts are also visible in the phase image, although with lower contrast, since the phase image measures density variation but not axon orientation.
Figure 6. Label-free mapping of axon tracts in developing human brain tissue section.
(A) (top) Stitched image of retardance and slow axis orientation of a gestational week 24 (GW24) brain section from the test set. The slow axis orientation is encoded by color as shown by the legend. (Bottom) Axon orientation indicated by the lines. (B) Zoom-ins of retardance + slow axis, axon orientation, and brightfield at brain regions indicated by the yellow and cyan boxes in (A). (C) Zoom-ins of label-free images at brain regions indicated by the white box in (B). (D–F) Same as (A–C), but for the GW20 sample. MZ: marginal zone; CP: cortical plate; SP: subplate; ESS: external sagittal stratum; ISS: internal sagittal stratum; CC: corpus callosum; SVZ: subventricular zone; PcL: paracentral lobule; PL: parietal lobe; OL: occipital lobe; TL: temporal lobe. Anatomical regions in (B, D, and E) are identified by referencing the developing human brain atlas (Bayer and Altman, 2003).
Figure 6—figure supplement 1. Brightfield and phase images of human brain sections.
To analyze the variations in the density of the human brain tissue, we reconstructed 2D phase, unlike the 3D phase reconstruction used for U2OS cells (Figure 2) and kidney tissue (Figure 4). The archival tissue was thinner (12 μm thick) than the depth of field (∼16 μm) of the low-magnification objective (10X) we used for imaging large areas. Figure 6B,C,E and F show the retardance, slow-axis orientation, axon orientation, brightfield, and phase images. Major regions such as the subplate and cortical plate can be identified in both samples. While the density information represented by brightfield and phase images can identify some of the anatomical structures, axon-specific structures can be better identified with measurements of anisotropy.
To our knowledge, the above data are the first report of label-free imaging of architecture and axon tract orientation in prenatal brain tissue. The ability to resolve axon orientation in the cortical plate of the developing brain, which exhibits very low retardance, demonstrates the sensitivity and resolution of our approach.
Predicting myelination in sections of developing human brain
Next, we explore how information in the phase and retardance measurements can be used to predict myelination in prenatal human brain. The human brain undergoes rapid myelination during late development, as measured with magnetic resonance imaging (MRI) (Heath et al., 2018). Interpretation of myelination from MRI contrast requires establishing its correlation with histological measurements of myelin levels (Khodanovich et al., 2019). Robust measurements of myelination in postmortem human brains can provide new insights into myelination of the human brain during development and during degeneration. QLIPP data in Figure 6 indicate that label-free measurements are predictive of the level of myelination, but the relationship between them is complex (Figure 7C and F). We employed our multi-channel 2D and 2.5D U-Net models to learn the complex transformation from label-free contrasts to myelination. Importantly, we developed a data normalization and training strategy that enables prediction of myelination across large slices and multiple developmental time points. We also found that a properly trained model can rescue inconsistencies in fluorescent labeling of myelin, which is often used as histological ground truth.
Figure 7. Prediction of myelination in developing human brain from QLIPP data and rescue of inconsistent labeling.
(A) Stitched image of experimental FluoroMyelin stain of the same (GW24) brain section from the test set (top) and FluoroMyelin stain predicted from retardance, slow axis orientation, and brightfield by the 2.5D model (bottom). The cyan arrowhead indicates large staining artifacts in the experimental FluoroMyelin stain that are rescued in the model prediction. (B) Zoom-ins of experimental and predicted FluoroMyelin stain using different models at brain regions indicated by the yellow box in (A), rotated by 90 degrees. From left to right: experimental FluoroMyelin stain; prediction from brightfield using the 2D model; prediction from retardance and phase using the 2D model; prediction from retardance, phase, and orientation using the 2D model; prediction from retardance, brightfield, and orientation using the 2.5D model. (C) For the region shown in (B), we show scatter plots and Pearson correlations of target FluoroMyelin intensity vs. retardance (left), phase (middle), and FluoroMyelin intensity predicted from retardance, brightfield, and orientation using the 2.5D model (right). The yellow dashed line indicates the function y = x. (D–F) Same as (A–C), but for the GW20 sample. MZ: marginal zone; CP: cortical plate; SP: subplate; ESS: external sagittal stratum; ISS: internal sagittal stratum; CC: corpus callosum; SVZ: subventricular zone; PcL: paracentral lobule; PL: parietal lobe; OL: occipital lobe; TL: temporal lobe.
Figure 7—figure supplement 1. Normalizing training data per dataset yields prediction with correct dynamic range of intensity.
Figure 7—figure supplement 2. Model predicted FluoroMyelin intensity becomes more accurate as more label-free channels are included as input.
Data pooling for prediction over large sections of prenatal human brain
In order to train the model, we measured the level of myelination with FluoroMyelin, a lipophilic dye that can stain myelin without permeabilization (Monsma and Brown, 2012). We found that the detergents used in most permeabilization protocols remove myelin from the tissue and affect our label-free measurements. We trained multi-contrast 2D and 2.5D models with different combinations of label-free input contrasts and FluoroMyelin as the target to predict. To avoid overfitting and to build a model that generalizes to different developmental ages and different types of brain sections, we pooled imaging datasets from GW20 and GW24, with two different brain sections for each age. The pooled dataset was then split into training, validation, and test sets. Similar to the observations in the mouse kidney tissue, the prediction accuracy improves as more label-free contrasts are included in training, with a larger accuracy gain than for the mouse kidney tissue. This is most likely because the additional information provided by more label-free channels helps the model predict the more complex and variable structures of the human brain. On the other hand, the 2.5D model with all four input channels shows performance similar to the 2D model for this dataset due to the relatively large depth of field (∼16 μm) compared to the sample thickness (12 μm), so additional Z-slices only provide phase information but no extra structural information along the z dimension (Table 4).
Table 4. Accuracy of prediction of FluoroMyelin in human brain tissue slices across two developmental points (GW20 and GW24).
| Translation model | Input(s) | rxy | SSIMxy |
| --- | --- | --- | --- |
| Slice→Slice (2D) | BF | 0.72 | 0.71 |
| | ρ, Φ | 0.82 | 0.82 |
| | ρ, ωx, ωy, Φ | 0.86 | 0.85 |
| Stack→Slice (2.5D) | BF, ρ, ωx, ωy | 0.87 | 0.85 |
To test the accuracy of prediction over large human brain slices that span multiple fields of view, we predicted FluoroMyelin using label-free images of whole sections from GW24 and GW20 brains that were not used for model training or validation. We ran model inference on each field of view and then stitched the predicted images together to obtain a stitched prediction of 20,000 × 20,000 pixels (Figure 7A and D). To the best of our knowledge, these are the largest predicted fluorescence images of tissue sections that have been generated. We were able to predict the myelination level in sections from both time points with a single model, with increasing accuracy as we included more label-free channels as the input, similar to our observations from the test dataset of the mouse kidney slice (Table 4 and Figure 7B and E). The scatter plots of pixel intensities show that model-predicted FluoroMyelin intensities correlate with the target FluoroMyelin stain significantly better than the label-free contrasts alone (Figure 7C and Figure 7F). This illustrates the value of predicting fluorescence from label-free contrasts: while the label-free contrasts are predictive of FluoroMyelin stain, the complex relations between them make estimation of myelin level from label-free contrasts challenging. The neural network can learn the complex transformation from label-free contrasts to FluoroMyelin stain and enables reliable estimation of myelin levels.
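Tile-wise inference with overlap cropping is one way to produce such stitched predictions without seams at field-of-view boundaries; the sketch below makes assumptions (tile size, margin, a `model` callable that maps a (C, h, w) tile to an (h, w) prediction) and is not necessarily the stitching procedure used in the paper:

```python
import numpy as np

def predict_stitched(model, image, tile=2048, margin=64):
    """Run `model` over overlapping tiles and keep only each tile's interior."""
    c, h, w = image.shape
    out = np.zeros((h, w), dtype=np.float32)
    step = tile - 2 * margin
    for y in range(0, h, step):
        for x in range(0, w, step):
            y0, x0 = max(y - margin, 0), max(x - margin, 0)
            pred = model(image[:, y0:y0 + tile, x0:x0 + tile])
            iy, ix = y - y0, x - x0                  # offset of kept interior
            hh, ww = min(step, h - y), min(step, w - x)
            out[y:y + hh, x:x + ww] = pred[iy:iy + hh, ix:ix + ww]
    return out
```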
Data normalization
In addition to architecture, it is essential to devise proper image normalization for correctly predicting intensity across different fields of view in large stitched images. We found that the per-image normalization commonly applied to image segmentation tasks did not preserve the intensity variation across images and led to artifacts in prediction. The two main issues that need to be accounted for in image translation tasks are: (1) the number of background pixels varies across images and can bias the normalization parameters if background is not excluded from normalization (Yang et al., 2019), and (2) there are batch variations in the staining and imaging process when pooling multiple datasets together for training. While batch variation is less pronounced in quantitative label-free imaging, it remains quite significant in fluorescence images of stained samples and therefore needs to be corrected. We found that normalizing each dataset with the median and inter-quartile range of its foreground pixel intensities gives the most accurate intensity prediction (Figure 7—figure supplement 1).
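A minimal sketch of this per-dataset normalization, pooling foreground pixels across all images of one staining/imaging batch (the microDL implementation may differ in detail):

```python
import numpy as np

def normalize_per_dataset(images, foreground_masks):
    """Normalize a dataset by the median and inter-quartile range (IQR)
    of its pooled foreground pixel intensities, preserving genuine
    intensity variation across fields of view."""
    foreground = np.concatenate(
        [img[mask] for img, mask in zip(images, foreground_masks)]
    )
    q25, median, q75 = np.percentile(foreground, [25, 50, 75])
    return [(img - median) / (q75 - q25) for img in images]
```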
Notably, the 2D model with phase, retardance, and orientation as the input has correlation and similarity scores close to those of the best 2.5D model, but its training takes just 3.7 hr to converge, while the best 2.5D model takes 64.7 hr to converge (Table 4). This is likely because the 2D phase reconstruction captures the density variation encoded in the brightfield Z-stack that is informative for the model to predict axon tracts accurately.
Rescue of inconsistent label
Robust fluorescent labeling usually requires optimization of labeling protocols and precise control of labeling conditions. Sub-optimal staining protocols often lead to staining artifacts and make the samples unusable. Quantitative label-free imaging, on the other hand, provides more robust measurements, as it generates contrast in physical units and does not require labeling. Therefore, fluorescence images predicted from quantitative label-free inputs are more robust to experimental variations. For example, we found that FluoroMyelin stain intensity faded unevenly over time and formed dark patches in the images (indicated by cyan arrowheads in Figure 7A and D), possibly due to quenching of FluoroMyelin by the antifade chemical in the mounting media. However, this quenching of the dye does not affect the physical properties measured by the label-free channels. Therefore, the model trained on images without artifacts predicted the expected staining pattern even when the experimental stain failed. This robustness is particularly valuable for precious tissue specimens such as archival prenatal human brain tissue.
Discussion
We have reported QLIPP, a novel computational imaging method for label-free measurement of density and anisotropy from 3D polarization-resolved acquisitions. While quantitative fluorescence imaging provides molecular specificity, quantitative label-free imaging provides physical specificity. We show that several organelles can be identified from their density and anisotropy. We also show that multiple regions of mouse brain tissue and archival human brain tissue can be identified without label. We have also reported a multi-channel 2.5D U-Net deep learning architecture and training strategies to translate this physical description of the specimen into a molecular description. Next, we discuss how we elected to balance the trade-offs, and the future directions of research enabled by the innovations reported here.
We have designed QLIPP to be easy to adopt and to multiplex with fluorescence microscopy. Using QLIPP requires a single liquid-crystal polarization modulator and a motorized Z stage. Our open-source Python software is free to use for non-profit research. Shribak (Shribak et al., 2008) has reported joint imaging of 2D phase and retardance with orientation-independent differential interference contrast (OI-DIC) and orientation-independent PolScope (OI-POL), which required six polarization modulators and an acquisition protocol more complex than QLIPP’s. Ptychography-based phase retrieval methods have been extended with polarization-sensitive components for joint imaging of 2D phase and retardance (Ferrand et al., 2018; Baroni et al., 2020), albeit requiring hundreds of images. Our method uses one polarization modulator, compared to six used by OI-DIC, and fewer images (5 × number of Z slices), compared to hundreds in ptychography-based methods, for recovery of 3D phase, retardance, and orientation. Our measurements also achieve diffraction-limited resolution and provide adequate time resolution for live-cell imaging, as demonstrated by the 3D movie of U2OS cells (Figure 2, Figure 2—video 2, Figure 2—video 3). We anticipate that the modularity of the optical path and the availability of reconstruction software will facilitate adoption of QLIPP.
Phase information is inherently present in polarization-resolved acquisitions, but can now be reconstructed using the forward models and corresponding inverse algorithms reported here. We note that our approach of recovering phase from the propagation of light reports local phase variation rather than absolute phase. Local phase variation is less sensitive to low spatial frequency, or large-scale, variations in density, as can be seen from the phase images in Figure 2—figure supplement 3 and Figure 6—figure supplement 1. Recovering density at low spatial frequency requires a more elaborate optical path for creating interference with a reference beam and is more difficult to implement than QLIPP (Kim et al., 2018; Popescu et al., 2006). Nonetheless, most biological processes can be visualized with local density variation. Further, our method uses partially coherent illumination, that is, simultaneous illumination from multiple angles, which improves spatial resolution, depth sectioning, and robustness to imperfections in the light path away from the focal plane.
QLIPP belongs to the class of polarization-resolved imaging in which the specimen is illuminated in transmission. Two other major classes of polarization-sensitive imaging are polarization-sensitive optical coherence tomography (PS-OCT) and fluorescence polarization. PS-OCT is a label-free imaging method in which the specimen is illuminated in reflection mode. PS-OCT has been used to measure round-trip retardance and diattenuation of diverse tissues, for example brain tissue (Wang et al., 2018). However, determination of the slow axis in reflection mode remains challenging because light passes through the specimen in two directions. Fluorescence polarization imaging relies on rotationally constrained fluorescent probes (DeMay et al., 2011; Mehta et al., 2016). Fluorescence polarization measurements report the rotational diffusion and angular distribution of labeled molecules, which differs from the QLIPP measurements reported here.
We also note that, similar to other polarization-resolved imaging systems (Mehta et al., 2016), our approach reports the projection of the anisotropy onto the focal plane rather than 3D anisotropy. Anisotropic structures, such as axon bundles, appear isotropic to the imaging system when they are aligned along the optical axis of the imaging path. Methods for imaging 3D anisotropy with various models and systems (Oldenbourg, 2008; Spiesz et al., 2011; Axer et al., 2011c; Zilles et al., 2016; Schmitz et al., 2018a; Schmitz et al., 2018b; Yang et al., 2018; Tran and Oldenbourg, 2018) are now in active development. Recovering 3D anisotropy along with 3D density using forward models that account for diffraction effects in the propagation of polarized light would be an important area of research for the future.
We demonstrated the potential of QLIPP for sensitive detection of the orientation of axon bundles (Figure 5 and Figure 6). Combining these measurements with tractography algorithms can facilitate analysis of mesoscale connectivity. Tractography algorithms developed for diffusion-weighted MRI measurements (Zhan et al., 2015) have been adapted to brain images from a lower-resolution polarization microscope (∼60 μm) (Axer et al., 2011c). We envision that combining tractography algorithms with anisotropy measured at optical resolution, which reports the orientation of ensembles of axons, will enable development of probabilistic models of connectivity. Although multiple methods for tracing connectivity in the mouse brain at mesoscale (cellular level) have been developed (Ragan et al., 2012; Oh et al., 2014; Zeng, 2018), they have not yet been extended to the human brain. The volume of the fetal human brain during the third trimester is three orders of magnitude larger than the volume of an adult mouse brain (∼1.5 × 10² mm³). Our data show that label-free measurement of myelination and axon tract orientation is possible with ∼1.5 μm diffraction-limited resolution over the scale of whole fetal human brain sections. Further work in streamlining sample preparation, imaging, data curation, and model training would be required to apply QLIPP to whole organs at this scale.
Our multi-channel 2.5D deep learning models are designed for efficient analysis of volumetric multi-channel data. In contrast to earlier work on image translation that demonstrated 2D prediction (Christiansen et al., 2018; Rivenson et al., 2018a; Rivenson et al., 2018b), our 2.5D architecture is inspired by Han, 2017 and provides comparable prediction accuracy at a lower computational cost than a 3D U-Net: the Pearson correlation coefficient in 3D for nuclei prediction from brightfield images is 0.87, versus ∼0.7 reported in Ounkomol et al., 2018. In comparison to the 2D translation model of Christiansen et al., 2018, in which image translation was formulated as a pixel-wise classification task over discrete 8-bit intensity classes, our 2.5D model formulates image translation as a regression task, which allows prediction of a much larger dynamic range of gray levels. While training a single model that predicts multiple structures seems appealing, this more complex task requires increasing the model size, with the trade-off of longer training time. Our strategy of training one model to predict a single target allowed us to use significantly smaller models that fit into the memory of a single GPU for faster training.
We systematically evaluated how the input dimensions and input channels affect prediction accuracy. Compared to previous work that predicted fluorescence images from a single label-free contrast (Ounkomol et al., 2018; Christiansen et al., 2018; Rivenson et al., 2018a; Rivenson et al., 2018b), we show that higher prediction accuracy can be achieved by combining multiple label-free contrasts. Additionally, we report the image normalization strategy required to predict large images stitched from smaller fields of view from multi-channel inputs.
The image quality metrics we use to evaluate model performance depend not only on the accuracy of the prediction but also on the noise in the target images. A more direct comparison of model performance on the same dataset would be useful in the future. Further, the more flexible 2.5D network can be applied to image data with only a few Z-slices without up- or down-sampling, making it useful for analysis of microscopy data that often have a variable number of Z-slices. Even though we focus on image translation in this work, the same 2.5D network can be used for 3D segmentation. 3D segmentation using the 2.5D network has an additional advantage over a 3D network: sparse annotation can be done on a subset of slices sampled from the 3D volume, whereas a 3D network requires all slices in the input volume to be annotated. The flexibility of sparse annotation allows better sampling of the structural variation in the data for the same manual annotation effort.
A common shortfall of machine learning approaches is that they tend not to generalize well. We have shown that our data normalization and training process leads to models of myelination that generalize across two developmental time points. In contrast to reconstruction using physical models, the errors or artifacts in the predictions of machine learning models are highly dependent on the quality of the training data and its similarity to the new input data. Therefore, prediction errors made by machine learning models are difficult to identify in the absence of ground truth. Extending image translation models so that they predict not just the value, but also an estimate of the confidence interval of the output values, is an important area of research.
Conclusion
In summary, we report reconstruction of specimen density and anisotropy using quantitative label-free imaging with phase and polarization (QLIPP) and prediction of fluorescence distributions from label-free images using deep convolutional neural networks. Our reconstruction algorithms (https://github.com/mehta-lab/reconstruct-order) and computationally efficient U-Net variants (https://github.com/czbiohub/microDL) facilitate measurement and interpretation of physical properties of specimens. We reported joint measurement of phase, retardance, and orientation with diffraction-limited spatial resolution in 3D in dividing cells and in 2D in brain tissue slices. We demonstrated visualization of diverse biological structures: axon tracts and myelination in mouse and human brain slices, and multiple organelles in cells. We demonstrated accurate prediction of fluorescence images from density and anisotropy with a multi-contrast 2.5D U-Net model. We demonstrated strategies for accurate prediction of myelination in centimeter-scale prenatal human brain tissue slices. We showed that inconsistent labeling of human tissue can be rescued with quantitative label-free imaging and trained models. We anticipate that our approach will enable quantitative label-free analysis of architectural order at multiple spatial and temporal scales, particularly in live cells and clinically relevant tissues.
Materials and methods
Key resources table.

| Reagent type (species) or resource | Designation | Source or reference | Identifiers | Additional information |
|---|---|---|---|---|
| biological sample (M. musculus) | mouse kidney tissue section | Thermo-Fisher Scientific | Cat. # F24630 | |
| biological sample (M. musculus) | mouse brain tissue section | this paper | | mouse line maintained in M. Han lab, see Specimen preparation in Materials and methods |
| biological sample (H. sapiens) | developing human brain tissue section | this paper | | archival tissue stored in T. Nowakowski lab, see Specimen preparation in Materials and methods |
| chemical compound, drug | FluoroMyelin | Thermo-Fisher Scientific | Cat. # F34652 | |
| software, algorithm | reconstruction algorithms | https://github.com/mehta-lab/reconstruct-order | | |
| software, algorithm | 2.5D U-Net | https://github.com/czbiohub/microDL | | |
| software, algorithm | Micro-Manager 1.4.22 | https://micro-manager.org/ | RRID:SCR_016865 | |
| software, algorithm | OpenPolScope | https://openpolscope.org/ | | |
Model of image formation
We describe the dependence of the polarization-resolved images on specimen properties using the Stokes vector representation of partially polarized light (Bass et al., 2009, Ch.15). This representation allows us to accurately measure the polarization-sensitive contrast in the imaging volume. First, we retrieve the coefficients of the specimen's Mueller matrix that report linear retardance, slow-axis orientation, transmission (brightfield), and degree of polarization. For brevity, we call them 'Mueller coefficients' of the specimen in this paper. Mueller coefficients are recovered from the polarization-resolved intensities using the inverse of an instrument matrix that captures how Mueller coefficients are related to the acquired intensities. Assuming that the specimen is mostly transparent, more specifically that it satisfies the first Born approximation (Born and Wolf, 2013), we reconstruct specimen phase, retardance, slow axis, and degree of polarization stacks from stacks of Mueller coefficients. The assumption of transparency is generally valid for the structures we are interested in, but does not necessarily hold when the specimen exhibits significant absorption or diattenuation. To ensure that the inverse computation is robust, we need to make judicious decisions about the light path, calibration procedure, and background estimation. A key advantage of the Stokes instrument matrix approach is that it generalizes easily to other polarization-diverse imaging methods: a polarized light microscope is represented directly by a calibrated instrument matrix.
For sensitive detection of retardance, it is advantageous to suppress the isotropic background by illuminating the specimen with elliptically polarized light of opposite handedness to the analyzer on the detection side (Shribak and Oldenbourg, 2003). For the experiments reported in this paper, we acquired data by illuminating the specimen sequentially with right-handed circularly and elliptically polarized light and analyzing the transmitted left-handed circularly polarized light in detection.
Forward model: specimen properties → Mueller coefficients
We assume a weakly scattering specimen modeled by the properties of linear retardance ρ, orientation of the slow axis ω, transmission t, and depolarization p. The Mueller matrix of the specimen can be expressed as a product of two Mueller matrices: $M_{tp}$, accounting for the effect of transmission and depolarization from the specimen, and $M_r(\rho, \omega)$, accounting for the effect of retardance and orientation of the specimen. The expression of $M_r(\rho, \omega)$ is the standard Mueller matrix of a linear retarder that can be found in Bass et al., 2009, Ch.14, and is expressed as

$$M_r(\rho, \omega) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos^2 2\omega + \sin^2 2\omega \cos\rho & \cos 2\omega \sin 2\omega (1 - \cos\rho) & -\sin 2\omega \sin\rho \\ 0 & \cos 2\omega \sin 2\omega (1 - \cos\rho) & \sin^2 2\omega + \cos^2 2\omega \cos\rho & \cos 2\omega \sin\rho \\ 0 & \sin 2\omega \sin\rho & -\cos 2\omega \sin\rho & \cos\rho \end{pmatrix} \tag{1}$$
With $M_{tp} = \mathrm{diag}(t,\, tp,\, tp,\, tp)$ and $M_r(\rho, \omega)$ as above, the Mueller matrix of the specimen is then given by

$$M = M_{tp}\, M_r(\rho, \omega) = \begin{pmatrix} t & * & * & 0 \\ * & * & * & -tp \sin 2\omega \sin\rho \\ * & * & * & tp \cos 2\omega \sin\rho \\ * & * & * & tp \cos\rho \end{pmatrix} \tag{2}$$
where * signs denote irrelevant entries that cannot be retrieved under our experimental scheme. The retrievable entries can be expressed as a vector of Mueller coefficients:

$$\vec{m} = \begin{pmatrix} m_0 \\ m_1 \\ m_2 \\ m_3 \end{pmatrix} = \begin{pmatrix} t \\ -tp \sin 2\omega \sin\rho \\ tp \cos 2\omega \sin\rho \\ tp \cos\rho \end{pmatrix} \tag{3}$$

This vector is, coincidentally, the Stokes vector obtained when right-handed circularly polarized light passes through the specimen. The aim of the measurement we describe in the following paragraphs is to accurately measure these Mueller coefficients at each point in the image plane of the microscope by illuminating the specimen and detecting the scattered light with mutually independent polarization states. Once a map of these Mueller coefficients has been acquired with high accuracy, the specimen properties can be retrieved from the above set of equations.
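As a concrete illustration, the forward model of Equation 3 can be evaluated numerically. The following is a minimal Python sketch (the function name and conventions are ours, not those of the reconstruct-order package):

```python
import numpy as np

def mueller_coefficients(t, p, rho, omega):
    """Vector of Mueller coefficients (Equation 3) for a specimen with
    transmission t, depolarization p, retardance rho (radians), and
    slow-axis orientation omega (radians)."""
    return np.array([
        t,
        -t * p * np.sin(2 * omega) * np.sin(rho),
        t * p * np.cos(2 * omega) * np.sin(rho),
        t * p * np.cos(rho),
    ])
```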
Forward model: Mueller coefficients → intensities
To acquire the above Mueller coefficients, we illuminate the specimen with a series of right-handed circularly and elliptically polarized states (Shribak and Oldenbourg, 2003). The Stokes vectors of our sequential illumination states are given by

$$\vec{S}^{(0)} = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 1 \end{pmatrix},\; \vec{S}^{(1)} = \begin{pmatrix} 1 \\ \sin\chi \\ 0 \\ \cos\chi \end{pmatrix},\; \vec{S}^{(2)} = \begin{pmatrix} 1 \\ 0 \\ \sin\chi \\ \cos\chi \end{pmatrix},\; \vec{S}^{(3)} = \begin{pmatrix} 1 \\ -\sin\chi \\ 0 \\ \cos\chi \end{pmatrix},\; \vec{S}^{(4)} = \begin{pmatrix} 1 \\ 0 \\ -\sin\chi \\ \cos\chi \end{pmatrix} \tag{4}$$

where χ is the compensatory retardance (swing) controlled by the liquid crystals that determines the ellipticity of the four elliptical polarization states.
After our controlled polarized illumination has passed through the specimen, we detect the left-handed circularly polarized component by placing a left-handed circular analyzer in front of the sensor. We express the Stokes vector before the sensor as

$$\vec{S}^{(i)}_{\mathrm{sensor}} = M_{\mathrm{LCA}}\, M\, \vec{S}^{(i)}, \qquad i = 0, \dots, 4 \tag{5}$$

where $\vec{S}^{(i)}$ depends on the illumination state, and $M_{\mathrm{LCA}}$ is the Mueller matrix of a left-handed circular analyzer (Bass et al., 2009, Ch.14). The detected intensity images $I_i$ are the first component of the Stokes vector at the sensor under the different illuminations ($I_i = [\vec{S}^{(i)}_{\mathrm{sensor}}]_0$). Stacking the measured intensity images to form a vector

$$\vec{I} = \begin{pmatrix} I_0 \\ I_1 \\ I_2 \\ I_3 \\ I_4 \end{pmatrix} \tag{6}$$
we can relate the measured intensities to the vector of Mueller coefficients through an 'instrument matrix' $A$ as

$$\vec{I} = A\, \vec{m} \tag{7}$$

where

$$A = \frac{1}{2} \begin{pmatrix} 1 & 0 & 0 & -1 \\ 1 & \sin\chi & 0 & -\cos\chi \\ 1 & 0 & \sin\chi & -\cos\chi \\ 1 & -\sin\chi & 0 & -\cos\chi \\ 1 & 0 & -\sin\chi & -\cos\chi \end{pmatrix} \tag{8}$$
Each row of the instrument matrix is determined by the interaction between one of the illumination polarization states and the specimen's properties. Any polarization-resolved measurement scheme can be characterized by an instrument matrix that transforms the specimen's polarization properties into the measured intensities. Calibration of the polarization imaging system then amounts to calibrating this instrument matrix.
Computation of Mueller coefficients at image plane
Once the instrument matrix has been experimentally calibrated, the vector of Mueller coefficients can be obtained from the recorded intensities using its pseudo-inverse (compare Equation 7),

$$\vec{m} = A^{+}\, \vec{I} \tag{9}$$

where $A^{+}$ denotes the Moore–Penrose pseudo-inverse, since the five-frame acquisition makes $A$ a 5 × 4 matrix.
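The following sketch makes Equations 7–9 concrete, assuming the sign conventions of Equation 8 above (illustrative code, not the reconstruct-order implementation):

```python
import numpy as np

def instrument_matrix(chi):
    """Instrument matrix A (Equation 8): five illumination states
    (circular plus four elliptical with swing chi), analyzed through
    a left-handed circular analyzer."""
    s, c = np.sin(chi), np.cos(chi)
    return 0.5 * np.array([
        [1.0, 0.0, 0.0, -1.0],  # right-handed circular illumination
        [1.0,   s, 0.0,   -c],  # elliptical state at 0 degrees
        [1.0, 0.0,   s,   -c],  # elliptical state at 45 degrees
        [1.0,  -s, 0.0,   -c],  # elliptical state at 90 degrees
        [1.0, 0.0,  -s,   -c],  # elliptical state at 135 degrees
    ])

A = instrument_matrix(chi=0.2 * np.pi)        # swing of 0.1 waves
m_true = np.array([1.0, -0.02, 0.04, 0.94])   # example Mueller coefficients
I = A @ m_true                                # simulated intensities (Equation 7)
m_recovered = np.linalg.pinv(A) @ I           # Equation 9; matches m_true
```

Because A is 5 × 4, `np.linalg.pinv` computes the least-squares (Moore–Penrose) inverse; in practice, the same inverse is applied pixel-wise to the five recorded intensity images.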
Computation of background corrected specimen properties
We retrieved the vector of Mueller coefficients, $\vec{m}$, by solving Equation 9. Slight strain or misalignment in the optical components or the specimen chamber can lead to background that masks the contrast from the specimen. The background typically varies slowly across the field of view and can introduce spurious correlations in the measurement. It is crucial to correct the vector of Mueller coefficients for non-uniform background retardance that was not accounted for by the calibration process. To correct the non-uniform background retardance, we acquired background polarization images at an empty region of the specimen. We then transformed the specimen ($\vec{m}_{\mathrm{sm}}$) and background ($\vec{m}_{\mathrm{bg}}$) vectors of Mueller coefficients as follows,

$$\vec{m}' = \begin{pmatrix} m'_0 \\ m'_1 \\ m'_2 \\ m'_3 \end{pmatrix} = \begin{pmatrix} m_0 \\ m_1 / m_3 \\ m_2 / m_3 \\ \sqrt{m_1^2 + m_2^2 + m_3^2}\, /\, m_0 \end{pmatrix} \tag{10}$$

We then reconstructed the background-corrected properties of the specimen, brightfield (BF), retardance (ρ), slow axis (ω), and degree of polarization (DOP), from the transformed specimen and background vectors of Mueller coefficients $\vec{m}'_{\mathrm{sm}}$ and $\vec{m}'_{\mathrm{bg}}$ by inverting Equation 3, that is,

$$\mathrm{BF} = \frac{m_{0,\mathrm{sm}}}{m_{0,\mathrm{bg}}}, \quad \rho = \arctan\!\Big(\sqrt{(m'_{1,\mathrm{sm}} - m'_{1,\mathrm{bg}})^2 + (m'_{2,\mathrm{sm}} - m'_{2,\mathrm{bg}})^2}\Big), \quad \omega = \tfrac{1}{2}\arctan2\!\big({-(m'_{1,\mathrm{sm}} - m'_{1,\mathrm{bg}})},\; m'_{2,\mathrm{sm}} - m'_{2,\mathrm{bg}}\big), \quad \mathrm{DOP} = \frac{m'_{3,\mathrm{sm}}}{m'_{3,\mathrm{bg}}}$$
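A sketch of this retrieval step, written against the conventions of Equation 3 (the helper name is ours; the input arrays are assumed to be background-corrected coefficients):

```python
import numpy as np

def specimen_properties(m0, m1, m2, m3):
    """Invert Equation 3: recover brightfield, retardance (radians),
    slow-axis orientation (radians, in [0, pi)), and degree of
    polarization from background-corrected Mueller coefficients."""
    brightfield = m0
    retardance = np.arctan2(np.sqrt(m1**2 + m2**2), m3)
    slow_axis = (0.5 * np.arctan2(-m1, m2)) % np.pi
    dop = np.sqrt(m1**2 + m2**2 + m3**2) / m0
    return brightfield, retardance, slow_axis, dop
```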
When the background cannot be completely removed using the above background correction strategy with a single background measurement (that is, when the specimen has spatially varying background retardance), we applied a second round of background correction to the measurements. In this second round, we estimated the residual transformed background Mueller coefficients by fitting a low-order 2D polynomial surface to the transformed specimen Mueller coefficients. Specifically, we downsampled each 2048 × 2048 image to a 64 × 64 image with 32 × 32 binning, taking the median of each 32 × 32 bin as the pixel value in the downsampled image. We then fitted a second-order 2D polynomial surface to the downsampled image of each transformed specimen Mueller coefficient to estimate the residual background. With this newly estimated background, we performed another round of background correction. The effects of the two rounds of background correction are shown in Figure 2—figure supplement 2.
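The residual background estimation can be sketched as follows: a minimal numpy implementation of median binning followed by a least-squares fit of a second-order 2D polynomial surface (function and argument names are ours):

```python
import numpy as np

def fit_background_surface(img, bin_size=32, order=2):
    """Estimate a slowly varying residual background: downsample by taking
    the median of each bin, fit a 2D polynomial surface by least squares,
    and evaluate the surface at full resolution."""
    h, w = img.shape
    binned = img[:h - h % bin_size, :w - w % bin_size].reshape(
        h // bin_size, bin_size, w // bin_size, bin_size)
    small = np.median(binned, axis=(1, 3))       # e.g. 2048x2048 -> 64x64

    # Monomials x^i * y^j with i + j <= order, on normalized coordinates.
    exponents = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    ys, xs = np.mgrid[0:small.shape[0], 0:small.shape[1]]
    xs, ys = xs.ravel() / small.shape[1], ys.ravel() / small.shape[0]
    design = np.stack([xs**i * ys**j for i, j in exponents], axis=-1)
    coeffs, *_ = np.linalg.lstsq(design, small.ravel(), rcond=None)

    # Evaluate the fitted surface at full resolution.
    yf, xf = np.mgrid[0:h, 0:w]
    xf, yf = xf / w, yf / h
    surface = np.zeros((h, w))
    for c, (i, j) in zip(coeffs, exponents):
        surface += c * xf**i * yf**j
    return surface
```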
Phase reconstruction
As seen from Equation 3, the first component of the vector of Mueller coefficients, m0, is equal to the total transmitted intensity in the focal plane. Assuming a specimen with weak absorption, the intensity variations in a Z-stack encode the phase information via the transport of intensity equation (TIE) (Streibl, 1984). In the following, we leverage the weak object transfer function (WOTF) formalism (Streibl, 1985; Noda et al., 1990; Claus et al., 2015; Jenkins and Gaylord, 2015a; Jenkins and Gaylord, 2015b; Soto et al., 2017) to retrieve 2D and 3D phase from this TIE phase contrast and describe the corresponding inverse algorithms.
Forward model for phase reconstruction
The linear relationship between the 3D phase and the through-focus brightfield intensity was established in Streibl, 1985 under the Born and weak object approximations. In our context, we reformulate it as (Streibl, 1985; Noda et al., 1990; Soto et al., 2017)

$$m_0(\vec{r}) = \bar{m}_0 \left[ 1 + \Phi(\vec{r}) \otimes h_{\mathrm{Ph}}(\vec{r}) + \mu(\vec{r}) \otimes h_{\mathrm{Abs}}(\vec{r}) \right] \tag{17}$$

where $\vec{r} = (x, y, z)$ is the 3D spatial coordinate vector, $\bar{m}_0$ is the constant background of the $m_0$ component, $\otimes$ denotes the convolution operation over the $\vec{r}$ coordinate, Φ refers to phase, μ refers to absorption, $h_{\mathrm{Ph}}(\vec{r})$ is the phase point spread function (PSF), and $h_{\mathrm{Abs}}(\vec{r})$ is the absorption PSF. Strictly, Φ and μ are the real and imaginary parts of the scattering potential scaled by $\Delta z / 2k$, where $\Delta z$ is the axial pixel size of the experiment and $k$ is the wavenumber of the incident light. When the refractive index of the specimen and that of the environment are close, the real and imaginary scaled scattering potentials reduce to two real quantities, phase and absorption.
When the specimen's thickness is larger than the depth of field of the microscope (usually in experiments with a high-NA objective), the brightfield intensity stack contains 3D information about the specimen's phase and absorption. Without making further assumptions or acquiring more data, solving for 3D phase and absorption from a 3D brightfield stack is ill-posed, because we would be solving for two unknowns from one measurement. Assuming the absorption of the specimen is negligible (Noda et al., 1990; Jenkins and Gaylord, 2015b; Soto et al., 2017), which generally applies to transparent biological specimens, we turn this into a linear deconvolution problem from which the 3D phase is retrieved.
When the specimen's thickness is smaller than the depth of field of the microscope (usually in experiments with a low-NA objective), the whole 3D intensity stack arises from a single effective 2D absorption and phase layer of the specimen. We rewrite Equation 17 as (Claus et al., 2015; Jenkins and Gaylord, 2015a)

$$m_0(\vec{r}_\perp, z) = \bar{m}_0 \left[ 1 + \Phi_{\mathrm{2D}}(\vec{r}_\perp) \otimes h_{\mathrm{Ph}}(\vec{r}_\perp, z) + \mu_{\mathrm{2D}}(\vec{r}_\perp) \otimes h_{\mathrm{Abs}}(\vec{r}_\perp, z) \right] \tag{18}$$

where $\vec{r}_\perp = (x, y)$ and the convolution is over the lateral coordinates.
In this situation, we have multiple 2D defocused measurements to solve for one layer of 2D absorption and phase of the specimen.
Inverse problem for phase reconstruction
With the linear relationship between the first component of the Mueller coefficients vector and the phase, we then formulated the inverse problem to retrieve 2D and 3D phase of the specimen.
When we treat the specimen as a 3D specimen, we use Equation 17, drop the absorption term, and estimate the specimen's 3D phase through the following optimization:

$$\hat{\Phi} = \underset{\Phi}{\arg\min}\; \left\| m_0 - \bar{m}_0 \left[ 1 + \Phi \otimes h_{\mathrm{Ph}} \right] \right\|_2^2 + \tau\, R(\Phi) \tag{19}$$

where τ is the regularization parameter that controls the degree of denoising, and the regularization term $R(\Phi)$ is, depending on the choice of denoiser, either Tikhonov, $R(\Phi) = \|\Phi\|_2^2$, or anisotropic total variation (TV), $R(\Phi) = \|D_x\Phi\|_1 + \|D_y\Phi\|_1 + \|D_z\Phi\|_1$, where $D_x$, $D_y$, $D_z$ are finite-difference operators along the three axes.
When Tikhonov regularization is used, this optimization problem has an analytic solution that has been described previously (Noda et al., 1990; Jenkins and Gaylord, 2015b; Soto et al., 2017). For TV regularization, we adopted the alternating minimization algorithm proposed in Wang et al., 2008 and applied to phase imaging in Chen et al., 2018.
If we treat the specimen as a 2D specimen, we turn Equation 18 into the following optimization problem:

$$\hat{\Phi}_{\mathrm{2D}},\, \hat{\mu}_{\mathrm{2D}} = \underset{\Phi_{\mathrm{2D}},\, \mu_{\mathrm{2D}}}{\arg\min}\; \left\| m_0 - \bar{m}_0 \left[ 1 + \Phi_{\mathrm{2D}} \otimes h_{\mathrm{Ph}} + \mu_{\mathrm{2D}} \otimes h_{\mathrm{Abs}} \right] \right\|_2^2 + \tau_\Phi\, R(\Phi_{\mathrm{2D}}) + \tau_\mu\, R(\mu_{\mathrm{2D}}) \tag{20}$$

where $\tau_\mu$ is an additional regularization parameter for the absorption. When Tikhonov regularization is selected, we adopt an analytic solution similar to the one described in Chen et al., 2016.
When the signal-to-noise ratio of the brightfield stack is high, Tikhonov regularization gives a satisfactory reconstruction in a single step, with computation time proportional to the size of the image stack. However, when the noise is high, Tikhonov regularization can lead to high- to medium-frequency artifacts. Using the iterative TV denoising algorithm, we can trade reconstruction speed for robustness to noise.
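For the Tikhonov case, the single-step solution is a regularized deconvolution in the Fourier domain. A minimal sketch, assuming the 3D phase transfer function (the Fourier transform of the phase PSF) has already been computed on the same grid as the data:

```python
import numpy as np

def tikhonov_phase(m0_stack, H_phase, tau=1e-3):
    """Closed-form 3D phase retrieval under Tikhonov regularization:
    Phi = F^-1[ conj(H) F(data) / (|H|^2 + tau) ], where data is the
    normalized intensity contrast and H is the phase transfer function."""
    data = m0_stack / m0_stack.mean() - 1.0
    D = np.fft.fftn(data)
    Phi = np.fft.ifftn(np.conj(H_phase) * D / (np.abs(H_phase) ** 2 + tau))
    return np.real(Phi)
```

Larger values of tau suppress noise at the cost of smoothing fine phase detail; the TV variant replaces this one-shot division with an iterative scheme.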
Specimen preparation
Mouse kidney tissue slices were purchased from Thermo-Fisher Scientific. In the mouse kidney tissue slice, F-actin was labeled with Alexa Fluor 568 phalloidin and nuclei were labeled with DAPI. U2OS cells were seeded and cultured in a chamber made of two strain-free coverslips that allowed for gas exchange.
Mouse brain section
The mice were anesthetized by inhalation of isoflurane in a chemical fume hood and then perfused with 25 ml phosphate-buffered saline (PBS) into the left cardiac ventricle and subsequently with 25 ml of 4% paraformaldehyde (PFA) in PBS. Thereafter, the brains were post-fixed with 4% PFA for 12–16 hr and then transferred to 30% sucrose solution at 4°C for 2–3 days until the tissue sank to the bottom of the container. The brains were then embedded in a tissue freezing medium (Tissue-Tek O.C.T compound 4583, Sakura) and kept at −80°C. A cryostat-microtome (Leica CM 1850, Houston, TX) was used to prepare the tissue sections (12 and 50 µm) at −20°C, and the slides were stored at −20°C until use. To analyze myelination with QLIPP, the OCT on the slides was melted by keeping the slides at 37°C for 15–30 min. The slides were then washed in PBST (PBS + 0.1% Tween-20) for five minutes, washed in PBS for five minutes, and coverslipped with mounting medium (Fluoromount Aqueous Mounting Medium, F4680, Sigma).
Prenatal human brain section
De-identified brain tissue samples were received with patient consent in accordance with a protocol approved by the Human Gamete, Embryo, and Stem Cell Research Committee (institutional review board) at the University of California, San Francisco. Human prenatal brain samples were fixed with 4% paraformaldehyde in phosphate-buffered saline (PBS) overnight, rinsed with PBS, dehydrated in 30% sucrose/OCT compound (Agar Scientific) at 4°C overnight, and then frozen in OCT at −80°C. Frozen samples were sectioned at 12 μm and mounted on microscope slides. Sections were stained directly with red FluoroMyelin (Thermo-Fisher Scientific, 1:300 in PBS) for 20 min at room temperature, rinsed three times with PBS for 10 min each, and then mounted with ProLong Gold antifade (Invitrogen) under a coverslip.
Image acquisition and registration
We implemented the LC-PolScope on a Leica DMi8 inverted microscope with an Andor Dragonfly confocal for multiplexed acquisition of polarization-resolved and fluorescence images. We automated the acquisition using Micro-Manager v1.4.22 and the OpenPolScope plugin for Micro-Manager, which controls the liquid crystal universal polarizer (custom device from Meadowlark Optics, specifications available upon request).
We multiplexed the acquisition of label-free and fluorescence volumes. The volumes were registered using transformation matrices computed from similarly acquired multiplexed volumes of the 3D matrix-of-rings pattern of the ARGO-SIM test target (Argolight).
In a transmitted light microscope, resolution increases and image contrast decreases as the numerical aperture (NA) of the illumination increases. We used a 63×/1.47 NA oil immersion objective (Leica) and a 0.9 NA condenser to achieve a good balance between image contrast and resolution. The mouse kidney tissue slice was imaged using 100 ms exposure for the five polarization channels, 200 ms exposure for the 405 nm channel (nuclei) at 1.6 mW in confocal mode, and 100 ms exposure for the 561 nm channel (F-actin) at 2.8 mW in confocal mode. The mouse brain slice was imaged using 30 ms exposure for the five polarization channels. U2OS cells were imaged using 50 ms exposure for the five polarization channels. For training the neural networks, we acquired 160 non-overlapping 2048 × 2048 × 45 Z-stacks of the mouse kidney tissue slice with a Nyquist-sampled voxel size of 103 nm × 103 nm × 250 nm. Human brain sections were imaged with a 10×/0.3 NA objective and a 0.2 NA condenser, with 200 ms exposure for the polarization channels and 250 ms exposure for the 568 nm channel (FluoroMyelin) in epifluorescence mode. Full brain sections were imaged as approximately 200 fields of view, depending on the size of the section, with 5 Z-slices at each location. The registered images of the mouse kidney tissue slice are available in the BioImage Archive (https://www.ebi.ac.uk/biostudies/BioImages/studies/S-BIAD25).
Data preprocessing for model training
The images were flat-field corrected. For training 3D models, the image volumes were upsampled along Z to match the pixel size in XY using linear interpolation. The images were tiled into 256 × 256 patches with 50% overlap between patches for 2D and 2.5D models. The volumes were tiled into 128 × 128 × 96 patches with 25% overlap along XYZ for 3D models. Tiles with sufficient fluorescence foreground (2D and 2.5D: 20%; 3D: 50%) were used for training. Foreground masks were computed by summing the binary images of nuclei and F-actin obtained from Otsu thresholding in the case of mouse kidney tissue sections, and from binary images of FluoroMyelin for the human brain sections. Images of human brain sections were visually inspected and curated before training to exclude images containing quenching artifacts, as shown in Figure 7.
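The tiling and foreground selection steps can be sketched as follows (threshold fractions follow the text above; function names are ours, not microDL's API):

```python
import numpy as np
from skimage.filters import threshold_otsu

def tile_image(img, tile=256, overlap=0.5):
    """Tile a 2D image into tile x tile patches with fractional overlap."""
    step = int(tile * (1 - overlap))
    return [img[y:y + tile, x:x + tile]
            for y in range(0, img.shape[0] - tile + 1, step)
            for x in range(0, img.shape[1] - tile + 1, step)]

def has_foreground(fluor_tile, min_fraction=0.2):
    """Keep a tile if the Otsu-thresholded fluorescence foreground
    covers at least min_fraction of its pixels (0.5 for 3D models)."""
    mask = fluor_tile > threshold_otsu(fluor_tile)
    return mask.mean() >= min_fraction
```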
Proper data normalization is essential for predicting intensity correctly across different fields of view. We found that the common normalization scheme, in which each image is normalized by its own mean and standard deviation, does not produce correct intensity predictions (Figure 7—figure supplement 1). We instead normalized the images on a per-dataset basis to correct for batch variation in the staining and imaging process across datasets. To balance the contributions of different channels during training of multi-contrast models, each channel needs to be scaled to a similar range. Specifically, for each channel, we subtracted its median and divided by the inter-quartile range (the range between the 25% and 75% quantiles) of the foreground pixel intensities. We used the inter-quartile range to normalize each channel because the standard deviation underestimates the spread of the distribution of highly correlated data, such as pixels in images.
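A minimal sketch of this per-dataset normalization (assuming a precomputed boolean foreground mask; names are illustrative):

```python
import numpy as np

def normalize_dataset(images, foreground_mask):
    """Median/IQR normalization pooled over an entire dataset for one
    channel: subtract the median and divide by the inter-quartile range
    of the foreground pixel intensities."""
    fg = images[foreground_mask]
    median = np.median(fg)
    iqr = np.percentile(fg, 75) - np.percentile(fg, 25)
    return (images - median) / iqr
```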
Neural network architecture
We experimented with 2D, 2.5D, and 3D versions of U-Net models (Figure 3—figure supplement 1). Across the three U-Net variants, each convolution block in the encoding path consists of two repeats of three layers: a convolution layer, a ReLU non-linearity, and a batch normalization layer. We added a residual connection from the input of each block to its output to facilitate faster convergence of the model (Milletari et al., 2016; Drozdzal et al., 2016). 2 × 2 downsampling is applied with a 2 × 2 convolution with stride two at the end of each encoding block. On the decoding path, the feature maps are passed through similar convolution blocks, followed by upsampling using bilinear interpolation. Feature maps output by every level of the encoding path are concatenated with the feature maps at the corresponding level of the decoding path. The final output block has a convolution layer only.
The encoding path of our 2D and 2.5D U-Nets consists of five levels with 16, 32, 64, 128, and 256 filters, respectively, while the 3D U-Net consists of four levels with 16, 32, 64, and 128 filters due to its higher memory requirement. The 2D and 3D versions use convolution filters of size 3 × 3 and 3 × 3 × 3, respectively, with a stride of 1 for feature extraction and a stride of 2 for downsampling between convolution blocks.
The 2.5D U-Net has a similar architecture to the 2D U-Net, with the following differences (see the sketch after this list):
The 3D feature maps are converted into 2D using skip connections that consist of an N × 1 × 1 valid convolution, where N = 3, 5, or 7 is the number of slices in the input.
Convolution filters in the encoding path are N × 3 × 3.
In the encoding path, the feature maps are downsampled across blocks using N × 2 × 2 average pooling.
In the decoding path, the feature maps are upsampled using bilinear interpolation by a factor of 1 × 2 × 2, and the convolution filters are of shape 1 × 3 × 3.
The 2D, 2.5D, and 3D networks with single-channel input consisted of 2.0M, 4.8M, and 1.5M learnable parameters, respectively.
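The distinguishing 2.5D operations can be sketched as follows. This is an illustrative PyTorch fragment, not the microDL implementation (which may differ in framework and in details such as the residual connections):

```python
import torch.nn as nn

class EncoderBlock25D(nn.Module):
    """Sketch of a 2.5D encoder block: N x 3 x 3 convolutions mix
    information across the N input slices, while an N x 1 x 1 'valid'
    convolution collapses the stack to a 2D feature map that skips
    into the 2D decoding path."""
    def __init__(self, in_ch, out_ch, n_slices=5):
        super().__init__()
        dpad = n_slices // 2  # pad depth so all blocks keep N slices
        self.conv = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, (n_slices, 3, 3), padding=(dpad, 1, 1)),
            nn.ReLU(),
            nn.BatchNorm3d(out_ch),
            nn.Conv3d(out_ch, out_ch, (n_slices, 3, 3), padding=(dpad, 1, 1)),
            nn.ReLU(),
            nn.BatchNorm3d(out_ch),
        )
        self.to_2d = nn.Conv3d(out_ch, out_ch, (n_slices, 1, 1))  # valid along Z
        self.pool = nn.AvgPool3d((1, 2, 2))  # 2 x 2 downsampling in XY

    def forward(self, x):                    # x: (batch, C, N, H, W)
        feat = self.conv(x)
        skip = self.to_2d(feat).squeeze(2)   # (batch, C, H, W) skip tensor
        return self.pool(feat), skip
```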
Model training and inference
We randomly split the images into groups of 70%, 15%, and 15% for training, validation, and test. The splits were kept consistent across all model trainings to make the results comparable. All models were trained with the Adam optimizer, an L1 loss function, and a cyclic learning rate scheduler with minimum and maximum learning rates of 5 × 10⁻⁵ and 6 × 10⁻³, respectively. The 2D, 2.5D, and 3D networks were trained on mini-batches of size 64, 16, and 4, respectively, to accommodate the memory requirements of each model. Models were trained until the validation loss had not decreased for 20 epochs, and the model with the minimal validation loss was saved. Single-channel 2D models converged in 6 hr, the 2.5D model converged in 47 hr, and the 3D model converged in 76 hr on an NVIDIA Tesla V100 GPU with 32 GB RAM.
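In PyTorch-style code, the optimization setup reads roughly as follows (illustrative; microDL's training loop is configured differently):

```python
import torch

model = torch.nn.Conv2d(1, 1, 3, padding=1)   # stand-in for a U-Net variant
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer, base_lr=5e-5, max_lr=6e-3, cycle_momentum=False)
criterion = torch.nn.L1Loss()

best_val, patience, wait = float('inf'), 20, 0
# After each epoch: if validation loss improves, checkpoint the model
# and reset `wait`; otherwise increment `wait` and stop at `patience`.
```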
As the models are fully convolutional, model predictions were obtained using full XY images as input for the 2D and 2.5D versions. Due to the memory requirements of the 3D model, the test volumes were tiled along X and Y while retaining the entire Z extent (patch size: 512 × 512 × 96), with an overlap of 32 pixels along X and Y. The predictions were stitched together by linearly blending the model predictions across the overlapping regions. Inference time for a single-channel U-Net model was 105, 3, and 18 seconds per frame for the 2D, 2.5D, and 3D models, respectively, for frames of 2048 × 2048 pixels.
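Linear blending across an overlap can be sketched for a pair of horizontally adjacent predictions (illustrative helper):

```python
import numpy as np

def blend_pair(left, right, overlap):
    """Stitch two predictions sharing `overlap` columns by ramping the
    weight linearly from the left tile to the right tile."""
    w = np.linspace(0.0, 1.0, overlap)
    seam = left[:, -overlap:] * (1 - w) + right[:, :overlap] * w
    return np.concatenate(
        [left[:, :-overlap], seam, right[:, overlap:]], axis=1)
```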
Model evaluation
Pearson correlation and structural similarity index (SSIM) along the XY, XZ and XYZ dimensions of the test volumes were used for evaluating model performance.
The Pearson correlation coefficient between a target image T and a prediction image P is defined as

$$r = \frac{\operatorname{cov}(T, P)}{\sigma_T\, \sigma_P} \tag{21}$$

where $\operatorname{cov}(T, P)$ is the covariance of T and P, and $\sigma_T$ and $\sigma_P$ are the standard deviations of T and P, respectively.
SSIM compares two images using a sliding window approach (with a 3D window for XYZ). Assuming a target window t and a prediction window p,

$$\mathrm{SSIM}(t, p) = \frac{(2\mu_t \mu_p + c_1)(2\sigma_{tp} + c_2)}{(\mu_t^2 + \mu_p^2 + c_1)(\sigma_t^2 + \sigma_p^2 + c_2)} \tag{22}$$

where $c_1 = (k_1 L)^2$ and $c_2 = (k_2 L)^2$, and L is the dynamic range of pixel values. Mean and variance are represented by μ and σ² respectively, and the covariance between t and p is denoted $\sigma_{tp}$. We use the standard values of the constants $k_1$ and $k_2$ (Wang and Bovik, 2009). The total SSIM score is the mean score calculated across all windows, $\mathrm{SSIM} = \frac{1}{M}\sum_{j=1}^{M} \mathrm{SSIM}_j$, for a total of M windows. For the XY and XZ dimensions, we compute one test metric per plane, and for the XYZ dimension, we compute one test metric per volume.
Importantly, it is essential to scale the model predictions back to the original (pre-normalization) intensity range before calculating the target-prediction SSIM. Unlike the Pearson correlation coefficient, SSIM is not a scale-independent metric.
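Both metrics can be computed with standard tools; a sketch follows (scale and offset stand for the normalization parameters that map predictions back to the original intensity range):

```python
import numpy as np
from skimage.metrics import structural_similarity

def evaluate(target, prediction, scale, offset):
    """Pearson correlation (scale-invariant) and SSIM (computed after
    mapping the prediction back to the pre-normalization range)."""
    r = np.corrcoef(target.ravel(), prediction.ravel())[0, 1]
    pred = prediction * scale + offset
    ssim = structural_similarity(
        target, pred, data_range=target.max() - target.min())
    return r, ssim
```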
Acknowledgements
We thank Spyros Dermanis (CZ Biohub) and Bing Wu (CZ Biohub) for providing the mouse brain slice used for acquiring data shown in Figure 5. We thank Greg Huber, Loic Royer, Joshua Batson, Jim Karkanias, Joe DeRisi, and Steve Quake from the Chan Zuckerberg Biohub for numerous discussions. We also thank Eva Dyer from Georgia Tech for discussions about applications of the 2.5D models. This research was supported by the Chan Zuckerberg Biohub.
Appendix 1
Glossary
QLIPP: Quantitative label-free imaging with phase and polarization.
specimen phase: optical path length (OPL) of the specimen that is proportional to the product of its thickness and difference in the refractive index relative to the surrounding medium.
specimen anisotropy: angular anisotropy in OPL, which refers to retardance and slow axis orientation collectively.
wavefront: a surface in 3D space over which the time of propagation of light from the source is constant.
phase of a wavefront: time delay of the wavefront, which is affected by the specimen phase.
retardance: the difference in OPL induced by an anisotropic specimen due to its polarization-dependent refractive index.
slow axis orientation: the orientation along which the refractive index of an anisotropic material is the highest. The light polarized along the slow axis experiences the highest phase delay relative to the light polarized along the other axes.
U-Net: A fully convolutional network consisting of a contracting and an expansive path, giving the architecture its U-shape.
2D (Slice→Slice) U-Net: A U-Net model, using 2D convolution filters, that predicts a 2D slice from a 2D input slice.
2.5D (Stack→Slice) U-Net: A U-Net model, using 3D convolution filters in the contracting path and 2D convolution filters in the expansive path, that predicts a 2D slice from a small stack of input slices.
3D (Stack→Stack) U-Net: A U-Net model, using 3D convolution filters, that predicts a 3D stack from a 3D input stack.
Normalization per tile: The data used for training neural networks are split into tiles. In this normalization strategy, each tile is normalized independently to have zero mean and unit variance.
Normalization per field of view: In this normalization strategy, each field of view (which consists of 16 tiles) is normalized to have zero mean and unit variance. The variations across tiles capture variations in the input and target data over the field of view.
Normalization per dataset: In this normalization strategy, the whole training set is normalized to have zero mean and unit variance. The variations across tiles capture variations in the input and target data over the entire dataset, for example, a large brain slice.
SSIM: The Structural SIMilarity (SSIM) index is a method for measuring the similarity between two images.
Funding Statement
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Contributor Information
Shalin B Mehta, Email: shalin.mehta@czbiohub.org.
Birte Forstmann, University of Amsterdam, Netherlands.
Vivek Malhotra, The Barcelona Institute of Science and Technology, Spain.
Funding Information
This paper was supported by the following grant:
Chan Zuckerberg Biohub to Syuan-Ming Guo, Li-Hao Yeh, Jenny Folkesson, Ivan E Ivanov, Matthew G Keefe, David Shin, Bryant B Chhun, Nathan H Cho, Tomasz J Nowakowski, Shalin B Mehta.
Additional information
Competing interests
No competing interests declared.
Author contributions
Conceptualization, Resources, Data curation, Software, Formal analysis, Validation, Investigation, Visualization, Methodology, Writing - original draft, Writing - review and editing.
Conceptualization, Resources, Data curation, Software, Formal analysis, Validation, Investigation, Visualization, Methodology, Writing - original draft, Writing - review and editing.
Resources, Data curation, Software, Formal analysis, Supervision, Validation, Visualization, Methodology, Writing - original draft.
Conceptualization, Software, Validation, Investigation, Visualization, Methodology, Writing - review and editing.
Data curation, Software, Formal analysis, Validation, Visualization.
Resources, Investigation, Methodology, Writing - review and editing.
Resources, Methodology, Writing - original draft.
Resources, Methodology.
Resources, Data curation, Software, Validation.
Resources, Investigation.
Resources, Supervision.
Resources, Supervision, Writing - review and editing.
Conceptualization, Resources, Supervision, Project administration, Writing - review and editing.
Conceptualization, Resources, Software, Formal analysis, Supervision, Funding acquisition, Validation, Investigation, Visualization, Methodology, Writing - original draft, Project administration, Writing - review and editing.
Ethics
Human subjects: De-identified brain tissue samples were received with patient consent in accordance with a protocol approved by the Human Gamete, Embryo, and Stem Cell Research Committee (institutional review board) at the University of California, San Francisco.
Additional files
Data availability
Our experiments generated imaging data from mouse kidney tissue and human brain tissue slices that are useful for machine learning and other analyses. The data are available in the BioImage Archive (http://www.ebi.ac.uk/bioimage-archive) under accession number S-BIAD25.
The following dataset was generated:
Guo SM, Yeh LH, Folkesson J, Ivanov IE, Krishnan AP, Keefe MG, Hashemi E, Shin D, Chhun B, Cho N, Leonetti M, Han MH, Nowakowski TJ, Mehta S. 2020. Revealing architectural order with quantitative label-free imaging and deep learning. BioImage Archive. S-BIAD25
References
- Axer M, Grässel D, Kleiner M, Dammers J, Dickscheid T, Reckfort J, Hütz T, Eiben B, Pietrzyk U, Zilles K, Amunts K. High-resolution fiber tract reconstruction in the human brain by means of three-dimensional polarized light imaging. Frontiers in Neuroinformatics. 2011a;5:34. doi: 10.3389/fninf.2011.00034.
- Axer H, Beck S, Axer M, Schuchardt F, Heepe J, Flücken A, Axer M, Prescher A, Witte OW. Microstructural analysis of human white matter architecture using polarized light imaging: views from neuroanatomy. Frontiers in Neuroinformatics. 2011b;5:28. doi: 10.3389/fninf.2011.00028.
- Axer M, Amunts K, Grässel D, Palm C, Dammers J, Axer H, Pietrzyk U, Zilles K. A novel approach to the human connectome: ultra-high resolution mapping of fiber tracts in the brain. NeuroImage. 2011c;54:1091–1101. doi: 10.1016/j.neuroimage.2010.08.075.
- Azzam RMA. Stokes-vector and Mueller-matrix polarimetry [Invited]. Journal of the Optical Society of America A. 2016;33:1396–1408. doi: 10.1364/JOSAA.33.001396.
- Barer R. Interference microscopy and mass determination. Nature. 1952;169:366–367. doi: 10.1038/169366b0.
- Baroni A, Chamard V, Ferrand P. Extending quantitative phase imaging to polarization-sensitive materials. Physical Review Applied. 2020;13:054028. doi: 10.1103/PhysRevApplied.13.054028.
- Bass M, DeCusatis C, Enoch JM, Lakshminarayanan V, Li G, MacDonald C, Mahajan VN, Stryland EV. Handbook of Optics, Volume I: Geometrical and Physical Optics, Polarized Light, Components and Instruments (Set). McGraw Hill Professional; 2009.
- Bayer SA, Altman J. The Human Brain During the Third Trimester. Taylor & Francis; 2003.
- Belin BJ, Goins LM, Mullins RD. Comparative analysis of tools for live cell imaging of actin network architecture. BioArchitecture. 2014;4:189–202. doi: 10.1080/19490992.2014.1047714.
- Belthangady C, Royer LA. Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction. Nature Methods. 2019;16:1215–1225. doi: 10.1038/s41592-019-0458-z.
- Born M, Wolf E. Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light. Elsevier; 2013.
- Chen M, Tian L, Waller L. 3D differential phase contrast microscopy. Biomedical Optics Express. 2016;7:3940. doi: 10.1364/BOE.7.003940.
- Chen M, Phillips ZF, Waller L. Quantitative differential phase contrast (DPC) microscopy with computational aberration correction. Optics Express. 2018;26:32888. doi: 10.1364/OE.26.032888.
- Christiansen EM, Yang SJ, Ando DM, Javaherian A, Skibinski G, Lipnick S, Mount E, O'Neil A, Shah K, Lee AK, Goyal P, Fedus W, Poplin R, Esteva A, Berndl M, Rubin LL, Nelson P, Finkbeiner S. In silico labeling: predicting fluorescent labels in unlabeled images. Cell. 2018;173:792–803. doi: 10.1016/j.cell.2018.03.040.
- Claus RA, Naulleau PP, Neureuther AR, Waller L. Quantitative phase retrieval with arbitrary pupil and illumination. Optics Express. 2015;23:26672. doi: 10.1364/OE.23.026672.
- de Campos Vidal B, Mello ML, Caseiro-Filho AC, Godo C. Anisotropic properties of the myelin sheath. Acta Histochemica. 1980;66:32–39. doi: 10.1016/S0065-1281(80)80079-1.
- DeMay BS, Noda N, Gladfelter AS, Oldenbourg R. Rapid and quantitative imaging of excitation polarized fluorescence reveals ordered septin dynamics in live yeast. Biophysical Journal. 2011;101:985–994. doi: 10.1016/j.bpj.2011.07.008.
- Drozdzal M, Vorontsov E, Chartrand G, Kadoury S, Pal C. The importance of skip connections in biomedical image segmentation. arXiv. 2016. https://arxiv.org/abs/1608.04117
- Ferrand P, Baroni A, Allain M, Chamard V. Quantitative imaging of anisotropic material properties with vectorial ptychography. Optics Letters. 2018;43:763. doi: 10.1364/OL.43.000763.
- Han X. Automatic liver lesion segmentation using a deep convolutional neural network method. arXiv. 2017. https://arxiv.org/abs/1704.07239
- Heath F, Hurley SA, Johansen-Berg H, Sampaio-Baptista C. Advances in noninvasive myelin imaging. Developmental Neurobiology. 2018;78:136–151. doi: 10.1002/dneu.22552.
- Henssen D, Mollink J, Kurt E, van Dongen R, Bartels R, Gräβel D, Kozicz T, Axer M, Van Cappellen van Walsum AM. Ex vivo visualization of the trigeminal pathways in the human brainstem using 11.7T diffusion MRI combined with microscopy polarized light imaging. Brain Structure and Function. 2019;224:159–170. doi: 10.1007/s00429-018-1767-1.
- Imai R, Nozaki T, Tani T, Kaizu K, Hibino K, Ide S, Tamura S, Takahashi K, Shribak M, Maeshima K. Density imaging of heterochromatin in live cells using orientation-independent-DIC microscopy. Molecular Biology of the Cell. 2017;28:3349–3359. doi: 10.1091/mbc.e17-06-0359.
- Inoue S. Polarization optical studies of the mitotic spindle. I. The demonstration of spindle fibers in living cells. Chromosoma. 1953;5:487–500. doi: 10.1007/BF01271498.
- Jakovcevski I, Filipovic R, Mo Z, Rakic S, Zecevic N. Oligodendrocyte development and the onset of myelination in the human fetal brain. Frontiers in Neuroanatomy. 2009;3:5. doi: 10.3389/neuro.05.005.2009.
- Jenkins MH, Gaylord TK. Quantitative phase microscopy via optimized inversion of the phase optical transfer function. Applied Optics. 2015a;54:8566. doi: 10.1364/AO.54.008566.
- Jenkins MH, Gaylord TK. Three-dimensional quantitative phase imaging via tomographic deconvolution phase microscopy. Applied Optics. 2015b;54:9213–9227. doi: 10.1364/AO.54.009213.
- Keefe D, Liu L, Wang W, Silva C. Imaging meiotic spindles by polarization light microscopy: principles and applications to IVF. Reproductive BioMedicine Online. 2003;7:24–29. doi: 10.1016/S1472-6483(10)61724-5.
- Khodanovich M, Pishchelko A, Glazacheva V, Pan E, Akulov A, Svetlik M, Tyumentseva Y, Anan'ina T, Yarnykh V. Quantitative imaging of white and gray matter remyelination in the cuprizone demyelination model using the macromolecular proton fraction. Cells. 2019;8:1204. doi: 10.3390/cells8101204.
- Kim D, Lee S, Lee M, Oh J, Yang S-A, Park Y. Holotomography: refractive index as an intrinsic imaging contrast for 3-D label-free live cell imaging. bioRxiv. 2018. doi: 10.1101/106328.
- Kleinfeld D, Bharioke A, Blinder P, Bock DD, Briggman KL, Chklovskii DB, Denk W, Helmstaedter M, Kaufhold JP, Lee W-CA, Meyer HS, Micheva KD, Oberlaender M, Prohaska S, Reid RC, Smith SJ, Takemura S, Tsai PS, Sakmann B. Large-scale automated histology in the pursuit of connectomes. Journal of Neuroscience. 2011;31:16125–16138. doi: 10.1523/JNEUROSCI.4077-11.2011.
- Koike-Tani M, Tominaga T, Oldenbourg R, Tani T. Instantaneous polarized light imaging reveals activity dependent structural changes of dendrites in mouse hippocampal slices. bioRxiv. 2019. doi: 10.1101/523571.
- Lee M, Lee Y-H, Song J, Kim G, Jo Y, Min H, Kim CH, Park Y. DeepIS: deep learning framework for three-dimensional label-free tracking of immunological synapses. bioRxiv. 2019. doi: 10.1101/539858.
- Lein ES, Hawrylycz MJ, Ao N, Ayres M, Bensinger A, Bernard A, Boe AF, Boguski MS, Brockway KS, Byrnes EJ, Chen L, Chen L, Chen TM, Chin MC, Chong J, Crook BE, Czaplinska A, Dang CN, Datta S, Dee NR, Desaki AL, Desta T, Diep E, Dolbeare TA, Donelan MJ, Dong HW, Dougherty JG, Duncan BJ, Ebbert AJ, Eichele G, Estin LK, Faber C, Facer BA, Fields R, Fischer SR, Fliss TP, Frensley C, Gates SN, Glattfelder KJ, Halverson KR, Hart MR, Hohmann JG, Howell MP, Jeung DP, Johnson RA, Karr PT, Kawal R, Kidney JM, Knapik RH, Kuan CL, Lake JH, Laramee AR, Larsen KD, Lau C, Lemon TA, Liang AJ, Liu Y, Luong LT, Michaels J, Morgan JJ, Morgan RJ, Mortrud MT, Mosqueda NF, Ng LL, Ng R, Orta GJ, Overly CC, Pak TH, Parry SE, Pathak SD, Pearson OC, Puchalski RB, Riley ZL, Rockett HR, Rowland SA, Royall JJ, Ruiz MJ, Sarno NR, Schaffnit K, Shapovalova NV, Sivisay T, Slaughterbeck CR, Smith SC, Smith KA, Smith BI, Sodt AJ, Stewart NN, Stumpf KR, Sunkin SM, Sutram M, Tam A, Teemer CD, Thaller C, Thompson CL, Varnam LR, Visel A, Whitlock RM, Wohnoutka PE, Wolkey CK, Wong VY, Wood M, Yaylaoglu MB, Young RC, Youngstrom BL, Yuan XF, Zhang B, Zwingman TA, Jones AR. Genome-wide atlas of gene expression in the adult mouse brain. Nature. 2007;445:168–176. doi: 10.1038/nature05453.
- Ling T, Boyle KC, Zuckerman V, Flores T, Ramakrishnan C, Deisseroth K, Palanker D. How neurons move during action potentials. bioRxiv. 2019. doi: 10.1101/765768.
- Mehta SB, Shribak M, Oldenbourg R. Polarized light imaging of birefringence and diattenuation at high resolution and high sensitivity. Journal of Optics. 2013;15:094007. doi: 10.1088/2040-8978/15/9/094007.
- Mehta SB, McQuilken M, La Riviere PJ, Occhipinti P, Verma A, Oldenbourg R, Gladfelter AS, Tani T. Dissection of molecular assembly dynamics by tracking orientation and position of single molecules in live cells. PNAS. 2016;113:E6352–E6361. doi: 10.1073/pnas.1607674113.
- Menzel M, Michielsen K, De Raedt H, Reckfort J, Amunts K, Axer M. A Jones matrix formalism for simulating three-dimensional polarized light imaging of brain tissue. Journal of the Royal Society Interface. 2015;12:20150734. doi: 10.1098/rsif.2015.0734.
- Menzel M, Reckfort J, Weigand D, Köse H, Amunts K, Axer M. Diattenuation of brain tissue and its impact on 3D polarized light imaging. arXiv. 2017. doi: 10.1364/BOE.8.003163. https://arxiv.org/abs/1703.04343
- Miller DJ, Duka T, Stimpson CD, Schapiro SJ, Baze WB, McArthur MJ, Fobbs AJ, Sousa AMM, Sestan N, Wildman DE, Lipovich L, Kuzawa CW, Hof PR, Sherwood CC. Prolonged myelination in human neocortical evolution. PNAS. 2012;109:16480–16485. doi: 10.1073/pnas.1117943109.
- Milletari F, Navab N, Ahmadi S. V-Net: fully convolutional neural networks for volumetric medical image segmentation. 2016 Fourth International Conference on 3D Vision; 2016. pp. 565–571.
- Moen E, Bannon D, Kudo T, Graf W, Covert M, Van Valen D. Deep learning for cellular image analysis. Nature Methods. 2019;16:1233–1246. doi: 10.1038/s41592-019-0403-1.
- Mollink J, Kleinnijenhuis M, Cappellen van Walsum AV, Sotiropoulos SN, Cottaar M, Mirfin C, Heinrich MP, Jenkinson M, Pallebage-Gamarallage M, Ansorge O, Jbabdi S, Miller KL. Evaluating fibre orientation dispersion in white matter: comparison of diffusion MRI, histology and polarized light imaging. NeuroImage. 2017;157:561–574. doi: 10.1016/j.neuroimage.2017.06.001.
- Monsma PC, Brown A. FluoroMyelin red is a bright, photostable and non-toxic fluorescent stain for live imaging of myelin. Journal of Neuroscience Methods. 2012;209:344–350. doi: 10.1016/j.jneumeth.2012.06.015.
- Noda T, Kawata S, Minami S. Three-dimensional phase contrast imaging by an annular illumination microscope. Applied Optics. 1990;29:3810–3815. doi: 10.1364/AO.29.003810.
- Nomarski GM. Differential microinterferometer with polarized waves. Journal de Physique et Le Radium. 1955;16:9S.
- Oh SW, Harris JA, Ng L, Winslow B, Cain N, Mihalas S, Wang Q, Lau C, Kuan L, Henry AM, Mortrud MT, Ouellette B, Nguyen TN, Sorensen SA, Slaughterbeck CR, Wakeman W, Li Y, Feng D, Ho A, Nicholas E, Hirokawa KE, Bohn P, Joines KM, Peng H, Hawrylycz MJ, Phillips JW, Hohmann JG, Wohnoutka P, Gerfen CR, Koch C, Bernard A, Dang C, Jones AR, Zeng H. A mesoscale connectome of the mouse brain. Nature. 2014;508:207–214. doi: 10.1038/nature13186.
- Ohki K, Chung S, Ch'ng YH, Kara P, Reid RC. Functional imaging with cellular resolution reveals precise micro-architecture in visual cortex. Nature. 2005;433:597–603. doi: 10.1038/nature03274.
- Oldenbourg R, Katoh K, Danuser G. Mechanism of lateral movement of filopodia and radial actin bundles across neuronal growth cones. Biophysical Journal. 2000;78:1176–1182. doi: 10.1016/S0006-3495(00)76675-6.
- Oldenbourg R. Polarized light field microscopy: an analytical method using a microlens array to simultaneously capture both conoscopic and orthoscopic views of birefringent objects. Journal of Microscopy. 2008;231:419–432. doi: 10.1111/j.1365-2818.2008.02053.x.
- Oldenbourg R, Mei G. New polarized light microscope with precision universal compensator. Journal of Microscopy. 1995;180:140–147. doi: 10.1111/j.1365-2818.1995.tb03669.x.
- Ounkomol C, Seshamani S, Maleckar MM, Collman F, Johnson GR. Label-free prediction of three-dimensional fluorescence images from transmitted-light microscopy. Nature Methods. 2018;15:917–920. doi: 10.1038/s41592-018-0111-2.
- Park Y, Depeursinge C, Popescu G. Quantitative phase imaging in biomedicine. Nature Photonics. 2018;12:578–589. doi: 10.1038/s41566-018-0253-x.
- Petersen D, Mavarani L, Niedieker D, Freier E, Tannapfel A, Kötting C, Gerwert K, El-Mashtoly SF. Virtual staining of colon cancer tissue by label-free Raman micro-spectroscopy. The Analyst. 2017;142:1207–1215. doi: 10.1039/C6AN02072K.
- Popescu G, Ikeda T, Dasari RR, Feld MS. Diffraction phase microscopy for quantifying cell structure and dynamics. Optics Letters. 2006;31:775–777. doi: 10.1364/OL.31.000775.
- Ragan T, Kadiri LR, Venkataraju KU, Bahlmann K, Sutin J, Taranda J, Arganda-Carreras I, Kim Y, Seung HS, Osten P. Serial two-photon tomography for automated ex vivo mouse brain imaging. Nature Methods. 2012;9:255–258. doi: 10.1038/nmeth.1854.
- Rivenson Y, Liu T, Wei Z, Zhang Y, Ozcan A. PhaseStain: digital staining of label-free quantitative phase microscopy images using deep learning. arXiv. 2018a. doi: 10.1038/s41377-019-0129-y. https://arxiv.org/abs/1807.07701
- Rivenson Y, Ceylan Koydemir H, Wang H, Wei Z, Ren Z, Günaydın H, Zhang Y, Göröcs Z, Liang K, Tseng D, Ozcan A. Deep learning enhanced mobile-phone microscopy. ACS Photonics. 2018b;5:2354–2364. doi: 10.1021/acsphotonics.8b00146.
- Rivenson Y, Wang H, Wei Z, de Haan K, Zhang Y, Wu Y, Günaydın H, Zuckerman JE, Chong T, Sisk AE, Westbrook LM, Wallace WD, Ozcan A. Virtual histological staining of unlabelled tissue-autofluorescence images via deep learning. Nature Biomedical Engineering. 2019;3:466–477. doi: 10.1038/s41551-019-0362-y.
- Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells W, Frangi A, editors. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Cham: Springer; 2015. pp. 234–241.
- Salamon Z, Tollin G. Optical anisotropy in lipid bilayer membranes: coupled plasmon-waveguide resonance measurements of molecular orientation, polarizability, and shape. Biophysical Journal. 2001;80:1557–1567. doi: 10.1016/S0006-3495(01)76128-0.
- Schmidt WJ. Die Bausteine des Tierkörpers in polarisiertem Lichte. Protoplasma. 1926;1:618–619. doi: 10.1007/BF01603040.
- Schmitz D, Muenzing SEA, Schober M, Schubert N, Minnerop M, Lippert T, Amunts K, Axer M. Derivation of fiber orientations from oblique views through human brain sections in 3D-polarized light imaging. Frontiers in Neuroanatomy. 2018a;12:75. doi: 10.3389/fnana.2018.00075.
- Schmitz D, Amunts K, Lippert T, Axer M. A least squares approach for the reconstruction of nerve fiber orientations from tiltable specimen experiments in 3D-PLI. 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI); 2018b. pp. 132–135.
- Shribak M, LaFountain J, Biggs D, Inouè S. Orientation-independent differential interference contrast microscopy and its combination with an orientation-independent polarization system. Journal of Biomedical Optics. 2008;13:014011. doi: 10.1117/1.2837406.
- Shribak M, Oldenbourg R. Techniques for fast and sensitive measurements of two-dimensional birefringence distributions. Applied Optics. 2003;42:3009–3017. doi: 10.1364/AO.42.003009.
- Snaidero N, Simons M. Myelination at a glance. Journal of Cell Science. 2014;127:2999–3004. doi: 10.1242/jcs.151043.
- Soto JM, Rodrigo JA, Alieva T. Label-free quantitative 3D tomographic imaging for partially coherent light microscopy. Optics Express. 2017;25:15699–15712. doi: 10.1364/OE.25.015699.
- Spiesz EM, Kaminsky W, Zysset PK. A quantitative collagen fibers orientation assessment using birefringence measurements: calibration and application to human osteons. Journal of Structural Biology. 2011;176:302–306. doi: 10.1016/j.jsb.2011.09.009.
- Streibl N. Phase imaging by the transport equation of intensity. Optics Communications. 1984;49:6–10. doi: 10.1016/0030-4018(84)90079-8.
- Streibl N. Three-dimensional imaging by a microscope. Journal of the Optical Society of America A. 1985;2:121–127. doi: 10.1364/JOSAA.2.000121.
- Tian L, Waller L. Quantitative differential phase contrast imaging in an LED array microscope. Optics Express. 2015;23:11394–11403. doi: 10.1364/OE.23.011394.
- Tran MT, Oldenbourg R. Mapping birefringence in three dimensions using polarized light field microscopy: the case of the juvenile clamshell. Journal of Microscopy. 2018;271:315–324. doi: 10.1111/jmi.12721.
- Van Valen DA, Kudo T, Lane KM, Macklin DN, Quach NT, DeFelice MM, Maayan I, Tanouchi Y, Ashley EA, Covert MW. Deep learning automates the quantitative analysis of individual cells in live-cell imaging experiments. PLOS Computational Biology. 2016;12:e1005177. doi: 10.1371/journal.pcbi.1005177.
- Waller L, Tian L, Barbastathis G. Transport of intensity phase-amplitude imaging with higher order intensity derivatives. Optics Express. 2010;18:12552–12561. doi: 10.1364/OE.18.012552.
- Wang Y, Yang J, Yin W, Zhang Y. A new alternating minimization algorithm for total variation image reconstruction. SIAM Journal on Imaging Sciences. 2008;1:248–272. doi: 10.1137/080724265.
- Wang H, Magnain C, Wang R, Dubb J, Varjabedian A, Tirrell LS, Stevens A, Augustinack JC, Konukoglu E, Aganj I, Frosch MP, Schmahmann JD, Fischl B, Boas DA. as-PSOCT: volumetric microscopic imaging of human brain architecture and connectivity. NeuroImage. 2018;165:56–68. doi: 10.1016/j.neuroimage.2017.10.012.
- Wang Z, Bovik AC. Mean squared error: love it or leave it? A new look at signal fidelity measures. IEEE Signal Processing Magazine. 2009;26:98–117. doi: 10.1109/MSP.2008.930649.
- Yang B, Jan NJ, Brazile B, Voorhees A, Lathrop KL, Sigal IA. Polarized light microscopy for 3-dimensional mapping of collagen fiber architecture in ocular tissues. Journal of Biophotonics. 2018;11:e201700356. doi: 10.1002/jbio.201700356.
- Yang L, Ghosh RP, Franklin JM, You C, Liphardt JT. NuSeT: a deep learning tool for reliably separating and analyzing crowded cells. bioRxiv. 2019. doi: 10.1101/749754.
- Zeineh MM, Palomero-Gallagher N, Axer M, Gräßel D, Goubran M, Wree A, Woods R, Amunts K, Zilles K. Direct visualization and mapping of the spatial course of fiber tracts at microscopic resolution in the human hippocampus. Cerebral Cortex. 2017;27:1779–1794. doi: 10.1093/cercor/bhw010.
- Zeng H. Mesoscale connectomics. Current Opinion in Neurobiology. 2018;50:154–162. doi: 10.1016/j.conb.2018.03.003.
- Zernike F. How I discovered phase contrast. Science. 1955;121:345–349. doi: 10.1126/science.121.3141.345.
- Zhan L, Zhou J, Wang Y, Jin Y, Jahanshad N, Prasad G, Nir TM, Leonardo CD, Ye J, Thompson PM, for the Alzheimer's Disease Neuroimaging Initiative. Comparison of nine tractography algorithms for detecting abnormal structural brain networks in Alzheimer's disease. Frontiers in Aging Neuroscience. 2015;7:48. doi: 10.3389/fnagi.2015.00048.
- Zilles K, Palomero-Gallagher N, Gräßel D, Schlömer P, Cremer M, Woods R, Amunts K, Axer M. Chapter 18 - High-resolution fiber and fiber tract imaging using polarized light microscopy in the human, monkey, rat, and mouse brain. In: Rockland KS, editor. Axons and Brain Architecture. San Diego: Academic Press; 2016. pp. 369–389.