Abstract
The use of microcomputed tomography (Micro-CT) for imaging biological samples has burgeoned in the past decade, due to increased access to scanning platforms, ease of operation, and advances in software platforms that enable accurate microstructure quantification. However, manual data analysis of Micro-CT images can be laborious and time intensive. Deep learning offers the ability to streamline this process but has historically included caveats, such as the need for a large amount of training data, which is often limited in many Micro-CT studies. Here we show that accurate 3D deep learning models can be trained using only 1-3 Micro-CT images of the adult Drosophila melanogaster brain using pre-trained neural networks and minimal user knowledge. We further demonstrate the power of our model by showing that it can be expanded to accurately segment the brain across different tissue contrast stains, scanner models, and genotypes. Finally, we show how the model can assist in identifying morphological similarities and differences between mutants based on volumetric quantification, enabling rapid assessment of novel phenotypes. Our models are freely available and can be adapted to individual users’ needs.
Keywords: Deep Learning, Microcephaly, Micro-CT, Image Analysis, Drosophila melanogaster
Summary:
Micro-CT data can be automatically segmented and quantified using a deep learning model trained on as few as 3 samples, facilitating rapid comparison of developmental phenotypes.
Introduction
X-ray-based imaging methods, such as computed tomography (CT) and its higher-resolution counterpart, microcomputed tomography (Micro-CT), are advantageous for their ability to generate three-dimensional (3D) images of a wide variety of intact materials. As a result, they have become common in diverse fields, including materials science, geology, biology, and medicine (Buffiere et al., 2010; Carlson, 2006; Hounsfield, 1980; Schoborg et al., 2019). With the proliferation of improved and standardized tissue staining techniques, Micro-CT has established a niche in developmental biology as an effective imaging method for analyzing soft tissues as well as denser structures such as bone (Clark and Badea, 2014; du Plessis et al., 2017; Keklikoglou et al., 2021; Metscher, 2009; Singhal et al., 2013).
Despite these advances, Micro-CT remains an underutilized imaging method, due in part to the laborious process of image analysis and quantification that is essential for realizing the true power of this technology for scientific investigation. Manual whole-organ segmentation from entire specimens is especially labor intensive. Although many automated and semi-automated segmentation methods have been devised to address this issue, most still require extensive manual input to accurately segment structures of interest, limiting their feasibility (Chai et al., 2024; McGrath et al., 2020; Yushkevich et al., 2006). Furthermore, deep learning for image segmentation, while powerful, has historically been inaccessible to many labs due to the volume of data required (often thousands of training datasets) and the computational expertise needed (Lee et al., 2022; Sapoval et al., 2022).
Recently, many image analysis software platforms have implemented deep learning solutions to help end users overcome these limitations in image processing and analysis (Moen et al., 2019). While there are many open-source or commercial packages for the analysis of confocal and electron microscopy data, there are relatively few options for Micro-CT. Several labs have developed custom neural networks for the segmentation of organs in mice (Matula et al., 2022) or insects (Toulkeridou et al., 2023). For our analysis, we elected to use Dragonfly because of its pretrained neural networks and its accessible training wizard and deep learning tools, which require no knowledge of programming and are integrated into the image analysis program itself (Makovetsky et al., 2018). These features, coupled with Dragonfly’s free non-commercial license, allowed us to automate the laborious effort of manually segmenting Drosophila melanogaster organs from Micro-CT scans (Schoborg et al., 2019) without any programming or neural network development on the user end.
Here we demonstrate the ease and feasibility with which deep learning tools can accelerate the analysis of complex 3D Micro-CT datasets. We used the Drosophila melanogaster brain and visual system as our target structures and built a series of models that segment them with a high degree of accuracy (>98% for total volume, >95% for individual regions in non-mutant flies) and precision compared to manual methods. The simplest models, trained on just 1-3 datasets, are suitable for images that share the same sample parameters, such as stain type, Micro-CT scanner model, and genotype. Using these models as a baseline, we then incorporated additional imaging and sample parameters. In each case, we found that providing 6 samples of training data is generally sufficient when adding a new sample parameter. For example, we show that a single deep learning model trained with 12 datasets can accurately predict fly brain regions from images taken on two different Micro-CT scanner models, while a model with 43 datasets was sufficient to segment images taken with two different scanners, stains, and genotypes.
Together, our results show that a researcher can consistently create models capable of accurate, reproducible organ segmentation and volumetric quantification, with a 6-10-fold increase in speed compared to manual segmentations. With our framework of pretrained models, other labs can retrain and refine them for their own needs using minimal training data, thus lowering the burden of Micro-CT imaging and its associated analysis.
Materials and Methods
Fly Stocks
Animals were maintained at 25°C on cornmeal-agar medium. aspt25/aspDf mutants were obtained by crossing aspt25/TM6B and aspDf/TM6B lines and selecting for the microcephalic phenotype (Schoborg et al., 2015). yellow white (yw, Bloomington #1495) and Oregon-R (#25211) stocks were used as wildtype. eyD1Da stocks were a generous gift from Patrick Callaerts (Callaerts et al., 2001).
Microcomputed Tomography
Staining and fixation: Flies were collected and stained following our standard protocol (Schoborg, 2020) for both iodine and phosphotungstic acid (PTA) stains. Imaging was performed on both the Zeiss Xradia Versa 610 and the Bruker Skyscan 1172, using the following imaging parameters:
Skyscan:
The source was set to 40 kV, 250 µA, 10 W, with no filter. Images were taken using the medium pixel camera setting (2x2 binning), with a pixel size of 3.01 μm. The camera was positioned 80 mm from the source and the object 48 mm from the source, with a 379 ms exposure time for iodine and 360 ms for PTA. The stage was set for a full 360-degree rotation, taking 900 projection images with 3-frame averaging and a random movement of 10. Reconstructions were performed using Skyscan’s NRecon reconstruction software.
Versa:
The source was set to 40 kV and 3 W, with an LE1 filter and a 4x lens. Source distance was 10.5-11 mm and detector distance was 13.5-14 mm, for a pixel size range of 2.95-3.01 μm. Images were taken with an 850 ms exposure time for iodine stains and a 1 s exposure time for PTA stains. 1601 projection images were taken with adaptive motion compensation. Reconstructions were performed using Zeiss’ automatic reconstruction tool.
Deep learning and image analysis
Workstation and Software:
Image analysis was performed using Dragonfly Pro 2022.2 (Comet Technologies Canada Inc.) with the Deep Learning module. The free, non-commercial version of Dragonfly contains the identical Deep Learning module, which we have also validated for use. A Dell Precision 7920 Tower workstation with an Intel Xeon Silver 4114 CPU, an Nvidia Quadro RTX 5000 graphics card, and 128 GB of RAM was used for all rendering and deep learning. When configuring a workstation, the GPU should be prioritized for deep learning applications.
General Criteria:
The following criteria should be strictly adhered to for generating accurate models: 1) images should have comparable pixel sizes; 2) images should share a consistent orientation; and 3) images should be calibrated to a consistent intensity scale. Image pixel size can be changed using the Image Properties function, but bear in mind that this will change the reported volume of the sample. If large pixel size adjustments are needed, adjusting the image sampling while initially loading the file is preferable. To consistently orient the images, we used the esophagus as a landmark for the deep learning field of view (Fig. 1A). We then resampled the image based on that orientation, followed by extraction of the portion of the fly we intended to segment, including a portion of air (background) and the surrounding pipet tip. The air and pipet tip are crucial, as they serve as the image pixel intensity calibrators: they have consistent pixel intensity values, unlike the sample itself or even the liquid surrounding the sample, which vary with sample staining and leaching. Once the image has been “prepped” using these steps, it can either be used to create training data or segmented using deep learning. A minimal sketch of this intensity calibration step is shown below.
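For readers who want to prototype this calibration step outside Dragonfly, the following is a minimal sketch in Python, assuming the tomogram and two reference masks (air and pipet tip) are available as NumPy arrays; the function name and target values are illustrative and are not part of Dragonfly’s API.

```python
import numpy as np

def calibrate_intensity(volume, air_mask, pipet_mask,
                        air_target=0.0, pipet_target=1.0):
    """Linearly rescale a tomogram so the air background and pipet tip
    land on fixed reference intensities, making images comparable
    across scans despite staining and leaching variability.
    Function name and target values are illustrative assumptions."""
    air_val = float(volume[air_mask].mean())      # mean intensity of air voxels
    pipet_val = float(volume[pipet_mask].mean())  # mean intensity of pipet-tip voxels
    scale = (pipet_target - air_target) / (pipet_val - air_val)
    return (volume - air_val) * scale + air_target
```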
Figure 1. Accurate fly brain segmentation models using deep learning can be generated from 1-3 Micro-CT images.

(A) Overview of Dragonfly’s deep learning workflow. Models were trained and segmented along the transverse plane (blue line), while most images are shown along the coronal plane (red line). See Materials and Methods for details. (B) Section of a wildtype adult Drosophila melanogaster head imaged on the Zeiss Xradia Versa 610. (C) Manual segmentation of the brain and lamina. Regions are color coded. (D) Deep learning segmentation of the same brain, using the 3-Brain model. (E) Comparison of deep learning model accuracy between the 1- and 3-Brain models. Segmentation accuracy is the volume of a manual segmentation divided by the volume of the deep learning model (n=10 brains). Each circle represents one brain. Whole Brain (WB), Central Brain (CB), Optic Lobe (OL), and Lamina (LM). (F) Reproducibility of the 1- and 3-Brain models (see Materials and Methods). Each diamond represents the average of 10 brains segmented by a model trained on either the 1-brain or 3-brain training data. Model precision based on the coefficient of variation (CV%), shown above each bar in (E) and (F). n≥10 brains, error bars represent standard deviation. Anterior (A), Posterior (P), Dorsal (D), Left (L), Right (R). Scale bars: 100 μm.
Selection of Training Data:
To facilitate robust training and segmentation, it is important to eliminate extraneous portions of a sample by extracting a subsection of the imaged volume. Small samples with distinct shapes and intensities require less training time and data than larger samples, samples containing extraneous organs, or samples that are homogeneous in shape or intensity. For example, our model included only the head region in the training data, eliminating extraneous organs with rounded shapes similar to the brain (like the ova) as well as structures with a tissue density similar to the brain (like the ventral nerve cord). Structures that are small relative to the total volume of the training data (like the heart) may require additional labeling, especially if the organ of interest appears in a small number of tomogram slices. Organs that are difficult to identify because of their continuity with or similarity to other structures (like the midgut and hindgut) will benefit from having both structures identified and labeled in the training data. Finally, structures with variable conformations, like the uterus and Malpighian tubules, are the most challenging to train models on and will likely require training data covering a representative number of configurations.
Generating Manual Segmentations:
Manual image segmentation in Dragonfly was performed by orienting the images (Fig. 1A, blue axis), creating a multi-ROI, and thresholding the image to eliminate background by defining an intensity range. We then used the multi-slice painting tool to label each slice of the central brain and optic lobe. We then reoriented the brain along the frontal plane (Fig. 1A, red axis) and repeated the process to correct any mistakes. Finally, we readjusted the threshold for the lamina and repeated the same process, segmenting only the lamina. Segmentations were then converted into meshes using identical smoothing parameters and used to calculate 3D volumes (see the sketch below).
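As a rough stand-in for Dragonfly’s mesh-based volume calculation, the sketch below uses scikit-image and trimesh to mesh a binary ROI, apply light smoothing, and report the enclosed volume; the smoothing settings are illustrative assumptions, not the parameters used in Dragonfly.

```python
import numpy as np
from skimage import measure
import trimesh

def roi_volume_um3(mask, voxel_size_um):
    """Approximate an ROI's 3D volume by meshing the binary mask,
    smoothing lightly, and taking the enclosed mesh volume.
    Smoothing settings here are illustrative, not Dragonfly's."""
    verts, faces, _, _ = measure.marching_cubes(
        mask.astype(np.uint8), level=0.5,
        spacing=(voxel_size_um,) * 3)  # spacing converts voxels to microns
    mesh = trimesh.Trimesh(vertices=verts, faces=faces)
    mesh = trimesh.smoothing.filter_laplacian(mesh, iterations=10)
    return abs(mesh.volume)  # mesh volume is signed; abs() guards orientation
```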
Deep Learning:
Training data for the models was generated by manual segmentation of images in the Deep Learning wizard. Approximately 20% of each tomogram (every ~5th slice) was manually segmented using the single-slice manual painting tool. Training data was then exported and used in the Deep Learning tool. Model architecture was based on a pretrained model from Dragonfly with the following parameters: depth level 5 or 7; initial filter count 32; input dimension 2.5D; input slices 3; batch size 256-512; epochs 100; patch size 64; stride ratio 0.25; data augmentation factor of 10, with horizontal and vertical flips, 180-degree rotation, scale 0.9-1.1, and shear 2.0. The Dragonfly pretrained model is based on the U-Net architecture (Ronneberger et al., 2015) and is optimized for Micro-CT data and consumer-level GPU setups. U-Net or U-Net++ (Zhou et al., 2018) may also be good choices for a starting architecture, depending on the capabilities of the user’s system. When creating models, users can adapt the model’s depth level or batch size based on GPU memory and the size of the dataset. Dragonfly features an estimated memory ratio to help optimize these parameters for the user’s system, which should be kept between 0.7 and 0.9. Likewise, the data augmentation factor can be adjusted depending on CPU capabilities and dataset size if it becomes a limiting factor. Total training time for the 1-, 3-, 12-, and 43-Brain models was 7, 4.5, 28, and 57 hours, respectively, with total epochs trained amounting to 130, 65, 103, and 51. If the total number of epochs exceeded 75 during initial training, we ran the model a second time to verify that it was fully trained. A sketch of the 2.5D patch-sampling scheme implied by these settings is shown below.
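To make the 2.5D input concrete: each training sample is a small patch in which a few adjacent tomogram slices are stacked as channels. The sketch below reproduces that sampling scheme in plain NumPy using the settings listed above (3 input slices, patch size 64, stride ratio 0.25); it illustrates the idea only and is not Dragonfly’s internal code.

```python
import numpy as np

def extract_25d_patches(volume, patch=64, stride_ratio=0.25, n_slices=3):
    """Build 2.5D training inputs: each sample stacks `n_slices`
    adjacent tomogram slices as channels, cropped into patch x patch
    tiles on a regular grid."""
    stride = int(patch * stride_ratio)  # e.g., stride 16 for a 64-voxel patch
    half = n_slices // 2
    patches = []
    for z in range(half, volume.shape[0] - half):
        stack = volume[z - half : z + half + 1]          # (n_slices, H, W)
        for y in range(0, volume.shape[1] - patch + 1, stride):
            for x in range(0, volume.shape[2] - patch + 1, stride):
                patches.append(stack[:, y:y + patch, x:x + patch])
    return np.stack(patches)  # shape: (N, n_slices, patch, patch)
```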
Final Steps:
After training is complete, a final image processing step may be necessary to remove islands prior to volume calculation. This can be done by right-clicking on the desired class in a multi-ROI, selecting Process Islands, Keep by Largest, and then choosing the number of islands to keep (1 for continuous structures, 2 for mirrored structures like the lamina). If mutations have caused structures to disconnect, the appropriate number of islands may need to be evaluated case by case. Segmentations are then converted to meshes and smoothed using identical parameters; 3D volumes are then generated and exported to .csv files that can be opened in Microsoft Excel or GraphPad Prism for statistical analysis. A minimal island-removal sketch is shown below.
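Outside Dragonfly, the island-removal step can be prototyped with SciPy’s connected-component labeling; this is a minimal sketch assuming a binary NumPy mask, and the function name is illustrative.

```python
import numpy as np
from scipy import ndimage

def keep_largest_islands(mask, n_keep=1):
    """Keep only the n_keep largest connected components of a binary
    segmentation (e.g., n_keep=2 for mirrored structures like the
    lamina), discarding spurious 'islands'."""
    labels, n = ndimage.label(mask)
    if n <= n_keep:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = np.argsort(sizes)[::-1][:n_keep] + 1  # component labels, largest first
    return np.isin(labels, keep)
```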
Time Savings Over Manual Segmentation:
Orienting and resampling a new image takes 2-3 minutes for an experienced user. Deep learning segmentation with a previously trained model takes 30-60 seconds on our workstation, and processing islands, generating meshes, and exporting the data to Excel takes about a minute. This represents a 6-10-fold increase in segmentation speed compared to our manual segmentation pipeline, which takes ~30 minutes per brain.
Model Reproducibility:
To determine the reproducibility of each model (Fig. 1F), we trained 10 different 1- or 3-Brain models on the same brain(s), using identical training data, for 50-150 epochs. Data reflect the average volumes of 10 brains segmented by each of the 10 replicate models trained on either the 1-brain or 3-brain training data.
Statistical Analysis
All statistical analyses and graph generation were performed using GraphPad Prism software (v. 10.3.1).
Results and Discussion
We first set out to determine the minimum number of training images required to create a deep learning model capable of segmenting an adult Drosophila brain imaged by Micro-CT (Fig. 1A,B). The Drosophila brain consists of three large structures, a central brain sandwiched between two optic lobes, which can be resolved at the 3-5 μm spatial sampling rates achievable with many Micro-CT scanners (Ito et al., 2014). The lamina, located distal to each optic lobe, helps relay visual information from the retina to the optic lobes (Fig. 1C).
We found that a single training image (tomogram) consisting of 20% ground truth (every 5th slice manually segmented, Fig. 1A) was sufficient to create a model (the 1-Brain model) able to segment the whole brain, central brain, optic lobes, and lamina with >98% accuracy and a high degree of precision (coefficient of variation (CV) 2.8-11%) from 10 different brains imaged under the same conditions (Fig. 1D,E). We define ‘accuracy’ as the ratio of the normalized brain volumetric measurement (volume, μm3, divided by thorax width, μm; units of μm2) (Schoborg et al., 2019) from the model segmentation to the normalized volumetric measurement from the manual segmentation of the same brain, which we take to be the true/accepted volume value; a minimal sketch of both metrics is shown below.
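As a minimal sketch of the two metrics under these definitions (accuracy as the model-to-manual volume ratio, precision as the coefficient of variation), assuming volumes have already been normalized to thorax width:

```python
import numpy as np

def accuracy_pct(model_vols, manual_vols):
    """Per-brain accuracy (%): model volume relative to the manually
    segmented volume, which is taken as the true/accepted value.
    Inputs are assumed already normalized to thorax width."""
    return 100 * np.asarray(model_vols) / np.asarray(manual_vols)

def cv_pct(values):
    """Coefficient of variation (%): sample SD relative to the mean."""
    values = np.asarray(values, dtype=float)
    return 100 * values.std(ddof=1) / values.mean()
```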
The performance of our 1-Brain model was on par with a second model that we built, which consisted of 20% ground truth each from three different brain images (the 3-Brain model). The accuracy and precision of both models were comparable (>98% accuracy, 1.8-11.5% CV; Fig. 1E, Videos 1 and 2). We conclude that high-performing models can be trained from a minimal amount of ground truth and computational knowledge, thus limiting the burden to investigators.
Video 1, related to Figure 1: Comparison of manual segmentation (left) vs deep learning segmentation (right) for the 3-Brain model. Central Brain (yellow), Optic Lobes (purple), Lamina (cyan). Scale bar: 100 μm.
Video 2, related to Figure 1: Rotating 3D views of the manual segmentation (top) and the deep learning segmentation (bottom) for the 3-Brain model. Central Brain (yellow), Optic Lobes (purple), Lamina (cyan).
We next addressed the reproducibility of our models. We trained ten different 1- and 3-Brain models using the same 20% ground truth for each training session, then compared each model’s accuracy and precision in predicting brain volumes from the same 10 brain images (Fig. 1F). We observed larger variation (4-42% CV) in the ability of each 1-Brain model to accurately predict volumes, particularly for the central brain and optic lobes (Fig. 1F). The 3-Brain models, however, were much more precise (0.5-2.6% CV), with a higher degree of accuracy (Fig. 1F). Thus, while accurate segmentation models can be obtained from just a single training image, incorporating three images greatly improves model reproducibility. It is also important to validate these models against a manually segmented image to ensure high model accuracy prior to use.
While these models performed well given their consistent imaging conditions (same Micro-CT scanner, iodine stain for tissue contrast, wildtype brain morphology), we next sought to determine how altering these variables would affect the amount of training data required to generate a highly robust segmentation model. First, we asked whether a segmentation model could be trained to accurately detect brain regions from images taken on different Micro-CT scanners: the Zeiss Xradia Versa 610 (Fig. 1) and the Bruker Skyscan 1172 (Fig. 2). When we attempted to segment data imaged on the Skyscan (Fig. 2A) using the 3-Brain model trained on Versa images, we found that the model struggled to accurately segment each brain region (Fig. 2C) compared to manual segmentation (Fig. 2B) and was highly variable (29-87% CV, Fig. 2E).
Figure 2. A deep learning model can identify fly brains imaged on different Micro-CT scanners.

(A) Wildtype adult Drosophila melanogaster head imaged on the Bruker Skyscan 1172. (B) Manual segmentation of the brain and lamina. Central Brain (yellow), Optic Lobes (purple), Lamina (cyan). (C) Deep learning segmentation of a Skyscan image using the 3-Brain model trained on Zeiss Xradia Versa images. (D) Deep learning segmentation of a Skyscan image using the 12-Brain mixed model, which included 6 Skyscan and 6 Versa images in the training dataset. (E) Volumetric quantification of segmentations from the 3-Brain and 12-Brain mixed models. Whole Brain (WB), Central Brain (CB), Optic Lobe (OL), and Lamina (LM). Model precision based on the coefficient of variation (CV%, red). n≥10 brains, Welch’s t-test. ns, P>0.05; ****P≤0.0001. Error bars represent standard deviation. Dorsal (D), Left (L). Scale bars: 100 μm.
To resolve this, we iteratively added training data to a new model until we had 6 images from each scanner. Using six brains from each scanner (12 total) as training data allowed us to generate a highly accurate and precise (5-15% CV) multivariable 12-Brain model that could predict brain regions from images taken with either scanner. With this quantity of training data, the model was able to consistently segment samples (Fig. 2D) and achieve >98% accuracy on the full brain and >97% accuracy for the central brain and optic lobes individually, regardless of the scanner used for imaging (Fig. 2E).
We next tested whether changing multiple imaging variables (scanner type, staining protocols, and developmental phenotypes) could be incorporated into a comprehensive mixed model that could accurately and precisely segment the brain. We used animals carrying mutations in the microcephaly gene abnormal spindle (asp) as our developmental phenotype (Schoborg et al., 2015). Asp mutant brains display severe but highly variable morphology and size defects compared to wildtype brains, particularly in the optic lobe neuropils and lamina (Fig. 3B) (Mannino et al., 2023; Schoborg et al., 2019).
Figure 3. A robust comprehensive model that accounts for different scanner models, contrast agents, and tissue phenotypes can be trained to identify fly brains using limited training data.

(A) Deep learning segmentation of a phosphotungstic acid (PTA)-stained Drosophila brain imaged on the Versa. Central Brain (yellow), Optic Lobes (purple), Lamina (cyan). (B) asp mutant head with defective neuropil architecture and microcephaly, stained with iodine and imaged on the Versa. (C) Manual segmentation of the asp mutant brain from (B). (D) Deep learning segmentation of the same asp mutant using the 43-brain comprehensive model. (E) Comparison of final model accuracy for each imaging condition. Whole Brain (WB), Central Brain (CB), Optic Lobe (OL), and Lamina (LM). Model precision based on the coefficient of variation (CV%, red). n≥10 brains, Welch’s t-test. ns, P>0.05; **P≤0.01; ***P≤0.001. Error bars represent standard deviation. Dorsal (D), Left (L). Scale bars: 100 μm.
To effectively train this model, we started with 6 training images for our typical imaging condition and 6 images for each of our major imaging variables (scanner, stain, mutant), as that quantity was sufficient for the two separate imaging conditions in our 12-Brain model. We also included 3 samples of each additional possible combination of imaging conditions (e.g., asp mutants stained with PTA, imaged on the Skyscan), totaling 36 training images. However, this model struggled with asp mutants stained with iodine, so we continued to add training data until we were satisfied with the model’s performance, finally arriving at 43 training images for the comprehensive model. This model is >98% accurate for total brain, central brain, and optic lobe volume for samples with imaging conditions and developmental phenotypes similar to the training data (Fig. 3).
The model was also very precise for the whole brain, central brain, and optic lobe (<4% CV). However, it was less precise for the lamina (4.5-15% CV), particularly in asp mutants (Fig. 3E). This may be due to the severe and highly variable morphological defects in this region (Schoborg et al., 2019). While incorporating more training datasets from asp mutants might increase the accuracy and precision of lamina detection, there may be a point of diminishing returns when training models to predict highly variable tissue phenotypes.
With a comprehensive model in hand, our next objective was to determine whether it could accurately segment phenotypically similar mutants it had not been explicitly trained on. To evaluate this, we attempted to segment eyD1Da mutants using our comprehensive model, which had been trained on asp mutants. We chose eyD1Da mutants because they show morphological defects in the optic lobes similar to those of asp mutants (Fig. 4A). Ey expression is also significantly downregulated in asp mutants (Callaerts et al., 2001; Clements et al., 2009; Mannino et al., 2023; Schoborg et al., 2019).
Figure 4. Deep learning segmentation can be used to quickly segment untrained mutants and quantify new phenotypes.

(A) eyeless mutant (eyD1Da) head. The angle represents optic lobe slope, measured in (I). (B) Manual segmentation of eyD1Da. (C) Deep learning segmentation of eyD1Da using the 43-Brain comprehensive model. Arrow highlights a region of the lamina misclassified as optic lobe. 3D frontal view of (D) wildtype, (D’) asp, and (D’’) eyD1Da brains. Inferior fiber system (IFS) defects are seen in eyD1Da, and the volume of the gnathal ganglion (GNG) is reduced in both mutants. 60-degree rotation of the (E) wildtype, (E’) asp, and (E’’) eyD1Da brains. Antennal lobes (AL) are reduced in size in both mutants. 90-degree rotation of the (F) wildtype, (F’) asp, and (F’’) eyD1Da brains. Optic lobe (purple) rotation relative to the central brain (yellow) (red angle line). (G) Quantification of brain volume in wildtype (WT), asp, and eyD1Da mutants. (H) Violin plot showing optic lobe rotation angle, as shown in F-F’’. (I) Violin plot showing optic lobe slope, as measured in (A). n≥10 brains, Welch’s t-test. ns, P>0.05; **P≤0.01; ****P≤0.0001. Error bars represent standard deviation. Dorsal (D), Left (L). Scale bars: 100 μm.
Upon segmentation of ey mutants (Fig. 4), we noted that the central brain and other regions that were visually similar to asp or wildtype brains were accurately segmented, while less similar areas, such as the lamina, had some imperfections (Fig. 4C). This suggests that while our model is capable of accurately segmenting similar mutant phenotypes without explicit training, results should always be closely evaluated when comparing dissimilar tissue between mutants. Nonetheless, despite occasional imperfections when segmenting different mutant backgrounds, the time required to correct minor defects with these models is significantly less than that of manual segmentation.
We also uncovered an unexpected advantage of applying a model to untrained mutant images: its ability to reveal new phenotypes. For example, ey mutants have not been reported to have microcephaly, yet our volume assessment using the asp-trained model clearly showed that these animals, much like asp mutants, have microcephaly, with the ey mutant optic lobes showing the greatest reduction in size (Fig. 4G). These data agree with studies of Pax-6, the vertebrate ortholog of ey, which causes microcephaly when mutated in mice, humans, and Xenopus (Peng et al., 2004; Lim, 2023).
Furthermore, a ‘mistake’ in the automated segmentation of untrained mutants can also reveal clues to additional phenotypic defects that may not be obvious to the human eye. For example, a comparison of manual and automatic segmentations revealed a previously unnoticed morphological phenotype in the central brain of some of the more severe ey mutants (Fig. 4D-4D’’, 4E-4E’’; 25% of animals). This phenotype exposes the inferior fiber system (IFS). In addition, there is a visible reduction in the size of the gnathal ganglion (GNG) and antennal lobe (AL) in both mutant brains compared to wildtype. Other well-characterized ey defects, such as mushroom body size and morphology, were also clearly evident in our analysis (Kurusu et al., 2000), a defect shared by asp mutants. Interestingly, central brain volume was not significantly affected despite the morphological differences observed (Fig. 4G), which may be due to compensatory overgrowth of other neuropil regions in this part of the brain. This could be revealed by additional ‘high resolution’ AI models trained to identify individual fly neuropils, rather than larger brain structures, which would also help identify new phenotypes.
In support of this idea, we also observed additional defects in the neuropils of the optic lobe, which consists of only three neuropils, in both asp and ey mutants. Upon analyzing the three-dimensional segmented structures of these mutants (Video 3), we noted that the optic lobes and lamina of asp (Fig. 4F’) and ey (Fig. 4F’’) mutants were ‘rotated’ in different directions compared to wildtype brains. Quantifying this rotation showed that asp mutants were biased towards a negative angle of rotation, while ey mutants showed severe rotation angles in both directions (Fig. 4H); one programmatic way to approximate such an angle is sketched below, after Video 3. In addition, mutant optic lobes showed a significantly increased slope relative to the cuticle (Fig. 4A,I). While the molecular mechanisms underlying these phenotypes require further investigation, these data highlight the utility of AI models for identifying new mutant phenotypes, even in untrained datasets.
Video 3, related to Figure 4: Rotating 3D views of a wildtype (top), asp mutant (middle), and ey mutant (bottom) brain segmented using the comprehensive model. Central Brain (yellow), Optic Lobes (purple), Lamina (cyan).
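The text does not specify how the rotation angle in Fig. 4H was measured; as a hedged illustration, one programmatic proxy is to compare the principal axes of the segmented optic lobe and central brain voxels, as sketched below. This returns an unsigned angle only, and all names are illustrative.

```python
import numpy as np

def principal_axis(mask):
    """First principal axis of a binary ROI's voxel coordinates."""
    coords = np.argwhere(mask).astype(float)
    coords -= coords.mean(axis=0)
    # Rows of vt are principal directions, sorted by variance explained.
    _, _, vt = np.linalg.svd(coords, full_matrices=False)
    return vt[0]

def rotation_angle_deg(mask_a, mask_b):
    """Unsigned angle between the long axes of two ROIs (e.g., optic
    lobe vs. central brain), a proxy for the rotation phenotype
    scored in Fig. 4H. Sign information is not recovered here."""
    a, b = principal_axis(mask_a), principal_axis(mask_b)
    cosang = abs(np.dot(a, b))  # abs() resolves axis sign ambiguity
    return np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))
```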
In summary, our deep learning models facilitate fast, accurate, and consistent segmentation of the Drosophila brain, with over 98% accuracy under a variety of Micro-CT imaging conditions. This enables rapid characterization of mutant brain phenotypes and serves as a tool for quantitative analysis of potentially significant mutants. Our AI models reduce the burden of manual brain segmentation from ~30 minutes to ~3-5 minutes per brain, greatly enhancing the speed of our workflows. Key factors that determine the accuracy of a given model include image pixel size, image calibration and orientation, and consistency of staining procedures. Our brain models are freely available for use, with other organ models to be released in the future.
Acknowledgements
We thank Sam Fay, Holden Bindl, and Lars Kotthoff for discussions and effort on an earlier phase of this project and members of the Schoborg lab for feedback and beta testing. We also thank the WY INBRE Program for supporting the purchase of the Bruker SkyScan 1172 and the WY Science Institute’s Center for Advanced Scientific Instrumentation (CASI) for providing access to the Zeiss Xradia Versa 610.
Funding
This work was supported by the National Institutes of Health (NIGMS 1R35GM155195-01). Wyoming INBRE is supported by an Institutional Development Award (IDeA) from the National Institute of General Medical Sciences of the National Institutes of Health under grant # 2P20GM103432.
Footnotes
Competing interests
MM was an employee of Comet Technologies Canada, Inc when the work was carried out but is now affiliated with Math2Market North America, Inc. JM and TS declare no competing interests.
Data Availability
All models can be downloaded from either of two sources:
Dragonfly users can also access the models through Dragonfly Social, in the “AI Models Group” under "Models for Drosophila Brain Segmentation".
References
- Buffiere JY, Maire E, Adrien J, Masse JP, Boller E, 2010. In situ experiments with X-ray tomography: an attractive tool for experimental mechanics. Exp. Mech. 50, 289–305. 10.1007/s11340-010-9333-7.
- Callaerts P, Leng S, Clements J, Benassayag C, Cribbs D, Kang YY, Walldorf U, Fischbach KF, Strauss R, 2001. Drosophila Pax-6/eyeless is essential for normal adult brain structure and function. J. Neurobiol.
- Carlson WD, 2006. Three-dimensional imaging of earth and planetary materials. Earth Planet. Sci. Lett. 249, 133–147. 10.1016/j.epsl.2006.06.020.
- Chai B, Efstathiou C, Yue H, Draviam VM, 2024. Opportunities and challenges for deep learning in cell dynamics research. Trends Cell Biol. 34, 955–967. 10.1016/j.tcb.2023.10.010.
- Clark DP, Badea CT, 2014. Micro-CT of rodents: state-of-the-art and future perspectives. Phys. Med. 30, 619–634. 10.1016/j.ejmp.2014.05.011.
- Clements J, Hens K, Merugu S, Dichtl B, de Couet HG, Callaerts P, 2009. Mutational analysis of the eyeless gene and phenotypic rescue reveal that an intact Eyeless protein is necessary for normal eye and brain development in Drosophila. Dev. Biol. 334, 503–512. 10.1016/j.ydbio.2009.08.003.
- du Plessis A, Broeckhoven C, Guelpa A, le Roux SG, 2017. Laboratory x-ray micro-computed tomography: a user guideline for biological samples. Gigascience 6, 1–11. 10.1093/gigascience/gix027.
- Hounsfield GN, 1980. Computed medical imaging. Science 210, 22–28. 10.1126/science.6997993.
- Ito K, Shinomiya K, Ito M, Armstrong JD, Boyan G, Hartenstein V, Harzsch S, Heisenberg M, Homberg U, Jenett A, Keshishian H, Restifo LL, Rössler W, Simpson JH, Strausfeld NJ, Strauss R, Vosshall LB, Insect Brain Name Working Group, 2014. A systematic nomenclature for the insect brain. Neuron 81, 755–765. 10.1016/j.neuron.2013.12.017.
- Keklikoglou K, Arvanitidis C, Chatzigeorgiou G, Chatzinikolaou E, Karagiannidis E, Koletsa T, Magoulas A, Makris K, Mavrothalassitis G, Papanagnou E-D, Papazoglou AS, Pavloudi C, Trougakos IP, Vasileiadou K, Vogiatzi A, 2021. Micro-CT for biological and biomedical studies: a comparison of imaging techniques. J. Imaging 7. 10.3390/jimaging7090172.
- Lee BD, Gitter A, Greene CS, Raschka S, Maguire F, Titus AJ, Kessler MD, Lee AJ, Chevrette MG, Stewart PA, Britto-Borges T, Cofer EM, Yu K-H, Carmona JJ, Fertig EJ, Kalinin AA, Signal B, Lengerich BJ, Triche TJ, Boca SM, 2022. Ten quick tips for deep learning in biology. PLoS Comput. Biol. 18, e1009803. 10.1371/journal.pcbi.1009803.
- Lim Y, 2023. Transcription factors in microcephaly. Front. Neurosci. 17, 1302033. 10.3389/fnins.2023.1302033.
- Makovetsky R, Piche N, Marsh M, 2018. Dragonfly as a platform for easy image-based deep learning applications. Microsc. Microanal. 24, 532–533. 10.1017/S143192761800315X.
- Mannino MC, Cassidy MB, Florez S, Rusan Z, Chakraborty S, Schoborg T, 2023. Mutations in abnormal spindle disrupt temporal transcription factor expression and trigger immune responses in the Drosophila brain. Genetics 225. 10.1093/genetics/iyad188.
- Matula J, Polakova V, Salplachta J, Tesarova M, Zikmund T, Kaucka M, Adameyko I, Kaiser J, 2022. Resolving complex cartilage structures in developmental biology via deep learning-based automatic segmentation of X-ray computed microtomography images. Sci. Rep. 12, 8728. 10.1038/s41598-022-12329-8.
- McGrath H, Li P, Dorent R, Bradford R, Saeed S, Bisdas S, Ourselin S, Shapey J, Vercauteren T, 2020. Manual segmentation versus semi-automated segmentation for quantifying vestibular schwannoma volume on MRI. Int. J. Comput. Assist. Radiol. Surg. 15, 1445–1455. 10.1007/s11548-020-02222-y.
- Metscher BD, 2009. MicroCT for comparative morphology: simple staining methods allow high-contrast 3D imaging of diverse non-mineralized animal tissues. BMC Physiol. 9, 11. 10.1186/1472-6793-9-11.
- Moen E, Bannon D, Kudo T, Graf W, Covert M, Van Valen D, 2019. Deep learning for cellular image analysis. Nat. Methods 16, 1233–1246. 10.1038/s41592-019-0403-1.
- Peng Y, Yang P-H, Ng SSM, Wong OG, Liu J, He M-L, Kung H-F, Lin MCM, 2004. A critical role of Pax6 in alcohol-induced fetal microcephaly. Neurobiol. Dis. 16, 370–376. 10.1016/j.nbd.2004.03.004.
- Ronneberger O, Fischer P, Brox T, 2015. U-Net: convolutional networks for biomedical image segmentation. arXiv. 10.48550/arXiv.1505.04597.
- Sapoval N, Aghazadeh A, Nute MG, Antunes DA, Balaji A, Baraniuk R, Barberan CJ, Dannenfelser R, Dun C, Edrisi M, Elworth RAL, Kille B, Kyrillidis A, Nakhleh L, Wolfe CR, Yan Z, Yao V, Treangen TJ, 2022. Current progress and open challenges for applying deep learning across the biosciences. Nat. Commun. 13, 1728. 10.1038/s41467-022-29268-7.
- Schoborg T, Zajac AL, Fagerstrom CJ, Guillen RX, Rusan NM, 2015. An Asp-CaM complex is required for centrosome-pole cohesion and centrosome inheritance in neural stem cells. J. Cell Biol. 211, 987–998. 10.1083/jcb.201509054.
- Schoborg TA, Smith SL, Smith LN, Morris HD, Rusan NM, 2019. Micro-computed tomography as a platform for exploring Drosophila development. Development 146. 10.1242/dev.176685.
- Schoborg TA, 2020. Whole animal imaging of Drosophila melanogaster using microcomputed tomography. J. Vis. Exp. 10.3791/61515.
- Singhal A, Grande JC, Zhou Y, 2013. Micro/nano-CT for visualization of internal structures. Micros. Today 21, 16–22. 10.1017/S1551929513000035.
- Toulkeridou E, Gutierrez CE, Baum D, Doya K, Economo EP, 2023. Automated segmentation of insect anatomy from micro-CT images using deep learning. Natural Sciences 3. 10.1002/ntls.20230010.
- Yushkevich PA, Piven J, Hazlett HC, Smith RG, Ho S, Gee JC, Gerig G, 2006. User-guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability. Neuroimage 31, 1116–1128. 10.1016/j.neuroimage.2006.01.015.
- Zhou Z, Siddiquee MMR, Tajbakhsh N, Liang J, 2018. UNet++: a nested U-Net architecture for medical image segmentation. arXiv. 10.48550/arxiv.1807.10165.