Patterns. 2024 Jun 21;5(8):101007. doi: 10.1016/j.patter.2024.101007

A hierarchically annotated dataset drives tangled filament recognition in digital neuron reconstruction

Wu Chen 1, Mingwei Liao 1, Shengda Bao 1, Sile An 1, Wenwei Li 1, Xin Liu 1, Ganghua Huang 3, Hui Gong 1,2, Qingming Luo 3, Chi Xiao 3,∗, Anan Li 1,2,3,4,∗∗
PMCID: PMC11368685  PMID: 39233689

Summary

Reconstructing neuronal morphology is vital for classifying neurons and mapping brain connectivity. However, it remains a significant challenge due to the complex structure of neurons, their dense distribution, and low image contrast. In particular, AI-assisted methods often yield numerous errors that require extensive manual intervention. Consequently, reconstructing even hundreds of neurons is a daunting task for general research projects. A key issue is the lack of specialized training for challenging regions, owing to inadequate data and training methods. In this study, we extracted 2,800 challenging neuronal blocks and categorized them into multiple density levels. Furthermore, we enhanced the images using an axial continuity-based network that improved three-dimensional voxel resolution while reducing the difficulty of neuron recognition. Comparing the pre- and post-enhancement results of automatic algorithms on fluorescence micro-optical sectioning tomography (fMOST) data, we observed a significant increase in the recall rate. Our study not only enhances the throughput of reconstruction but also provides a fundamental dataset for tangled neuron reconstruction.

Graphical abstract


Highlights

  • Takes a data-driven perspective, focusing on challenging tangled neurons

  • Enhances neuron image quality to facilitate tangled neuron reconstruction

  • HiNeuron integrates into network training, enhancing tangled filament recognition

  • HiNeuron enables evaluation of the performance of automatic neuron-tracing methods

The bigger picture

Neurons possess intricate morphological structures that receive information through dendrites and transmit integrated signals to downstream neurons via axons. In mammals, billions of neurons interconnect to form a complex brain network. For research, it is important to label and identify neurons and their connections. However, sometimes labeled neurons can appear entangled, and this creates difficulties for both humans and computers in identifying and studying neuronal connections. Unlike previous technology-driven approaches, this study presents a data-driven strategy to enhance the automatic identification and reconstruction of tangled neural regions.


This study presents a data-driven perspective, focusing particularly on neurons identified as challenging in previous labeling efforts. The authors extracted thousands of tangled neuron blocks and created HiNeuron, a dataset that categorizes neural blocks into three density levels. The HiNeuron dataset can be incorporated into existing networks for specialized training, enhancing tangled filament recognition, and it enables evaluation of the performance of automatic tracing methods. Ultimately, the HiNeuron dataset is a valuable resource for tangled neuron reconstruction.

Introduction

Neurons serve as the fundamental units of brain circuits, playing a crucial role in the study of brain structure and function. The reconstruction of neuronal morphology holds significant value for cell typing, unveiling projection patterns between distinct brain regions, and investigating brain connectivity patterns.1,2 Recent advancements in techniques such as neuronal labeling and three-dimensional high-resolution optical microscopy have enabled scientists to achieve mesoscale-level imaging of neurons.3,4,5,6 These innovative approaches facilitate the acquisition of comprehensive, high-throughput, and three-dimensional images of the entire mammalian brain,6,7 thereby enabling the identification of neuronal morphology from these images. The complexity of neuronal morphology is characterized by the presence of dendrites and long axons extending from the soma. Notably, axons, especially those distant from the soma, exhibit intricate branching patterns.7 It is difficult to recognize specific regions of densely tangled filaments within whole-brain neurons, such as dense fibers and neighboring or crossover fibers.7,8 Reconstruction involves tracing molecular labeling signals and accurately extracting structural information from the images. This intricate process entails gradually traversing all the branches until the terminals are reached, capturing the soma and branching structures precisely.

Currently, the gold standard for neuron reconstruction is manual tracing, a time-consuming and labor-intensive scientific endeavor.6,7 Consequently, developing automatic tracing methods that offer high accuracy and efficiency in neuron reconstruction has become imperative. To achieve faster and more accurate reconstruction of complete neurons, some researchers have directed their efforts toward developing sparse labeling techniques to reduce or eliminate mutual interference.9,10 However, even with sparse labeling, the presence of tangled filament neurons still poses a challenge.11,12 Some studies have made strides in enhancing reconstruction accuracy by optimizing imaging quality.6 Nevertheless, this approach can introduce implementation difficulties, such as a significant reduction in imaging throughput. In addition, various methods have been proposed in information processing for automatic neuron reconstruction.13,14,15,16 For instance, the MouseLight project at the Howard Hughes Medical Institute has reconstructed thousands of neurons through a combination of manual and automatic reconstruction. It established reconstruction criteria that selectively target neurons exhibiting continuous and clear signals while excluding those that fail to meet the criteria.17 Although these studies have advanced the field of neuron reconstruction, they still encounter numerous limitations when dealing with data that are difficult to discern. These challenges include, but are not limited to, the following: when fibers are in close proximity, the narrow gap between fiber boundaries can result in perplexing connections; in the case of fiber crossover, dozens of connection patterns may exist,18 making it difficult to distinguish the connectivity relationships accurately; and the branching structure of fibers, particularly in the presence of dense fibers or interference from other fibers, can introduce confusion during reconstruction.18 Furthermore, the poor axial image resolution19 adds complexity to distinguishing false connections between fibers, making it challenging to determine the correct connectivity. Such challenging data further compound the difficulties of automatic tracing, ultimately diminishing reconstruction accuracy and efficiency.

To tackle the challenges associated with the reconstruction of difficult-to-recognize neurons, a common approach is to employ multiple manual reconstructions to enhance identification and reconstruction accuracy.17 However, manual reconstruction is known to be time consuming and labor intensive,17,20 underscoring the pressing need to develop automatic reconstruction algorithms. In recent years, deep learning, a widely applied technique in academia and industry,21,22,23,24,25,26,27,28 has also been explored for neuron reconstruction.12,29,30,31,32,33 For instance, Li and Shen12 devised a deep network for semantic segmentation of dense nerve fibers followed by tracing. Nonetheless, the development of such automatic neuron reconstruction algorithms typically necessitates a substantial amount of neuron image data coupled with corresponding annotations.12,29 Existing public neuron image databases provide partially annotated data, contributing to the advancement of neuron reconstruction methods.34,35 However, the available neuron annotation data from microscopic imaging are limited and often encompass single or sparsely distributed neurons, lacking tangled filaments or an adequate representation of other neurons.12 If the development of reconstruction algorithms relies solely on these data, the limited features learned by the network may result in insufficient robustness and generalization of the model.12,13 Some studies have attempted to generate dense data through simulations,12 but these artificially generated data differ from real optical microscopy data, leading to variations in learned features and lower accuracy when applied to real images. Furthermore, existing tracing methods typically demonstrate favorable reconstruction outcomes on specific data types, yet they often lack evaluation and analysis of algorithm performance on challenging tangled filament data.13,16 In summary, there is currently a dearth of challenging neuron image data featuring difficult-to-recognize characteristics coupled with corresponding high-quality annotations. Therefore, enhancing the reconstruction accuracy for such data is imperative.

To address these challenges, the key is a data-driven approach that involves extracting the challenging locations within tangled neurons, mining their characteristics of small fiber-boundary distances from the data, and utilizing image enhancement to increase fiber continuity, thereby reducing the identification difficulty and improving the reconstruction accuracy of existing automatic tracing methods.

In this study, we extracted hundreds to thousands of challenging image blocks from the data of dozens of reconstructed neurons, each validated by multiple experts. These data mainly included tangled filament features such as neighboring fibers, branching and crossover structures, and dense fibers. By algorithmically analyzing the data, we categorized it into different density levels and created a dataset of approximately 2,800 blocks of tangled filament with three density levels, called hierarchical ImageNet for neuron reconstruction (HiNeuron). These data compensated for the limited number of difficult-to-recognize tangled filament samples in the existing database and enabled the evaluation of the reconstruction performance of existing tracing methods. We publicly released this dataset as a valuable resource for developing new neuron reconstruction methods.

We also improved the spatial resolution of neuron images to achieve three-dimensional isotropy through image enhancement techniques. This enhancement improved the reconstruction accuracy of existing automatic tracing methods on difficult-to-recognize tangled filament data. We extracted the challenging tangled filament data from fluorescence micro-optical sectioning tomography (fMOST)4,5,6 and compared the results of existing automatic tracing algorithms (e.g., neuTube36) on these data before and after image enhancement. The largest increase in the recall rate of neuron reconstruction was from 0.58 ± 0.12 to 0.89 ± 0.10, reducing the difficulty of neuron recognition and minimizing the need for manual corrections. For certain test data on which current automatic tracing methods struggled to achieve accurate reconstruction (precision <0.2), image enhancement raised the reconstruction precision to above 0.7. This improvement effectively increased the throughput of neuron reconstruction. Moreover, incorporating HiNeuron data into existing neural networks enhanced their performance, increasing the average F1 score by 10.74% on high-density neuron images.

Results

Construction of a tangled neuronal image dataset at the whole-brain scale

Neuronal automatic reconstruction achieves high accuracy on sparsely labeled or clearly structured data.17 However, it produces numerous false connections and reconstruction errors when dealing with tangled filament data such as neighboring fibers, branching and crossover fibers, and dense fibers, requiring substantial manual correction.8 Thus, there is a need to focus on difficult-to-recognize neuronal data. We constructed a neuronal image dataset with different density levels, called HiNeuron, which consists of neuron images and their corresponding manual tracing results as the gold standard (Figure 1A). First, we extracted hundreds to thousands of image blocks containing difficult-to-recognize data from several dozen reconstructed neurons. Each reconstructed neuron was verified by multiple experts. We identified locations where the multiple manual reconstructions completed during the reconstruction process were inconsistent (Figure 1B); these difficult-to-recognize data at different positions were prone to confusion. From these locations we extracted the challenging neuron images, typically those of neighboring, branching, crossover, and dense fibers. Subsequently, we calculated signal density and performed cluster analysis to divide the dataset into different density levels (Figure 1C).

Figure 1. Construction of a tangled neuronal dataset of images at the whole-brain level

(A) The pipeline of constructing a neural image dataset includes data extraction, image density grading, and image isotropy.

(B) Neuron images are extracted at locations where multiple manual reconstructions are inconsistent. B1–B5 represent difficult-to-recognize images of neurons at different positions, with B1 used as an example for explanation. Scale bar, 50 μm.

(C) Neural images are graded algorithmically into different density levels. Scale bar, 20 μm.

Due to the physical limitations of optical imaging systems, the axial-voxel resolution is generally lower than the lateral-voxel resolution.3,4,5,6 This anisotropic voxel resolution leads to insufficient spatial continuity in the images.8 Neuronal images with tangled filaments7,12 are particularly challenging because of the small fiber-to-fiber distances, which often result in false connections and hinder accurate reconstruction. Moreover, existing neuronal image databases34,35 typically provide data at a specific resolution, whereas actual imaging data are diverse. To further enhance the versatility and compatibility of the data, a deep learning network was employed for image enhancement, aiming to improve the axial-voxel resolution and quality of the images. This approach produced three-dimensional isotropic high-resolution data, increased the continuity of neuronal fibers, reduced the recognition difficulty, and contributed to improving the reconstruction accuracy of existing automatic reconstruction algorithms on difficult-to-recognize data.

Axial image enhancement of neurons

Neuronal images acquired through optical imaging techniques are typically anisotropic.3,4,5,6,7 Due to the lower axial-voxel resolution, neuronal fiber connections lack continuity, which limits subsequent neuron tracing.37 To enhance the versatility and compatibility of the data, we employed image enhancement techniques using a deep learning network to predict high-axial-voxel-resolution images from optically acquired images with low axial-voxel resolution (Figure 2A). This generated three-dimensional isotropic data, enhancing neuronal fiber continuity and improving the accuracy of automatic reconstruction for challenging data. However, high-resolution axial data could not be acquired directly for network training, creating a bottleneck. Because the optical images have high resolution in the lateral direction and low resolution in the axial direction, yet different slices share similar structural features,37,38 a network can be trained on high-resolution lateral images and then applied to low-resolution axial images to improve axial resolution. Accordingly, we utilized higher-lateral-resolution images in the coronal plane as the ground truth.37,38 By down-sampling the coronal plane images, we constructed images similar to the axial low-resolution images, thereby creating feature-matched image pairs with high resolution in the coronal plane and low resolution mimicking the axial plane. Subsequently, the constructed dataset was used to train the AINet (axial interpolation network) model, which was based on the U-Net model.39 We used parameter optimization and ablation experiments, including variations in the number of up-sampling and down-sampling layers in the network (Figure S2), the activation function, and the loss function, to better adapt the model to the characteristics of the neuron images. After multiple parameter adjustments, we obtained an optimal network better suited to training on and predicting neuronal images. The trained network was then applied to predict high-axial-voxel-resolution isotropic data from low-axial-voxel-resolution images. The steps of axial image enhancement included the construction of the training dataset, AINet training, and image prediction (Figure S1). We provide a more detailed description in the experimental procedures.

Figure 2. The results of neuron image isotropy

(A) The flowchart for training and predicting high-axial-voxel resolution of neuronal images.

(B) The lateral-section (x-y) images. The coronal planes of the original images, bicubic up-sampling, and AINet predictions are shown. Scale bar, 20 μm.

(C) The axial-section (z-y) images compare the raw axial-voxel-resolution images, bicubic up-sampling, and AINet predictions. The grayscale curve plots the intensity distribution along the orange line. Scale bar, 20 μm.

Following training on the lateral slices (x-y) of the neuronal dataset (Figure S3), the trained model was applied to predict the low-axial-voxel-resolution plane (z-y) images, where the axial plane (z-y) images were obtained through data reslicing.40 The results included raw images, images obtained through bicubic interpolation, and images predicted using the AINet (Figure 2C). We plotted the grayscale profiles for comparison, revealing that the raw images exhibit noticeable aliasing artifacts and blur, making it difficult to distinguish the direction of neuronal fibers. Bicubic up-sampling offered some improvement, but the AINet, utilizing fully convolutional methods, significantly enhanced the quality of the predicted images and provided clearer visualization of the neuronal fibers (Figures S4 and S5). After predicting the high-axial-voxel-resolution (z-y) images, we further validated them by reslicing the predictions back to the original coronal (x-y) plane for comparison (Figure 2B). The resliced predictions remained consistent with the original coronal plane (x-y) images. The predicted images effectively suppressed background fluorescence and, when combined with image deconvolution techniques,41 reduced image blurring. Experimental validation on simulated neuronal images verified the training and prediction of the network (Figure S6). Our approach was compared with other superresolution models,23,24,37,42 obtaining better-quality axial neuron images and improved quantitative evaluation metrics (Figure S10). The results demonstrated that our method produces higher image quality and closer resemblance to the reference images on the simulated neuronal dataset.

Neuron reconstruction based on density grading

In neuron reconstruction, the varying density levels of neuronal images correspond to different degrees of recognition difficulty. By employing image enhancement to achieve isotropy, we explored whether this approach can improve the accuracy of automatic reconstruction and compared the reconstruction performance of different algorithms on images at different density levels. Through cluster analysis, we classified the image data into three density grades (Figure 1C), which helped to assess the reconstruction results for different density levels during neuronal reconstruction evaluation. We tested six different reconstruction algorithms, i.e., APP2,16 NeuronStudio,43 neuTube,36 SparseTracer,14 ST-LFV,15 and NeuroGPS-Tree,13 on images with varying density levels to measure the reconstruction recall rate before and after image enhancement (Figures 3C–3H). Because our gold standard was based on single-neuron reconstruction, some of the tested algorithms traced all fibers in an image block, resulting in lower reconstruction precision and F1 scores. Therefore, we compared the neural reconstruction results using the reconstruction recall rate. The quantitative results of automatic neuron reconstruction indicated that as the density of the neuron images increased, the overall reconstruction recall rate tended to decrease, indicating greater reconstruction difficulty (Table S1). In addition, we performed the same reconstruction tests on isotropic images with enhanced voxel resolution at different density levels. Compared to the original neuron images, the image-enhanced neuron images yielded improved recall rates with existing automatic tracing algorithms, with the most significant increase from 0.58 ± 0.12 to 0.89 ± 0.10 for neuTube (Table S1). This reduction in recognition difficulty facilitated the correct reconstruction of challenging locations such as branches, neighboring fibers, and crossovers (Figure 3B), thereby reducing manual correction time. Moreover, on some test data where current automatic tracing methods were almost unable to achieve correct reconstruction (precision <0.2), the reconstruction precision increased from below 0.2 to above 0.7 after image enhancement (Table S2), effectively improving the throughput of neuron reconstruction. We further manually corrected the tracing results before and after image enhancement. Taking the NeuroGPS-Tree algorithm as an example, we examined the tracing results of 26 data blocks at different density levels. After obtaining the initial tracing results, the original data before image enhancement required 52 manual corrections, taking approximately 19 min, whereas the data after image enhancement required only 14 manual corrections, taking approximately 4 min (Table S3). This demonstrated that isotropic image enhancement can effectively improve the throughput of reconstruction and reduce the time required for manual correction.

Figure 3. Automatic reconstruction recall rate results of neural image density grading

(A) The reconstruction results of neuron images under different densities in the original image and at high-axial-voxel resolution, with manual tracing results as the gold standard. Scale bar, 50 μm.

(B) The reconstruction results at some key positions; the last panel shows the manual reconstruction as the gold standard. Scale bar, 20 μm.

(C–H) Quantitative results of recall rate for automatic neuron reconstruction in APP2, NeuronStudio, neuTube, SparseTracer, ST-LFV, and NeuroGPS-Tree at different densities. LR represents the original low-resolution images and HR represents the high-resolution neuron images after image enhancement.

In the simulation of neuron images with different densities, we utilized NeuroGPS-Tree for automatic reconstruction (Figure S7; Table S4). We demonstrated the reconstruction results at challenging locations within tangled filaments and showed that improving the axial image quality contributed to the accurate reconstruction of critical points in neurons (Figure S7C). Furthermore, we conducted complete reconstruction experiments on 24 image blocks with different densities (Figure S8). The results indicated that isotropic blocks can improve the accuracy of neuron reconstruction by 10%–30% in terms of the F1 score (Figure S8C; Table S5). Particularly, in dense neuronal datasets where distinguishing neighboring and crossover neurons was difficult, isotropic blocks reduced the recognition difficulty (Figures S8B, S8E, and S8F). We also conducted experiments using simulated neuron data (Figure S12), generating training data with both low and high voxel resolution in the axial direction and employing the same structured network model for training. The experimental results (Figure S12C) show that the network trained on high-axial-resolution images yields better results in neuron reconstruction. We further conducted reconstruction tests on images with different axial-voxel resolutions (Figure S14). The automatic reconstruction results (Figure S14B) showed that, at each image density, as the axial-voxel resolution decreased, the continuity between adjacent slices declined and the F1 score exhibited a decreasing trend (Tables S6 and S7).

The influence of a graded dataset on automatic reconstruction accuracy for network training

Existing neural network models for neuron reconstruction often utilize training data consisting mainly of single neurons or sparse signals, with minimal interference from other fiber signals.12,29 Moreover, these training data mix different difficulty levels, which limits the network's learning of different types of features. As a result, the robustness and generalization capability of the model are insufficient, and the model fails to receive adequate, specialized training on challenging neuron data. The predictive performance of such a model is poor when applied to real optical microscopy images containing challenging neuronal data, leading to low accuracy in neuron reconstruction. In contrast, HiNeuron includes data that were manually recognized as difficult during neuron reconstruction, such as tangled filament features with neighboring, branching, and crossover fibers and dense fibers. Incorporating these challenging data into an existing network and specializing the training enhanced the model's ability to recognize tangled fibers, improving prediction accuracy and reducing false fiber connections in neuron reconstruction.

Currently, training-based models generally segment neuronal images first and then perform neuron reconstruction.8,12,29 For instance, Huang et al.29 mainly focused on enhancing weak-signal fibers in micro-optical neuronal images before performing neuron tracing. In this study, we aimed to explore the impact of our challenging dataset on automatic reconstruction accuracy during network model training. Because existing research on neuron images from optical microscopy is limited, we opted to utilize network models trained on similar types of images. Following Huang et al.,29 we employed the VoxResNet network44 (Figure 4A) to segment the graded neuron data of HiNeuron and then performed neuron tracing. The results of neuron segmentation are shown in Figure 4B. Using the network with its initial weights, we predicted neuron segmentation results. However, this also identified some background noise as signal, resulting in more false-positive signals and interference in the reconstruction. We then incorporated the HiNeuron dataset into VoxResNet, imported the initial weights, and retrained the model to enhance its ability to recognize challenging neuron data. We added more than 20 difficult-to-recognize samples for training, iterated multiple times to predict neuron images, and compared the results with the predictions of the original model. As shown in Figure 4B, training with the HiNeuron dataset mitigates the impact of background noise on neuron reconstruction. Quantitative analysis compared neuron segmentation results at different densities before and after retraining VoxResNet, followed by neuron reconstruction (Figure 4C). After retraining VoxResNet, the predicted neuron images at the original resolution and the isotropic images achieved higher reconstruction accuracy than those obtained with the original model. On the high-density data, the retrained isotropic images improved the average F1 score by 10.74% compared to the original images in the initial network reconstruction (Table S8). This indicated that HiNeuron contributes to improving the performance of existing network models.

Figure 4. Neuron reconstruction results before and after training with VoxResNet

(A) VoxResNet network architecture.

(B) Predicted results using the VoxResNet network and the predictions after incorporating the HiNeuron dataset.

(C) Reconstruction F1 score results of predicted neuron images before and after training with the VoxResNet. LR represents the original low-resolution images and HR represents the high-resolution neuron images after image enhancement. Scale bar, 50 μm.

Testing and analysis of automatic neuron reconstruction algorithms in a graded dataset

Existing automatic neuronal reconstruction algorithms are often developed for specific datasets or specific problems, and their performance may vary across different types of datasets.29 To explore the reconstruction performance of these automatic algorithms on challenging data, we conducted testing and analysis using our density-graded dataset. We selected a completely reconstructed neuron and divided it into different image blocks, including some challenging neuronal images from our benchmark dataset. We then performed cluster analysis on all the image data to obtain neuron images at different densities (Figure S9). Each neuron image contained a target fiber to be reconstructed, rather than requiring reconstruction of all fibers. We tested the accuracy of different neuronal reconstruction algorithms, including SparseTracer,14 ST-LFV,15 and NeuroGPS-Tree,13 and compared the quantitative results, with manually annotated and verified results as the gold standard. These three methods can trace target neurons from given seed points. In previous studies, without density classification of the images, it was not possible to compare reconstruction results across different types of images, such as sparse versus dense images.

However, classifying the neuron images by density allows for a comprehensive analysis of the performance of different algorithms on different types of data. In the quantitative results (Figure 5B), the black dashed line represents the average F1 score of each method. At lower densities, the three tested algorithms exhibited relatively high accuracy, but as the density increased, the reconstruction accuracy decreased. This showed that the reconstruction algorithms generated more false connections at higher densities, indicating greater reconstruction difficulty (Table S9).

Figure 5. Results of all graded data blocks for a single neuron reconstruction using different automatic algorithms

(A) Automatic reconstruction results of different reconstruction algorithms and manual reconstruction results as the gold standard. Scale bar, 50 μm.

(B) Quantitative comparison of F1 score results obtained from the different reconstruction algorithms.

By classifying the density of neuronal images, we can evaluate the performance of neuronal reconstruction algorithms more comprehensively, enabling the selection of more suitable algorithms or improvements to the algorithms based on specific research needs.

HiNeuron data availability of tangled neuron images with density grading

We constructed the HiNeuron dataset, which comprises neuronal images, corresponding manual reconstructions as the gold standard, and isotropic high-voxel-resolution images for each neuronal image block. In this dataset, all images are divided into three density levels, ranging from low to high density. The dataset comprises approximately 2,800 image blocks, including neighboring fibers, branching fibers, and crossover fibers (Figure 6), all in three-dimensional 16-bit format. The pixel dimensions of the blocks vary because each block was cropped along the actual extension direction of the target neuron fiber. For each folder, we provide a document containing the identification number of each piece of data, along with the starting point and direction for tracing the corresponding data block.

Figure 6. Neuron images of HiNeuron data

(A) Neuron images at different densities. Scale bar, 50 μm.

(B) Numbers of neuron images with different densities.

(C–E) Local neuron images of branching fibers, neighboring fibers, and crossover fibers. Scale bar, 20 μm.

The purpose of building the dataset is twofold. First, it allows for the analysis of the reconstruction performance of existing algorithms. Using our benchmark dataset, different neuronal algorithms can be tested, and quantitative results can be obtained to evaluate the reconstruction effectiveness for neurons of different densities. Second, the HiNeuron dataset can be combined with existing or newly developed neural network models to improve their ability to recognize challenging neurons and enhance their generalization capability. It also facilitates the development of new neuron reconstruction algorithms. We make the dataset publicly available and provide the data download links, thereby offering a foundational data resource for future research on neuron reconstruction methods.

Discussion

In this study, we focused on the recognition of challenging data in neuron images. We constructed a difficult-to-recognize tangled neuronal dataset with varying density levels. Employing neural networks, we enhanced the images to improve fiber continuity and reduce the difficulty of neuron identification, consequently enhancing the reconstruction accuracy of existing automatic tracing methods. While existing automatic neuron reconstruction methods achieve satisfactory tracing results on sparse or clear-signal images,17,29 they often encounter challenges in tangled filaments with closely packed fibers, branching, crossovers, or dense fiber regions.8 The common feature of these regions is mutual interference between fibers in close proximity, which causes confusion during the reconstruction process: tracing easily jumps to adjacent fibers, producing a large number of false connections. These false connections link different neuronal fibers, leading to numerous reconstruction errors and significantly diminished reconstruction accuracy. To address this problem, we extracted challenging data, utilized image enhancement to increase fiber continuity, and obtained isotropic images that reduced the recognition difficulty. Our approach improved reconstruction accuracy, which reduces manual correction time and increases the throughput of neuron reconstruction.

Manual annotation remains the gold standard for neuron reconstruction, a time-consuming and labor-intensive task. Therefore, there is an urgent need to develop automatic reconstruction methods. While existing automatic reconstruction methods have somewhat improved efficiency,7,8,13 deep learning methods have also been applied in neuron reconstruction research.8,12,29 However, these algorithms require a large number of training images and often perform poorly on tangled filaments.12 Moreover, some methods are tailored to specific datasets or problems, leading to poor performance when applied to diverse datasets,29 and a comprehensive analysis of reconstruction performance is lacking. Our dataset includes tangled neuron images at different density levels together with their corresponding manual reconstructions. This dataset allows for performance analysis of different algorithms and more detailed testing and evaluation of automatic reconstruction capabilities. It enables the targeted selection of more suitable neuron reconstruction methods and improvement techniques. In addition, it serves as a valuable resource for developing existing or new reconstruction algorithms, particularly for challenging neuron data, and for improving accuracy on difficult-to-recognize data.

Reconstructing difficult-to-recognize neurons is challenging because tangled filaments and small fiber distances often result in numerous false connections and lower reconstruction accuracy. Optical imaging has inherent physical limitations, leading to low axial-voxel resolution and axial blurring. Although some deep learning networks21,22,23,24,25,26 can enhance image resolution, obtaining high-axial-voxel-resolution training data remains difficult,8 limiting the application of deep learning networks in neuron reconstruction. To address this problem, we leveraged the structural similarities of different sectional slices in three-dimensional images37,38 and simulated low-voxel-resolution axial images using high-lateral-voxel-resolution images. This allowed us to construct a training dataset for the network model. By employing deep networks for image enhancement, we achieved isotropic three-dimensional images with improved axial image quality and reduced difficulty in neuron recognition. Our approach was compared with other superresolution models, obtaining better-quality axial neuron images and improved quantitative evaluation metrics. The predicted isotropic high-voxel-resolution three-dimensional images demonstrated improved reconstruction accuracy compared to the original images when used for automatic neuron reconstruction. The isotropic high-voxel-resolution data we obtained can be resampled according to the voxel requirements of other studies to obtain data at different voxel resolutions. We carried out experiments using simulated neuron data, generating training data with both low and high voxel resolution in the axial direction and training them with the same structured network model. The results showed that the network trained on high-axial-voxel-resolution images performed better in neuron reconstruction.

In summary, we have constructed a dataset of tangled neurons with varying levels of difficulty in recognition across the entire brain. This challenging dataset was extracted from neurons obtained from fMOST. Using neural networks, we performed image enhancement to improve the continuity of fibers. By comparing the results before and after image enhancement using existing automatic tracing algorithms, we achieved improved reconstruction accuracy and throughput. In addition, we have made the constructed dataset publicly available, and it can be utilized for scientific research in the development of neuron reconstruction methods.

Limitations of the study

This study still has some limitations. First, optical systems capture diverse types of images,19 owing to factors such as signal intensity and background noise,3,6 leading to distinct features across different images. A single model might not effectively learn all image characteristics, impacting prediction accuracy. When enhancing neuronal images, models are therefore trained on different types of images to adapt to their characteristics and ensure prediction accuracy. Accommodating multiple datasets with a single network model is challenging, especially with diverse optical neuronal images: the model must handle varying signal intensities and adapt to different imaging conditions. The appeal of a single model lies in its advantages, including improved efficiency and enhanced generalization,45 ensuring effectiveness in complex environments; such a model also offers broader application potential.46 Second, when the ratio between axial and lateral voxel resolution is large, such as a difference of 5 or even 10 times, generating high-quality images with the network becomes more difficult. Third, we classified the neuron images into different density levels based on signal density; in future research, we plan to use additional criteria, such as branching points and crossover points, to further refine the difficulty levels of reconstruction. Last, although our research improves the reconstruction accuracy of tangled neurons to some extent, further exploration is still needed for the reconstruction of challenging locations.

Experimental procedures

Resource availability

Lead contact

Requests for further information and resources should be directed to and will be fulfilled by the lead contact, Anan Li (aali@hust.edu.cn).

Materials availability

This study did not generate new unique reagents.

Data and code availability

Data acquisition

The mouse brain neuron images in this study were obtained using the fMOST technique.5,6 The acquisition of neuronal tracing data for the seven specimens followed the fMOST workflow, which included sample preparation, whole-brain optical imaging, and neuron reconstruction (Figure S11).

All histological procedures have been previously described.6 Briefly, the mice were anesthetized using a 1% solution of sodium pentobarbital and subsequently intracardially perfused with 0.01 M phosphate-buffered saline (PBS) followed by 4% paraformaldehyde (PFA) in 0.01 M PBS. Thereafter, each brain was dissected, fixed in 4% PFA for 24 h at 4°C, and rinsed in PBS for 12 h at 4°C. Acquisition of the fine-detailed whole morphology of individual neurons required resin embedding due to its advantages in terms of ensuring signal integrity and reducing losses from tissue scattering. Briefly, each brain was dehydrated in a graded ethanol series (50%, 70%, and 95% ethanol, changing from one concentration to the next every 1 h at 4°C). After dehydration, the brains were immersed in a graded glycol methacrylate (GMA) series, including 0.2% Sudan black B (SBB) (70%, 85%, and 100% GMA for 2 h each and 100% GMA overnight at 4°C). Finally, the samples were impregnated in a pre-polymerization GMA solution for 3 days at 4°C and embedded in a vacuum oven at 35°C for 24 h.

Next, the mouse brain samples were immersed in a water bath. Whole-brain imaging was performed in the water bath. Imaging was conducted using the fMOST system. Sectioning was achieved through a relative motion between the fixed diamond knife and the three-dimensional translation stage. The fMOST system automatically performed the sectioning and imaging to complete the brain-wide data acquisition. The original images were stitched to form complete coronal sections, and the image quality was improved by light correction, which was convenient for subsequent image processing and analysis. The voxel resolution was 0.35 × 0.35 × 1 μm or 0.32 × 0.32 × 1 μm, and the data were saved in 16-bit-depth LZW-compressed TIFF format.

Because whole-brain images reach terabyte scale,6 it was difficult to process such a huge amount of data directly. We transformed the data format from TIFF to the native TDat format48 and performed neuron reconstruction using the GTree software.49 During the reconstruction, each data block was independently reconstructed by at least two annotators for accuracy. Ultimately, experienced annotators oversaw and validated all results before acceptance, yielding the gold standard for neuron reconstruction. In general, we performed neuron reconstruction on the acquired imaging data and then used the results to extract challenging neuronal data prone to reconstruction confusion at tangled filaments, thereby establishing the HiNeuron dataset. The dataset comprises approximately 2,800 neuron image blocks.

Extraction of tangled neuron image data

Based on the neuron images and the obtained reconstruction results, we extracted the difficult-to-recognize tangled filament data. For locations where the reconstructions were inconsistent, we iterated through the distances between corresponding points and set a distance threshold, then selected the challenging data blocks identified during neuron reconstruction.
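To illustrate this selection step, the sketch below flags the nodes of one reconstruction whose nearest counterpart in a second reconstruction lies beyond a distance threshold. The (N, 3) node format, the threshold value, and the function name are hypothetical choices for illustration, not the authors' released code.

```python
import numpy as np
from scipy.spatial import cKDTree

def flag_inconsistent_nodes(recon_a, recon_b, dist_threshold=5.0):
    """Flag nodes of reconstruction A whose nearest node in B is farther than a threshold.

    recon_a, recon_b: (N, 3) arrays of skeleton node coordinates (x, y, z) in voxels.
    dist_threshold: hypothetical cutoff; the paper does not report its value.
    Returns the coordinates in A that disagree with B, i.e., candidate
    difficult-to-recognize locations around which blocks could be cropped.
    """
    tree_b = cKDTree(recon_b)
    dists, _ = tree_b.query(recon_a)          # nearest-neighbor distance per node
    return recon_a[dists > dist_threshold]    # nodes with no close counterpart in B
```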

Manual reconstructions may introduce point deviations that bias data-block selection. To mitigate these biases, the neuron image blocks were further inspected manually to remove erroneous selections caused by inconsistent points and to discard sparse data blocks. The final extracted neuron images were tangled filaments that constituted difficult-to-recognize data, including dense fibers, branching and crossover fibers, and neighboring fibers. Because data cropping was performed along the reconstruction direction of the fibers, the three-dimensional size of the images is not entirely uniform. The constructed dataset currently comprises thousands of image blocks. Furthermore, the constructed neuronal image dataset has been made publicly available, providing a fundamental data resource for subsequent neuronal research.

Density grading of tangled neuronal image dataset

The extracted difficult-to-recognize neuron images contain varying degrees of density. If all the data were tested and evaluated together, it could pose challenges in analyzing the results of automatic reconstructions. It may not provide a comprehensive assessment of the reconstruction performance of neuronal tracing algorithms. By classifying the neuron images based on density levels, we can perform targeted testing on images with different densities and focus on the reconstruction results of neuronal images with different density distributions.

When performing density grading on neuronal image blocks, we first extracted the signal density by applying a neural network29 and threshold segmentation to the image blocks. The signal density was then normalized using the formula $D = \mathrm{Num}(s)/\mathrm{Max}(s)$, where $D$ represents the density of the neuronal image, $\mathrm{Num}(s)$ denotes the number of signals in each neuron image block, and $\mathrm{Max}(s)$ represents the maximum number of signals among all neuronal image blocks. The high quality of the neuron images, with minimal background noise interference, enabled accurate signal extraction. Subsequently, hierarchical clustering was employed to group the neuronal images into different density levels. Hierarchical clustering is an automated process that ensures the objectivity of the classification. When testing different neuron reconstruction methods, we compared them based on the neuron image density grading results.
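A minimal sketch of this grading step follows, assuming the foreground signal of each block has already been segmented into a binary mask (so Num(s) is a voxel count). The Ward linkage and the function names are our choices; three clusters follow the three density levels described above.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def block_density(signal_mask):
    """Num(s): the number of segmented signal voxels in one block."""
    return signal_mask.sum()

def grade_blocks(masks, n_levels=3):
    """Normalize per-block signal counts (D = Num(s) / Max(s)) and group the
    blocks into density levels via agglomerative (hierarchical) clustering."""
    counts = np.array([block_density(m) for m in masks], dtype=float)
    density = counts / counts.max()                      # D = Num(s) / Max(s)
    Z = linkage(density.reshape(-1, 1), method="ward")   # hierarchical clustering
    levels = fcluster(Z, t=n_levels, criterion="maxclust")
    return density, levels                               # labels 1..n_levels per block
```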

The axial image enhancement for tangled neuronal images

In neuronal data, especially tangled filament images, the axial voxel resolution is lower than the lateral voxel resolution. This anisotropy leads to insufficient spatial continuity in neuronal images, making them prone to false connections during reconstruction. To increase the applicability and compatibility of the data, we employed axial image enhancement, which converts low-resolution axial data into high-resolution data, thereby enhancing the continuity between neuron fibers and improving the accuracy of automatic neuron reconstruction. We utilized a deep learning network for training and prediction. First, we constructed a training dataset specifically designed for axial image up-sampling. Next, we trained the network to learn the features of neuron images. Finally, the trained network predicted high-resolution axial images (Figure S1).

Generation of training datasets

Three-dimensional image data obtained by optical microscopy have an axial voxel resolution lower than the lateral voxel resolution. The voxel resolution of a three-dimensional image is expressed as $V = V_x \times V_y \times V_z$, where the high lateral voxel resolution satisfies $V_x = V_y$ and the low axial voxel resolution is $V_z$; the ratio of lateral to axial voxel resolution is $\alpha = V_x / V_z = V_y / V_z$. The image obtained by optical microscopy usually has three types of slices: slices along the $z$ axis represent coronal plane sequences in the $xy$ plane, slices along the $x$ axis represent sagittal plane sequences in the $yz$ plane, and slices along the $y$ axis represent horizontal plane sequences in the $xz$ plane, where $xyz$ denotes the orientation of the three-dimensional image. This is expressed by the following formulas: the slice along the $z$ axis is coronal, $I_{(x,y)\times z} = I(x,y,z),\ z \in N$; the slice along the $x$ axis is sagittal, $I_{(y,z)\times x} = I(x,y,z),\ x \in N$; and the slice along the $y$ axis is horizontal, $I_{(x,z)\times y} = I(x,y,z),\ y \in N$.

In this study, we mainly consider interpolation along the axial direction, where $I^{LR}$ (LR, low resolution) represents the low-resolution axial image, $I^{HR}$ (HR, high resolution) represents the high-resolution axial image, and the transformation process is represented by $T: I^{LR} \rightarrow I^{HR}$, taking the low-resolution axial images to the high-resolution images. The axial voxel resolution of all the three-dimensional image data is increased by a factor of $\alpha$ to reach isotropy, as described in Equation 1:

$I^{LR}(x,y,z) \times \alpha = I^{HR}(x,y,\alpha \times z).$ (Equation 1)

Through the operation of data reslicing,40 the axial data along the $x$ and $y$ axes can be obtained: $I_x(x,y,z) = R_x[I_{(y,z)\times x}]$ and $I_y(x,y,z) = R_y[I_{(x,z)\times y}]$, where $R_x[I_{(y,z)\times x}]$ is the image of the axial section along the $x$-axis sagittal plane and $R_y[I_{(x,z)\times y}]$ is the image of the axial section along the $y$-axis horizontal plane.
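In array terms, reslicing amounts to transposing the volume so that the former axial axis becomes an in-plane axis. The sketch below illustrates $R_x$ and $R_y$ for a volume stored in (z, y, x) order, an axis convention we assume for illustration.

```python
import numpy as np

def reslice_sagittal(volume):
    """R_x: view a (z, y, x) volume as a stack of (z, y) sagittal slices indexed by x."""
    return np.transpose(volume, (2, 0, 1))   # -> (x, z, y)

def reslice_horizontal(volume):
    """R_y: view a (z, y, x) volume as a stack of (z, x) horizontal slices indexed by y."""
    return np.transpose(volume, (1, 0, 2))   # -> (y, z, x)
```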

To improve the axial voxel resolution of images, high-resolution axial data cannot be acquired directly for network training, creating a bottleneck due to the lack of training data. Because different sections of a three-dimensional image share similar structures, high-resolution lateral images can be used as the ground truth. The transformation $S$ of image processing is used to construct data with axial-like features for training. The relationships for the sagittal and horizontal planes are given in Equations 2 and 3 (we read the relation between the simulated low-resolution lateral data and the resliced axial data as approximate equality):

$S_\alpha[I_{x(x,y)\times z}] = I^{LR}_{x(x,y)\times z} \approx R_x[I_{(y,z)\times x}],$ (Equation 2)
$S_\alpha[I_{y(x,y)\times z}] = I^{LR}_{y(x,y)\times z} \approx R_y[I_{(x,z)\times y}].$ (Equation 3)

Specifically, we started by cropping 128 × 128 pixel regions from consecutive coronal slices, which constituted the high-resolution training images. Next, the cropped data were down-sampled; that is, low-resolution images were obtained by applying the transformation $S$ to simulate low-resolution axial slice data. The matching high- and low-resolution image pairs from the coronal images thus formed the training data, comprising more than 10,000 matching image pairs.
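The sketch below reproduces this pair construction under stated assumptions: 128 × 128 crops from coronal slices serve as ground truth, and the degradation $S_\alpha$ is approximated by down-sampling one in-plane axis by the resolution ratio and bicubically resampling back to the input size. The paper does not spell out the exact degradation operator, so this is an illustrative choice.

```python
import numpy as np
from skimage.transform import resize

def make_training_pair(coronal_slice, top, left, alpha=3, size=128):
    """Crop a high-resolution patch and simulate its low-axial-resolution counterpart.

    alpha: assumed lateral/axial voxel-resolution ratio.
    The degradation (shrink one axis, bicubic up-sample back) stands in for S_alpha.
    """
    hr = coronal_slice[top:top + size, left:left + size].astype(np.float32)
    lr = resize(hr, (size // alpha, size), order=3, anti_aliasing=True)  # shrink one axis
    lr = resize(lr, (size, size), order=3)                               # back to input size
    return lr, hr
```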

Training of deep network models

The matched pairs of high-resolution and low-resolution images obtained from the coronal slices were used to train a deep learning network for feature extraction. In this study, based on the U-Net model,39 we conducted ablation experiments to compare network structures and built a fully convolutional axial interpolation network called AINet (Figure S13). AINet was designed to learn the mapping from low-resolution images to high-resolution neuron images. The entire network architecture was constructed through multiple down-sampling and up-sampling operations, with the input and output images having the same size. We conducted ablation experiments on the network, comparing different numbers of up-sampling and down-sampling layers (Figure S2).

AINet mainly comprises an encoder and a decoder, built from two-dimensional convolution, max-pooling, activation, and up-sampling layers. In the encoder, each down-sampling module consists of two consecutive 3 × 3 convolution layers, followed by a PReLU (parametric rectified linear unit) activation layer and a 2 × 2 max-pooling layer, which halves the image resolution and doubles the number of feature channels. This down-sampling module is repeated three times, so the resolution is halved three times in total. The network encoder is represented by $E_C^{LR}$, computed as $E_C^{LR} = f(I^{LR}, P_1) + D(I^{LR})$, where $I^{LR}$ represents the low-resolution image, $I^{LR} \in \{I^{LR}_{x(x,y)\times z}, I^{LR}_{y(x,y)\times z}\}$, $f$ represents feature extraction, $P_1$ represents the parameters of the encoder, and $D(I^{LR})$ represents the image down-sampling process.

The decoder gradually recovers the resolution of the image through three repeated up-sampling modules. Each up-sampling module consists of two 3 × 3 convolution layers, followed by a PReLU activation layer and an up-sampling layer that doubles the image resolution and halves the number of feature channels. The operation is repeated three times to gradually restore the image to the input size. At the same time, skip connections are added between the encoder and the decoder: during down-sampling and up-sampling, layers with the same number of channels are connected so that features from different levels are fused to recover image details. Finally, a 1 × 1 convolution layer transforms the feature map into an output of the required depth. The whole network has 33 layers. The network decoder is represented by $D_C^{LR}$, computed as $D_C^{LR} = FF(E^{LR}, P_2) + U(D(I^{LR}))$, where $FF(E^{LR}, P_2)$ represents feature fusion across connected layers with the same number of feature channels, $P_2$ represents the parameters of the decoder, and $U(D(I^{LR}))$ represents up-sampling the down-sampled image to restore the input size. This architecture integrates the large receptive-field features of the deep layers with the texture information of the shallow layers, yielding better image predictions. The loss function of the network model is an L2 loss,13 which measures the difference between the predicted image and the ground truth image, as defined in Equation 4:

$\mathrm{Loss}_{\mathrm{AINet}} = \frac{1}{N}\sum_{i=1}^{N}\left[I^{GT}_{i(x,y)\times z} - I^{HR}_{i(x,y)\times z}\right]^2,$ (Equation 4)

where $I^{GT}_{i(x,y)\times z}$ is the ground truth of the coronal image and $I^{HR}_{i(x,y)\times z}$ is the predicted image for each input $I^{LR}$. The final high-resolution output image of AINet is $I^{HR}$, computed as $I^{HR} = E_C^{LR} + D_C^{LR}$.

In the training process, the batch size is 10, the number of epochs is 100, the learning rate is 1e−5, and the optimizer is Adam.50 Of the training data, 90% are randomly selected for training and 10% for validation. A GPU is used to accelerate training. The fully convolutional network AINet is implemented in TensorFlow.51 We tested our method on a computer with an NVIDIA Quadro RTX 5000 GPU, 32 CPU cores (Intel Xeon Gold 6246R × 2), and 256 GB of RAM.
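To make the architecture concrete, the sketch below builds a U-Net-style network in TensorFlow/Keras consistent with the description above: three down-sampling and three up-sampling modules, paired 3 × 3 convolutions with PReLU activations, skip connections between layers with matching channel counts, a final 1 × 1 convolution, L2 loss, and Adam at a 1e−5 learning rate. The channel widths and exact layer count are illustrative assumptions; this is not the released AINet implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, ch):
    # Two 3x3 convolutions, each followed by a PReLU activation.
    for _ in range(2):
        x = layers.Conv2D(ch, 3, padding="same")(x)
        x = layers.PReLU(shared_axes=[1, 2])(x)
    return x

def build_ainet(size=128, base_ch=32):
    inp = layers.Input((size, size, 1))
    skips, x, ch = [], inp, base_ch
    for _ in range(3):                           # encoder: 3 down-sampling modules
        x = conv_block(x, ch)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)            # halve resolution
        ch *= 2                                  # double feature channels
    x = conv_block(x, ch)                        # bottleneck
    for skip in reversed(skips):                 # decoder: 3 up-sampling modules
        ch //= 2                                 # halve feature channels
        x = layers.UpSampling2D(2)(x)            # double resolution
        x = layers.Concatenate()([x, skip])      # skip connection fuses features
        x = conv_block(x, ch)
    out = layers.Conv2D(1, 1)(x)                 # final 1x1 convolution
    return Model(inp, out)

model = build_ainet()
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5), loss="mse")  # L2 loss
# Hypothetical usage, with lr_patches/hr_patches as the constructed image pairs:
# model.fit(lr_patches, hr_patches, batch_size=10, epochs=100, validation_split=0.1)
```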

Image prediction with the high-axial-voxel resolution of neurons

After training through the AINet, we applied the model trained on the coronal plane to predict images in the sagittal and horizontal planes. The predicted high-axial-voxel-resolution image is expressed as in Equations 5 and 6:

$R_x^{HR}[I_{(y,z)\times x}] = T_\alpha(R_x^{LR}[I_{(y,z)\times x}]),$ (Equation 5)
$R_y^{HR}[I_{(x,z)\times y}] = T_\alpha(R_y^{LR}[I_{(x,z)\times y}]),$ (Equation 6)

where $R_x^{LR}[I_{(y,z)\times x}]$ and $R_y^{LR}[I_{(x,z)\times y}]$ represent the low-resolution images of the axial sagittal and horizontal planes, $R_x^{HR}[I_{(y,z)\times x}]$ and $R_y^{HR}[I_{(x,z)\times y}]$ represent the predicted high-resolution images of the axial sagittal and horizontal planes, and $T_\alpha$ represents the image interpolation transformation with magnification $\alpha$.

Due to the large number of pixels in the sagittal slices of the acquired three-dimensional images, it is not feasible to directly input a single large image into the network for prediction. Therefore, it is necessary to divide the image into a series of small blocks. Each block is individually predicted, and then the predictions are stitched back together to form the complete large image (Figure S1C). To account for the redundancy at the block boundaries, a certain number of pixels are left as overlap regions on both sides of the patches. This overlapping region helps to eliminate artifacts at the stitching boundaries.
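A sketch of this tiled prediction scheme follows. The tile size, the overlap width, and the choice to keep only the central region of each prediction (one common way to suppress stitching-boundary artifacts) are our assumptions; `predict_fn` stands in for a wrapper around the trained model.

```python
import numpy as np

def predict_tiled(image, predict_fn, tile=128, overlap=16):
    """Predict a large 2D image tile by tile with overlapping margins.

    predict_fn: maps a (tile, tile) array to a (tile, tile) prediction.
    Only the central region of each tile is kept, which suppresses
    artifacts at the stitching boundaries.
    """
    h, w = image.shape
    assert h >= tile and w >= tile, "image must be at least one tile in size"
    out = np.zeros((h, w), dtype=np.float32)
    step = tile - 2 * overlap
    ys = list(range(0, h - tile + 1, step)) + [h - tile]   # last tile flush with border
    xs = list(range(0, w - tile + 1, step)) + [w - tile]
    for y in ys:
        for x in xs:
            pred = predict_fn(image[y:y + tile, x:x + tile])
            # keep the overlap margin only where the tile touches the image border
            y0 = 0 if y == 0 else overlap
            x0 = 0 if x == 0 else overlap
            y1 = tile if y == h - tile else tile - overlap
            x1 = tile if x == w - tile else tile - overlap
            out[y + y0:y + y1, x + x0:x + x1] = pred[y0:y1, x0:x1]
    return out
```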

Evaluation methods

The evaluation is divided into image quality evaluation and neuron reconstruction evaluation. Image quality evaluation is divided into subjective and objective evaluation.23,52,53 Subjective evaluation mainly involves visual inspection of the predicted images and selection of those with better quality. For objective evaluation, the image evaluation indexes are the peak signal-to-noise ratio (PSNR)52 and structural similarity (SSIM).23 To further validate the accuracy of the predicted high-resolution axial images, the predicted axial slice images are resliced along the sagittal or horizontal plane and reconstructed back into coronal plane images using data reslicing (Figures 2B and S4A). The reslice validation formulas are defined in Equations 7 and 8:

$R_z[I^{HR}_{(y,z)\times x}] = I^{HR}_{(x,y)\times z},$ (Equation 7)
$R_z[I^{HR}_{(x,z)\times y}] = I^{HR}_{(x,y)\times z},$ (Equation 8)

where $R_z$ represents reslicing along the z axis, $I^{HR}_{(y,z)\times x}$ and $I^{HR}_{(x,z)\times y}$ represent the high-resolution prediction results in the sagittal and horizontal planes, and $I^{HR}_{(x,y)\times z}$ represents the predicted high-axial-voxel-resolution image resliced to the coronal plane. By selecting slices of the same thickness for maximum-intensity projection, we examined whether the predicted high-axial-voxel-resolution image carries the same image information as the coronal plane.
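As a hedged illustration of this objective check, the snippet below reslices a predicted sagittal stack back to the coronal plane, takes a max-projection, and scores it with PSNR and SSIM from scikit-image; the array names, shapes, and projection depth are placeholder assumptions.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Placeholders: a predicted high-resolution sagittal stack with axes
# (x, z, y) and a matching coronal max-projection as ground truth.
pred_sagittal = np.random.rand(256, 256, 256).astype(np.float32)
gt_coronal = np.random.rand(256, 256).astype(np.float32)

resliced = np.transpose(pred_sagittal, (1, 2, 0))  # back to (z, y, x): Equation 7

k, thickness = 10, 5                                # slice index and projection depth
mip_pred = resliced[k:k + thickness].max(axis=0)    # max-projection over z

data_range = float(gt_coronal.max() - gt_coronal.min())
print("PSNR:", peak_signal_noise_ratio(gt_coronal, mip_pred, data_range=data_range))
print("SSIM:", structural_similarity(gt_coronal, mip_pred, data_range=data_range))
```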

The evaluation of neuron reconstruction employed precision, recall, and the F1 score,13,15,53 with the manually reconstructed neurons used as the gold standard. The quantitative evaluation mainly focused on the accuracy and completeness of the skeleton. Precision was defined as the ratio of the number of true-positive points to the total number of points in the algorithm's reconstruction; recall was defined as the ratio of the number of true-positive points to the total number of points in the manual reconstruction. The F1 score is the harmonic mean of precision and recall.
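A simple sketch of such point-level scoring is given below; the nearest-neighbor matching with a fixed voxel tolerance is an assumed matching rule for illustration and may differ from the exact evaluation protocol used in the study.

```python
import numpy as np
from scipy.spatial import cKDTree

def skeleton_prf(auto_pts, gold_pts, tol=2.0):
    """Point-level precision/recall/F1 against the manual gold standard.
    A point counts as a true positive when a counterpart lies within
    `tol` voxels (an illustrative tolerance)."""
    d_auto, _ = cKDTree(gold_pts).query(auto_pts)  # nearest gold point per traced point
    d_gold, _ = cKDTree(auto_pts).query(gold_pts)  # nearest traced point per gold point
    precision = float(np.mean(d_auto <= tol))      # matched fraction of the tracing
    recall = float(np.mean(d_gold <= tol))         # matched fraction of the gold standard
    f1 = 2 * precision * recall / (precision + recall + 1e-12)
    return precision, recall, f1

# Toy usage with random 3D skeleton points (placeholders for SWC nodes).
auto = np.random.rand(500, 3) * 100
gold = np.random.rand(480, 3) * 100
print(skeleton_prf(auto, gold))
```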

Acknowledgments

We would like to thank the MOST group of the Britton Chance Center for Biomedical Photonics, the Wuhan National Laboratory for Optoelectronics, the MoE Key Laboratory for Biomedical Photonics, and the Huazhong University of Science and Technology. This study was supported by STI 2030-Major Projects (2021ZD0201002) and National Natural Science Foundation of China grants (T212015 and 32192412).

Author contributions

H.G., Q.L., and A.L. conceived and designed this study. W.C. and C.X. performed the experiments, data analysis, and graph drawing. M.L. and S.B. provided the data. G.H. participated in some of the experiments. S.A., W.L., X.L., and C.X. participated in image processing and visualization. W.C. and A.L. wrote this paper.

Declaration of interests

The authors declare no competing interests.

Published: June 21, 2024

Footnotes

Supplemental information can be found online at https://doi.org/10.1016/j.patter.2024.101007.

Contributor Information

Chi Xiao, Email: xiaochi@hainanu.edu.cn.

Anan Li, Email: aali@hust.edu.cn.

Supplemental information

Document S1. Figures S1–S14 and Tables S1–S9
mmc1.pdf (12.6MB, pdf)
Document S2. Article plus supplemental information
mmc2.pdf (17.8MB, pdf)

References

1. Lichtman J.W., Denk W. The big and the small: challenges of imaging the brain's circuits. Science. 2011;334:618–623. doi: 10.1126/science.1209168.
2. Helmstaedter M. Cellular-resolution connectomics: challenges of dense neural circuit reconstruction. Nat. Methods. 2013;10:501–507. doi: 10.1038/nmeth.2476.
3. Li A., Gong H., Zhang B., Wang Q., Yan C., Wu J., Liu Q., Zeng S., Luo Q. Micro-optical sectioning tomography to obtain a high-resolution atlas of the mouse brain. Science. 2010;330:1404–1408. doi: 10.1126/science.1191776.
4. Yang T., Zheng T., Shang Z., Wang X., Lv X., Yuan J., Zeng S. Rapid imaging of large tissues using high-resolution stage-scanning microscopy. Biomed. Opt. Express. 2015;6:1867–1875. doi: 10.1364/BOE.6.001867.
5. Gong H., Xu D., Yuan J., Li X., Guo C., Peng J., Li Y., Schwarz L.A., Li A., Hu B., et al. High-throughput dual-colour precision imaging for brain-wide connectome with cytoarchitectonic landmarks at the cellular level. Nat. Commun. 2016;7. doi: 10.1038/ncomms12142.
6. Zhong Q., Li A., Jin R., Zhang D., Li X., Jia X., Ding Z., Luo P., Zhou C., Jiang C., et al. High-definition imaging using line-illumination modulation microscopy. Nat. Methods. 2021;18:309–315. doi: 10.1038/s41592-021-01074-x.
7. Wang Y., Li Q., Liu L., Zhou Z., Ruan Z., Kong L., Li Y., Wang Y., Zhong N., Chai R., et al. TeraVR empowers precise reconstruction of complete 3-D neuronal morphology in the whole brain. Nat. Commun. 2019;10:3474. doi: 10.1038/s41467-019-11443-y.
8. Zhou H., Cao T., Liu T., Liu S., Chen L., Chen Y., Huang Q., Ye W., Zeng S., Quan T. Super-resolution segmentation network for reconstruction of packed neurites. Neuroinformatics. 2022;20:1155–1167. doi: 10.1007/s12021-022-09594-3.
9. Jefferis G.S.X.E., Livet J. Sparse and combinatorial neuron labelling. Curr. Opin. Neurobiol. 2012;22:101–110. doi: 10.1016/j.conb.2011.09.010.
10. Lin R., Wang R., Yuan J., Feng Q., Zhou Y., Zeng S., Ren M., Jiang S., Ni H., Zhou C., et al. Cell-type-specific and projection-specific brain-wide reconstruction of single neurons. Nat. Methods. 2018;15:1033–1036. doi: 10.1038/s41592-018-0184-y.
11. Jiang S., Pan Z., Feng Z., Guan Y., Ren M., Ding Z., Chen S., Gong H., Luo Q., Li A. Skeleton optimization of neuronal morphology based on three-dimensional shape restrictions. BMC Bioinf. 2020;21:395. doi: 10.1186/s12859-020-03714-z.
12. Li Q., Shen L. 3D neuron reconstruction in tangled neuronal image with deep networks. IEEE Trans. Med. Imag. 2020;39:425–435. doi: 10.1109/TMI.2019.2926568.
13. Quan T., Zhou H., Li J., Li S., Li A., Li Y., Lv X., Luo Q., Gong H., Zeng S. NeuroGPS-Tree: automatic reconstruction of large-scale neuronal populations with dense neurites. Nat. Methods. 2016;13:51–54. doi: 10.1038/nmeth.3662.
14. Li S., Zhou H., Quan T., Li J., Li Y., Li A., Luo Q., Gong H., Zeng S. SparseTracer: the reconstruction of discontinuous neuronal morphology in noisy images. Neuroinformatics. 2017;15:133–149. doi: 10.1007/s12021-016-9317-6.
15. Li S., Quan T., Zhou H., Yin F., Li A., Fu L., Luo Q., Gong H., Zeng S. Identifying weak signals in inhomogeneous neuronal images for large-scale tracing of sparsely distributed neurites. Neuroinformatics. 2019;17:497–514. doi: 10.1007/s12021-018-9414-9.
16. Xiao H., Peng H. APP2: automatic tracing of 3D neuron morphology based on hierarchical pruning of a gray-weighted image distance-tree. Bioinformatics. 2013;29:1448–1454. doi: 10.1093/bioinformatics/btt170.
17. Winnubst J., Bas E., Ferreira T.A., Wu Z., Economo M.N., Edson P., Arthur B.J., Bruns C., Rokicki K., Schauder D., et al. Reconstruction of 1,000 projection neurons reveals new cell types and organization of long-range connectivity in the mouse brain. Cell. 2019;179:268–281.e13. doi: 10.1016/j.cell.2019.07.042.
18. Li S., Quan T., Zhou H., Li A., Fu L., Gong H., Luo Q., Zeng S. Review of advances and prospects in neuron reconstruction. Chin. Sci. Bull. 2019;64:532–545.
19. Luo Q. Visible brain-wide networks at single-neuron resolution with micro-optical sectioning tomography. In: 2014 Conference on Lasers and Electro-Optics (CLEO) - Laser Science to Photonic Applications. 2014.
20. Gao L., Liu S., Gou L., Hu Y., Liu Y., Deng L., Ma D., Wang H., Yang Q., Chen Z., et al. Single-neuron projectome of mouse prefrontal cortex. Nat. Neurosci. 2022;25:515–529. doi: 10.1038/s41593-022-01041-5.
21. Tai Y., Yang J., Liu X. Image super-resolution via deep recursive residual network. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2017:2790–2798.
22. Freeman W.T., Jones T.R., Pasztor E.C. Example-based super-resolution. IEEE Comput. Graph. Appl. 2002;22:56–65.
23. Dong C., Loy C.C., He K., Tang X. Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016;38:295–307. doi: 10.1109/TPAMI.2015.2439281.
24. Kim J., Lee J.K., Lee K.M. Accurate image super-resolution using very deep convolutional networks. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016:1646–1654.
25. Huang G., Liu Z., Van Der Maaten L., Weinberger K.Q. Densely connected convolutional networks. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2017:2261–2269.
26. Tong T., Li G., Liu X., Gao Q. Image super-resolution using dense skip connections. In: 2017 IEEE International Conference on Computer Vision (ICCV). 2017:4809–4817.
27. Komura D., Onoyama T., Shinbo K., Odaka H., Hayakawa M., Ochi M., Herdiantoputri R.R., Endo H., Katoh H., Ikeda T., et al. Restaining-based annotation for cancer histology segmentation to overcome annotation-related limitations among pathologists. Patterns. 2023;4. doi: 10.1016/j.patter.2023.100688.
28. Davaasuren D., Chen Y., Jaafar L., Marshall R., Dunham A.L., Anderson C.T., Wang J.Z. Automated 3D segmentation of guard cells enables volumetric analysis of stomatal biomechanics. Patterns. 2022;3. doi: 10.1016/j.patter.2022.100627.
29. Huang Q., Chen Y., Liu S., Xu C., Cao T., Xu Y., Wang X., Rao G., Li A., Zeng S., Quan T. Weakly supervised learning of 3D deep network for neuron reconstruction. Front. Neuroanat. 2020;14:38. doi: 10.3389/fnana.2020.00038.
30. Tan Y., Liu M., Chen W., Wang X., Peng H., Wang Y. DeepBranch: deep neural networks for branch point detection in biomedical images. IEEE Trans. Med. Imag. 2020;39:1195–1205. doi: 10.1109/TMI.2019.2945980.
31. Li R., Zeng T., Peng H., Ji S. Deep learning segmentation of optical microscopy images improves 3-D neuron reconstruction. IEEE Trans. Med. Imag. 2017;36:1533–1541. doi: 10.1109/TMI.2017.2679713.
32. Liu Y., Zhong Y., Zhao X., Liu L., Ding L., Peng H. Tracing weak neuron fibers. Bioinformatics. 2023;39. doi: 10.1093/bioinformatics/btac816.
33. Zhu T., Yao G., Hu D., Xie C., Li P., Yang X., Gong H., Luo Q., Li A. Data-driven morphological feature perception of single neuron with graph neural network. IEEE Trans. Med. Imag. 2023;42:3069–3079. doi: 10.1109/TMI.2023.3275209.
34. Peng H., Hawrylycz M., Roskams J., Hill S., Spruston N., Meijering E., Ascoli G.A. BigNeuron: large-scale 3D neuron reconstruction from optical microscopy images. Neuron. 2015;87:252–256. doi: 10.1016/j.neuron.2015.06.036.
35. Brown K.M., Barrionuevo G., Canty A.J., De Paola V., Hirsch J.A., Jefferis G.S.X.E., Lu J., Snippe M., Sugihara I., Ascoli G.A. The DIADEM data sets: representative light microscopy images of neuronal morphology to advance automation of digital reconstructions. Neuroinformatics. 2011;9:143–157. doi: 10.1007/s12021-010-9095-5.
36. Feng L., Zhao T., Kim J. neuTube 1.0: a new design for efficient neuron reconstruction software based on the SWC format. eNeuro. 2015;2. doi: 10.1523/ENEURO.0049-14.2014.
37. Weigert M., Schmidt U., Boothe T., Müller A., Dibrov A., Jain A., Wilhelm B., Schmidt D., Broaddus C., Culley S., et al. Content-aware image restoration: pushing the limits of fluorescence microscopy. Nat. Methods. 2018;15:1090–1097. doi: 10.1038/s41592-018-0216-7.
38. Plenge E., Poot D.H.J., Niessen W.J., Meijering E. Super-resolution reconstruction using cross-scale self-similarity in multi-slice MRI. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI). 2013:123–130.
39. Ronneberger O., Fischer P., Brox T. U-Net: convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI). 2015:234–241.
40. Pietzsch T., Saalfeld S., Preibisch S., Tomancak P. BigDataViewer: visualization and processing for large image data sets. Nat. Methods. 2015;12:481–483. doi: 10.1038/nmeth.3392.
41. Diel E.E., Lichtman J.W., Richardson D.S. Tutorial: avoiding and correcting sample-induced spherical aberration artifacts in 3D fluorescence microscopy. Nat. Protoc. 2020;15:2773–2784. doi: 10.1038/s41596-020-0360-2.
42. He K., Zhang X., Ren S., Sun J. Identity mappings in deep residual networks. Preprint at arXiv. 2016. doi: 10.48550/arXiv.1603.05027.
43. Rodriguez A., Ehlenberger D.B., Hof P.R., Wearne S.L. Three-dimensional neuron tracing by voxel scooping. J. Neurosci. Methods. 2009;184:169–175. doi: 10.1016/j.jneumeth.2009.07.021.
44. Chen H., Dou Q., Yu L., Qin J., Heng P.-A. VoxResNet: deep voxelwise residual networks for brain segmentation from 3D MR images. Neuroimage. 2018;170:446–455. doi: 10.1016/j.neuroimage.2017.04.041.
45. Isensee F., Jaeger P.F., Kohl S.A.A., Petersen J., Maier-Hein K.H. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods. 2021;18:203–211. doi: 10.1038/s41592-020-01008-z.
46. Srinidhi C.L., Ciga O., Martel A.L. Deep neural network models for computational histopathology: a survey. Med. Image Anal. 2021;67. doi: 10.1016/j.media.2020.101813.
47. Wu C. Hierarchically annotated dataset drives tangled filament recognition in digital neuron reconstruction. Zenodo. 2024.
48. Li Y., Gong H., Yang X., Yuan J., Jiang T., Li X., Sun Q., Zhu D., Wang Z., Luo Q., Li A. TDat: an efficient platform for processing petabyte-scale whole-brain volumetric images. Front. Neural Circ. 2017;11:51. doi: 10.3389/fncir.2017.00051.
49. Zhou H., Li S., Li A., Huang Q., Xiong F., Li N., Han J., Kang H., Chen Y., Li Y., et al. GTree: an open-source tool for dense reconstruction of brain-wide neuronal population. Neuroinformatics. 2021;19:305–317. doi: 10.1007/s12021-020-09484-6.
50. Kingma D.P., Ba J. Adam: a method for stochastic optimization. Preprint at arXiv. 2014. doi: 10.48550/arXiv.1412.6980.
51. Abadi M., Barham P., Chen J., Chen Z., Davis A., Dean J., Devin M., Ghemawat S., Irving G., Isard M., et al. TensorFlow: a system for large-scale machine learning. Preprint at arXiv. 2016. doi: 10.48550/arXiv.1605.08695.
52. Wang Z., Bovik A.C., Lu L. Why is image quality assessment so difficult? In: 2002 IEEE International Conference on Acoustics, Speech, and Signal Processing. 2002:IV-3313–IV-3316.
53. Powers D.M.W. Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation. Preprint at arXiv. 2011. doi: 10.48550/arXiv.2010.16061.
