STAR Protocols. 2024 May 11;5(2):103063. doi: 10.1016/j.xpro.2024.103063

Protocol for neuron tracing and analysis of dendritic structures from noisy microscopy images using Neuronalyzer

Yael Iosilevskii 1,3, Omer Yuval 2,3,5,, Tom Shemesh 1,4,∗∗, Benjamin Podbilewicz 1,4,6,∗∗∗
PMCID: PMC11101970  PMID: 38735040

Summary

Studying neuronal morphology requires imaging and accurate extraction of tree-like shapes from noisy microscopy data. Here, we present a protocol for automatic reconstruction of branched structures from microscopy images using Neuronalyzer software. We describe the steps for loading neuron images, denoising, segmentation, and tracing. We then detail feature extraction (e.g., branch curvature and junction angles), data analysis, and plotting. The software allows batch processing and statistical comparisons across datasets. Neuronalyzer is scale-free and handles noise and variation across images.

For complete details on the use and execution of this protocol, please refer to Yuval et al.1

Subject areas: Bioinformatics, Cell Biology, Microscopy, Neuroscience, Computer sciences

Graphical abstract


Highlights

  • Steps for extraction of neuronal features from noisy microscopy images

  • Automatic analysis of branch length, curvature, junction angles, and density

  • Instructions for analysis and plotting


Publisher’s note: Undertaking any experimental protocol requires adherence to local institutional guidelines for laboratory safety and ethics.



Before you begin

Quantitative analysis of neuronal structures can be time-consuming and inconsistent when performed manually. Here we describe the use of custom-developed software for morphological analysis of neurons from microscopy images. Confocal images are recommended, captured either as a single z-section containing the neuron or as a two-dimensional projection of a z-series. This protocol covers the use of the Neuronalyzer software, an automated tool for reconstructing neuronal morphology from noisy image data.1 The tool was designed to handle variation across neuron images and various types of noise, including auto-fluorescent gut granules, background noise, variable lighting and contrast, and non-uniform signal along neuronal branches. However, the method is general and is not limited to specific types of noise: users may train the tool on their own image dataset to account for other noise sources. First, the image is denoised and segmented using a deep convolutional neural network. This is followed by active-contour tracing to recover the geometry and connectivity of the neuron. The software was originally developed for reconstructing the geometry of the PVD neuron in the nematode Caenorhabditis elegans, which follows a stereotypical branching pattern.2,3 PVD-specific analysis of branch classes is included, as well as generic features such as the count, length, and curvature of branches, morphological analysis of neuronal junctions, and statistical comparison between datasets. A protocol for imaging the PVD neuron has been published previously4 and serves as the basis for data acquisition as used here. The software described in this protocol may be similarly applied to other types of neurons at different scales and is suitable for high-throughput analysis. Since the software works on 2D projections of neurons, it is most suitable for cylindrical or approximately planar neurons with minimal overlap between branches in the projection.

Image acquisition of the PVD neuron

Timing: 0.5–1 h

  • 1.

    A protocol for imaging the PVD neuron in C. elegans is provided by Wang and colleagues.4 It covers the steps necessary to acquire the images prior to analysis by the software.

  • 2.

    Use a 40x or higher magnification objective, with NA 1.4 or higher, for obtaining your images.

CRITICAL: Each image must contain a single neuron. For PVD, avoid images that are not strictly lateral (i.e., where the worm is twisted toward the dorsal or ventral side).

Image pre-processing

Timing: 0.5–1 h

Note: Input images should be grayscale, in .tif or .jpg format. The images must be two-dimensional; if a neuron structure was captured as serial z-axis sections (a ‘z-series’), apply a maximum-intensity z-projection to obtain a 2D image (as in Wang et al.4).
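If you prefer to perform the projection in MATLAB rather than Fiji, the following minimal sketch (file names are placeholders) produces a maximum-intensity projection from a multi-page .tif z-stack and saves it as an 8-bit grayscale image; it assumes the Image Processing Toolbox listed in the key resources table.

% Minimal sketch: maximum-intensity z-projection of a multi-page .tif stack.
info  = imfinfo('stack.tif');                 % one struct per z-section
stack = zeros(info(1).Height, info(1).Width, numel(info));
for k = 1:numel(info)
    stack(:,:,k) = imread('stack.tif', k);    % read the k-th z-section
end
mip = max(stack, [], 3);                      % maximum over the z dimension
imwrite(im2uint8(mat2gray(mip)), 'projection.tif');  % rescaled to [0,255], 8-bit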

  • 3.
    Merge x-y regions.
Note: If the complete neuron structure you wish to analyze was obtained as multiple sections (for example, PVD images of adults obtained using a 40x objective4), these must be merged into a single image by stitching adjacent sections together prior to analysis in the software.
    • a.
For assembling the complete PVD image, we recommend the Fiji (ImageJ, NIH) Pairwise Stitching plugin,5 which stitches two adjacent sections into a single image. If more than two positions require combining, we recommend saving all positions, sequentially named, in a single folder and using the Fiji Grid/Collection Stitching plugin5 with Type: Sequential images. This plugin automatically performs the pairwise stitching described above on all positions to produce a single merged image.
    • b.
      Alternative approaches:
      • i.
        Manually position the images using the Fiji (NIH) Pairwise Stitching5 plugin by removing the ‘compute overlap’ option and entering the X,Y displacement between adjacent positions in pixels.
      • ii.
        Manually position the images using Photoshop (Adobe Inc.) by dragging each image. Add a black background layer and save the complete image in an 8-bit format. Note that if the original images are resized in the process, the scaling (pixel/μm) will change.
CRITICAL: It is crucial that the alignment of the images is as accurate as possible, to avoid modifying the neuronal structure (e.g., shortening branches, adding gaps within branches, making a branch discontinuous, or distorting a junction). After stitching the various positions, make sure the scaling did not change. We recommend opening the images in Fiji to confirm they maintain the original scale.
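As an optional cross-check from MATLAB (a sketch; it assumes resolution tags were written into the .tif files, and the file names are placeholders), the pixel resolution recorded in the image metadata can be compared before and after stitching:

a = imfinfo('section_01.tif');   % one of the original sections
b = imfinfo('stitched.tif');     % the stitched result
fprintf('original: %g px/unit, stitched: %g px/unit\n', ...
    a(1).XResolution, b(1).XResolution);      % the two values should match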

Download the most recent version of Neuronalyzer

Timing: 5 min

  • 4.

    Go to https://github.com/Omer1Yuval1/Neuronalyzer.

  • 5.

    Press the “Code” button and choose “Download ZIP”.

  • 6.

Unzip the file by right-clicking it and choosing “Extract here”. This creates a folder named “Neuronalyzer-master”. You may rename this folder.

Run the software

Timing: 1 min

  • 7.

Open a new MATLAB session and change the current directory to the folder created in step 6. For example, “C:\Program Files\Neuronalyzer-master”.

  • 8.

    From within MATLAB, run the “index.m” file to start the Neuronalyzer user interface.
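For example, steps 7 and 8 from the MATLAB command window (the path is illustrative):

cd('C:\Program Files\Neuronalyzer-master');   % folder created in step 6
index                                         % runs index.m and opens the GUI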

Key resources table

REAGENT or RESOURCE SOURCE IDENTIFIER
Experimental models: Organisms/strains

PVD marker strain Yuval et al.,1 Wang et al.,4 Li et al.,6 Kravtsov et al.,7 Smith et al.3 https://cgc.umn.edu/strain/NC1687

Software and algorithms

Neuronalyzer Yuval et al.1 https://github.com/Omer1Yuval1/Neuronalyzer/releases/tag/v1.1
https://doi.org/10.5281/zenodo.10938078
Fiji (ImageJ) Schindelin et al.8 https://imagej.net/software/fiji/downloads
Fiji (ImageJ) image stitching plugin Preibisch et al.5 https://imagej.net/plugins/image-stitching
MATLAB R2020b or higher (latest version tested: R2023a) MathWorks https://www.mathworks.com/products/matlab.html
MATLAB Toolboxes: Computer Vision, Navigation, Robotics System, Image Processing, Curve Fitting, Signal Processing, Statistics and Machine Learning, Phased Array System, Parallel Computing MATLAB, MathWorks N/A
Adobe Photoshop CS5 or higher Adobe, Inc. https://www.adobe.com/products/photoshop.html

Other

Confocal microscopy images of the neuron of interest Wang et al.4 N/A

Materials and equipment

Alternatives: Multiple C. elegans strains exist with a PVD-specific marker. Images in Yuval et al.1 use hmnIs133[ser2prom3::Kaede];7 however, other strains may be used similarly, such as those containing wyIs592[ser2prom3::myr-GFP]4,6 or wdIs52[pF49H12.4::GFP].3

Step-by-step method details

Data loading and image processing

Timing: 1–5 min per image

This step covers loading images into Neuronalyzer and creating a project file that contains your data, in preparation for image segmentation and tracing (see Figure 1). First, a set of neuron images is loaded into the software. Then, metadata such as genotype, age, and experimental conditions can be added. Waiting time varies depending on the amount of data and available computational resources. The times below are approximations based on images of the PVD neuron.1

  • 1.
    Load an image or a set of images into the software’s user interface (1 min). Images must be in .tif or .jpg format.
    Note: A sample image is included in the Neuronalyzer folder under Inputs → example images.
    • a.
      Click the “Load data” button.
      • i.
        Select one or more images to analyze.
      • ii.
        Repeat step (a) to add more images.
    • b.
      Use the “Project” menu to switch between images.
  • 2.
    Setting dataset parameters (scale factor, strain name etc.) (1 min per image).
    • a.
      Use the tables at the bottom-right corner to add meta-data.
Note: This must be added for each image individually. These parameters can later be used to split the image dataset into subsets for statistical analysis (e.g., comparing different genotypes). In each table, the first (leftmost) column contains parameter names, the second column contains the values, and the third column contains the units.
    • b.
      Set the scaling factor to the correct value for your image (this parameter is set to 1 by default).
      Note: The first table contains experimental information such as scale, treatment and genotype. The second table contains analysis information for reproducibility. This includes information about the software version used for analysis, as well as the date and the name of the person that performed the analysis. You may add your own custom parameters by filling empty rows in the table.
CRITICAL: Setting the correct pixel size (scaling factor) is necessary for obtaining correctly scaled results in the analysis module. This must be done for each image individually.
  • 3.

    Click the “Save project” button to save your work. Choose a directory and a name for your project file and click “Save”.

Note: This exports your work into a Neuronalyzer project file, a .mat file that contains all image data along with their metadata and analysis (if available). Save your work frequently to make sure no information is lost. To learn more about the structure of the data stored in a project file, see step 16, “Understanding and accessing the data structure”.

Pause point: To continue your work at a later time, use the “Load project” button to load a previously saved project file.

Figure 1.


Neuronalyzer software GUI and data upload

Corresponding to Image Processing steps 1‒5.

(A) Upload raw image files (closed arrow) or a previously saved project file (blunt arrow).

(B) Once data is loaded, you may navigate between images under the Project menu. Modify image information, including scaling, by double-clicking the desired boxes (arrowhead; an example is highlighted). If required, modify image brightness by altering the pixel intensity limits and pressing ‘Apply changes’ (gold rectangles). Save the project (blunt arrow) as often as necessary to preserve additional information, including denoising, tracing, etc.

Image denoising and segmentation

Timing: 1–10 min per image

In this step, a pre-trained convolutional neural network (CNN) is used to denoise and segment the neuron images, classifying pixels as either neuron or non-neuron, with the latter corresponding to the background and non-neuronal features such as auto-fluorescent elements (e.g., C. elegans gut granules). For a detailed explanation of this process, please refer to Yuval et al.1

  • 4.

    Click the “Segment image” button.

  • 5.

    Inspect each segmented image by selecting the “segmented image” option from the “Reconstructions” menu.

Note: The segmentation step includes the removal of the soma, typically the brightest object in the image (when using fluorescent cell markers). Background, soma, and any type of noise should be ignored, leaving only the neuronal branches (see Figure 2; for more detail, see Yuval et al.1). Other useful visualization options are the Binary Image and Skeleton views, which are derived from the segmented image (Reconstructions → Binary Image and Reconstructions → Skeleton). See step 9 (feature extraction) for more information on data visualization and validation.

Figure 2.


Image segmentation and optional manual modifications

(A) Segment image (arrow). If the image has been previously segmented, click “Overwrite” to replace the existing binary image.

(B) Viewing the segmentation result using Reconstructions → Binary Image → Binary image (RGB). Zoom in using the mouse scroll wheel or the magnifying glass icon to inspect the segmentation result. Unsegmented pixels in the binary image are shown in magenta in this reconstruction, segmented pixels are shown in cyan. Add missing pixels by pressing the Drawing mode radio button (closed arrow), setting the desired marker size (2 × 2 in this example, see blunt arrow), left-clicking once on the image and drawing to fill the required area (white arrowhead). Pixels may be similarly removed by right-clicking the image (see text for further details).

(C) Inset of panel A, visualizing the segmentation result under Reconstructions → Binary Image → CNN, with white circle around an example area where pixels were modified (panel D).

(D) Inset area as shown in panel C, following modification. Red: original; blue: manually added pixels; yellow: manually removed pixels. Note this correction is minor and will have little effect on the entire traced structure.

Neuron tracing

Timing: 5–15 min per image

Once the images have been segmented, they are ready to be traced. In this step, the shape of the neuron is abstracted into a network of small rectangular elements that form neuronal segments, connected by circular junctions (see Figure 3). For a detailed explanation of this process, please refer to Yuval et al.1

  • 6.

    Click the “Trace neuron” button to initiate automatic tracing of all images in the project. For tracing a single image, tick the ‘Selected project only’ box prior to tracing.

  • 7.

    Once done, the result can be viewed via the reconstruction menu (Reconstructions → Trace).

  • 8.

    Save your work (see step 3).

Figure 3.


Tracing and branch classification

(A) Following image segmentation, tracing is automatically performed on all images in the project file once the ‘Trace neuron’ button is pressed (blunt arrow). Once tracing has finished, click ‘Extract features’ to obtain additional information necessary for further analysis (open arrow). Individual images may be traced by ticking the ‘Selected project only’ box prior to tracing (arrowhead).

(B) PVD-specific axes approximate the midline (red), third branching order (blue), and outer cuticle of the animal (yellow); these are found under Reconstructions → Axes → Neuron Axes. Axes may be manipulated by selecting the Annotation mode radio button (closed arrow), setting the number of movable points along the axes (arrowhead), pressing ‘Apply changes’ (open arrow), and dragging individual points as required. Changes are saved automatically; however, the ‘Extract features’ button should be pressed to update feature extraction (see text for details).

Feature extraction

Timing: 1–5 min per image

In this step, the shape of the neuron is analyzed, and key morphological features are extracted. These include branch count, length, and curvature, as well as angles of neuronal junctions. These features (and their combinations) can be used to quantify neuronal complexity, and identify deviations from normal, wild-type morphology. This allows the user to perform quantitative analysis of the neuron’s shape, as well as statistical analysis and comparisons across datasets (i.e., sets of projects separated by a selected feature).

  • 9.

    Click the “Extract features” button to initiate automatic feature extraction.

Note: Following the neuron tracing and feature extraction steps, additional feature visualizations become available in the “Reconstructions” menu. These include features such as junction angles and branch curvature, as well as PVD-specific features such as axes mapping and segment classification. These can be used as qualitative validation to ensure the processing of the images produced meaningful results. Examples of available feature overlays are traced neuron segments (Reconstructions → Segments → Segmentation), branch curvature (Reconstructions → Curvature), and junction positions (Reconstructions → Vertices → Vertex positions).

CRITICAL: For C. elegans PVD images, we recommend validating the mapping of neuron axes (Reconstructions → Axes → Neuron Axes) and segments (Reconstructions → Segments → Segmentation). If these seem incorrect or outside your own error tolerance, see “PVD-specific validation” (troubleshooting, problem 4) to learn how to improve them.
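For intuition about the curvature feature, the sketch below (illustrative only, not Neuronalyzer’s own implementation) estimates local curvature along a traced branch as the inverse radius of the circle passing through each triplet of consecutive points:

function k = branch_curvature(x, y)
% Illustrative curvature estimate for traced coordinates x, y (e.g., in um);
% a sketch, not Neuronalyzer's code.
n = numel(x);
k = zeros(n-2, 1);
for i = 2:n-1
    a = hypot(x(i)-x(i-1),   y(i)-y(i-1));    % side lengths of the triangle
    b = hypot(x(i+1)-x(i),   y(i+1)-y(i));    % through three consecutive
    c = hypot(x(i+1)-x(i-1), y(i+1)-y(i-1));  % traced points
    s = (a + b + c)/2;                        % semi-perimeter
    A = sqrt(max(s*(s-a)*(s-b)*(s-c), 0));    % triangle area (Heron's formula)
    k(i-1) = 4*A/(a*b*c);                     % curvature = 1/circumradius
end
end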

Quantitative analysis

Timing: <1 min per analysis plot

In addition to visual validation, the software includes different plots of the extracted features (see Figure 4). These can be applied either to a single image or to an entire dataset.

  • 10.
    To plot a feature for a specific image:
    • a.
      Select a project from the “Project” menu.
    • b.
      Tick the “Selected project only” checkbox in the control panel at the bottom.
    • c.
      Select an option from the “Plots” menu (as a simple example, select Length → Total length).
  • 11.

    To plot the same feature for a different image, select a different image from the “Project” menu. The information presented applies to the individual image (“Project”) chosen.

  • 12.
    To plot the same feature for the entire dataset:
    • a.
      Untick the “Selected project only” checkbox.
    • b.
      Click “Apply changes”. This applies to all the images loaded in the software.
  • 13.
    To change the normalization method of a plotted feature, for example division by total neuron length:
    • a.
      Select an option from the “Normalization” dropdown menu.
    • b.
      Click “Apply changes”.
  • 14.

    To change to a different plot, choose a different option from the “Plots” menu. Examples of analysis and the overall pipeline are shown in Figure 5.

  • 15.
    To compare between different characteristics (i.e., subsets of images, for example two genotypes):
    • a.
      Select a feature from the “Group by” menu.
    • b.
      Press “Apply changes”. This enables the full dataset to be divided into subsets according to the metadata attached to each image.
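If you later need a custom comparison outside the GUI, a generic sketch (hypothetical values; ranksum requires the Statistics and Machine Learning Toolbox) of comparing one extracted feature between two subsets might look like:

wt  = [1520 1610 1480 1555];   % hypothetical total lengths (um), genotype 1
mut = [1210 1340 1295 1270];   % hypothetical total lengths (um), genotype 2
p = ranksum(wt, mut);          % Wilcoxon rank-sum test between the two groups
fprintf('p = %.3g\n', p);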

Figure 4.


Analysis of tracing and additional features

(A) The Plots submenu contains various analysis parameters, including branch length, distribution of vertices, classification of traced points, and more. Each plot may be normalized by features such as midline length, useful for animals of different sizes (closed arrow); individual points may be displayed (open arrow); and branch classes may be merged together (arrowhead). Press ‘Apply changes’ to update any change to the plot.

(B) Reconstructions based on data analysis are accessible from the Reconstructions menu; for example, branch classes of the PVD neuron are designated by clustering traced points by their angle and relative distance from the midline (see Yuval et al.1).

Figure 5.


Layered structure of the software pipeline, for representative patches of the PVD dendrite

(A) Raw image.

(B) CNN binary image after validation; red: original; blue: manually added pixels.

(C) Skeleton representation.

(D) Automatic tracing result.

(E and F) Example features extracted from the traced neuron, including curvature (E) and segment classification (F) (see Yuval et al.1).

Advanced options and pipelines

Timing: 5 min

In addition to utilizing the main pipeline for analyzing your data, you may access the data structure directly from the MATLAB Workspace if you wish to perform additional modifications or analyses. You can also export tabular data to .csv files.

  • 16.
    Understanding and accessing the data structure.
    • a.
      Load a project by clicking “Load project” and choosing a project file.
    • b.
      Double-click the “Project” variable in MATLAB Workspace.
      Note: This opens the structure variable containing two fields: “Data” and “GUI_Handles”. The “Data” field is itself a structure, and each row in it represents a single image. The first four fields (“Segments”, “Vertices”, “Points” and “Axes”) are empty by default and will contain information once the image has been traced. The “Info” field contains meta-data specified by the user (see step 2), the raw image data, and images generated during the analysis process (e.g., classified image).
    • c.
      To access neuron tracing data information, double-click the desired field (“Segments”, “Vertices”, “Points” and “Axes”).
    • d.
      To access image processing parameters, double-click the “Parameters” field.
    • e.
      To access the meta-data associated with each image (see step 2) double-click the “Info” field and then double-click “Experiment” or “Analysis”.
    • f.
      To access the raw image data and analysis images, double-click the “Info” field and then double-click “Files”.
Note: When saving a project file, only the “Data” field is saved into the file; the “GUI_Handles” field, which contains user-interface parameters, is not.
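The same fields can also be inspected programmatically; a minimal sketch (the file name is a placeholder, and the stored variable is assumed to be named “Data”, matching the field described above):

S = load('my_project.mat');      % load a saved project file
disp(fieldnames(S));             % confirm the name of the stored variable
Data = S.Data;                   % assumed name; one row per image
fprintf('%d image(s) in project\n', numel(Data));
disp(fieldnames(Data));          % e.g., Segments, Vertices, Points, Axes, Info
disp(Data(1).Info.Experiment);   % metadata of the first image (see step 2)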
  • 17.
    Exporting tabular project data to a .csv file:
    Note: The following is an example code that can be run from the command line to export the table of traced elements to a .csv file. The same can be done similarly for other tabular data.
    • a.
      Copy the following code into the command window.
writetable(struct2table(Project.Data(i).Points),['Project_',num2str(i),'_',Project.Data(i).Info.Experiment(1).Identifier,'.csv'],'Delimiter',',')
    • b.
Replace ‘i’ with the desired image index in the project dataset (e.g., 1 for the first image).
    • c.
      Press enter.
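To export the tables for every image in the project at once, the single-image command above can be wrapped in a loop (a sketch using the same fields):

for i = 1:numel(Project.Data)    % loop over all images in the project
    writetable(struct2table(Project.Data(i).Points), ...
        ['Project_', num2str(i), '_', ...
         Project.Data(i).Info.Experiment(1).Identifier, '.csv'], ...
        'Delimiter', ',');
end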

Training a CNN for image segmentation

Timing: 5 min setup; run time varies depending on the size of the training set and the available hardware, and can be significantly reduced by using a GPU

If the segmentation accuracy of the available CNNs is not satisfactory, you may train a new CNN using your own image dataset.

  • 18.
    Assemble and save a set of annotated images to use for training the CNN.
CRITICAL: Select a subset of images that captures the variation in your dataset well, in terms of neuronal morphology, contrast, sharpness/blurriness, noise, and non-neuronal features (e.g., gut granules in C. elegans). This is important for the learning algorithm to generalize well to images not included in the training set (for more details, see Yuval et al.1).
    • a.
      Load the selected images into the GUI and apply an existing pre-trained CNN to the chosen images (steps 1–5). This is done to obtain an initial estimation for the segmentation.
    • b.
      Complete the annotation manually (see manual annotation section below).
    • c.
      Save the annotated training set into a single project file.
Note: During training, each image is divided into smaller, partially overlapping patches that are used as training samples. This enables the software to generate a large number of distinct training examples from a single image.
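To illustrate the patch idea (illustrative only; the patch size and stride are assumptions, not Neuronalyzer’s actual training parameters):

I = imread('example.tif');       % a training image (placeholder name)
p = 64;                          % assumed patch size, in pixels
stride = 32;                     % 50% overlap between neighboring patches
patches = {};
for r = 1:stride:(size(I,1) - p + 1)
    for c = 1:stride:(size(I,2) - p + 1)
        patches{end+1} = I(r:r+p-1, c:c+p-1); %#ok<AGROW>
    end
end
fprintf('%d partially overlapping patches from one image\n', numel(patches));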
  • 19.

    Load the project file of your annotated training set.

  • 20.

    Select from the top menus Advanced → Train a CNN for segmentation.

  • 21.

    Set the training parameters.

Note: When setting training parameters you should consider segmentation performance, training duration and available hardware (CPU or GPU). Hover the cursor over the various parameters for more information.

  • 22.

    Click “Start training”.

Note: The training process is fully automated; however, its duration varies significantly based on the input data (e.g., number and size of images), chosen parameters (e.g., number of iterations and number of training samples taken from each image), and available hardware. A GPU (requires the Parallel Computing Toolbox) can be used to reduce training time but is not necessary. Depending on these factors, training may range from several minutes to several days.

  • 23.

    Once training is completed, select the newly trained CNN by clicking Advanced → Choose a pretrained CNN for segmentation.

  • 24.

    Click “Segment image” to apply it to the selected image.

CRITICAL: Inspect the performance of the new CNN on a new image (one the CNN has never encountered) to determine whether further modifications are required.

Note: When applying a CNN to images that have previously been segmented, click “Overwrite” to replace the existing segmentation result; otherwise, press “Keep existing binary images”. If you are unsure how many training images to include, begin with a small training set of 3 images and adjust the training parameters to optimize the resulting CNN’s performance. If this still does not produce satisfactory results, add more images to the training set.

Expected outcomes

The result of the pipeline described here is the denoising, tracing, and reconstruction of neurons in image datasets. This enables batch processing of multiple neuron images and the comparison of morphological features across datasets. Different genotypes, ages, and treatments can be easily compared for their effect on neuronal morphology. In particular, the software automatically extracts features that are difficult to quantify manually, such as cumulative branch length, segment curvature, and angles of neuronal junctions (Figure 5). Following the application of this protocol, the resulting data structure contains the raw images, their metadata, the abstracted representation of the neurons, and their associated features. The neuron’s structure is stored as an undirected graph in which each node corresponds to a rectangular element with a width, height, and orientation. In addition to viewing the feature overlays for every image in the dataset, each feature can be quantitatively plotted and compared between chosen sub-groups (Figure 4A). Different normalization options are available for each plot, as well as statistical comparisons across datasets. Finally, the software offers several advanced options for custom data manipulation and for dealing with noise. In particular, users can train a custom CNN on their own images to better capture the features and noise characteristic of their data. This optional step integrates smoothly into the rest of the pipeline.

Limitations

While there is ongoing work to add support for 3D neuron analysis from z-stacks, the software currently accepts two-dimensional image data only. As such, it is suited to neurons whose morphology can be reconstructed from a 2D projection. The full neuronal morphology is obtained by simultaneously tracing all branches, using junctions as starting points.1 Since the tool was originally developed for the C. elegans PVD neuron, some features, such as branch classes, are unique to the stereotypical PVD morphology; others, such as branch length, curvature, and junction angles, are generic and can be applied to any neuron independent of its morphology.

Troubleshooting

Problem 1

The image contrast seems poor when it is loaded (related to step 1).

Potential solution

Check that the images are saved in 8-bit format, as other formats may display with poor contrast due to a different range of pixel values. Further adjustments can be made to the lower and upper limits of the pixel values (by default, the range is set to [0,255]). This can be done from the bottom panel in the raw image view (Reconstructions → Raw Image) by changing the desired value and clicking “Apply changes”.
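To check, and if necessary convert, an image from MATLAB (a sketch; the file names are placeholders):

info = imfinfo('img.tif');
fprintf('bit depth: %d\n', info(1).BitDepth);
if info(1).BitDepth ~= 8
    I = imread('img.tif');
    imwrite(im2uint8(mat2gray(I)), 'img_8bit.tif');  % rescaled to [0,255], 8-bit
end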

Problem 2

Image segmentation is incomplete or captures non-neuron objects (related to steps 4‒5).

See Figures 2B and 2C for examples of incomplete neuron segmentation.

Potential solution

If you are not satisfied with the result of the segmentation, there are two options to improve this.

  • For minor modifications, the segmented image can be manually edited. To do this, see manual annotation section below.

  • For major corrections spanning multiple images, we recommend training a CNN using annotated images from your own dataset. This can be done from the software’s user interface, and does not require previous knowledge about machine learning. The process involves annotating a separate subset of images, which are then fed into the CNN training algorithm. Once done, the new CNN is saved, and can be used to re-segment the dataset. For more information regarding manual correction and training your own CNN, see Advanced Pipeline section above.

Manual annotation

Following segmentation, you can review and modify the result to ensure the neuron structure is reliably captured.

  • Go to Reconstructions → Binary Image → Binary image + RGB. A cyan overlay of the pixels classified as neuron should faithfully capture the neuron structure and ignore background noise (compare against the original image in magenta).

  • To modify the classified image:

  • Go to Reconstructions → Binary Image → Binary image + RGB.

  • The “Default Mode” in the bottom control panel allows you to use the pan and zoom functions to view/examine specific image regions.

  • The software offers two ways to modify the image. “Annotation Mode” should be used to mark or unmark individual pixels as either neuron or non-neuron, while “Drawing Mode” should be used for larger regions.

  • “Drawing Mode” is a paintbrush/eraser-like tool:
    • Set the “Marker Size” to the desired size of the brush in pixels. ‘2’ corresponds to a 2×2 pixel square.
    • To mark pixels as neuron:
      • -
        Click the left mouse button somewhere within the image to initiate the brush tool. The cursor becomes cross-hair shaped.
      • -
        Using the cross-hair cursor, click and drag the mouse over the region to be marked. The line will appear in blue as you drag the mouse. Once the mouse button is released marked pixels will be shown in cyan.
    • To unmark pixels:
      • -
        Click the right mouse button somewhere within the image to initiate the eraser tool. The cursor becomes cross-hair shaped.
      • -
        Using the cross-hair cursor, click and drag the mouse over the region to be unmarked. The line will appear in blue as you drag the mouse. Once the mouse button is released any pixels which were previously marked will be removed (any cyan will disappear, leaving the magenta background).
  • The “Annotation Mode” modifies a single square at a time.
    • Set the “Marker Size” to indicate the size of the square brush (in pixels). For example, 2 corresponds to a 2×2 pixel square.
    • The mouse will not change shape. Click the left mouse button to mark pixels as neuron, or right-click to unmark pixels.
  • Remember to save your work.

  • It is recommended to check the skeleton view to ensure proper connectivity (Reconstructions → Skeleton → Skeleton (connected components)). Disconnected elements will appear in a different color.

Problem 3

The tracing result is poor upon visual inspection (related to step 7).

Potential solution

Make sure the scaling factor is correctly set in the metadata (see step 2).

  • Review segmentation and classification results and fill in gaps which interfere with tracing connectivity.

  • Consider training a new CNN optimized for your particular dataset.

Problem 4

Following the tracing of PVD images, branch classes are incorrectly classified (related to step 9): the wild-type young adult PVD typically has four branch classes, which the software marks in red, green, blue, and yellow (Figure 4B). When branches are mis-classified, multiple areas appear gray in the PVD order reconstruction view (Plots → Menorah Orders → Menorah Orders Classification).

Potential solution

It is likely that something has gone wrong with the detection of the neuron’s axes (Figure 3B). See PVD-specific axes validation.

PVD-specific axes validation

  • To validate the detection of the axes, go to Reconstructions → Axes → Neuron Axes.

  • The red line marks the centerline of the neuron, whereas the yellow lines mark its boundary. The blue lines are PVD-specific and mark the third branching order.

  • If the axes require refining, they can be modified manually: select “Annotation mode” in the Axes view, and click “Apply changes”.

  • Next, drag individual points to modify the axes. The default of 50 points can be changed in the bottom panel, and will be updated when “Apply changes” is clicked.

  • Changes are automatically updated even if the reconstruction view is changed.

To apply any changes to the axes with regard to the branching classes, click “Extract features” → “Keep existing axes”.

  • Once done, save your work.

Return to Plots → Menorah Orders → Menorah Orders Classification to confirm the branch orders are now correctly represented, and save the project using the “Save project” button.

Resource availability

Lead contact

Requests for resources and reagents should be directed to and will be fulfilled by the lead contact, Benjamin Podbilewicz (podbilew@technion.ac.il).

Technical contact

Technical questions should be directed to the technical contact, Omer Yuval (omer1yuval1@gmail.com).

Materials availability

This study did not generate new unique reagents.

Data and code availability

The code generated during this study is available at GitHub (April 2024):

https://github.com/Omer1Yuval1/Neuronalyzer/releases/tag/v1.1

https://doi.org/10.5281/zenodo.10938078.

Acknowledgments

The authors wish to thank members of the Shemesh and Podbilewicz labs for useful discussions regarding the software and its implementation. This work was supported by the Israel Science Foundation (grant 1575/21 to B.P. and 2751/20 to T.S.). B.P. thanks Diego Gonzalez-Halphen and Universidad Nacional Autonoma de Mexico, Direccion General de Asuntos del Personal Academico, Programa de Estancias de Investigacion (PREI), UNAM.

Author contributions

O.Y. and T.S. conceptualized the algorithm, O.Y. wrote the code, and Y.I. and B.P. provided image samples, validation, and user feedback. O.Y. and Y.I. wrote the protocol with input from all of the authors.

Declaration of interests

The authors declare no competing interests.

Contributor Information

Omer Yuval, Email: omer1yuval1@gmail.com.

Tom Shemesh, Email: tomsh@technion.ac.il.

Benjamin Podbilewicz, Email: podbilew@technion.ac.il.

References

  • 1. Yuval O., Iosilevskii Y., Meledin A., Podbilewicz B., Shemesh T. Neuron tracing and quantitative analyses of dendritic architecture reveal symmetrical three-way-junctions and phenotypes of git-1 in C. elegans. PLoS Comput. Biol. 2021;17. doi: 10.1371/journal.pcbi.1009185.
  • 2. Oren-Suissa M., Hall D.H., Treinin M., Shemer G., Podbilewicz B. The fusogen EFF-1 controls sculpting of mechanosensory dendrites. Science. 2010;328:1285–1288. doi: 10.1126/science.1189095.
  • 3. Smith C.J., Watson J.D., Spencer W.C., O'Brien T., Cha B., Albeg A., Treinin M., Miller D.M., 3rd. Time-lapse imaging and cell-specific expression profiling reveal dynamic branching and molecular determinants of a multi-dendritic nociceptor in C. elegans. Dev. Biol. 2010;345:18–33. doi: 10.1016/j.ydbio.2010.05.502.
  • 4. Wang X., Li T., Hu J., Feng Z., Zhong R., Nie W., Yang X., Zou Y. In vivo imaging of a PVD neuron in Caenorhabditis elegans. STAR Protoc. 2021;2. doi: 10.1016/j.xpro.2021.100309.
  • 5. Preibisch S., Saalfeld S., Tomancak P. Globally optimal stitching of tiled 3D microscopic image acquisitions. Bioinformatics. 2009;25:1463–1465. doi: 10.1093/bioinformatics/btp184.
  • 6. Li T., Wang X., Feng Z., Zou Y. Live imaging of postembryonic developmental processes in C. elegans. STAR Protoc. 2022;3. doi: 10.1016/j.xpro.2022.101336.
  • 7. Kravtsov V., Oren-Suissa M., Podbilewicz B. The fusogen AFF-1 can rejuvenate the regenerative potential of adult dendritic trees by self-fusion. Development. 2017;144:2364–2374. doi: 10.1242/dev.150037.
  • 8. Schindelin J., Arganda-Carreras I., Frise E., Kaynig V., Longair M., Pietzsch T., Preibisch S., Rueden C., Saalfeld S., Schmid B., et al. Fiji: an open-source platform for biological-image analysis. Nat. Methods. 2012;9:676–682. doi: 10.1038/nmeth.2019.


