Computational and Structural Biotechnology Journal
2021 Apr 15;19:2106–2120. doi: 10.1016/j.csbj.2021.04.019

Automated and semi-automated enhancement, segmentation and tracing of cytoskeletal networks in microscopic images: A review

Bugra Özdemir a,b, Ralf Reski a,b,c
PMCID: PMC8085673  PMID: 33995906

Abstract

Cytoskeletal filaments are structures of utmost importance to biological cells and organisms due to their versatility and the significant functions they perform. These biopolymers are most often organised into network-like scaffolds with a complex morphology. Understanding the geometrical and topological organisation of these networks provides key insights into their functional roles. However, this non-trivial task requires a combination of high-resolution microscopy and sophisticated image processing/analysis software. The correct analysis of the network structure and connectivity needs precise segmentation of microscopic images. While segmentation of filament-like objects is a well-studied concept in biomedical imaging, where tracing of neurons and blood vessels is routine, there are comparatively fewer studies focusing on the segmentation of cytoskeletal filaments and networks from microscopic images. Developments in the fields of microscopy, computer vision and deep learning, however, have begun to facilitate the task, as reflected by an increase in the recent literature on the topic. Here, we aim to provide a short summary of the research on the (semi-)automated enhancement, segmentation and tracing methods that are particularly designed and developed for microscopic images of cytoskeletal networks. In addition to providing an overview of the conventional methods, we cover the recently introduced, deep-learning-assisted methods alongside the advantages they offer over classical methods.

Keywords: Cytoskeleton, Deep learning, Image processing, Actin filaments, Microtubules, Intermediate filaments, Curvilinear objects, Physcomitrella, Plastoskeleton

1. Introduction

Imaging is an indispensable tool for discovery/diagnostics in biology and biomedicine. It is also a fast-evolving field, with frequent emergence of novel techniques that focus on specific questions. Bioimaging techniques can be broadly categorized into two groups: i) optical and electron microscopy-based techniques for micro- and nano-scale imaging, and ii) magnetic resonance (MR)-based techniques for tissue- and organ-scale imaging. These techniques are further sub-branched into multiple modalities, each depending on its own set of instruments and software. Consequently, the different modes of image acquisition generate images with inherently different features (e.g., point spread function, noise distribution, spatiotemporal resolution). These acquisition-based differences, as well as the diverse structural and dynamic properties of the biological specimens, have restricted the widespread adoption and utilisation of generic methods in bioimage processing/analysis. Rather, researchers often need to devise individual project- and question-based solutions for their analyses. Automation has therefore been an important challenge in the field of bioimage analysis.

Automation is particularly important for image segmentation, a technique that is central to many image analysis pipelines. Manual segmentation of objects from images is tedious and prone to user-to-user variability. Recent advances in the field of computer vision have made an impact on bioimage analysis with the introduction of a number of segmentation tools that have found widespread use among biologists. These tools have a higher degree of automation compared to classical segmentation methods owing to the greater generalising power they acquire through training on large amounts of image data.

In the process of designing a segmentation pipeline, the structural properties of the objects to be segmented play an important part. Particularly in the field of cell biology, the redundancy of certain morphological features (blobs, tubules, vesicles, etc.) prompts scientists to develop automated object detection and segmentation algorithms that are tailored to capture these patterns. Curvilinearity is one such redundant morphology. Networks of curvilinear objects are ubiquitous in biology, ranging from the cytoskeletal filaments at micro- and nano-scale to blood vessel networks at tissue/organ level. The prevalence of this type of morphology across scales in biological systems suggests that it provides certain functional advantages. As a result, one focus of the image processing efforts in biology and biomedicine is on the automation of segmentation and tracing of curvilinear structures. In the context of vascular and neuronal networks, a large body of literature addresses this topic (reviewed in [1], [51], [59], [29], [71]). The algorithms and methods developed in these studies are in principle applicable to microscopic images of cytoskeletal filaments due to their similar morphology. In practice, however, the microscopic images of cytoskeletal networks often require more customised techniques, since i) the different imaging techniques and modalities generate different signal distributions, and ii) the cytoskeletal filament thickness is smaller than the resolution limit of most microscopy methods, a point that has to be addressed during segmentation.

In this regard, this review focuses on recent literature regarding segmentation/tracing of cytoskeletal filament networks in microscopic images. While the focus is on the segmentation/tracing, we also discuss a few studies covering other related image-processing tasks such as vesselness enhancement, time-tracking, morphological network analysis and handling of microscopic limitations. The review starts with a section introducing the different microscopy categories that are frequently used for imaging of cytoskeletal filaments. In the subsequent section, different categories of segmentation are briefly explained. The review then continues with two sections introducing some of the recent research on the classical and deep learning-based methods for enhancement and segmentation of the cytoskeleton, before concluding with a summary and outlook.

2. Categorisation of microscopy methods

Imaging of cytoskeletal filaments is performed using a variety of microscopy methods, ranging from fluorescence microscopy to cryo-electron tomography. Each of these methods produces images with different spatiotemporal resolution, signal-to-noise ratio and contrast properties. Studies targeting particular aspects of cytoskeleton structure and function must first choose a microscopy method capable of providing that information. For instance, a study aiming for a very high resolution can resort to an electron microscopy method, which can achieve a spatial resolution of around 5 nm. Electron microscopy images, however, typically have very low signal-to-noise ratio and low contrast, which makes visualisation, processing and analysis of these images difficult, especially in 3D [36]. Other studies, which focus on the dynamics of the filaments, or co-localisation of filaments with particular other proteins, often apply fluorescence-microscopy-based methods (e.g. confocal and widefield systems). One advantage of these methods is their speed, which enables them to be applied in a high-throughput manner [53]. Speed of imaging is also particularly important for capturing the cytoskeletal dynamics from time-series datasets. A major limitation of the diffraction-limited fluorescence microscopy in general is its low resolution and the artifacts/blur associated with the out-of-focus fluorescence. These limitations are particularly aggravated in widefield systems, where the out-of-focus fluorescence is not filtered at all. By rejecting the out-of-focus fluorescence via a pinhole, laser-scanning confocal microscopy (LSCM) can achieve better contrast and resolution (roughly 250 nm in xy and 750 nm in z), albeit at the cost of imaging speed. 
A particular limitation of LSCM is that repetitive irradiation of the specimen with the excitation light during image acquisition increases the chances of phototoxicity (with live cells) and photobleaching, which reduces the signal-to-noise ratio in the output images. A better trade-off between speed, sensitivity and resolution can be achieved with more specialised set-ups such as spinning-disk systems or widefield deconvolution microscopy. A special case is Total Internal Reflection Fluorescence Microscopy (TIRFM), a method that exploits an evanescent wave for the selective excitation of fluorophores that are very close to the coverslip. TIRFM offers improved contrast and reduced photodamage due to the restricted area of excitation, which eliminates the signal from the bulk of the cell. This makes the technique advantageous for time-series imaging. It is, therefore, frequently used for studying cytoskeleton dynamics near the plasma membrane [85], [55]. None of the diffraction-limited methods, however, are capable of a resolution that permits the precise localisation of the cytoskeletal filaments, whose lateral width (24 nm for microtubules, 7 nm for actin filaments) lies far below the resolution limit of these techniques. This poses an important challenge for the segmentation and the subsequent quantitative analysis of the cytoskeleton images acquired via these methods.

An ensemble of techniques called super-resolution techniques can overcome the diffraction barrier to achieve better resolution [73]. Common super-resolution methods include Stimulated Emission Depletion Microscopy (STED), Single Molecule Localisation Microscopy (SMLM) and Structured Illumination Microscopy (SIM). STED is an optics-only approach that can in principle achieve about 20 nm resolution in xy and 50 nm in z. However, STED requires irradiation of the specimen with very high laser power, which raises serious phototoxicity issues with living cells, especially in time-series imaging, which necessitates prolonged irradiation. SMLM is an umbrella term for several super-resolution techniques using a common principle [73], [37], including Photo-Activated Localisation Microscopy (PALM), Stochastic Optical Reconstruction Microscopy (STORM) and DNA-PAINT methods. These methods are not optics-only; rather, the super-resolution image is reconstructed from “point clouds”, coordinates of sparse molecular localisations that are deduced from sequential imaging of random subsets of fluorophores. The image reconstruction from the localisation data is achieved using complex postprocessing algorithms. SMLM methods can typically achieve a resolution of 10–30 nm and thus permit near-precise localisation of cytoskeletal filaments. However, the successful reconstruction of super-resolved images requires acquisition of extremely large numbers of frames [26]. This is an inherently slow process that hinders the applicability of these techniques for live-cell imaging and tracking of dynamic processes such as growth/shrinking of cytoskeletal filaments. Long illumination times also increase the chances of photobleaching. Ongoing research strives to develop algorithms that can reconstruct the super-resolved image from increasingly sparser localisation data, hence speeding up the process [63].
SIM is a versatile category of super-resolution methods, which offers faster image acquisition. SIM utilises non-uniform, patterned illumination of the specimen. The interactions between the high-frequency illumination pattern and the high-frequency variations in the specimen produce what is called a Moiré interference pattern, which contains the super-resolution information. A series of images with Moiré patterns is thus collected, changing the orientation of the illumination with each image. The collection of images is then processed with specialised algorithms to reconstruct the super-resolved image. Despite its similarity to SMLM in terms of the dependency on the acquisition of multiple image frames and their digital postprocessing, SIM is much faster because it requires far fewer raw images (typically 9 in 2D and 15 in 3D), and thus facilitates time-series imaging. On the other hand, SIM resolution is typically around 100 nm in xy and 250 nm in z, which is not high enough for precise localisation of cytoskeletal filaments.

3. Categorisation of segmentation methods

Segmentation is the process of partitioning an image into multiple meaningful regions based on certain coherence criteria. The output of the segmentation process is an image, where each pixel/voxel is replaced by a label, which is determined based on these criteria. In this respect, image segmentation is a pixel/voxel classification problem.

Segmentation methods can be categorised based on the goal of this classification process. Binary segmentation simply aims to partition an image into foreground and background regions; thus the aim of classification is simply to distinguish the foreground pixels/voxels from the background. This is the simplest segmentation method, and the most common approach for segmentation of microscopic images. Very often, the binarised microscopic image is subjected to a “connected component analysis”, whereby the isolated foreground regions are given specific labels. Each labelled region can then be analysed separately. This approach to obtaining a multi-labelled image, however, is naïve for two main reasons. First, if any objects in the binary image overlap with each other, these objects cannot be isolated based on connectivity, and thus cannot be properly labelled. Second, even if the objects can be properly labelled, these labels do not carry any information about what the labelled objects represent, but only report that the objects are spatially disconnected from each other. However, an analyst often wishes to categorise objects based on other criteria, rather than merely spatial connectivity. Certain segmentation algorithms tackle this problem by exploiting prior knowledge about the objects of the image under investigation. This group of segmentation methods, called semantic segmentation, is able to partition an image into semantically meaningful regions [58]. In other words, these methods allocate the pixels/voxels into pre-defined categories of object identity so that each label in the segmented image reports the class-membership of all pixels/voxels lying under that label. While being able to categorically identify different objects in an image, semantic segmentation still lacks the ability to distinguish different object instances belonging to the same category.
Instead, multiple different objects falling into the same category are treated as a single entity and given a single label. This problem is addressed by another class of methods referred to as instance segmentation [58]. These methods are capable of distinguishing the different object instances belonging to the same category. Instance segmentation is particularly important for analysis of time-series microscopy images, because the ability to recognise and delineate objects at the instance level can substantially improve the performance of time-tracking of individual objects. Panoptic segmentation [38] unifies semantic segmentation and instance segmentation, assigning a categorical label to each pixel/voxel while also distinguishing different instances of each category.
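The limitation of connectivity-based labelling described above can be demonstrated in a few lines of Python. The following sketch (an illustrative toy example, assuming SciPy is available; the image and its blobs are synthetic) shows that connected-component analysis assigns distinct labels to two separate objects, but collapses them into a single instance as soon as they touch:

```python
import numpy as np
from scipy import ndimage

# Two spatially disconnected foreground blobs in a binary image.
binary = np.zeros((10, 10), dtype=bool)
binary[1:4, 1:4] = True
binary[6:9, 6:9] = True

# Connected-component analysis: one label per isolated region.
labels, n = ndimage.label(binary)
print(n)  # 2 -- the blobs are spatially disconnected

# If the objects touch, connectivity alone can no longer separate them:
binary[3:7, 3:7] = True           # bridge the two regions
_, n_merged = ndimage.label(binary)
print(n_merged)  # 1 -- the two objects collapse into a single label
```

Instance segmentation methods address exactly this failure mode: they separate touching objects using learned object-level cues rather than connectivity alone.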

Segmentation methods, therefore, are different approaches to solving the aforementioned classification problems. Based on the requirement for annotated data, segmentation methods can be divided into two broad categories: unsupervised and supervised segmentation methods. Unsupervised methods need no training data to segment images, whereas supervised methods fit a segmentation model based on a ground-truth dataset, which comprises examples of segmentations, usually created via manual annotation by domain experts.

Most of the classical segmentation methods belong to the category of unsupervised methods. These can be roughly divided into the following subcategories: i) thresholding, ii) edge-based, iii) region-growing, iv) clustering, v) model-based. Thresholding methods aim to automatically find the optimal intensity threshold to divide an image into foreground and background regions. Edge-based methods seek to segment the image based on the detection of object boundaries. Region-growing methods are based on the assumption that pixels/voxels localised closely to each other are likely to belong to the same object. Clustering methods utilise various statistical clustering techniques to group pixels/voxels. Model-based methods typically solve a constrained optimisation problem to segment the image.
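As an illustration, the region-growing idea can be sketched in pure Python/NumPy. This is a deliberately naive, didactic implementation (the seed, tolerance and 4-connectivity are arbitrary choices, not taken from any cited study):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=0.1):
    """Naive region growing: start from a seed pixel and absorb
    4-connected neighbours whose intensity lies within `tol`
    of the seed intensity."""
    mask = np.zeros(img.shape, dtype=bool)
    seed_val = img[seed]
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and not mask[ny, nx]
                    and abs(img[ny, nx] - seed_val) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

# A bright 4x4 square on a dark background.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
mask = region_grow(img, seed=(3, 3), tol=0.1)
print(mask.sum())  # 16 -- exactly the bright square is captured
```

Real region-growing implementations refine this scheme with adaptive homogeneity criteria and multiple seeds, but the grow-from-seed principle is the same.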

Supervised segmentation strategies are mostly based on deep neural networks (DNNs, [77], [69], [9], [17], [28]), although, more seldom, other machine learning algorithms are also used for this purpose [76], [2]. Ground-truth images are obtained either through direct manual annotation of the images, or manual correction following a rough segmentation via an unsupervised approach. Supervised learning itself is also further subcategorised based on the degrees and types of supervision. In the frame of image segmentation, strong (or full) supervision refers to the labelling of each pixel/voxel of each image in the training dataset. This is a time- and labour-intensive process. Efforts to tackle this particular problem often employ the semi-supervision strategy, which involves partial labelling of the pixels/voxels in the training dataset. As an example, Çiçek et al. [18] develop a neural network that can learn volumetric segmentation from 2D annotated slices. Another supervision mechanism, which reduces the data-labelling workload even further, is termed weak supervision. For an excellent review on the concept of weakly supervised learning in general, we refer the reader to Zhou et al. [96]. In the context of supervised image segmentation, a weakly annotated image can be obtained by marking individual objects in the image via bounding boxes, scribbles or points, or by tagging at image level [95]. The weakest form of supervision is the image-level supervision, where the image is tagged for the presence/absence of a particular object category without specifying the localisation [95]. For an in-depth discussion and comparative evaluation of weakly supervised segmentation methods in different image domains, we refer the reader to Chan et al. [16].
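The core mechanism behind training on partially labelled images can be illustrated with a loss that simply ignores unannotated pixels. The NumPy sketch below is a generic illustration, not the scheme of any cited study; the convention of marking unlabelled pixels with -1 is ours:

```python
import numpy as np

def masked_pixel_loss(pred_prob, labels, ignore_index=-1):
    """Binary cross-entropy averaged over annotated pixels only.

    pred_prob : predicted foreground probability per pixel, in (0, 1)
    labels    : 1 (foreground), 0 (background), or ignore_index (unlabelled)
    """
    annotated = labels != ignore_index
    p = np.clip(pred_prob[annotated], 1e-7, 1 - 1e-7)
    y = labels[annotated].astype(float)
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

# A 3x3 toy image where only five pixels carry annotations.
labels = np.array([[ 1, -1,  0],
                   [ 1, -1,  0],
                   [-1, -1,  0]])
pred = np.full((3, 3), 0.5)          # chance-level prediction everywhere
loss = masked_pixel_loss(pred, labels)
print(round(loss, 4))  # 0.6931 (= ln 2): loss is computed only where labels exist
```

Because the gradient of such a loss is zero at unlabelled pixels, a network can be trained from sparse scribbles or partially annotated slices without ever penalising predictions in unannotated regions.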

In the field of cytoskeleton image analysis, segmentation tasks are usually more complicated than simple binary segmentation. Many studies aim to detect and extract the centrelines (or ridges) of the filaments in an accurate manner [91], a process termed filament tracing. Moreover, segmentation/tracing must often be performed on time-series images in order to facilitate a quantitative analysis of the growth/collapse dynamics of cytoskeletal polymers. This type of analysis requires techniques for the detection and labelling of each different filament in a network and the tracking of the individual labels over time. This task is also studied in the frame of instance segmentation [54]. Therefore, a wide spectrum of segmentation methods, ranging from the traditional threshold-based methods to the complex deep learning approaches, are utilised for the segmentation of cytoskeletal filaments. These are also accompanied by image pre-processing/enhancement strategies that are helpful (and often required) for the segmentation process. Below, we will summarise some of the recent studies focusing on these methods.

4. Classical methods for enhancement and segmentation of cytoskeletons

4.1. Conventional segmentation methods

Many of the conventional segmentation methods involve two steps: i) a pre-processing/feature enhancement step that denoises the image and enhances certain geometrical features of the objects to be segmented, and ii) a labelling step that groups the voxels into different categories based on the features extracted in the first step. In the case of curvilinear structures, the pre-processing step aims particularly to enhance the vessel-like structures in the image using specific image filters, while suppressing structures that deviate from this curvilinear geometry. These vessel-enhanced images are then used for extraction of the filaments in the second step of the segmentation, which, in the simplest case, involves intensity-thresholding of the filtered image to obtain a binary mask of the curvilinear structures. This approach is exemplified by Alioscha et al. [3], who segment actin filament networks from fluorescence microscopy images. Their method starts with an image decomposition operation, which yields a cartoon image component and a noise/texture image component. The cartoon component is then used as input for the computation of a multiscale line-response image (via a method proposed originally by [61]), where each pixel holds a score of belonging to a line. The authors then threshold this response image using a local adaptive thresholding method [86] to segment the actin network. Similar studies use filtering methods suggested originally by Sandberg & Brega [72], called line filter transform (LFT) and orientation filter transform (OFT). Among these, Zhang et al. [94] develop a software tool (SIFNE) for extraction of filament networks from images acquired via Single Molecule Localisation Microscopy (SMLM), a category of super-resolution microscopy techniques. In the segmentation stage of their analysis pipeline, they transform the images with LFT and OFT before extracting the binary mask from the vessel-enhanced images via Otsu thresholding.
Finally, the acquired binary images are skeletonised via morphological thinning. Xia et al. [87] develop a method for quantitative analysis of the cortical actin network using images acquired via STORM (Stochastic Optical Reconstruction Microscopy), a subcategory of SMLM imaging. Their segmentation involves LFT and OFT filters to enhance the curvilinear features in the image, followed by H-minima transform for further denoising, and finally binarisation of this pre-processed image via Meyer watershed transform [57]. Breuer et al. [14] investigate the structural, organisational and dynamic properties of actin networks in Arabidopsis in order to gain a quantitative understanding of the actin-based organelle transport. To be able to perform such an analysis, the authors develop a pipeline that can extract actin networks from confocal microscopic images of the actin cytoskeleton. In the segmentation stage of their pipeline, they implement a tubeness filter that uses multiscale Hessian matrix eigenvalues [74] before thresholding this filtered image with an adaptive median threshold. Subsequently, they skeletonise the binary image before continuing with the quantitative analysis. In another study focusing on the quantitative analysis of a plant cytoskeleton, Faulkner et al. [22] develop the software tool CellArchitect, which can detect drug-induced changes in the microtubule network organisation in Arabidopsis cells. Prior to microtubule segmentation, the authors identify and mask cell borders in the image, hence revealing each cell object. To segment the microtubule networks, they first apply a Gaussian filter for smoothing and denoising, and then binarise the filtered image via a percentile-based local threshold calculated separately for each cell. Binarised networks are then skeletonised before morphological quantification.
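The generic "filter-and-threshold" recipe shared by these studies can be sketched in a compact 2D form. The code below is a simplified illustration, not a re-implementation of any cited tool: it builds a Hessian-based bright-ridge measure (in the spirit of the tubeness filters cited above) and binarises it with a hand-rolled Otsu threshold; all parameters and the synthetic test image are arbitrary:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_tubeness(img, sigma=2.0):
    """Simple bright-ridge measure from the 2D Hessian eigenvalues.
    For a bright curvilinear structure on a dark background, the Hessian
    has one strongly negative eigenvalue perpendicular to the ridge."""
    Hxx = gaussian_filter(img, sigma, order=(0, 2))  # d2/dx2 (columns)
    Hyy = gaussian_filter(img, sigma, order=(2, 0))  # d2/dy2 (rows)
    Hxy = gaussian_filter(img, sigma, order=(1, 1))
    # Eigenvalues of the symmetric 2x2 Hessian, computed analytically.
    disc = np.sqrt((Hxx - Hyy) ** 2 + 4 * Hxy ** 2)
    lam_small = 0.5 * (Hxx + Hyy - disc)   # most negative on bright ridges
    return np.maximum(-lam_small, 0.0)     # high response along filaments

def otsu_threshold(values, nbins=256):
    """Classic Otsu threshold on a flat array of filter responses."""
    hist, edges = np.histogram(values, bins=nbins)
    mids = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(hist)
    w1 = w0[-1] - w0
    m0 = np.cumsum(hist * mids)
    mu0 = m0 / np.maximum(w0, 1)
    mu1 = (m0[-1] - m0) / np.maximum(w1, 1)
    var_between = w0 * w1 * (mu0 - mu1) ** 2
    return mids[np.argmax(var_between)]

# Synthetic input: a bright horizontal filament plus Gaussian noise.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.05, (64, 64))
img[32, 8:56] += 1.0
img = gaussian_filter(img, 1.0)            # mimic the microscope PSF

response = hessian_tubeness(img, sigma=2.0)
mask = response > otsu_threshold(response.ravel())  # binary filament mask
```

In a real pipeline this binary mask would then be skeletonised before quantification, as in the studies above; multiscale variants evaluate the ridge measure over a range of sigmas and keep the maximum response.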

Other studies focusing on the morphological analysis of FtsZ scaffolds in Physcomitrella plastids, the plastoskeleton, apply deconvolution as a pre-processing step to 3D confocal microscopy images of FtsZ networks, followed by adaptive thresholding [6], [64]. The thresholded images are subjected to various global and local statistical analyses leading to extraction of a series of descriptive features. Another study by Asgharzadeh et al. [7] adds modifications to the segmentation and quantification techniques introduced in Asgharzadeh et al. [6] and Özdemir et al. [64]. After segmentation and morphological analysis of the networks in 3D image form, the binary network images are transformed into volume meshes and their mechanical behaviour is analysed using theoretical material parameters and finite-element modelling. The relationships between the cytoskeletal network morphology and the mechanical behaviour of the 3D simulations are examined using supervised machine learning. Finally, a comparative evaluation of this analysis for two closely related FtsZ isoforms from Physcomitrella is presented [7].

To illustrate the “filter-and-threshold” strategies, we here present an exemplary segmentation procedure for 3D images, explaining the multiple steps of a semi-automated pipeline (Fig. 1). The input image to this procedure is a confocal microscopy z-stack image of a filamentous network assembly belonging to the FtsZ1-1 isoform, a protein that localises to the chloroplasts of the moss Physcomitrella (Fig. 1A). This image was acquired using the molecular biology and imaging protocols described in Asgharzadeh et al. [6], Özdemir et al. [64] and Asgharzadeh et al. [7]. At the beginning of the pipeline, the image’s grey values are rescaled to the range [0, 1]. The image is first subjected to a pre-processing step (Fig. 1, Step-1) consisting of the following chain of operations: i) Gaussian filtering, ii) H-minima transformation, iii) Hessian-based vesselness enhancement filter [23], [4], iv) morphological grey-closing, v) adaptive histogram equalisation (CLAHE), vi) median filtering. The output of this process is an image that is denoised, smoothed and enhanced for the vessel-like features (Fig. 1B). The enhanced image is segmented using hysteresis thresholding (Fig. 1, Step-2). The resulting binary image contains the network scaffolds corresponding to a multitude of chloroplasts (Fig. 1C). To isolate the scaffold corresponding to a single chloroplast, we mask the binary network image with a chloroplast mask, which corresponds to the largest central chloroplast in the image (Fig. 1, Step-3) (the chloroplast mask is separately generated from the chlorophyll fluorescence and not shown due to space limits). The resulting binary network image (Fig. 1D) is subjected to a tubular enhancement procedure (Fig. 1, Step-4) based on the following consecutive operations: i) Euclidean distance transform, ii) Hessian-based line filter [74], iii) adaptive histogram equalisation (CLAHE), iv) local adaptive thresholding following Bernsen’s method [10], v) morphological binary-closing, vi) removal of binary noise with a size filter. The output of this procedure is a binary image that has been substantially thinned and has a uniform width (Fig. 1E). Therefore, this image is already suitable for the detection of interest points (nodes, endpoints, etc.). Optionally, the image can be skeletonised via morphological thinning algorithms (since the image at this stage has a uniform width, morphological thinning can be applied to it without causing artefacts). In our example here, the image is skeletonised based on Lee et al. [43] (Fig. 1, Step-5). The resulting skeleton image (Fig. 1F) is further processed with an algorithm (Fig. 1, Step-6) that transforms the raw skeleton into an annotated network (Fig. 1G), specifically labelling the internal nodes (red), the endpoints (yellow) and the connecting edges (green) of the skeleton. The representation of a filament network in the form of such an annotated network offers a wide range of quantitative analysis options. For instance, geometrical features such as edge directionality, lengths, tortuosity, etc. can be directly computed from this representation. The annotated network can also be easily transformed into a morphological graph to implement graph-theoretical algorithms on it (for instance, to compute shortest paths between selected nodes). Finally, an overlay of the annotated network and the binary region mask is shown in Fig. 1H. Parameters used in the described implementation are given in Table 1.
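Among the steps above, the hysteresis thresholding of Step-2 can be sketched with connected-component labelling: a weak-threshold region is kept only if it contains at least one strong-threshold pixel. The snippet below is a minimal SciPy illustration with synthetic values, not the exact implementation used in the pipeline (which operates on 3D stacks with Otsu-derived thresholds, cf. Table 1):

```python
import numpy as np
from scipy import ndimage

def hysteresis_threshold(img, low, high):
    """Keep weak regions (img > low) only if they contain at least one
    strong pixel (img > high) -- the principle behind Step-2 above."""
    weak = img > low
    strong = img > high
    labels, _ = ndimage.label(weak)
    # Labels of weak components that overlap at least one strong pixel.
    keep = np.unique(labels[strong])
    return np.isin(labels, keep[keep > 0])

# Toy example: two faint segments, only one anchored by a bright core.
img = np.array([[0.0, 0.4, 0.9, 0.4, 0.0, 0.4, 0.4, 0.0]])
mask = hysteresis_threshold(img, low=0.3, high=0.8)
print(mask.astype(int))
# [[0 1 1 1 0 0 0 0]] -- the unanchored right-hand segment is discarded
```

This suppresses isolated low-intensity noise while preserving faint filament stretches that are connected to bright, reliable signal, which is why hysteresis is preferred over a single global threshold for network scaffolds.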

Fig. 1.

Fig. 1

Illustration of a typical filter-and-threshold network extraction procedure, applied to a scaffold of FtsZ1-1 filaments in the chloroplasts of Physcomitrella. (A) Input is a raw 3D image of FtsZ1-1 produced by confocal microscopy. (B) Enhanced image after Step-1. Step-1 consists of the following chain of operations: i) Gaussian filtering, ii) H-minima transformation, iii) Hessian-based vesselness enhancement filter, iv) morphological grey-closing, v) adaptive histogram equalisation (CLAHE), vi) median filtering. (C) Binary image produced by Step-2, which segments the enhanced image using hysteresis thresholding. (D) Binary image produced by Step-3, which masks the image in (C) to isolate the FtsZ scaffold corresponding to a single chloroplast. (E) Binary tubular image after Step-4, which consists of the following chain of operations: i) Euclidean distance transform, ii) Hessian-based line filter, iii) adaptive histogram equalisation (CLAHE), iv) local adaptive thresholding, v) morphological binary-closing, vi) removal of binary noise with a size filter. (F) Binary skeleton image produced by Step-5, which applies a 3D skeletonisation algorithm to the tubular image in (E). (G) Feature image after Step-6, which finds the nodes, endpoints and edges of the skeleton image in (F). (H) Overlay of the images in (D) and (G). Intensity values in (A) and (B) are represented with the “viridis” colourmap, which has a spectrum from purple (lower values) to yellow (higher values). (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

Table 1.

Parameters used in different steps of the pipeline implemented in Fig. 1. Local windows and structuring elements are given with the order of dimensions (z, x, y). Note that the steps that need no manual parameter input are excluded from this table.

Process Step Parameters
Pre-processing Gaussian filter sigma = 1.0
H-minima transform h = 0.03
Multiscale Hessian-based vesselness enhancement filter sigma minimum = 0.8, sigma maximum = 5.0, sigma number = 9.0, alpha = 2.0, beta = 0.5, gamma = 0.005, objectness scaling = True
Morphological grey-closing structuring element = ellipsoid with diameters (3, 7, 7)
CLAHE window = (9, 9, 9), contrast limit = 0.1
Median filtering window = (1, 5, 5)
Binarisation Hysteresis threshold high threshold = Otsu threshold, low threshold = 0.75 × Otsu threshold
Skeletonisation Hessian-based line filter sigma = 1.5, alpha1 = 0.5, alpha2 = 2.0
CLAHE window = (7, 7, 7), contrast limit = 0.05
Local adaptive thresholding window = (5, 5, 5), contrast threshold = 0.1
Morphological binary-closing structuring element = ellipsoid with diameters (3, 7, 7)
Size filtering size threshold = 20 voxels

4.2. Limitations of conventional segmentation methods for cytoskeleton images

Threshold-based segmentation methods such as those exemplified above are still popular due to their simplicity, especially when the aim of the segmentation task is simply to obtain a binary mask for the foreground voxels corresponding to the object. When the sample images already have a high signal-to-noise ratio, the preliminary denoising/feature-enhancement filters can sufficiently improve the image so that a simple thresholding can yield a decent representative binary mask. With more challenging images, for instance, images with low signal-to-noise ratio, or images with discontinuous or blurred contours, more complex strategies are required for segmentation. In addition to these general issues, a particular problem related to cytoskeleton imaging is that the physical width of the filaments is usually smaller than the resolution limit of many microscope modalities (e.g., confocal laser scanning microscopy) [88], a limitation that obscures the correct filament tracks in the image. In connection with this, multiple adjacent filaments are unresolvable within the diffraction limit of light. In such cases, quantitative measurements are jeopardised by the lack of precision in localisation of the filament tracks in the image. The aforementioned threshold-based methods are incapable of handling the problem of imprecise localisation, since they usually yield a pixel/voxel mask that is wider than the physical width of the polymer. Furthermore, these pixel/voxel masks are not convenient for direct quantitative analysis and are usually subjected to a morphological thinning operation (skeletonisation) [14], [87]. Morphological thinning, however, does not guarantee accurate localisation of the physical filaments and, depending on the degree of local variations in object thickness, tends to generate artefacts such as incorrect nodes and endpoints.
A more accurate localisation and extraction of the filaments in the images involves detection of local maxima (ridges) along the length of the filaments (a process referred to as tracing). Multiple approaches are being used to tackle the tracing task. Rigort et al. [67] develop a method to extract actin filaments from cryo-electron tomograms based on generic template matching. Template matching yields a local cross-correlation map and an orientation field, both of which are used to develop a tracing algorithm that extracts the filament centrelines. An advantage of this method is that the user has the flexibility to modify the cylindrical template in order to detect different cytoskeletal filament types such as intermediate filaments and microtubules. This segmentation strategy was also employed by Jasnin & Crevenna [31]. Rogge et al. [68] propose a method for segmentation and quantification of F-actin filaments from 2D fluorescence microscopy images. Their segmentation relies on an iterative process consisting of a pre-processing step (involving a Gaussian filter, morphological operations and thresholding), a tracing step for centreline extraction and a fibre-connection step. With each consecutive cycle of the process, filaments of larger width are extracted. The segmented images are then subjected to quantitative analysis. They also provide an open-source GUI, FSegment, which performs the presented method. Costigliola et al. [19] use a method (originally proposed by Gan et al. [25]) that combines multi-scale steerable filters and a non-maximum suppression operator for direct extraction of vimentin network skeletons from spinning disk confocal microscopy (SDCM) images. The multiple skeleton segments are merged into a full network based on their proximity and orientations. In the second step of the analysis pipeline, the authors apply a length threshold to distinguish the longer, more bundled vimentin filaments from the shorter fragments.
Finally, they subject the resulting vimentin networks to various quantitative analyses, including an analysis of filament orientation in relation to the direction of cell movement. Tsugawa et al. [82] develop a method for the extraction of local anisotropy vectors and the orientation of fibrillar structures in 2D images, based on nematic tensor analysis (NTA). After developing and validating their method on synthetic images, they demonstrate its performance by applying it to real confocal microscopy images of cortical microtubules in a giant cell of an Arabidopsis sepal.
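The common core of these tracing approaches, locating the intensity maximum across the filament and refining it to sub-pixel precision, can be illustrated with a minimal NumPy sketch. The parabolic peak interpolation on a synthetic, roughly horizontal filament is a generic illustration under our own assumptions, not any specific published algorithm:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def trace_ridge(img):
    """Sub-pixel ridge localisation for a roughly horizontal filament:
    take the intensity maximum in each column and refine it by fitting
    a parabola through the peak and its two neighbours."""
    ys = []
    for x in range(img.shape[1]):
        col = img[:, x]
        y = int(np.argmax(col))
        if 0 < y < len(col) - 1:
            a, b, c = col[y - 1], col[y], col[y + 1]
            denom = a - 2 * b + c
            ys.append(y + (0.5 * (a - c) / denom if denom != 0 else 0.0))
        else:
            ys.append(float(y))
    return np.array(ys)

# Synthetic filament whose true centreline lies between two pixel rows
img = np.zeros((41, 60))
img[20, :] = 0.7
img[21, :] = 0.3                       # intensity centroid near y = 20.3
img = gaussian_filter(img, sigma=1.5)  # diffraction-like blur
centreline = trace_ridge(img)          # recovers ~20.3 with sub-pixel precision
```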

4.3. Model-based approaches

To attain a greater accuracy in localisation of the filament ridges in the images, model-based strategies using various optimisation algorithms are often employed in the segmentation/tracing tasks. In the specific case of cytoskeleton image segmentation, the objective of most model-based methods is to fit a smooth curve to the centrelines of the filaments with as high precision as possible [78], [70], [84], [91], [88].

To this end, a group of methods, namely deformable models, are popular. These methods are based on deformable contours (curves for 2D images or surfaces for 3D images) that are defined within the sample image frame, and that undergo gradual deformation under the influence of internal and external forces. This so-called “contour evolution” involves iterative modification of the contour so that it approximates desired image features, such as object boundaries (or ridges in the specific case of filaments) while being constrained by certain boundary conditions. The internal forces are defined within the contour, and are responsible for constraining the evolving contour according to prior knowledge about the object shape (e.g., object’s local smoothness or curvature). The external forces, on the other hand, are calculated from the image and specify the directions and speeds that are used to drive the contour. The ability to design the internal and external forces gives the user control over the model, which can thus be tuned to segment, for example, images with different modalities.
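A minimal, explicit gradient-descent sketch of such a contour evolution is given below. The force definitions (elasticity, rigidity, and attraction towards an edge map) follow the generic formulation rather than any specific published model, and all parameter values are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def evolve_snake(img, snake, alpha=0.2, beta=0.05, gamma=1.0, iters=400):
    """Explicit gradient-descent snake on a closed contour (N x 2 array of
    y, x points). Internal forces penalise stretching (alpha, 2nd derivative)
    and bending (beta, 4th derivative); the external force is the gradient of
    an edge map, pulling the contour towards object boundaries."""
    gy, gx = np.gradient(gaussian_filter(img, 2.0))
    emap = gy ** 2 + gx ** 2
    emap /= emap.max()                        # normalised edge map
    ey, ex = np.gradient(emap)                # external force field
    for _ in range(iters):
        d2 = np.roll(snake, -1, 0) - 2 * snake + np.roll(snake, 1, 0)
        d4 = (np.roll(snake, -2, 0) - 4 * np.roll(snake, -1, 0) + 6 * snake
              - 4 * np.roll(snake, 1, 0) + np.roll(snake, 2, 0))
        fint = alpha * d2 - beta * d4         # internal (shape prior) forces
        fext = np.stack([map_coordinates(ey, snake.T, order=1),
                         map_coordinates(ex, snake.T, order=1)], axis=1)
        snake = snake + gamma * (fint + fext)
    return snake

# Bright disk of radius 20; snake initialised as a circle of radius 25
yy, xx = np.mgrid[0:100, 0:100]
img = ((yy - 50) ** 2 + (xx - 50) ** 2 < 20 ** 2).astype(float)
theta = np.linspace(0, 2 * np.pi, 80, endpoint=False)
snake0 = np.stack([50 + 25 * np.sin(theta), 50 + 25 * np.cos(theta)], axis=1)
snake = evolve_snake(img, snake0)  # settles close to the disk boundary
```

The elastic force alone would collapse the contour; the edge-map attraction halts it at the object boundary, which is exactly the balance of internal and external forces described above.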

Since the seminal paper by Kass et al. [35], a large body of research has been conducted on deformable models, and a wide range of segmentation techniques has been developed. For a thorough introduction to the field, the reader can refer to Tsechpenakis [81] and Jayadevappa et al. [32]. In the following, we introduce several recent studies that use deformable model-based strategies to segment/trace cytoskeletal filaments in microscopic images.

Regarding the application of deformable models in the frame of segmentation and tracking of cytoskeletal networks, a particular subcategory of deformable models, namely parametric deformable models (active contours or snakes), is often preferred. In one of the early examples of these studies, Kong et al. [39] employ particle filters to track the microtubule tip, and then, based on the identified tip locations, they segment the microtubule filaments via open active contours. In another study focusing on microtubule segmentation, Nurgaliev et al. [62] combine active contours with Monte Carlo simulations to identify microtubule trajectories in 3D electron tomograms.

In a series of original publications [45], [44], [46], the authors introduce “stretching open active contours” (SOACs), which have since developed into popular tools for the extraction and quantification of cytoskeletal networks. Differing from the typical active contour models, which use closed curves that evolve to find the object boundaries, SOACs are open curves, which start stretching from snake tips and eventually delineate the central lines (ridges) of the filaments in the image. Smith et al. [78] develop a user-interactive software based on SOACs, named JFilament, which does not only segment and track filaments in microscopic images but can also be used to extract certain static and dynamic quantitative features of the filaments. Xu et al. [90] develop a method that achieves automated, simultaneous initialisation and evolution of multiple SOACs. This method automatically extracts the centrelines of an entire filament network. Furthermore, this method uses a graph-partitioning strategy to reorganise the evolved SOACs in order to dissect and label individual filaments in the network. Xu et al. [91] improve the SOACs-based segmentation algorithms and extend them to 3D images. In particular, they introduce an adaptive stretching force that results in robust contour evolution under high intensity variations and noise. Xu et al. [92] develop the open-source user-interactive platform SOAX that uses SOACs algorithms to extract cytoskeletal networks from images, and additionally provides an option to perform quantitative analysis on the segmented objects based on a set of input parameters from the user. Xu et al. [89] add a temporal dimension to the SOAX software. The final version of the software (TSOAX) is capable of tracking and analysing cytoskeletal networks in time-series movies consisting of both 2D and 3D images. Kotsur et al. [40] address some drawbacks of SOACs in tracking of individual intermediate filaments in confocal time-series images and propose a modified method that solves these issues. Their method can accurately track individual filaments within their branched network due to a reconfigured active-contour algorithm that better controls the growth of the snake endpoints.

Another user-interactive application, designed to segment and analyse filamentous and fibrous objects in microscopic images, is proposed by Usov & Mezzenga [83]. Their tool FiberApp works on images acquired by any type of microscope, although the presented results are mostly based on atomic force microscopy (AFM) or electron microscopy (EM) images. They use a combination of the A* pathfinding algorithm and the active contour models to extract the contours of fibrous objects from the images (the “contour” here referring to the fibre centrelines). FiberApp enables an option for the user to specify heterogeneous stiffness values for different regions of the image. This can be used, for instance, to remove undesirable fluctuations in the extracted contours. It also offers a range of options for quantitative analysis, which can be automatically applied to the traced fibres.
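The A* component of such a pipeline can be sketched as follows. The cost map (low on the fibre, high on the background) and the endpoint selection are our illustrative assumptions, not FiberApp's actual implementation:

```python
import heapq
import itertools
import numpy as np

def astar_trace(cost, start, goal):
    """A* shortest path on a 2D cost map (8-connected). With low cost on the
    fibre and high cost elsewhere, the optimal path follows the fibre
    between two user-selected endpoints."""
    eps = float(cost.min())
    h = lambda p: eps * max(abs(p[0] - goal[0]), abs(p[1] - goal[1]))  # admissible
    tick = itertools.count()                 # heap tie-breaker
    openq = [(h(start), next(tick), 0.0, start, None)]
    came, gbest = {}, {start: 0.0}
    while openq:
        _, _, g, node, parent = heapq.heappop(openq)
        if node in came:
            continue
        came[node] = parent
        if node == goal:
            break
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = node[0] + dy, node[1] + dx
                if (dy or dx) and 0 <= ny < cost.shape[0] and 0 <= nx < cost.shape[1]:
                    ng = g + float(cost[ny, nx])
                    if ng < gbest.get((ny, nx), np.inf):
                        gbest[(ny, nx)] = ng
                        heapq.heappush(openq, (ng + h((ny, nx)), next(tick),
                                               ng, (ny, nx), node))
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = came[node]
    return path[::-1]

# Synthetic curved fibre; cost is low on the fibre, high elsewhere
img = np.zeros((40, 80))
xs = np.arange(80)
ys = (20 + 8 * np.sin(xs / 10.0)).astype(int)
img[ys, xs] = 1.0
cost = 1.01 - img
path = astar_trace(cost, (ys[0], 0), (ys[-1], 79))  # follows the bright curve
```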

Further model-based strategies include that of Valdman et al. [84], who focus on devising methods for inferring material properties of cytoskeletal filaments from exhibited filament shapes, using simulated, noisy biopolymer images of known stiffness. They also test their methods on real microscopic images of Taxol-stabilised microtubules. Their approach relies on optimising an objective function that quantifies the overlap between an open smooth curve designed in the orthogonal polynomial basis and the fluorescence intensity in the image. Xiao et al. [88] combine curve-fitting and level-set strategies for the centreline extraction from filament images. By representing the open curves with two B-spline vector level sets, they can formulate filament centreline extraction as a global convex optimisation problem. Their method achieves sub-pixel accuracy without prior knowledge of the filament number in the image. Similar approaches are applied to the analysis of cytoskeletal dynamics as well. In such a study, Kapoor et al. [34] develop the multi-purpose software MTrack for microtubule detection, tracking and analysis in time-series images acquired via TIRF microscopy. They use the Maximally Stable Extremal Regions (MSER) algorithm to obtain restricted image areas containing the microtubule seeds. Subsequently, the seed endpoints, from which the microtubules will grow and shrink, are detected by fitting a Sum of Gaussians (SoG) model to each of the detected seed regions. This cycle of implementing MSER and fitting a SoG model is repeated for consecutive time frames to track the filament endpoints (hence the growth/shrinking of filaments) over time, using the information from a successfully segmented time frame as the starting point for the model-fitting in the next frame. By representing the SoG path with a 3rd-order polynomial function, the authors enable their method to robustly track straight, curved and crossing filaments.
MTrack also offers an analysis option for microtubule length over time.

Another popular tool for filament tracing and tracking is proposed by Ruhnow et al. [70] who introduce the semi-automatic software application FIESTA, which is capable of centreline and tip extraction from filament images with sub-pixel precision as well as their tracking in time-series images. FIESTA relies on an initial binarisation, thinning and region-of-interest definition, which is followed by fitting a 2D model based on Gaussian distributions to identify the centre positions of filaments and tips. Finally, a spline interpolation joins the segments together to obtain the centrelines. After the centreline extraction from each timeframe in a time-series, the algorithm establishes the temporal connections between the detected objects to complete the tracking. Other related studies focusing on filament tip tracking include Hadjidemetriou et al. [27], Demchouk et al. [20], Prahl et al. [66], Maurer et al. [56], [12]. Importantly, the methods described in these studies aim to identify and track microtubule tips, rather than tracking the end-binding proteins.
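The Gaussian-fitting idea underlying this kind of sub-pixel localisation can be illustrated with SciPy's curve_fit on a synthetic cross-sectional intensity profile; this one-dimensional sketch is our simplification of the 2D models used in the software:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, offset):
    """1D Gaussian intensity profile with a constant background."""
    return amp * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) + offset

# Noisy intensity profile across a filament whose true centre is at 14.37 px
rng = np.random.default_rng(1)
x = np.arange(30, dtype=float)
profile = gaussian(x, 100.0, 14.37, 2.0, 10.0) + rng.normal(0, 2.0, x.size)

# Initial guess from the data, then least-squares refinement
p0 = [profile.max() - profile.min(), float(np.argmax(profile)), 2.0, profile.min()]
(amp, mu, sigma, offset), _ = curve_fit(gaussian, x, profile, p0=p0)
# mu recovers the filament centre with sub-pixel precision
```

The fitted centre mu is accurate well below one pixel even at this moderate signal-to-noise ratio, which is what makes model-fitting attractive compared with plain thresholding.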

Breuer and Nikoloski [13] introduce an open-source software for automatic decomposition of a network into its constituent filaments. They treat filamentous networks as weighted geometric graphs, where the edges correspond to segments of the filaments and the nodes correspond to the segment endpoints. As an objective of the optimisation process, they define a “roughness” term that represents the variability in filament thickness. Their proposed method seeks to assign segments to paths of filaments while minimising the total roughness as the entire network is covered. In addition to the total roughness, their method can also implement alternative optimisation objectives such as average roughness or curvature-related measures. The authors transform their method into a software tool, DeFiNe, and test it on a series of image-based network data, including LSCM images of the Arabidopsis actin cytoskeleton.

Park [65] proposes a method for segmentation, skeletonisation and quantitative geometrical analysis of 3D filament networks from LSCM images. For segmentation, this author combines structure tensor eigenvalues and Otsu threshold calculation to derive an energy functional, which is minimised using the graph-cuts method. After binarisation, a skeleton-based seed selection is applied, followed by multiple hypothesis template tracking [24] for accurate centreline extraction. Finally, for quantitative analysis, Park [65] calculates the distribution of fibre diameters over the reconstructed fibrillar network.
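The structure-tensor step can be sketched in NumPy/SciPy for the 2D case (the 3D version is analogous); sigma and rho denote the differentiation and integration scales, and the values chosen here are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_eigvals(img, sigma=1.0, rho=2.0):
    """Eigenvalues of the 2D structure tensor. On a line-like structure one
    eigenvalue is large (strong gradient across the line) and the other is
    small (little intensity variation along it)."""
    smoothed = gaussian_filter(img, sigma)       # differentiation scale
    gy, gx = np.gradient(smoothed)
    Jyy = gaussian_filter(gy * gy, rho)          # integration scale
    Jxy = gaussian_filter(gy * gx, rho)
    Jxx = gaussian_filter(gx * gx, rho)
    tr, det = Jyy + Jxx, Jyy * Jxx - Jxy ** 2
    root = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
    return tr / 2 + root, tr / 2 - root          # lambda1 >= lambda2

# A horizontal line: strongly anisotropic structure tensor on the filament
img = np.zeros((32, 64))
img[16, 8:56] = 1.0
l1, l2 = structure_tensor_eigvals(img)
```

The ratio (or difference) of the two eigenvalues provides the line-likeness measure that can then feed a thresholding or energy-minimisation step.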

Model-based approaches are also used for the segmentation of cytoskeletal filaments from electron microscopy images. Kervrann et al. [36] propose a probabilistic model in the framework of conditional random fields (CRFs) for segmentation of microtubules from 2D sections of cryo-electron tomograms. In another study focusing on extraction of filaments from cryo-tomograms, Sazzed et al. [75] introduce the software tool BundleTrac for semi-automatic tracing of individual actin filaments in filament bundles in noisy cryo-tomograms. BundleTrac first detects the main axis of the bundle and applies longitudinal averaging along this axis for denoising. For tracing of the filaments, BundleTrac optimises a 2D seven-peak Gaussian convolution. Yue et al. [93] propose a segmentation method for cryo-EM images of microtubules. This method denoises and enhances the cryo-EM image through extensive pre-processing, including an improved diffusion filter, and then applies an adapted Chan-Vese [15] algorithm for segmentation of the filtered image.

Here, we demonstrate the model-based approaches with two illustrative implementations (Fig. 2), where two different deformable model-based strategies are applied to a confocal microscopy image of the Physcomitrella FtsZ1-1 isoform. The first method is binary region segmentation using a technique called Morphological Active Contours without Edges (morphological ACWE), which is an adaptation of the Chan-Vese method by Márquez-Neila et al. [52]. We demonstrate this method by using the open-source Python code provided by the authors. The second method is filament centreline extraction by using the SOAC method, which we implement by using the JFilament software [78]. For the demonstration of these methods, we use the same input image as the one in Fig. 1 (for the details about the molecular biology and the image acquisition, we refer the reader to Asgharzadeh et al. [6], Özdemir et al. [64] and Asgharzadeh et al. [7]). Fig. 2A outlines the implementation of the morphological ACWE on the 3D image. The pre-processing that transforms the raw image into a state with reduced noise and enhanced tubular structures comprises the same sequence of steps as described in Fig. 1B. Subsequently, morphological ACWE is performed on this enhanced image. The iterative segmentation process is initialised with a binary checkerboard pattern (Fig. 2A, iteration 0). The contour evolves under external and internal forces, eventually approximating the shape of the objects in the image. The different states of the contour evolution are shown (Fig. 2A, iterations 10, 15 and 20). It is important to note that in this case the pre-processing is essential for the segmentation to be successful, whereas the method yields a poor segmentation if the raw image is chosen as the initial input. The end product of this pipeline is a region mask, which can be thinned and skeletonised to perform a quantitative network analysis (as in Step-4 to Step-6 in Fig. 1).
An alternative to the approach described in Fig. 2A would be a direct centreline extraction by curve-fitting to the raw data. This would more accurately predict the true filament ridges. Fig. 2B shows such an implementation of the centreline extraction from the raw image using the JFilament software [78]. For this example, we select a 2D slice from our image (Fig. 2B, left), since the strong anisotropy in the resolution of our 3D image hinders a successful implementation. In this semi-automated process, we select the initial positions of the snakes and let them deform so that they delineate the ridges of the filaments. The traced centrelines are shown in Fig. 2B (middle) and their overlay with the raw image is shown in Fig. 2B (right). The parameters that we use for the implementations in Fig. 2A and Fig. 2B (except for the pre-processing of the input image in Fig. 2A, which is the same as in Fig. 1) are given in Table 2.

Fig. 2.

Illustration of model-based approaches applied to a scaffold of Physcomitrella FtsZ1-1 filaments. (A) Implementation of the morphological ACWE method for binary region segmentation. The raw image is first subjected to a pre-processing step leading to a denoised and vessel-enhanced form of the image. The contour is initialised as a checkerboard level set (iteration 0), which evolves under internal and external forces. The different states of the evolving contour are shown (iterations 10, 15 and 20). (B) Implementation of the SOAC method using the JFilament software. The input is a 2D slice (left panel) from the 3D raw image. Snakes are initialised manually and then allowed to deform to delineate the ridges of the filaments (middle panel). An overlay of the fitted centrelines and the raw image is shown in the right panel.

Table 2.

Parameters used in the two methods implemented in Fig. 2.

Process Parameters
Morphological ACWE: smoothing = 1.0, lambda1 = 1.0, lambda2 = 1.6
SOAC: stretch = 100.0, spacing = 5.0, background = 10.0, alpha = 5.0, weight = 5.0, smoothing = 1.5, foreground = 64.0, beta = 0.2, gamma = 10000.0
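Since the morphological ACWE algorithm of Márquez-Neila et al. [52] is also distributed with scikit-image, a Fig. 2A-style segmentation can be sketched as follows. The synthetic curvilinear input stands in for the pre-processed image, the iteration count is illustrative, and note that in this implementation the smoothing parameter is an integer number of smoothing passes (hence 1 rather than 1.0 from Table 2):

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese, checkerboard_level_set

# Synthetic stand-in for the enhanced image: a bright curvilinear structure
yy, xx = np.mgrid[0:80, 0:80]
img = np.exp(-((yy - 40 - 10 * np.sin(xx / 12.0)) ** 2) / 8.0)

# Checkerboard initialisation (the "iteration 0" pattern of Fig. 2A),
# lambda1/lambda2 as in Table 2
init = checkerboard_level_set(img.shape, 6)
seg = morphological_chan_vese(img, 25, init_level_set=init,
                              smoothing=1, lambda1=1.0, lambda2=1.6)
# seg is a binary region mask approximating the bright structure
```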

5. Deep learning methods for enhancement and segmentation of cytoskeletons

5.1. Deep learning for segmentation

The recent advances in the field of deep learning (DL from now on) and its implementations in computer vision have had a great impact on bioimage segmentation, resulting in the development of an array of neural networks specialised for segmentation tasks [69], [18]. The accumulation of microscopic images of the cytoskeleton, along with the invention of various automated and semi-automated software tools for filament segmentation/tracing, leads to faster and more accurate cytoskeleton image annotation, and thus to the accumulation of ground-truth image data for supervised learning studies. This in turn drives the development of DL-based segmentation methods tailored to cytoskeleton images. In addition to enhancing segmentation accuracy, these methods also aim to solve problems that are frequently encountered in filament segmentation but cannot be robustly overcome with the unsupervised segmentation methods discussed above.

Asgharzadeh et al. [8] demonstrate an application of U-net for the segmentation of FtsZ networks in Physcomitrella chloroplasts, the plastoskeleton, from confocal microscopy images. In another study based on U-net, Liu et al. [48] develop a segmentation method for cytoskeleton images (actin filaments and microtubules) acquired using confocal microscopy. To generate the ground-truth images, they rely on a combination of i) SOAX-based segmentation, ii) a single U-net module fit to the SOAX output and iii) manual correction of the segmentation by this U-net module. Their proposed network is based on multiple modified U-net modules, which are stacked in an end-to-end manner. This network, when trained on the ground-truth, performs slightly better than a single U-net module on confocal microscopy images of actin and microtubule networks.
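For readers unfamiliar with the architecture, a two-level U-net (encoder, bottleneck, decoder with a skip connection) can be sketched in PyTorch; this toy network, taking a single-channel micrograph to a per-pixel foreground probability map, is far shallower than the networks used in the cited studies and is intended only to illustrate the concept:

```python
import torch
import torch.nn as nn

def block(cin, cout):
    """Two 3x3 convolutions with ReLU, the basic U-net building block."""
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    """Minimal U-net: one down/up level plus a skip connection."""
    def __init__(self):
        super().__init__()
        self.enc = block(1, 16)
        self.down = nn.MaxPool2d(2)
        self.mid = block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)          # 32 = upsampled 16 + skipped 16
        self.head = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        d = self.dec(torch.cat([self.up(m), e], dim=1))  # skip connection
        return torch.sigmoid(self.head(d))

net = TinyUNet()
probs = net(torch.rand(1, 1, 64, 64))     # per-pixel foreground probability
```

The skip connection is what lets the decoder recover the fine, filament-scale detail that is lost in the downsampling path.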

Obtaining high-quality ground-truth images is a general challenge for most projects using DL-assisted segmentation. To reduce the labour associated with image annotation, weakly supervised approaches are often used. In a recent example of such studies, Lavoie-Cardinal et al. [41] use a modified U-net, trained on weak annotations in the form of polygonal bounding boxes, for segmentation of F-actin nanostructures from STED images. Bilodeau et al. [11] take a further step and introduce MICRA-Net, a neural network designed to be trained on image-level classification annotations in order to perform multiple microscopy tasks, including semantic segmentation of images. In addition to other cell biological use cases, the authors demonstrate efficient segmentation of F-actin nanostructures from STED images using a model trained on image-level annotations.

One of the tricky tasks relating to the segmentation of cytoskeletal networks is the correct identification of filaments at the intersection points of the networks. This issue is especially complicated for dense networks and bending filaments. Liu et al. [47] tackle this challenge by proposing a U-net-based neural network, which accepts binary network images as inputs and dissects the network into individual filament instances. Since the network is trained on orientation-associated ground-truth, it does not confuse filament identities at the intersection points. The downside of this approach is that kinked/curved filaments are fragmented into different orientation groups. The method, however, handles this issue with an algorithm that repairs the fragmented filaments based on the orientation vectors at the filament termini. The authors test their method on microtubule images, which they first segment using the method by Liu et al. [48] to obtain the binary input data.

Liu et al. [49] propose a DL-based method for geometrical and topological characterisation of actin filament networks. For an initial binary segmentation, they use the U-net based method by Liu et al. [48]. For the topological characterisation, they use the ResNet platform to train a network that accepts the binary images as inputs and generates heatmaps to highlight the junctions and endpoints of the networks. For the quantification of the filament lengths, they employ a fast-marching algorithm, which calculates a geodesic distance map by using the key points (junctions and endpoints) as the seeds. The number and lengths of the filaments can then be acquired from the local peak values obtained from this distance map.
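The key-point-seeded geodesic distance idea can be illustrated with a breadth-first search on a binary skeleton. This discrete sketch is our stand-in for the fast-marching step described above (fast marching additionally handles sub-pixel Euclidean metrics):

```python
from collections import deque
import numpy as np

def geodesic_from_seeds(skel, seeds):
    """Breadth-first geodesic distance map on a binary skeleton
    (8-connected); seeds are the detected junctions/endpoints.
    Background pixels keep the value -1."""
    dist = np.full(skel.shape, -1, dtype=int)
    q = deque()
    for s in seeds:
        dist[s] = 0
        q.append(s)
    while q:
        y, x = q.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (0 <= ny < skel.shape[0] and 0 <= nx < skel.shape[1]
                        and skel[ny, nx] and dist[ny, nx] < 0):
                    dist[ny, nx] = dist[y, x] + 1
                    q.append((ny, nx))
    return dist

# Straight 1-px skeleton of 21 pixels, seeded at its two endpoints;
# the local peak of the distance map sits mid-filament, and twice its
# value recovers the filament length
skel = np.zeros((5, 25), dtype=bool)
skel[2, 2:23] = True
dist = geodesic_from_seeds(skel, [(2, 2), (2, 22)])
```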

In another recent study, Eckstein et al. [21] propose a microtubule tracking method for large electron microscopy volumes. Their workflow, implemented on Drosophila neural tissue, starts with training of a 3D U-net (based on ground-truth skeleton annotations) that predicts scores for each voxel’s belonging to a microtubule. From the resulting probability map, they extract candidate points for microtubule segments through non-maximum suppression and thresholding. These candidates are then translated into a graph representation, where the nodes of the graph represent the candidates and the edges show potential links between them. Finally, they formulate the identification of the true microtubule track as a constraint optimisation problem, which aims to find the most appropriate subset of edges in the graph. Solving this optimisation reveals the correct microtubule tracks.

DL-based strategies are also beginning to be applied to filament tracking in time-series data. Masoudi et al. [54] develop a method for tracking of microtubules in 2D time-series images and measuring the velocities of microtubules. They tailor their method particularly to combat the challenges of the TIRF imaging method. Their method relies on instance-level segmentation of microtubules at each frame of the time course and then generating a trajectory for each microtubule instance over time. Prior to the segmentation, their method uses a visual attention module, which contains a CNN and a Recurrent Neural Network (RNN) that processes the image and suggests where to focus for segmentation. Then the segmentation module, which is an encoder-decoder system, segments the image within this attention region. After the segmentation is finished, the individual microtubule instances are associated by analysing successive pairs of time frames in order to get the trajectory for each microtubule.

5.2. Deep learning for enhancement

In addition to boosting general image processing tasks such as segmentation and tracking, DL contributes to the handling of microscopy-specific problems. Enhancement of microscopic resolution is a topic that has attracted much interest in recent years and benefits greatly from DL-based technologies. One important contribution to this field was made by Ouyang et al. [63], who introduce ANNA-PALM, a DL strategy that generates super-resolution images from sparse localisation-microscopy images and/or widefield images. Localisation microscopy techniques include PALM/STORM and DNA-PAINT, both of which are used by Ouyang et al. [63] to acquire microtubule images. The ground truth for the learning task is composed of dense, super-resolution PALM images. To build the training image set, under-sampled, sparse versions of the same PALM images (and optionally a widefield image) are used. The training is performed via the pix2pix image-to-image translation architecture [30] and aims to enable the neural network to reconstruct a super-resolution image from a sparse localisation image (or even a widefield image if included in the training). The authors demonstrate that their method performs well on a variety of image datasets representing different subcellular structures, including microtubules. As a result, the method is a suitable option for a tubeness-enhancement step prior to filament segmentation/tracing studies.

Indeed, Nanguneri et al. [60] employ this approach, utilising ANNA-PALM as part of their computational analysis framework focusing on the F-actin network in dendritic spines. They develop a pipeline for segmentation and quantification of actin filament networks in spine-specific regions of neurons using super-resolution image datasets. They first use a supervised machine-learning tool, named trainable Weka segmentation [5], to identify actin-rich regions in the images. To achieve fine segmentation of the actin filaments within these regions, they compute a tubular model of their super-resolution images using ANNA-PALM [63], in particular using the microtubule models offered by that study. Subsequently, they apply certain masks on the tubular model to obtain regions of interest, which are actin-rich areas spatially correlating with a postsynaptic marker. Within these specific regions, they finally apply a ridge detection algorithm [79] to extract actin skeletons, which are then subjected to quantifications. Concurrent with the analysis of the actin cytoskeleton, supervised learning algorithms are used to perform a morphological characterisation of the spines.

Lee et al. [42] propose another DL-based technique for tubular image enhancement. They train a CycleGAN [97] model that is capable of enhancing the quality of low-resolution microtubule network images, which then can be effectively segmented simply using Otsu thresholding. They train their model on two sets of images. One set, acquired from the Human Protein Atlas [80], consists of high-resolution confocal images, which represent the reference images for training. The second set, acquired from the Broad Bioimage Benchmark Collection [50], consists of low-resolution widefield images representing targets for enhancement. The CycleGAN model, trained on these two sets, learns the data distributions of both the low- and high-resolution image sets and can eventually transform an input image from the low-resolution distribution to an output with a high-resolution distribution.

A study combining the two technologies of super-resolution imaging and DL is conducted by Jin et al. [33], who propose deep-learning-assisted structured illumination microscopy (DL-SIM). This method uses DL to boost the performance of SIM by enhancing the image reconstruction step, so that the reconstruction can be performed with only a few raw images and with images acquired under low-light conditions. Using the U-net architecture, the authors train two networks on raw SIM images of microtubule and actin networks, in addition to other subcellular structures. For training of the first network, they use raw SIM data as input and the standard SIM reconstruction images as the ground truth. This network can achieve successful reconstruction with drastically reduced numbers of raw images compared to the conventional SIM reconstruction methods. The second network, consisting of two U-nets, is trained to handle noisy images. For this network, the images acquired under low-light conditions are used as training input, whereas the corresponding SIM reconstructions from images acquired under normal-light conditions are used as the ground truth. The resulting network is capable of SIM reconstruction from raw images acquired under extremely low-light conditions. Chaining the two U-nets yields a tool that both reconstructs SIM images from fewer raw images than conventional SIM reconstruction methods require and achieves better reconstruction quality with noisy raw data. Therefore, the method potentially reduces the photobleaching problems associated with prolonged imaging and strong illumination.

6. Summary and outlook

High-resolution imaging combined with advanced computational methods is boosting the information acquired from bioimage analysis. Enhancement, segmentation, tracing and tracking methods are being developed for cytoskeleton images, with several user-interactive software tools already introduced, as covered in this review. Table 3 summarises the reviewed works, linking each study with the respective bioimaging method, the biopolymer type, the segmentation/enhancement tasks, key strategies employed and, if available, the name of the user-interactive tool.

Table 3.

A summary of the publications covered in this review.

Category Publication Bioimaging Technique Biopolymer Type Main Segmentation/Tracing/Enhancement Tasks Key Strategies Relevant User-Interactive Tool
Intensity threshold [36] Cryo-Electron Tomography Microtubules Region segmentation (pixel/voxel mask) Conditional Random Fields-Maximum a Posteriori estimation None
[3] Fluorescence Microscopy Actin filaments Region segmentation (pixel/voxel mask) Image decomposition, multiscale line filters, adaptive thresholding None
[94] SMLM Microtubules Region segmentation (pixel/voxel mask) LFT, OFT SIFNE (SMLM Image Filament Network Extractor)
[14] Confocal Microscopy Actin filaments Region segmentation (pixel/voxel mask) Multiscale Hessian-based tubeness filter, adaptive thresholding None
[22] High Throughput Confocal Microscopy Microtubules Region segmentation (pixel/voxel mask) Gaussian filter, local thresholding CellArchitect
[6] Confocal Microscopy FtsZ Region segmentation (pixel/voxel mask) Deconvolution, local thresholding None
[64] Confocal Microscopy FtsZ Region segmentation (pixel/voxel mask) Deconvolution, local thresholding None
[87] STORM Actin filaments Region segmentation (pixel/voxel mask) LFT, OFT, H-minima transform, Meyer Watershed transform None
[7] Confocal Microscopy FtsZ Region segmentation (pixel/voxel mask) Deconvolution, local thresholding None
Filter/template-based tracing [67] Cryo-Electron Tomography Actin filaments Centreline extraction Template matching, tracing Actin Segmentation (an AMIRA extension package)
[31] Cryo-Electron Tomography Actin filaments Centreline extraction Template matching, tracing None
[25] SDCM/SIM Microtubules and Intermediate Filaments Centreline extraction Steerable filters None
[82] Confocal Microscopy Microtubules Estimation of local anisotropy and orientation Nematic tensor analysis None
[19] SDCM Intermediate Filaments Centreline extraction Steerable filters None
[68] Widefield Fluorescence Microscopy Actin filaments Centreline extraction Filtering and tracing FSegment
Conventional model-based [39] SDCM Microtubules Tracking over time Open active contour model None
[27] Epifluorescence Microscopy, Confocal Microscopy Microtubules Centreline extraction, tip tracking over time Optimisation using consecutive level-sets method None
[45] TIRFM Actin filaments Centreline extraction,tracking over time Open active contour model None
[44] TIRFM Actin filaments Centreline extraction,tracking over time Open active contour model None
[46] TIRFM Actin filaments Centreline extraction,tracking over time Open active contour model None
[62] Cryo-Electron Tomography Microtubules Centreline extraction Open active contour model None
[78] TIRFM, SDCM Actin filaments Centreline extraction,tracking over time Open active contour model JFilament (an ImageJ plugin)
[20] Digital Fluorescence Microscopy Microtubules Tip tracking over time Gaussian fitting, Gaussian survival functions None
[70] TIRFM, Epifluorescence Microscopy Microtubules, Kinesin motors Centreline extraction,tracking over time Least squares withGaussian distribution models FIESTA
[90] TIRFM, SDCM Actin filaments Centreline extraction Open active contour model None
[84] TIRFM Microtubules Centreline extraction Active contour model None
[56] TIRFM Microtubules Tip tracking over time 2D model fitting, modifications to Ruhnow et al. [70] None
[66] Epifluorescence Microscopy Microtubules Tip tracking over time Gaussian fitting, Gaussian survival functions TipTracker
[91] TIRFM, SDCM Actin filaments Centreline extraction Open active contour model None
[12] TIRFM Microtubules Tip tracking over time 2D model fitting, modifications to Ruhnow et al. [70] None
[13] Confocal Microscopy Actin filaments Decomposition of network into constituent filaments Graph partitioning optimisation DeFiNe
[92] TIRFM, SDCM Microtubules, Actin filaments Centreline extraction Open active contour model SOAX
[83] AFM, EM Fibrils of nanocellulose, BSA, polysaccharides,amyloid, beta-lactoglobulin Centreline extraction A* Pathfinding Algorithm, curve fitting FiberApp
[88] Fluorescence Microscopy, Phase-Contrast Microscopy,Darkfield Microscopy Microtubules, Axonomes Centreline extraction B-spline vector level sets, generalised linear model None
[93] Cryo-Electron Microscopy Microtubules Region segmentation (pixel/voxel mask) Chan-Vese model None
[65] Confocal Microscopy Not specified Region segmentation, fiber reconstruction Graph-cuts, template fitting, multiple hypothesis tracking None
[75] Cryo-Electron Tomography Actin filaments Centreline extraction 2D convolutional optimisation using Gaussian kernels BundleTrac
[89] TIRFM, SDCM Actin filaments, myosin rings, fibrin bundles Centreline extraction,network tracking over time Open active contour model TSOAX
[34] TIRFM Microtubules Centreline extraction,tracking over time Sum of Gaussian (SoG) and polynomial models MTrack (an ImageJ plugin)
[40] Confocal Microscopy Intermediate Filaments Centreline extraction and tracking over time Open active contour model None
DL-assisted model-based [63] PALM, DNA-PAINT, Widefield Microscopy Microtubules Superresolution reconstruction U-net and GANs ANNA-PALM (ImageJ plugin and web application)
[42] Confocal Microscopy, Widefield Microscopy Microtubules Resolution enhancement Cycle-GAN None
[47] Not specified Microtubules Instance segmentation of filaments of a network Deep learning model based on U-net architecture None
[48] Confocal Microscopy Microtubules, Actin filaments Centreline extraction Deep learning model based on U-net architecture None
[60] dSTORM Actin filaments Region segmentation, centreline extraction ANNA-PALM, supervised learning, Gaussian derivatives None
[8] Confocal Microscopy FtsZ Region segmentation (pixel/voxel mask) Deep learning model based on U-net architecture None
[11] STED Actin filaments Region segmentation (pixel/voxel mask) CNN, weak supervision,latent learning None
[21] Electron Microscopy Microtubules Reconstruction of filament tracks 3D U-net, non-maximum suppression, integer linear programming None
[33] SIM, TIRFM Microtubules, Actin filaments Superresolution reconstruction U-net None
[41] STED Actin filaments Region segmentation (pixel/voxel mask) U-net, weak supervision None
[49] Not specified Actin filaments Junction and endpoint detection in a network Deep learning model based on ResNet architecture None
[54] TIRFM Microtubules Instance segmentation, time-tracking of filaments Convolutional Neural Networks, Recurrent Neural Networks None
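Several of the intensity-threshold pipelines in Table 3 combine a multiscale Hessian-based tubeness filter with adaptive (local) thresholding, e.g. [14]. The following is a minimal 2D sketch of that idea using only NumPy/SciPy; the function names, scales and offset are illustrative assumptions, not taken from any of the cited tools.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def tubeness_2d(img, sigmas=(1.0, 2.0, 4.0)):
    """Simplified Sato-style tubeness for bright curvilinear structures
    on a dark background: the maximum, over scales, of the magnitude of
    the negative Hessian eigenvalue."""
    img = img.astype(float)
    response = np.zeros_like(img)
    for s in sigmas:
        # Scale-normalised Gaussian second derivatives (the Hessian)
        Hxx = gaussian_filter(img, s, order=(0, 2)) * s**2
        Hyy = gaussian_filter(img, s, order=(2, 0)) * s**2
        Hxy = gaussian_filter(img, s, order=(1, 1)) * s**2
        # Eigenvalues of the symmetric 2x2 Hessian at every pixel
        tmp = np.sqrt(((Hxx - Hyy) / 2) ** 2 + Hxy ** 2)
        lam2 = (Hxx + Hyy) / 2 - tmp  # the more negative eigenvalue
        # Filaments: strong negative curvature across the filament axis
        response = np.maximum(response, np.where(lam2 < 0, -lam2, 0.0))
    return response

def adaptive_threshold(img, window=15, offset=0.0):
    """Local-mean adaptive threshold returning a binary mask."""
    local_mean = uniform_filter(img.astype(float), size=window)
    return img > local_mean + offset
```

Applying `tubeness_2d` before thresholding suppresses blob-like background while keeping ridge-like (filamentous) intensity, so the subsequent local threshold yields a far cleaner pixel mask than thresholding the raw image.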

These methods and tools enable the morphological characterisation of cytoskeletal filaments and networks, which in turn reveals insights into their functional characteristics, such as transport efficiency and adaptive mechanical behaviour. At the current stage, many existing methods and tools are still semi-automated, requiring extensive manual parameter configuration. Table 3 shows an increasing trend towards DL-assisted methods for automating various tasks in microscopic image analysis. One of the main limitations of supervised DL methods in microscopic image processing/analysis is the difficulty and cost of data annotation. With large amounts of 3D/4D image data now being routinely produced in the life sciences, this problem becomes even more restrictive. It is therefore reasonable to expect that more research will focus on unsupervised as well as semi- and weakly supervised learning strategies to tackle this problem. Many of the existing methods are still restricted to 2D static images. Considering the importance of 3D cytoskeletal dynamics for many cellular processes, future research will probably address the robust tracking of individual filaments in cytoskeletal networks in 3D time-series images from the viewpoint of instance segmentation.
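Once a network has been segmented and skeletonised, its topology (endpoints, junctions and hence connectivity) can be read off the one-pixel-wide skeleton by counting 8-connected neighbours, the classical counterpart of the learned junction/endpoint detection in [49]. A minimal sketch, with an illustrative function name:

```python
import numpy as np
from scipy.ndimage import convolve

def classify_skeleton_points(skel):
    """On a binary, one-pixel-wide skeleton, label endpoints (exactly one
    8-connected neighbour) and junction candidates (three or more)."""
    skel = np.asarray(skel, dtype=bool)
    kernel = np.ones((3, 3), dtype=int)
    kernel[1, 1] = 0  # count the 8-neighbourhood, excluding the pixel itself
    nbrs = convolve(skel.astype(int), kernel, mode="constant")
    endpoints = skel & (nbrs == 1)
    junctions = skel & (nbrs >= 3)
    return endpoints, junctions
```

Because 8-connectivity marks several adjacent pixels around a branch point, a clustering step is usually needed afterwards to merge neighbouring junction pixels into a single network node.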

With the fast accumulation of image data, availability of advanced DL algorithms and GPU-accelerated hardware, further progress is likely to be achieved in the field, especially on the aspects of automation and 3D/4D extension.

CRediT authorship contribution statement

Bugra Özdemir: Investigation, Software, Writing - original draft. Ralf Reski: Supervision, Funding acquisition, Project administration.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

We apologize to all authors whose work could not be covered due to space constraints. We gratefully acknowledge funding of the laboratory by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy EXC-2189 (CIBSS) and EXC-2193 (livMatS).

References

  • 1.Acciai L., Soda P., Iannello G. Automated neuron tracing methods: an updated account. Neuroinformatics. 2016;14(4):353–367. doi: 10.1007/s12021-016-9310-0. [DOI] [PubMed] [Google Scholar]
  • 2.Adams J., Qiu Y., Xu Y., Schnable J.C. Plant segmentation by supervised machine learning methods. Plant Phenome J. 2020;3(1) doi: 10.1002/ppj2.20001. [DOI] [Google Scholar]
  • 3.Alioscha-Perez M., Benadiba C., Goossens K., Kasas S., Dietler G., Willaert R. A robust actin filaments image analysis framework. PLoS Comput Biol. 2016;12(8):e1005063. doi: 10.1371/journal.pcbi.1005063. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Antiga, L. (2007). Generalizing vesselness with respect to dimensionality and shape. The Insight Journal, 14. http://hdl.handle.net/1926/576
  • 5.Arganda-Carreras, I., Kaynig, V., Rueden, C., Eliceiri, K. W., Schindelin, J., Cardona, A., & Seung, H.S. (2017). Trainable Weka Segmentation: A machine learning tool for microscopy pixel classification. Bioinformatics, 33, 2424–2426. https://doi.org/10.1093/bioinformatics/btx180 [DOI] [PubMed]
  • 6.Asgharzadeh P., Özdemir B., Reski R., Röhrle O., Birkhold A.I. Computational 3D imaging to quantify structural components and assembly of protein networks. Acta Biomater. 2018;69:206–217. doi: 10.1016/j.actbio.2018.01.020. [DOI] [PubMed] [Google Scholar]
  • 7.Asgharzadeh P., Birkhold A.I., Trivedi Z., Özdemir B., Reski R., Röhrle O. A NanoFE simulation-based surrogate machine learning model to predict mechanical functionality of protein networks from live confocal imaging. Comput Struct Biotechnol J. 2020;18:2774–2788. doi: 10.1016/j.csbj.2020.09.024. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Asgharzadeh P., Birkhold A.I., Özdemir B., Reski R., Röhrle O. Biopolymer segmentation from CLSM microscopy images using a convolutional neural network. Proc Appl Math Mech PAMM. 2021;20(1) doi: 10.1002/pamm.202000188. [DOI] [Google Scholar]
  • 9.Badrinarayanan V., Kendall A., Cipolla R. SegNet: a deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans Pattern Anal Mach Intell. 2017;39(12):2481–2495. doi: 10.1109/TPAMI.2016.2644615. [DOI] [PubMed] [Google Scholar]
  • 10.Bernsen, J (1986), “Dynamic Thresholding of Grey-Level Images”, Proc. of the 8th Int. Conf. on Pattern Recognition, 1251-1255
  • 11.Bilodeau A., Delmas C., Parent M., De Koninck P., Durand A., Lavoie-Cardinal F. MICRA-Net: MICRoscopy Analysis Neural Network to solve detection, classification, and segmentation from a single simple auxiliary task. Research Square. 2020. doi: 10.21203/rs.3.rs-95613/v1. [Google Scholar]
  • 12.Bohner G., Gustafsson N., Cade N.I., Maurer S.P., Griffin L.D., Surrey T. Important factors determining the nanoscale tracking precision of dynamic microtubule ends. J Microsc. 2016;261(1):67–78. doi: 10.1111/jmi.12316. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Breuer D., Nikoloski Z. DeFiNe: An optimisation-based method for robust disentangling of filamentous networks. Sci Rep. 2015;5:18267. doi: 10.1038/srep18267. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Breuer D., Nowak J., Ivakov A., Somssich M., Persson S., Nikoloski Z. System-wide organization of actin cytoskeleton determines organelle transport in hypocotyl plant cells. Proc Natl Acad Sci. 2017;114(28):E5741–E5749. doi: 10.1073/pnas.1706711114. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Chan T.F., Vese L.A. Active contours without edges. IEEE Trans Image Process. 2001;10:266–277. doi: 10.1109/83.902291. [DOI] [PubMed] [Google Scholar]
  • 16.Chan L., Hosseini M.S., Plataniotis K.N. A comprehensive analysis of weakly-supervised semantic segmentation in different image domains. Int J Comput Vision. 2021;129(2):361–384. doi: 10.1007/s11263-020-01373-4. [DOI] [Google Scholar]
  • 17.Chen L., Papandreou G., Kokkinos I., Murphy K., Yuille A.L. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans Pattern Anal Mach Intell. 2018;40(4):834–848. doi: 10.1109/TPAMI.2017.2699184. [DOI] [PubMed] [Google Scholar]
  • 18.Çiçek Ö., Abdulkadir A., Lienkamp S.S., Brox T., Ronneberger O. 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation. In: Ourselin S., Joskowicz L., Sabuncu M.R., Unal G., Wells W., editors. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016. Springer International Publishing; 2016. pp. 424–432. [Google Scholar]
  • 19.Costigliola N., Ding L., Burckhardt C.J., Han S.J., Gutierrez E., Mota A. Vimentin fibers orient traction stress. Proc Natl Acad Sci USA. 2017;114(20):5195–5200. doi: 10.1073/pnas.1614610114. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Demchouk A.O., Gardner M.K., Odde D.J. Microtubule tip tracking and tip structures at the nanometer scale using digital fluorescence microscopy. Cell Mol Bioeng. 2011;4(2):192–204. doi: 10.1007/s12195-010-0155-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Eckstein N., Buhmann J., Cook M., Funke J. Microtubule Tracking in Electron Microscopy Volumes. In: Martel A.L., Abolmaesumi P., Stoyanov D., Mateus D., Zuluaga M.A., Zhou S.K., Racoceanu D., Joskowicz L., editors. Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. Springer International Publishing; 2020. pp. 99–108. [Google Scholar]
  • 22.Faulkner C., Zhou J.i., Evrard A., Bourdais G., MacLean D., Häweker H. An automated quantitative image analysis tool for the identification of microtubule patterns in plants. Traffic. 2017;18(10):683–693. doi: 10.1111/tra.12505. [DOI] [PubMed] [Google Scholar]
  • 23.Frangi A.F., Niessen W.J., Vincken K.L., Viergever M.A. Multiscale vessel enhancement filtering. In: Wells W.M., Colchester A., Delp S., editors. Vol. 1496. Springer; Berlin Heidelberg: 1998. pp. 130–137. (Medical Image Computing and Computer-Assisted Intervention—MICCAI’98). [DOI] [Google Scholar]
  • 24.Friman O., Hindennach M., Kühnel C., Peitgen H.-O. Multiple hypothesis template tracking of small 3D vessel structures. Med Image Anal. 2010;14(2):160–171. doi: 10.1016/j.media.2009.12.003. [DOI] [PubMed] [Google Scholar]
  • 25.Gan Z., Ding L., Burckhardt C., Lowery J., Zaritsky A., Sitterley K. Vimentin intermediate filaments template microtubule networks to enhance persistence in cell polarity and directed migration. Cell Systems. 2016;3(3):252–263.e8. doi: 10.1016/j.cels.2016.08.007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Gaire S.K., Wang Y., Zhang H.F., Liang D., Ying L. Accelerating 3D single-molecule localization microscopy using blind sparse inpainting. J Biomed Opt. 2021;26(02) doi: 10.1117/1.JBO.26.2.026501. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Hadjidemetriou S., Toomre D., Duncan J. Motion tracking of the outer tips of microtubules. Med Image Anal. 2008;12(6):689–702. doi: 10.1016/j.media.2008.04.004. [DOI] [PubMed] [Google Scholar]
  • 28.He K., Gkioxari G., Dollár P., Girshick R. Mask R-CNN. IEEE International Conference on Computer Vision (ICCV) 2017;2980–2988 doi: 10.1109/ICCV.2017.322. [DOI] [Google Scholar]
  • 29.Imran A., Li J., Pei Y., Yang J.-J., Wang Q. Comparative analysis of vessel segmentation techniques in retinal images. IEEE Access. 2019;7:114862–114887. doi: 10.1109/Access.6287639. [DOI] [Google Scholar]
  • 30.Isola P., Zhu J.-Y., Zhou T., Efros A.A. Image-to-Image Translation with Conditional Adversarial Networks. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017;5967–5976 doi: 10.1109/CVPR.2017.632. [DOI] [Google Scholar]
  • 31.Jasnin M., Crevenna A. Quantitative analysis of filament branch orientation in listeria actin comet tails. Biophys J. 2016;110(4):817–826. doi: 10.1016/j.bpj.2015.07.053. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Jayadevappa D., Kumar S.s., Murty D. Medical image segmentation algorithms using deformable models: a review. IETE Tech Rev. 2011;28(3):248. doi: 10.4103/0256-4602.81244. [DOI] [Google Scholar]
  • 33.Jin, L., Liu, B., Zhao, F., Hahn, S., Dong, B., Song, R., Elston, T. C., Xu, Y., & Hahn, K. M. (2020). Deep learning enables structured illumination microscopy with low light levels and enhanced speed. Nature Communications, 11, 1934. https://doi.org/10.1038/s41467-020-15784-x. [DOI] [PMC free article] [PubMed]
  • 34.Kapoor V., Hirst W.G., Hentschel C., Preibisch S., Reber S. MTrack: automated detection, tracking, and analysis of dynamic microtubules. Sci Rep. 2019;9:3794. doi: 10.1038/s41598-018-37767-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Kass M., Witkin A., Terzopoulos D. Snakes: active contour models. Int J Comput Vision. 1988;1(4):321–331. doi: 10.1007/BF00133570. [DOI] [Google Scholar]
  • 36.Kervrann C., Blestel S., Chretien D. Conditional random fields for tubulin-microtubule segmentation in cryo-electron tomography. IEEE Int Conf Image Processing (ICIP) 2014;2014:2080–2084. doi: 10.1109/ICIP.2014.7025417. [DOI] [Google Scholar]
  • 37.Khater I.M., Nabi I.R., Hamarneh G. A review of super-resolution single-molecule localization microscopy cluster analysis and quantification methods. Patterns. 2020;1(3):100038. doi: 10.1016/j.patter.2020.100038. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Kirillov A., He K., Girshick R., Rother C., Dollar P. Panoptic segmentation. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2019;2019:9396–9405. doi: 10.1109/CVPR.2019.00963. [DOI] [Google Scholar]
  • 39.Kong, K. Y., Marcus, A. I., Giannakakou, P., & Wang, M. D. (2007). Using Particle Filter to Track and Model Microtubule Dynamics. 2007 IEEE International Conference on Image Processing, V-517-V–520. https://doi.org/10.1109/ICIP.2007.4379879
  • 40.Kotsur D., Yakobenchuk R., Leube R.E., Windoffer R., Mattes J. An Algorithm for Individual Intermediate Filament Tracking. In: Lepore N., Brieva J., Romero E., Racoceanu D., Joskowicz L., editors. Vol. 11379. Springer International Publishing; 2019. pp. 66–74. (Processing and Analysis of Biomedical Information). [DOI] [Google Scholar]
  • 41.Lavoie-Cardinal F., Bilodeau A., Lemieux M., Gardner M.-A., Wiesner T., Laramée G. Neuronal activity remodels the F-actin based submembrane lattice in dendrites but not axons of hippocampal neurons. Sci Rep. 2020;10(1) doi: 10.1038/s41598-020-68180-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Lee, H.-C., Cherng, S. T., Miotto, R., & Dudley, J. T. (2019). Enhancing high-content imaging for studying microtubule networks at large-scale. In F. Doshi-Velez, J. Fackler, K. Jung, D. C. Kale, R. Ranganath, B. C. Wallace, & J. Wiens (Eds.), Proceedings of the Machine Learning for Healthcare Conference, MLHC 2019, 9-10 August 2019, Ann Arbor, Michigan, USA (Vol. 106, pp. 592–613). PMLR. http://proceedings.mlr.press/v106/lee19a.html
  • 43.Lee T.C., Kashyap R.L., Chu C.N. Building skeleton models via 3-D medial surface axis thinning algorithms. CVGIP Graphical Models and Image Processing. 1994;56(6):462–478. doi: 10.1006/cgip.1994.1042. [DOI] [Google Scholar]
  • 44.Li H., Shen T., Vavylonis D., Huang X. Actin Filament Tracking Based on Particle Filters and Stretching Open Active Contour Models. In: Yang G.-.-Z., Hawkes D., Rueckert D., Noble A., Taylor C., editors. Vol. 5762. Springer; Berlin Heidelberg: 2009. pp. 673–681. (Medical Image Computing and Computer-Assisted Intervention – MICCAI 2009). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Li H., Shen T., Smith M.B., Fujiwara I., Vavylonis D., Huang X. Automated actin filament segmentation, tracking and tip elongation measurements based on open active contour models. IEEE International Symposium on Biomedical Imaging: From Nano to Macro. 2009;2009:1302–1305. doi: 10.1109/ISBI.2009.5193303. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.Li H., Shen T., Vavylonis D., Huang X. Vol. 6361. Springer; Berlin Heidelberg: 2010. Actin Filament Segmentation Using Spatiotemporal Active-Surface and Active-Contour Models; pp. 86–94. (Medical Image Computing and Computer-Assisted Intervention – MICCAI 2010). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Liu Y., Kolagunda A., Treible W., Nedo A., Caplan J., Kambhamettu C. Intersection to Overpass: Instance Segmentation on Filamentous Structures With an Orientation-Aware Neural Network and Terminus Pairing Algorithm. IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2019;2019:125–133. doi: 10.1109/CVPRW.2019.00021. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Liu Y., Treible W., Kolagunda A., Nedo A., Saponaro P., Caplan J. Densely Connected Stacked U-network for Filament Segmentation in Microscopy Images. In: Leal-Taixé L., Roth S., editors. Vol. 11134. Springer International Publishing; 2019. pp. 403–411. (Computer Vision – ECCV 2018 Workshops). [DOI] [Google Scholar]
  • 49.Liu Y., Nedo A., Seward K., Caplan J., Kambhamettu C. Quantifying actin filaments in microscopic images using keypoint detection techniques and a fast marching algorithm. IEEE International Conference on Image Processing (ICIP) 2020;2020:2506–2510. doi: 10.1109/ICIP40778.2020.9191337. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50.Ljosa, V., Sokolnicki, K. L., & Carpenter, A. E. (2012). Annotated high-throughput microscopy image sets for validation. Nature Methods, 9, 637–637. https://doi.org/10.1038/nmeth.2083 [DOI] [PMC free article] [PubMed]
  • 51.Magliaro C., Callara A.L., Vanello N., Ahluwalia A. Gotta Trace ‘em All: a mini-review on tools and procedures for segmenting single neurons toward deciphering the structural connectome. Front Bioeng Biotechnol. 2019;7:202. doi: 10.3389/fbioe.2019.00202. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.Marquez-Neila P., Baumela L., Alvarez L. A morphological approach to curvature-based evolution of curves and surfaces. IEEE Trans Pattern Anal Mach Intell. 2014;36(1):2–17. doi: 10.1109/TPAMI.2013.106. [DOI] [PubMed] [Google Scholar]
  • 53.Martinez N.J., Titus S.A., Wagner A.K., Simeonov A. High-throughput fluorescence imaging approaches for drug discovery using in vitro and in vivo three-dimensional models. Expert Opin Drug Discov. 2015;10(12):1347–1361. doi: 10.1517/17460441.2015.1091814. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54.Masoudi Samira, Razi Afsaneh, Wright Cameron H.G., Gatlin Jesse C., Bagci Ulas. Instance-level microtubule tracking. IEEE Trans Med Imaging. 2020;39(6):2061–2075. doi: 10.1109/TMI.2019.2963865. [DOI] [PubMed] [Google Scholar]
  • 55.Mattheyses A.L., Simon S.M., Rappoport J.Z. Imaging with total internal reflection fluorescence microscopy for the cell biologist. J Cell Sci. 2010;123(21):3621–3628. doi: 10.1242/jcs.056218. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56.Maurer Sebastian P., Cade Nicholas I., Bohner Gergő, Gustafsson Nils, Boutant Emmanuel, Surrey Thomas. EB1 accelerates two conformational transitions important for microtubule maturation and dynamics. Curr Biol. 2014;24(4):372–384. doi: 10.1016/j.cub.2013.12.042. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57.Meyer, F. (1991). Un algorithme optimal pour la ligne de partage des eaux. In 8ème Congrès de Reconnaissance Des Forces et Intelligence Artificielle, 2, 847–857.
  • 58.Minaee S., Boykov Y.Y., Porikli F., Plaza A.J., Kehtarnavaz N., Terzopoulos D. Image segmentation using deep learning: a survey. IEEE Trans Pattern Anal Mach Intell. 2021;1–1 doi: 10.1109/TPAMI.2021.3059968. [DOI] [PubMed] [Google Scholar]
  • 59.Moccia S., De Momi E., El Hadji S., Mattos L.S. Blood vessel segmentation algorithms—review of methods, datasets and evaluation metrics. Comput Methods Programs Biomed. 2018;158:71–91. doi: 10.1016/j.cmpb.2018.02.001. [DOI] [PubMed] [Google Scholar]
  • 60.Nanguneri, S., Pramod, R. T., Efimova, N., Das, D., Jose, M., Svitkina, T., & Nair, D. (2019). Characterization of Nanoscale Organization of F-Actin in Morphologically Distinct Dendritic Spines In Vitro Using Supervised Learning. Eneuro, 6, ENEURO.0425-18.2019. https://doi.org/10.1523/ENEURO.0425-18.2019 [DOI] [PMC free article] [PubMed]
  • 61.Nguyen Uyen T.V., Bhuiyan Alauddin, Park Laurence A.F., Ramamohanarao Kotagiri. An effective retinal blood vessel segmentation method using multi-scale line detection. Pattern Recogn. 2013;46(3):703–715. doi: 10.1016/j.patcog.2012.08.009. [DOI] [Google Scholar]
  • 62.Nurgaliev, D., Gatanov, T., & Needleman, D. J. (2010). Automated Identification of Microtubules in Cellular Electron Tomography. In Methods in Cell Biology (Vol. 97, pp. 475–495). Elsevier. https://doi.org/10.1016/S0091-679X(10)97025-8 [DOI] [PubMed]
  • 63.Ouyang Wei, Aristov Andrey, Lelek Mickaël, Hao Xian, Zimmer Christophe. Deep learning massively accelerates super-resolution localization microscopy. Nat Biotechnol. 2018;36(5):460–468. doi: 10.1038/nbt.4106. [DOI] [PubMed] [Google Scholar]
  • 64.Özdemir B., Asgharzadeh P., Birkhold A.I., Mueller S.J., Röhrle O., Reski R. Cytological analysis and structural quantification of FtsZ1-2 and FtsZ2-1 network characteristics in Physcomitrella patens. Sci Rep. 2018;8:11165. doi: 10.1038/s41598-018-29284-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 65.Park D. Quantification of fibers through automatic fiber reconstruction from 3D fluorescence confocal images. J Adv Inform Technol Convergence. 2020;10:25–36. doi: 10.14801/JAITC.2020.10.1.25. [DOI] [Google Scholar]
  • 66.Prahl, L. S., Castle, B. T., Gardner, M. K., & Odde, D. J. (2014). Quantitative Analysis of Microtubule Self-assembly Kinetics and Tip Structure. In Methods in Enzymology (Vol. 540, pp. 35–52). Elsevier. https://doi.org/10.1016/B978-0-12-397924-7.00003-0 [DOI] [PubMed]
  • 67.Rigort Alexander, Günther David, Hegerl Reiner, Baum Daniel, Weber Britta, Prohaska Steffen. Automated segmentation of electron tomograms for a quantitative description of actin filament networks. J Struct Biol. 2012;177(1):135–144. doi: 10.1016/j.jsb.2011.08.012. [DOI] [PubMed] [Google Scholar]
  • 68.Rogge H., Artelt N., Endlich N., Endlich K. Automated segmentation and quantification of actin stress fibres undergoing experimentally induced changes. J Microsc. 2017;268(2):129–140. doi: 10.1111/jmi.12593. [DOI] [PubMed] [Google Scholar]
  • 69.Ronneberger O., Fischer P., Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In: Navab N., Hornegger J., Wells W.M., Frangi A.F., editors. Vol. 9351. Springer International Publishing; 2015. pp. 234–241. (Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015). [DOI] [Google Scholar]
  • 70.Ruhnow Felix, Zwicker David, Diez Stefan. Tracking single particles and elongated filaments with nanometer precision. Biophys J. 2011;100(11):2820–2828. doi: 10.1016/j.bpj.2011.04.023. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 71.Samuel, P. M., & Veeramalai, T. Review on retinal blood vessel segmentation – an algorithmic perspective. International Journal of Biomedical Engineering and Technology, 34, 31. https://doi.org/10.1504/IJBET.2020.110362
  • 72.Sandberg Kristian, Brega Moorea. Segmentation of thin structures in electron micrographs using orientation fields. J Struct Biol. 2007;157(2):403–415. doi: 10.1016/j.jsb.2006.09.007. [DOI] [PubMed] [Google Scholar]
  • 73.Sahl S.J., Hell S.W., Jakobs S. Fluorescence nanoscopy in cell biology. Nat Rev Mol Cell Biol. 2017;18(11):685–701. doi: 10.1038/nrm.2017.71. [DOI] [PubMed] [Google Scholar]
  • 74.Sato Yoshinobu, Nakajima Shin, Shiraga Nobuyuki, Atsumi Hideki, Yoshida Shigeyuki, Koller Thomas. Three-dimensional multi-scale line filter for segmentation and visualization of curvilinear structures in medical images. Med Image Anal. 1998;2(2):143–168. doi: 10.1016/S1361-8415(98)80009-1. [DOI] [PubMed] [Google Scholar]
  • 75.Sazzed S., Song J., Kovacs J., Wriggers W., Auer M., He J. Tracing actin filament bundles in three-dimensional electron tomography density maps of hair cell stereocilia. Molecules. 2018;23:882. doi: 10.3390/molecules23040882. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 76.Serag Ahmed, Wilkinson Alastair G., Telford Emma J., Pataky Rozalia, Sparrow Sarah A., Anblagan Devasuda. SEGMA: an automatic SEGMentation approach for human brain MRI using sliding window and random forests. Front Neuroinf. 2017;11 doi: 10.3389/fninf.2017.00002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 77.Shelhamer E., Long J., Darrell T. Fully convolutional networks for semantic segmentation. IEEE Trans Pattern Anal Mach Intell. 2017;39(4):640–651. doi: 10.1109/TPAMI.2016.2572683. [DOI] [PubMed] [Google Scholar]
  • 78.Smith Matthew B., Li Hongsheng, Shen Tian, Huang Xiaolei, Yusuf Eddy, Vavylonis Dimitrios. Segmentation and tracking of cytoskeletal filaments using open active contours. Cytoskeleton. 2010;67(11):693–705. doi: 10.1002/cm.20481. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 79.Steger, C. (1998). An Unbiased Detector of Curvilinear Structures. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(2), 113–125. https://doi.org/10.1109/34.659930
  • 80.Thul Peter J., Lindskog Cecilia. The human protein atlas: a spatial map of the human proteome: the Human Protein Atlas. Protein Sci. 2018;27(1):233–244. doi: 10.1002/pro.3307. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 81.Tsechpenakis, G. (2011). Deformable Model-Based Medical Image Segmentation. In A. S. El-Baz, R. Acharya U, M. Mirmehdi, & J. S. Suri (Eds.), Multi Modality State-of-the-Art Medical Image Segmentation and Registration Methodologies (pp. 33–67). Springer US. https://doi.org/10.1007/978-1-4419-8195-0_2.
  • 82.Tsugawa Satoru, Hervieux Nathan, Hamant Oliver, Boudaoud Arezki, Smith Richard S., Li Chun-Biu. Extracting subcellular fibrillar alignment with error estimation: application to microtubules. Biophys J. 2016;110(8):1836–1844. doi: 10.1016/j.bpj.2016.03.011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 83.Usov Ivan, Mezzenga Raffaele. FiberApp: an open-source software for tracking and analyzing polymers, filaments, biomacromolecules, and fibrous objects. Macromolecules. 2015;48(5):1269–1280. doi: 10.1021/ma502264c. [DOI] [Google Scholar]
  • 84.Valdman David, Atzberger Paul J., Yu Dezhi, Kuei Steve, Valentine Megan T. Spectral analysis methods for the robust measurement of the flexural rigidity of biopolymers. Biophys J. 2012;102(5):1144–1153. doi: 10.1016/j.bpj.2012.01.045. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 85.Webb Rebecca L., Rozov Orr, Watkins Simon C., McCartney Brooke M. Using total internal reflection fluorescence (TIRF) microscopy to visualize cortical actin and microtubules in the Drosophila syncytial embryo. Dev Dyn. 2009;238(10):2622–2632. doi: 10.1002/dvdy.22076. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 86.Wellner, P. D. (1993). Adaptive Thresholding for the DigitalDesk (p. 19) [EuroPARC Technical Report EPC-93-110]. Rank Xerox Research Centre.
  • 87.Xia Shumin, Lim Ying Bena, Zhang Zhen, Wang Yilin, Zhang Shan, Lim Chwee Teck. Nanoscale architecture of the cortical actin cytoskeleton in embryonic stem cells. Cell Rep. 2019;28(5):1251–1267.e7. doi: 10.1016/j.celrep.2019.06.089. [DOI] [PubMed] [Google Scholar]
  • 88.Xiao X., Geyer V.F., Bowne-Anderson H., Howard J., Sbalzarini I.F. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets. Med Image Anal. 2016;32:157–172. doi: 10.1016/j.media.2016.03.007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 89.Xu Ting, Langouras Christos, Koudehi Maral Adeli, Vos Bart E., Wang Ning, Koenderink Gijsje H. Automated tracking of biopolymer growth and network deformation with TSOAX. Sci Rep. 2019;9(1) doi: 10.1038/s41598-018-37182-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 90.Xu T., Li H., Shen T., Ojkic N., Vavylonis D., Huang X. Extraction and analysis of actin networks based on Open Active Contour models. IEEE International Symposium on Biomedical Imaging: From Nano to Macro. 2011;2011:1334–1340. doi: 10.1109/ISBI.2011.5872647. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 91.Xu Ting, Vavylonis Dimitrios, Huang Xiaolei. 3D actin network centerline extraction with multiple active contours. Med Image Anal. 2014;18(2):272–284. doi: 10.1016/j.media.2013.10.015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 92.Xu Ting, Vavylonis Dimitrios, Tsai Feng-Ching, Koenderink Gijsje H., Nie Wei, Yusuf Eddy. SOAX: a software for quantification of 3D biopolymer networks. Sci Rep. 2015;5(1) doi: 10.1038/srep09081. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 93.Yue G., Jiang L., Liu C., Yang G., Ai J., Chen X. Automated Segmentation of Microtubules in Cryo-EM Images with Excessive White Noise. In: Kim K.J., Joukov N., editors. Information Science and Applications (ICISA) 2016. Vol. 376. Springer Singapore; 2016. [DOI] [Google Scholar]
  • 94.Zhang Zhen, Nishimura Yukako, Kanchanawong Pakorn, Lippincott-Schwartz Jennifer. Extracting microtubule networks from superresolution single-molecule localization microscopy data. Mol Biol Cell. 2017;28(2):333–345. doi: 10.1091/mbc.e16-06-0421. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 95.Zhang M., Zhou Y., Zhao J., Man Y., Liu B., Yao R. A survey of semi- and weakly supervised semantic segmentation of images. Artif Intell Rev. 2020;53(6):4259–4288. doi: 10.1007/s10462-019-09792-7. [DOI] [Google Scholar]
  • 96.Zhou Z.-H. A brief introduction to weakly supervised learning. Natl Sci Rev. 2018;5(1):44–53. doi: 10.1093/nsr/nwx106. [DOI] [Google Scholar]
  • 97.Zhu J.-Y., Park T., Isola P., Efros A.A. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. IEEE International Conference on Computer Vision (ICCV) 2017;2242–2251 doi: 10.1109/ICCV.2017.244. [DOI] [Google Scholar]
