Simple Summary
Microscopy is central to many areas of biomedical science research, including cancer research, and is critical for understanding basic pathophysiology, mechanisms of action, and treatment response. However, analysis of the numerous images generated from microscopy readouts is usually performed manually, a process that is tedious and time-consuming. Moreover, manual analysis of microscopy images may limit both accuracy and reproducibility. Here, we used an artificial intelligence approach to analyze tunneling nanotubes (TNTs), structures formed by cancer cells that may contribute to their aggressiveness but are hard to identify and count. Our approach labeled and detected TNTs and cancer cells from microscopy images and generated TNT-to-cell ratios comparable to those of human experts. Continued refinement of this process will provide a new approach to the analysis of TNTs. Additionally, this approach has the potential to enhance drug screens intended to assess the therapeutic efficacy of experimental agents and to reproducibly assess TNTs as a potential biomarker of response to cancer therapy.
Abstract
Background: Tunneling nanotubes (TNTs) are cellular structures connecting cell membranes and mediating intercellular communication. TNTs are manually identified and counted by a trained investigator; however, this process is time-intensive. We therefore sought to develop an automated approach for quantitative analysis of TNTs. Methods: We used a convolutional neural network (U-Net) deep learning model to segment phase contrast microscopy images of both cancer and non-cancer cells. Our method comprised preprocessing and model development. We developed a new preprocessing method to label TNTs on a pixel-wise basis. Two sequential models were employed to detect TNTs. First, we identified the regions of images with TNTs by implementing a classification algorithm. Second, we fed parts of the image classified as TNT-containing into a modified U-Net model to estimate TNTs on a pixel-wise basis. Results: The algorithm detected 49.9% of human expert-identified TNTs, counted TNTs, and calculated the number of TNTs per cell, or TNT-to-cell ratio (TCR); it also detected TNTs that were not originally detected by the experts. The model had a precision of 0.41, a recall of 0.26, and an F1 score of 0.32 on a test dataset. The predicted and true TCRs were not significantly different across the training and test datasets (p = 0.78). Conclusions: Our automated approach labeled and detected TNTs and cells imaged in culture, yielding TCRs comparable to those determined by human experts. Future studies will aim to improve the accuracy, precision, and recall of the algorithm.
Keywords: artificial intelligence, automated cell counting, biomarker, cancer, cells, deep learning, machine learning, microscopy, TNT, tunneling nanotubes
1. Introduction
Microscopy is central to many areas of biomedical science research, including cancer research. It allows researchers to understand basic pathophysiology, mechanisms of action, and treatment response. However, analysis of the numerous images generated from microscopy readouts is usually performed manually, a process that is tedious and time-consuming and that may limit both accuracy and reproducibility. Machine learning (ML) and artificial intelligence (AI) approaches are emerging as a means to efficiently analyze large imaging datasets and thereby accelerate the capacity for data interpretation [1,2,3,4,5,6,7,8]. With the advent of ML/AI approaches, novel morphological features of cells that were previously not detectable, analyzable, or quantifiable in microscopy images can now be assessed for utility as emerging imaging-based biomarkers [9,10,11,12,13,14,15,16,17,18].
The field of intercellular communication has gained significant traction and interest over the past decade, catalyzed by characterization and improved methods of identification of extracellular vesicles and other modes of cell–cell signaling [19,20,21,22,23,24,25]. The niche of contact-dependent cell signaling mechanisms represents an emerging aspect of this field, led by (a) advances in understanding the role of tunneling nanotubes (TNTs) in normal and pathologic physiology across health and disease and (b) discoveries related to tumor microtubes in glioblastoma and other cancer types [26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43]. The current study focuses on TNTs, which are long, membranous, F-actin-based cell protrusions that connect cells at short and long distances and are capable of acting as conduits for direct (often bi-directional) signaling between connected cells [44,45,46,47]. TNTs were first identified in the PC12 cell line (rat pheochromocytoma), and the term was coined in 2004 by Rustom et al. [44]. Since then, this unique form of cellular protrusion has been identified in many cell types, including but not limited to immune cells, cancer cells, and neuronal cells [35,45,46,48,49,50,51,52,53]. While TNTs are ubiquitous across many cell types, we and others have shown that they are upregulated in invasive forms of cancer [45,53,54]. There is currently no validated method to differentiate TNTs of cancer cells from those of non-cancer-derived cells; however, the description of a longer and wider form of cell protrusion, termed ‘tumor microtubes’ and shown in an orthotopic model of malignant gliomas, has shed light on the possible differences of this class of protrusions amongst malignant cell populations [30,36,43,55].
The function, ultrastructural characteristics, and mechanisms of TNTs are all under active investigation by many groups [22,43,56,57,58,59,60,61]. Nonetheless, a distinct, specific, and reproducibly testable structural biomarker of TNTs has yet to be identified, and this lack has presented a challenge to this emerging field of cell biology. Thus, identification of TNTs has relied on visual identification of structural characteristics, including connections between two or more cells and the non-adherent nature of their protrusion ‘bridges’ when cells are cultured in vitro [29,54,59,60,62,63,64,65,66,67]; this latter feature helps to distinguish TNTs from other actin-based protrusions that adhere to the substratum in in vitro tissue culture and are more often associated with cell motility than cell–cell communication [67,68,69]. Manual visual identification of TNTs is a tedious and arduous process that also introduces the potential for lack of reproducibility. A better approach to maximize reproducibility across the field would be validation and application of artificial intelligence-based approaches that could identify TNTs with high specificity and sensitivity and accurately distinguish them from other forms of membrane-based extracellular extensions. Precise quantitative analysis of TNTs would also provide the statistical information needed to monitor the progression of various diseases. In this study, we sought to construct an algorithm that accomplishes this by adopting the well-known U-Net deep learning model to segment images and detect TNTs [6].
2. Materials and Methods
2.1. Cell Lines
We used the human MSTO-211H (malignant pleural mesothelioma of biphasic histology) cell line, which was purchased from American Type Culture Collection in 2019 (ATCC, Rockville, MD, USA). Hereafter, “MSTO” will be used to refer to this cell line. The cells were grown in RPMI-1640. The culture medium was supplemented with 10% fetal bovine serum (FBS), 1% penicillin-streptomycin, 1× GlutaMAX (all from Gibco Life Technologies, Gaithersburg, MD, USA), and 0.1% Normocin™ anti-mycoplasma reagent (Invivogen, San Diego, CA, USA). Cells were maintained in a humidified incubator at 37 °C with 5% carbon dioxide. We chose to plate the cells on regular tissue culture-treated plastic so that the AI training would have to overcome the inherent scratches present on plastic dishes.
2.2. Microscopy Imaging
Images were taken when the cells were 30–40% confluent and individual TNTs and cells could easily be distinguished. Phase contrast images were acquired on a Zeiss AxioObserver M1 Microscope using a 20× PlanApo-Chromat objective with a numerical aperture of 0.8. A 5 × 5 set of tiled images was acquired using a Zeiss Axio Cam MR camera with a pixel size of 6.7 × 6.7 µm, resulting in a spatial resolution (dx = dy) of 0.335 µm/pixel at 20×. Tiled images were stitched into one image with Zen2 Blue software (Carl Zeiss Microscopy, White Plains, NY, USA).
2.3. Manual Identification of TNTs
TNTs were identified as previously described by our group and others [29,44,54,70]. Identification is based on three parameters: (i) lack of adherence to the substratum of tissue culture plates, including visualization of TNTs passing over adherent cells; (ii) TNTs connecting two cells, or extending from one cell if the width of the extension is estimated to be <1000 nm; and (iii) a narrow base at the site of extrusion from the plasma membrane. Fiji [71] was used for creating the training images: TNTs were traced manually using the line tool, and the set of annotations was converted to a mask.
2.4. Initial Verification of TNTs Using Current Standard Methodology: Visual Identification
In phase contrast images, TNTs appear as elongated structures no thicker than 1 µm and ranging in length from 10 µm to over 100 µm. TNTs connect at least two cells, usually in a straight line; they occasionally make angles but are not usually sinusoidal or wave-like when cells are cultured in vitro. TNTs can be considerably thinner than the cell borders in the images and occasionally become invisible against the image background. They tend to have a fairly uniform thickness from end to end, although portions along the tubes may bulge due to larger cargo trafficking internally; the term ‘gondola’ has been applied to describe this phenomenon in some previously published studies in the field [46,72]. TNTs often protrude from the membrane interface with a characteristically narrow or minimally cone-shaped base, in contrast to other, thicker forms of cell-based podia protrusions [67]. In comparison to other cellular protrusions, TNTs uniquely have a 3-dimensional suspended nature above the substratum in vitro; these suspended TNTs can cross over other adherent cells. Although these basic TNT characteristics are familiar to researchers in the field of TNT cell biology, they are not readily identifiable by previously utilized general machine learning algorithms.
2.5. Human Expert Review of Stitched MSTO Images and Identification of TNTs
Four human experts independently reviewed the images to detect structures meeting criteria as TNTs. The role of the human experts was to identify the presence (i.e., yes or no) of TNTs that connected two cells, rather than trying to label the TNTs on a pixel-by-pixel basis (this was left to the machine learning algorithm). After independent review, structures identified by three or four of the experts were classified by consensus as actual TNTs for analysis purposes; structures identified by two of the four experts were reviewed together by all experts and a consensus decision was made whether to classify them as actual TNTs or not; structures identified by one of the experts were not classified as actual TNTs. Next, we combined the knowledge from the human experts (the structures classified by consensus as actual TNTs) with the computational abilities of the deep learning model. We used an automated method to label TNTs on a pixel-by-pixel basis. This method was guided by the initial human-based labeling of the TNTs. Further details are provided in the Supplementary Materials.
3. Results
Supplementary Table S1 summarizes the results of inter-rater agreement for TNT identification among the four human experts using Cohen’s kappa statistic. This reflects the partially subjective nature of TNT identification by human experts, and therefore the need for a deep learning-based algorithm to detect and quantify TNTs in a more reproducible manner.
3.1. General Approach to the Automated Detection of TNTs
We used the free version of Google Colab with hardware specifications of 12–14 GB of RAM, an Intel® Xeon® CPU at 2.20 GHz, 30–35 GB of available disk space, and an Nvidia K80/T4 GPU with 12 GB/16 GB of RAM (https://colab.research.google.com/drive/151805XTDg--dgHb3-AXJCpnWaqRhop_2, last accessed: 5 July 2022). Figure 1 depicts the possible outcomes of ML algorithms: the detection of TNTs (Figure 1A,B) and the mislabeling of other cellular features as TNTs (Figure 1C). Due to the presence of noise, uneven illumination, irregular cellular shapes, and the thinness of TNT lines relative to cellular membranes, the visibility of TNTs is significantly reduced. TNTs are surrounded by darker intercellular regions, and occasionally the TNT lines become invisible, merging with the darker background. Our method is implemented on 2D phase-contrast images and consists of three main components: a preprocessing step to de-noise the dataset and enhance label quality; a sequence of two deep learning models to detect the TNTs; and a final step to count the TNTs and cells and provide a measure of the TNT-to-cell ratio (TCR) in the images. The TCR metric is essentially the same as the ‘TNT index’ used in our previous reports to indicate the average number of TNTs per cell across multiple fields of view for a given set of cell culture conditions [45,53,54]. The TCR or TNT index can be used to monitor changes in cell culture over time and/or following drug exposure or other forms of treatment [45,53,54].
Figure 1.
(A) Two TNTs that were successfully captured by the deep learning model (true positives). (B) The image from (A), enhanced for improved TNT visibility. (C) A TNT-appearing structure that was mistakenly identified as a TNT by the model (false positive). Images (B,C) were generated with Fiji software and adjusted for brightness and contrast by setting the minimum and maximum displayed values to 20 and 100, respectively, for improved visibility of the structures (this image modification is not necessary for the deep learning model to work).
3.2. Pre-Processing
3.2.1. Removal of Tile Shadows
The original images were created by taking a grid of 5 × 5 tiled images, each measuring 1388 × 1040 pixels, and then stitching them together (Figure 2). This process resulted in shadows along the stitched edges, which significantly degraded the model performance at later stages. To remove those shadows, we used BaSiC, an image correction method for background and shading correction for image sequences, available as a Fiji/ImageJ plugin [73].
Figure 2.
(A) Tiled image with shadows at edges of the tiles and (B) the same image with the shadows removed to prevent a high false-positive detection rate.
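As a rough illustration of this correction step, the sketch below applies a pseudo-flat-field correction, dividing each tile by a heavily blurred copy of itself to flatten the illumination. The study itself used the BaSiC plugin in Fiji, so this stand-in only conveys the idea; the `tiles` variable and the blur width are assumptions.

```python
# Simplified stand-in for BaSiC-style shading correction (illustrative only).
import cv2
import numpy as np

def correct_shading(tile: np.ndarray, blur_sigma: float = 101.0) -> np.ndarray:
    """Divide a tile by its heavily blurred copy to flatten uneven illumination."""
    tile = tile.astype(np.float32) + 1e-6
    # A very wide Gaussian approximates the smooth shading field of the tile.
    background = cv2.GaussianBlur(tile, (0, 0), blur_sigma)
    flat = tile / background
    # Rescale to 8-bit for downstream processing.
    return cv2.normalize(flat, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# tiles: hypothetical list of 1388 x 1040 grayscale tile arrays
corrected = [correct_shading(t) for t in tiles]
```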
3.2.2. Label Correction
To train an automated model, it is critical to obtain accurately labeled TNTs on the images in the training set. Since TNTs will be automatically identified pixel-wise in later stages of the model, it is essential to label the TNTs in fine detail. However, when labeling visible TNTs, human-marked annotations do not fully capture the width of the TNTs pixel-wise, which in turn degrades model performance.
Figure 3 Step 1 shows the general outline of the preprocessing workflow, not including the removal of stitching shadows shown in Figure 2. To improve the quality of the labels on an image, two copies of that image are created. One of the copies is deblurred using Richardson–Lucy deconvolution with a 7 × 7 Gaussian kernel with a standard deviation of 20 [74,75]. The deblurred copy is then subtracted from the original image. The resulting image is converted to a black-and-white 8-bit binary format and is once again duplicated. In one of these images, all visible TNTs, including their entire width, are colored with black ink. An XOR (bitwise exclusive or) operation is then performed between the TNT-marked image and the duplicate unmarked image; the resulting image yields the TNT masks [76].
Figure 3.
Flow diagram of AI-based TNT detection. Images were (Step 1) pre-processed for label correction and (Step 2) subdivided into a matrix of smaller image regions (‘patches’) that were classified as either containing or not containing any TNT structures, then classified pixel-wise as to whether each pixel belonged to a TNT structure (see Supplementary Figures S1 and S2 and Supplementary Table S2). In (Step 3), the numbers of TNTs and cells were counted, the TNT-to-cell ratio (TCR) was calculated (each colored object is an individual cell), and a confusion matrix was reported (see Table 2 and Supplementary Table S3). XOR = bitwise exclusive or operator.
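A minimal sketch of the label-correction workflow above (Figure 3 Step 1), assuming scikit-image and OpenCV; the file names, binarization threshold, and iteration count are illustrative, since the paper specifies only the 7 × 7 Gaussian kernel with a standard deviation of 20.

```python
import cv2
import numpy as np
from skimage.restoration import richardson_lucy

img = cv2.imread("stitched.png", cv2.IMREAD_GRAYSCALE).astype(np.float64) / 255.0

# 7x7 Gaussian point-spread function with a standard deviation of 20.
g = cv2.getGaussianKernel(7, 20)
psf = g @ g.T

# Deblur one copy and subtract it from the original image.
deblurred = richardson_lucy(img, psf, num_iter=30)  # iteration count assumed
residual = img - deblurred

# Convert to an 8-bit black-and-white binary image (threshold assumed).
binary = ((residual > residual.mean()) * 255).astype(np.uint8)

# "marked.png" is the duplicate in which all visible TNTs were inked over;
# XOR against the unmarked duplicate leaves only the TNT mask pixels.
marked = cv2.imread("marked.png", cv2.IMREAD_GRAYSCALE)
tnt_mask = cv2.bitwise_xor(binary, marked)
```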
3.3. Detecting TNT Regions
This section introduces our deep learning pipeline approach to detect and count TNTs.
3.3.1. Classifying TNT-Inclusive Regions
With respect to the total area of an image, TNTs constitute a small percentage of the pixels. We therefore approached the TNT detection problem in two steps. First, we trained a deep learning-based classification model to rule out the large pockets of TNT-free space in the images; our aim was to reduce the computational burden of detecting and segmenting TNTs in the next step, in which we trained a second deep learning model to identify the TNT pixels (Figure 3 Step 2). The first step also helped us break a single large image into smaller pieces and thus increased the number of training data points for our models (Figure 4, Supplementary Figures S1 and S2).
Figure 4.
(A) Original image containing large pockets of TNT-free spaces. (B) After correcting edge artefacts as shown in Figure 2, the TNT-containing “patches” (yellow squares) showed where TNTs were captured within the matrix of smaller image regions. See Supplementary Figures S1 and S2.
The original images in the training dataset were stitched together, resulting in an image size of 6283 × 4687 pixels. The images were then scanned with a sliding window of 512 × 512 pixels and a stride of 10 pixels, and the image region enclosed by the window was extracted as a patch. A patch was labeled “1” if a certain number of prelabeled TNT pixels fell within the window and those pixels were located close to the center of the window. The reasoning behind checking whether the TNT pixels are close to the patch center is to avoid partitioning a TNT across sequential windows and thus losing the integrity of that TNT in a training data point. We repeated the same procedure with a sliding window of 256 × 256 pixels on the images labeled “1”. That is, we first identified the TNT-including images with the bigger window, cropped them from the original image, and then scanned for TNTs with the smaller window inside the cropped images. Thus, we formed two sets of images: a training set of 512 × 512-pixel patches and another of 256 × 256-pixel patches. It is important to note that our method generated thousands of subimages and sub-subimages from each of the four image sets studied here. For extensive details, please refer to the Supplementary Materials, Supplementary Figures S1 and S2, and Supplementary Table S2.
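A sketch of this patch-extraction scheme follows; the minimum TNT-pixel count and the size of the central region are assumptions, since the paper states only that a window is labeled “1” when enough prelabeled TNT pixels lie near its center.

```python
import numpy as np

def extract_patches(image, tnt_mask, win=512, stride=10,
                    min_tnt_pixels=50, center_frac=0.5):
    """Slide a win x win window; label it 1 if its center holds TNT pixels."""
    patches, labels = [], []
    c0 = int(win * (1 - center_frac) / 2)   # bounds of the central sub-window
    c1 = win - c0
    for y in range(0, image.shape[0] - win + 1, stride):
        for x in range(0, image.shape[1] - win + 1, stride):
            center = tnt_mask[y + c0:y + c1, x + c0:x + c1]
            patches.append(image[y:y + win, x:x + win])
            labels.append(1 if (center > 0).sum() >= min_tnt_pixels else 0)
    return patches, labels

# The same routine is re-run with win=256 inside the patches labeled "1".
```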
To train a classification algorithm to detect TNT-including images, we employed the VGGNet (16-layer) architecture, pre-trained on the ImageNet dataset [77]. Since the earlier layers of a pre-trained model capture low-level image features, we kept them and replaced VGGNet’s output layer with three hidden layers of 512, 170, and 70 nodes, respectively, and a binary output layer. We incrementally added these dense layers as we observed improvement in the performance of the classifier. To reduce overfitting, we also introduced a dropout layer with a 60% dropout rate between each pair of new fully connected layers. We trained two instances of this model, one for the images of size 512 × 512 pixels and another for those of size 256 × 256 pixels, and used the two models sequentially to identify image patches with TNTs. Only the 256 × 256-pixel images that included TNTs were fed into the U-Net model described below.
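The paper does not state which framework was used; the sketch below reproduces the described classifier head (dense layers of 512, 170, and 70 nodes, 60% dropout between the new layers, binary output) on a frozen VGG-16 backbone in PyTorch.

```python
import torch.nn as nn
from torchvision import models

# VGG-16 pre-trained on ImageNet; freeze the convolutional feature extractor.
backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in backbone.features.parameters():
    p.requires_grad = False

# Replace the output layers: 512 -> 170 -> 70 hidden nodes, 60% dropout,
# and a single sigmoid unit for the binary "patch contains a TNT" decision.
backbone.classifier = nn.Sequential(
    nn.Linear(512 * 7 * 7, 512), nn.ReLU(), nn.Dropout(0.6),
    nn.Linear(512, 170), nn.ReLU(), nn.Dropout(0.6),
    nn.Linear(170, 70), nn.ReLU(), nn.Dropout(0.6),
    nn.Linear(70, 1), nn.Sigmoid(),
)
# Grayscale patches would be replicated to 3 channels and resized to 224 x 224
# (or the input layer adapted) before being fed to this network.
```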
3.3.2. U-Net with Attention Architecture for Segmentation
Since manual labeling of medical images is a labor-intensive and cumbersome task, automated medical image segmentation has been an active research area in the image-processing domain. After the advent of convolutional neural networks (CNNs), many variants of CNN-based models have been proposed, advancing the state of the art in image classification and semantic segmentation [78,79]. U-Net [6,80] is one of the most commonly used architectures for medical image segmentation tasks due to its efficient use of graphics processing unit memory and superior performance [81]. In this study, we used a variant of U-Net, AURA-net [82], which combines U-Net with transfer learning to accelerate training and with attention mechanisms to help the network focus on relevant image features.
U-Net is an encoder-decoder CNN-based architecture, which is composed of downsampling (encoder network) and upsampling (decoder network) paths. The encoder network, which is a contracting path, consists of the repeated application of two 3 × 3 convolutions, each followed by a rectified linear unit (ReLU) [83] and a 2 × 2 max pooling operation [84] with stride 2. At each step in the downsampling path, the number of feature channels is doubled. When running the encoder part, the model reduces the spatial dimensions of the image at every layer while capturing the features contained in the image with the help of filters.
The decoder network consists of layers, each having (i) an upsampling of the feature map followed by a 2 × 2 up-convolution that halves the number of feature channels, (ii) a concatenation with the correspondingly cropped feature map from the encoder side of the model, and (iii) two 3 × 3 convolutions, each followed by a ReLU. When running the decoder part of the model, the spatial dimensions of the images are restored so that a prediction can be made for each pixel in the image.
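A compact PyTorch sketch of this encoder-decoder pattern (two 3 × 3 convolutions with ReLU per step, 2 × 2 max pooling on the way down, 2 × 2 up-convolutions with skip concatenation on the way up); the depth and channel counts here are illustrative, not those of AURA-net.

```python
import torch
import torch.nn as nn

def double_conv(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = double_conv(1, 64), double_conv(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.bottom = double_conv(128, 256)        # channels double per step down
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec2 = double_conv(256, 128)          # 128 skip + 128 upsampled
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = double_conv(128, 64)
        self.head = nn.Conv2d(64, 1, 1)            # per-pixel TNT logit

    def forward(self, x):                          # x: (N, 1, 256, 256)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottom(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                       # (N, 1, 256, 256)
```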
Although U-Nets are efficient in terms of training on a small number of data points, they can also benefit from transfer learning [82]. The usual transfer learning approach is to copy a certain number of layers from a pre-trained network to a target network to reduce the training time and increase model efficiency [85]. Accordingly, we replaced the encoder network with the layers of a pre-trained ResNET model [86]. ResNET is trained on ImageNet [77], a set of natural images very different from the microscopy images in our study; however, the first layers of the ResNET model detect generic, low-level image features, and thus the transferred layers generalize to images from other contexts.
Attention-based models [87] are used to suppress less relevant regions in an input image and focus on the salient features relevant to the task. Attention U-Nets have been shown to consistently improve the prediction performance of U-Net networks for various biomedical image segmentation tasks while preserving the model’s computational efficiency [81].
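A sketch of an additive attention gate in the style of Attention U-Net [87], applied to the skip connections; AURA-net’s exact formulation may differ, and this version assumes the gating and skip tensors share spatial dimensions.

```python
import torch.nn as nn

class AttentionGate(nn.Module):
    """Weights encoder skip features by a map computed from the decoder signal."""
    def __init__(self, c_gate, c_skip, c_mid):
        super().__init__()
        self.wg = nn.Conv2d(c_gate, c_mid, 1)   # project decoder (gating) signal
        self.wx = nn.Conv2d(c_skip, c_mid, 1)   # project encoder skip features
        self.psi = nn.Conv2d(c_mid, 1, 1)       # collapse to one attention map
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, gate, skip):
        # Additive attention: salient regions get weights near 1; less relevant
        # background is suppressed toward 0 before the skip concatenation.
        alpha = self.sigmoid(self.psi(self.relu(self.wg(gate) + self.wx(skip))))
        return skip * alpha
```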
We trained the U-Net model using the patches identified by the classification models described above. In training the models, we employed three loss functions, namely, binary cross-entropy (BCE) [88], Dice [89], and active contour (AC) loss [90]. Although Dice and BCE losses enforce the accuracy of predictions at the pixel level, the addition of AC loss allows consideration of area information. We adapted the use of these loss functions in our models from those used by Cohen and Uhlmann [82]. TNTs were segmented in the 256 × 256 pixel images by the U-Net model as shown in Figure 5. The AI-based model was able to recapitulate the human expert-based TNT identification.
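A hedged sketch of the combined objective: BCE plus Dice, with a simplified active-contour term (a boundary-length penalty plus region terms). The weighting and the exact AC formulation of [90] as adapted in [82] are only approximated here.

```python
import torch
import torch.nn.functional as F

def dice_loss(pred, target, eps=1.0):
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def active_contour_loss(pred, target, w_region=1.0):
    # Length term: total variation of the predicted mask (boundary smoothness).
    dy = (pred[:, :, 1:, :] - pred[:, :, :-1, :]).abs().mean()
    dx = (pred[:, :, :, 1:] - pred[:, :, :, :-1]).abs().mean()
    # Region terms: penalize foreground/background disagreement by area.
    region_in = (pred * (target - 1) ** 2).mean()
    region_out = ((1 - pred) * target ** 2).mean()
    return dx + dy + w_region * (region_in + region_out)

def combined_loss(logits, target):
    pred = torch.sigmoid(logits)
    return (F.binary_cross_entropy_with_logits(logits, target)
            + dice_loss(pred, target)
            + active_contour_loss(pred, target))
```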
Figure 5.
TNTs detected from two cropped images. (A,E) are the cropped raw images. (B,F) are the manually marked labels. (C,G) are the heatmap versions after prediction. (D,H) are the predicted TNTs.
3.4. Cell and TNT Counting
We used Cellpose, an anatomical segmentation algorithm [91], to count the number of cells in the images (Figure 3 Step 3). Cellpose utilizes a deep neural network with a U-Net style architecture and residual blocks, similar to the model used in this study for detecting TNTs. Moreover, Cellpose is trained on a dataset of various tissue types collected using fluorescence and phase contrast microscopy on different platforms, which made it an ideal candidate for this study.
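Cell counting with Cellpose might look like the sketch below; the study does not report which pretrained Cellpose model or diameter setting was used, so the generic `cyto` model and automatic diameter estimation here are assumptions.

```python
from cellpose import models

model = models.Cellpose(gpu=False, model_type="cyto")
# channels=[0, 0]: segment a single grayscale (phase contrast) channel.
masks, flows, styles, diams = model.eval(image, diameter=None, channels=[0, 0])
n_cells = int(masks.max())   # labels run 1..N, so the maximum label is the count
```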
To count TNTs, we first created an elliptically shaped kernel of size 5 × 5 pixels. We next performed a morphological transformation of the images, namely the morphological gradient, which is the difference between the dilation and erosion of the structures in the images. Given the outlines of the objects produced by this transformation, we found the contours in the images, i.e., the curves joining contiguous points along a boundary between regions of different intensities. If the area of a contour was between 400 and 2500 pixels (44.89–280.56 µm²), it was counted as a TNT. We used OpenCV, an open-source library for computer vision, to process and analyze the images [92].
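This counting rule maps directly onto OpenCV, as in the sketch below (the input binary TNT map would come from the thresholded U-Net heatmap described below).

```python
import cv2
import numpy as np

def count_tnts(tnt_map: np.ndarray) -> int:
    """Count contours of the morphological gradient with area 400-2500 px."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    # Morphological gradient = dilation - erosion: the outline of each structure.
    gradient = cv2.morphologyEx(tnt_map, cv2.MORPH_GRADIENT, kernel)
    contours, _ = cv2.findContours(gradient, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # 400-2500 px corresponds to 44.89-280.56 um^2 at 0.335 um/pixel.
    return sum(1 for c in contours if 400 <= cv2.contourArea(c) <= 2500)
```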
To evaluate the model performance, we used a separate test dataset that was not part of the training and tuning of the model. The “true” TNTs were those determined by consensus of the four human experts as described earlier.
The test image was partitioned into patches, which were fed sequentially into the classification and U-Net models. Within each patch, a heatmap was generated; the heatmaps of all patches were then stitched together to form the overall heatmap of the larger image. Following the counting rules described above, we counted and compared the number of TNTs predicted by the model vs. those identified manually by human experts (Table 1 and Supplementary Table S1). A pixel intensity threshold of 235 (range 0–255 in an 8-bit grayscale image) was chosen in the U-Net model because it maximized the sum of precision and recall (see Supplementary Figure S3). Our model was able to correctly identify 26.2% of the manually identified TNTs in the test dataset, whereas the identification rate was 49.9% for the test and training datasets combined. The precision for the test dataset was 41%. Our model generated more false-negative TNTs than false-positive ones, hence a lower recall (sensitivity) compared to precision (positive predictive value). A few of the false-positive TNTs were found to be true positives after double-checking the original images (see Supplementary Figure S4); note that we report our performance evaluations without any adjustment of true-positive numbers after this double-check. Next, we assessed the model’s ability to count predicted TNTs. For each image set, a human expert classified and counted the ML TNT predictions as FPs or TPs, and the absence of ML TNT predictions as FNs, with respect to the human expert consensus “ground truth”. Supplementary Table S3 summarizes the human expert-based and ML-based counts. A fixed-effects two-way ANOVA was performed (with factor 1 being the source of the count [i.e., human expert vs. ML model] and factor 2 being the image set evaluated [MSTO2-5]), using the F distribution (right-tailed). The results demonstrated no significant difference in human vs. ML counts of TNTs across the four image sets. For a detailed explanation of the three main reasons contributing to the generation of FPs and FNs by the model, see the end of the Supplementary Materials.
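The threshold selection can be expressed as a simple sweep, as sketched below; `match_predictions` is a hypothetical helper that matches predicted structures to expert-identified TNTs, since the paper does not specify the matching rule.

```python
def best_threshold(heatmap, true_tnts, match_predictions):
    """Return the 8-bit threshold maximizing precision + recall (235 in Figure S3)."""
    best_t, best_score = 0, -1.0
    for t in range(256):
        tp, fp, fn = match_predictions(heatmap >= t, true_tnts)  # hypothetical
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        if precision + recall > best_score:
            best_t, best_score = t, precision + recall
    return best_t
```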
Table 1.
Results of TNT detection for three training sets and one test set. FP = false positive, PPV = positive predictive value. * True, identified by human experts. F1 score = 2 × [PPV × sensitivity]/[PPV + sensitivity].
| Image Set | No. of TNTs (True *) | PPV (Precision) | Sensitivity (Recall) | No. of FPs | No. of Human Expert-Corrected FPs | F1 Score |
|---|---|---|---|---|---|---|
| Training 1 (stitched image MSTO2) | 43 | 0.67 | 0.70 | 14 | 0 | 0.68 |
| Training 2 (stitched image MSTO3) | 18 | 0.38 | 0.61 | 17 | 1 | 0.47 |
| Training 3 (stitched image MSTO4) | 33 | 0.52 | 0.42 | 13 | 1 | 0.47 |
| Test 1 (stitched image MSTO5) | 42 | 0.41 | 0.26 | 16 | 2 | 0.32 |
We next developed a new metric, the TNT-to-cell ratio (TCR), to measure TNT abundance in the images (Table 2). We counted TNTs and cells and computed the number of TNTs per 100 cells (TCR × 100). A two-tailed t-test showed no significant difference (p = 0.78) between the means of the true and predicted TCRs (see the sketch following Table 2).
Table 2.
Results reporting the tunneling nanotube (TNT)-to-cell ratio (TCR, or TNT index). * True, identified by human experts. ** Predicted, detected by the model.
| Image Set | No. of TNTs (True *) | No. of TNTs (Predicted **) | No. of Cells (from Cellpose) | TCR × 100 (True *) | TCR × 100 (Predicted **) |
|---|---|---|---|---|---|
| Training 1 (stitched image MSTO2) | 43 | 45 | 897 | 4.79 | 5.02 |
| Training 2 (stitched image MSTO3) | 18 | 29 | 777 | 2.32 | 3.73 |
| Training 3 (stitched image MSTO4) | 33 | 27 | 754 | 4.38 | 3.58 |
| Test 1 (stitched image MSTO5) | 42 | 27 | 897 | 4.68 | 3.01 |
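As a check on Table 2, a paired two-tailed t-test on the true vs. predicted TCR × 100 values reproduces the reported p-value (a sketch assuming SciPy; the paper does not state whether the test was paired).

```python
from scipy import stats

true_tcr = [4.79, 2.32, 4.38, 4.68]   # TCR x 100, human experts (Table 2)
pred_tcr = [5.02, 3.73, 3.58, 3.01]   # TCR x 100, model predictions (Table 2)
t, p = stats.ttest_rel(true_tcr, pred_tcr)
print(round(p, 2))   # 0.78: no significant difference between the means
```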
4. Discussion
The detection and classification of cells have been active research areas for more than a decade [93]. There are various open-source and commercial software packages for cell counting and characterization for clinical and research purposes [94]; however, there is a dearth of specialized models for detecting TNTs. Here, we constructed an algorithm that uses the well-known U-Net deep learning model [6] to segment images and quantitatively detect TNTs in vitro.
The main goal of this study was to present a fully automated end-to-end segmentation, detection, and counting process for TNTs. Even to a trained eye, it can be hard to decide whether a structure is a TNT; developing an automated detection method is therefore challenging, and automatic detection of TNTs has not been studied extensively. Hodneland et al. presented an automated method to detect nanotubes with a rule-based algorithm [76], in which TNTs were identified by a series of transformations including watershed segmentation, edge detection, and mathematical morphology. Their cell segmentation method was 3D, and they used two channels of cell images stained with two dyes. In contrast, phase contrast microscopy is a label-free technique well-suited for live-cell imaging without the need for a fluorescence microscope, which in turn makes the deep learning model presented here amenable to general use.
During the past decade, the field has evolved from reporting descriptions of TNTs and their cell morphology and function to identifying changes in the numbers of TNTs over time. TNTs are dynamic structures that exist for minutes to hours [37,44,46,62]. We and others have previously demonstrated that they represent a form of cellular stress response to outside stimuli, including drug treatment and viral infection [38,49,52,53,95]. The identification of TNTs currently still rests on morphologic characteristics that distinguish them from other cell protrusions, including invadopodia, filopodia, and lamellipodia [68,96,97,98]. However, quantitation is limited: without validated TNT-specific markers, the process is laborious and relies on manual identification. AI-based approaches that could reliably identify TNTs with high specificity and sensitivity would move the field of TNT biology forward significantly by providing a new tool for rapid identification of TNTs and their fluctuation over time. We report results using MSTO-211H cells at this early stage of our investigation into AI-based approaches for TNT detection because this cell line has served as one of our optimal models for in vitro investigation of TNTs for over a decade. As we continue to build on this foundation, our next set of studies will utilize other cell lines, cancer and non-cancer, to further confirm and validate the model across diverse cell types.
Software programs have been developed previously to classify and quantify cellular features and colonies for reliable automated detection. Specific examples include evaluation of embryonic stem cell differentiation and pluripotency analysis [99]. Perestrelo et al. utilized mouse embryonic stem cells as a model for their software, Pluri-IQ [99]. Their software was able to quantify the percentage of pluripotent, mixed, or differentiated cells; it was also able to analyze images at different magnifications and sizes and measure pluripotency by the markers used for evaluation. This group also described their pipeline for segmentation, machine training, validation, and automatic data comparison. When a new colony is put through the software, Pluri-IQ can learn, based on colony morphology, to assign it to the class in the classifier pool that its features best fit [99].
Another software program, FiloDetect, is the first automated tool for detecting filopodia in cancer cell images [100]. Filopodia are long, F-actin-based cellular protrusions whose primary purpose is to mediate cell motility. The FiloDetect approach has been evaluated in Rat2 fibroblast and B16F1 mouse melanoma cell images and has been applied to measure the effects of PI4KIIIβ expression on filopodia production in BT549 breast cancer cells [100]. FiloDetect uses intensity-based thresholding with a combination of morphological operations [100], as well as additional processing to detect merged filopodia that are joined at the base or cross over along their length [100]. A similar filopodia-focused software program is FiloQuant [101], an ImageJ-based tool used to extract quantifiable data on filopodia dynamics, density, and length from both fixed and live-cell microscopy images [101]. It can be applied across different cell types, microenvironments, and image acquisition techniques, and it uses edge detection, intensity detection, and skeletonization via the AnalyzeSkeleton algorithm [101]. FiloQuant has a step-by-step user validation method to achieve optimal settings when identifying filopodia. Using this tool, filopodia formation and invasive capacity have been linked in 3D tumor spheroids [101]. FiloQuant was developed after researching the unique attributes and shortcomings of other filopodia identification techniques, such as FiloDetect, CellGeo [102], and ADAPT [103]; each of these techniques has one or more limitations, such as requiring proprietary software, lacking customizable options for improvement, analyzing only single cells, lacking a density quantification tool, or being difficult for non-experts to navigate. FiloQuant overcomes these limitations [101].
The method we describe here in its current form has potential limitations. Some of the predicted TNT structures appeared broken into pieces, which resulted in counting the same TNT multiple times. Our model consisted of two sequential classification models and needed careful calibration to identify TNTs; reducing and simplifying the model to a single step is left for future studies. Importantly, TNTs are 3-dimensional protrusions that extend from one cell to another, or to other groups of cells. A well-established morphologic characteristic is their ability to ‘hover’ in the 3-dimensional plane when cultured in vitro under 2-dimensional cell culture conditions. Thus, the ideal conditions for characterizing TNTs consist of high-resolution imaging that permits 3D renderings by stacking images taken in the z-plane. However, for more routine assessment in 2D cell culture conditions, and considering the lack of a testable, validated structural or compositional marker specific to TNTs, identification remains reliant on visual inspection. TNTs comprise a heterogeneous group of cellular channels, displaying a relatively wide range of widths and lengths that may vary based on cell type of origin, underlying molecular machinery, and other factors that remain to be elucidated. Challenges of automated identification include differentiation of some TNTs from more adherent forms of long protrusions, identification of established TNTs vs. those that are forming but not yet attached to recipient cells, separation from dense clusters amidst confluent or semi-confluent groups of cells, and other factors. Among the questions to be addressed in future studies is whether AI-based evaluation would work better on cells imaged live, as compared to cells imaged following chemical or other fixation, which may introduce artefactual or other changes that can disrupt the natural state of TNTs in vitro. The model presented here will evolve over time and is adaptable to address these and future needs.
Our AI-based TNT detection method, TNTdetect.AI, provides three principal contributions to the field. First, we propose a novel way to improve the manual labeling of TNTs, which aids pixel-wise detection of TNTs. Second, we sequentially train two models to detect TNTs, both the regions and the image pixels representing the TNTs. Third, we propose a new metric to quantify TNT abundance in an image, namely, the TNT-to-cell ratio (TCR). This metric can be used in evaluating, for example, the impact of treatments on cancer cells by capturing TCRs at different stages of therapy. Our automated TNT detection approach differs from Hodneland et al.’s method in two ways. First, we created a deep learning-based model that does not require the definition of if-then rule statements. Second, we trained our model with a single information channel, 2D phase contrast microscopy images.
5. Conclusions
In summary, we report the application of TNTdetect.AI, an automated model generated by deep learning and trained to label and detect TNTs and cells imaged in culture. The continued application and refinement of this process will provide a new approach to the analysis of TNTs, which form to connect cancer and other cells. This approach has the potential to enhance drug screens intended to assess the therapeutic efficacy of experimental agents and to reproducibly assess TNTs as a potential biomarker of response to therapy in cancer.
Acknowledgments
We thank Akshat Sarkari for assistance with data analysis and Michael Franklin for editing assistance. C.B.P. is a McNair Scholar supported by the McNair Medical Institute at The Robert and Janice McNair Foundation.
Abbreviations
| AC | active contour |
| AI | artificial intelligence |
| AURA-net | modified U-Net architecture designed to answer the problem of segmentation for small datasets of phase contrast microscopy images |
| BCE | binary cross-entropy |
| CNN | convolutional neural network |
| FP | false positive |
| ML | machine learning |
| OpenCV | open-source computer vision library |
| PPV | positive predictive value |
| ReLU | rectified linear unit |
| ResNET | residual neural network |
| TCR | TNT-to-cell ratio |
| TNT | tunneling nanotube |
| U-Net | a convolutional network architecture for fast and precise image segmentation |
| VGGNet | a convolutional neural network architecture to increase ML model performance |
Supplementary Materials
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/cancers14194958/s1. Figure S1: Each stitched image of MSTO-211H cells was subdivided into subimages (dark blue squares of size 512 × 512 pixels) or sub-subimages (cyan squares of size 256 × 256 pixels) via a sliding window; this allowed one stitched image to generate thousands of subimages upon which the machine learning model could be trained. Figure S2: The size of the patches (number of rows × number of columns within each patch) within the MSTO-211H cell stitched and padded images of size 6795 × 5199 pixels. Figure S3: The values of precision and recall as a function of varying the pixel intensity threshold (range 0–255) in Model 2 (U-Net), showing that a pixel intensity threshold of 235 maximized the sum of precision and recall. Figure S4: Circumstances in which there was agreement or disagreement between the human expert consensus and the ML model in the identification of structures as TNTs. Table S1: Results of inter-rater agreement for TNT identification in stitched MSTO-211H images among the four human experts, using Cohen’s kappa statistic. Table S2: Number of training subimages generated from each stitched image (image set) of MSTO-211H cells, used for Model 1. Table S3: Tabulated summary of the human expert-based and ML-based TNT counts; for each image set, a human expert classified and counted the ML TNT predictions as FPs or TPs, and the absence of ML TNT predictions as FNs, with respect to the human expert consensus “ground truth”. Ref. [104] is cited in the Supplementary Materials.
Author Contributions
Conceptualization: Y.C., M.B., E.L. and C.B.P.; Methodology: Y.C. and H.E.; Software: Y.C., H.E.; Validation: Y.C., H.E., K.L., S.K., K.D., S.P., P.W. and T.P.; Formal Analysis: Y.C., H.E. and T.P.; Investigation: Y.C., H.E., K.L., S.K., K.D., S.P., P.W. and T.P.; Resources: Y.C., H.E., M.B., E.L. and C.B.P.; Data Curation: Y.C., H.E., K.L., S.K., K.D., S.P., P.W., E.L. and C.B.P.; Writing—Original Draft Preparation: Y.C., H.E., K.L., E.L. and C.B.P.; Writing—Review and Editing: Y.C., H.E., K.L., S.K., K.D., S.P., P.W., T.P., M.B., E.L. and C.B.P.; Visualization: Y.C., H.E. and C.B.P.; Supervision: Y.C., E.L. and C.B.P.; Project Administration: E.L. and C.B.P. All authors have read and agreed to the published version of the manuscript.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
The data that support the findings of this study are available from the corresponding authors, E.L. and C.B.P., upon reasonable request.
Conflicts of Interest
E.L. reports research grants from the American Association for Cancer Research (AACR-Novocure Tumor-Treating Fields Research Award) and the Minnesota Ovarian Cancer Alliance; honorarium and travel expenses for a research talk at GlaxoSmithKline, 2016; honoraria and travel expenses for lab-based research talks and equipment for laboratory-based research, Novocure, Ltd., 2018-present; honorarium (donated to lab) for panel discussion organized by Antidote Education for a CME module on diagnostics and treatment of HER2+ gastric and colorectal cancers, Daiichi-Sankyo, 2021; consultant, Nomocan Pharmaceuticals (unpaid); Scientific Advisory Board Member, Minnetronix, LLC, 2018-present (unpaid); consultant and speaker honorarium, Boston Scientific US, 2019; institutional Principal Investigator for clinical trials sponsored by Celgene, Novocure, Intima Biosciences, and the National Cancer Institute, and University of Minnesota membership in the Caris Life Sciences Precision Oncology Alliance (unpaid). M.B. is a co-founder, former CEO (2015–2017), and Chief Strategy Officer (2018-present) of Smartlens, Inc.; co-founder and board chair of Magnimind Academy (2016-present); co-founder and advisor of Wowso (2017–2020); co-founder of NanoEye, Inc. (2017-present); co-founder of Nanosight Diagnostic, Inc. (2017-present); and co-founder and CEO of TechDev Academy (2019-present). C.B.P. reports a research grant from the American Association for Cancer Research (AACR-Novocure Career Development Award for Tumor Treating Fields Research), 2022-present; equipment for laboratory-based research, Novocure, Ltd., 2022-present; consultant honoraria and travel expenses, Novocure, Ltd., 2017-present. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
Funding Statement
McNair Medical Institute at The Robert and Janice McNair Foundation (McNair Scholar grant no. 05-Patel, Chirag to C.B.P.). The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
Footnotes
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1.Lau R.P., Kim T.H., Rao J. Advances in Imaging Modalities, Artificial Intelligence, and Single Cell Biomarker Analysis, and Their Applications in Cytopathology. Front. Med. 2021;8:689954. doi: 10.3389/fmed.2021.689954. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2.Nagaki K., Furuta T., Yamaji N., Kuniyoshi D., Ishihara M., Kishima Y., Murata M., Hoshino A., Takatsuka H. Effectiveness of Create ML in microscopy image classifications: A simple and inexpensive deep learning pipeline for non-data scientists. Chromosome Res. 2021;29:361–371. doi: 10.1007/s10577-021-09676-z. [DOI] [PubMed] [Google Scholar]
- 3.Durkee M.S., Abraham R., Clark M.R., Giger M.L. Artificial Intelligence and Cellular Segmentation in Tissue Microscopy Images. Am. J. Pathol. 2021;191:1693–1701. doi: 10.1016/j.ajpath.2021.05.022. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.von Chamier L., Laine R.F., Jukkala J., Spahn C., Krentzel D., Nehme E., Lerche M., Hernandez-Perez S., Mattila P.K., Karinou E., et al. Democratising deep learning for microscopy with ZeroCostDL4Mic. Nat. Commun. 2021;12:2276. doi: 10.1038/s41467-021-22518-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5.von Chamier L., Laine R.F., Henriques R. Artificial intelligence for microscopy: What you should know. Biochem. Soc. Trans. 2019;47:1029–1040. doi: 10.1042/BST20180391. [DOI] [PubMed] [Google Scholar]
- 6.Falk T., Mai D., Bensch R., Cicek O., Abdulkadir A., Marrakchi Y., Bohm A., Deubner J., Jackel Z., Seiwald K., et al. U-Net: Deep learning for cell counting, detection, and morphometry. Nat. Methods. 2019;16:67–70. doi: 10.1038/s41592-018-0261-2. [DOI] [PubMed] [Google Scholar]
- 7.Zinchuk V., Grossenbacher-Zinchuk O. Machine Learning for Analysis of Microscopy Images: A Practical Guide. Curr. Protoc. Cell Biol. 2020;86:e101. doi: 10.1002/cpcb.101. [DOI] [PubMed] [Google Scholar]
- 8.Xing F., Yang L. Chapter 4—Machine learning and its application in microscopic image analysis. In: Wu G., Shen D., Sabuncu M.R., editors. Machine Learning and Medical Imaging. Academic Press; Cambridge, MA, USA: 2016. pp. 97–127. [Google Scholar]
- 9.Li Y., Nowak C.M., Pham U., Nguyen K., Bleris L. Cell morphology-based machine learning models for human cell state classification. NPJ Syst. Biol. Appl. 2021;7:23. doi: 10.1038/s41540-021-00180-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Chen D., Sarkar S., Candia J., Florczyk S.J., Bodhak S., Driscoll M.K., Simon C.G., Jr., Dunkers J.P., Losert W. Machine learning based methodology to identify cell shape phenotypes associated with microenvironmental cues. Biomaterials. 2016;104:104–118. doi: 10.1016/j.biomaterials.2016.06.040. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Kim D., Min Y., Oh J.M., Cho Y.K. AI-powered transmitted light microscopy for functional analysis of live cells. Sci. Rep. 2019;9:18428. doi: 10.1038/s41598-019-54961-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Li Y., Di J., Wang K., Wang S., Zhao J. Classification of cell morphology with quantitative phase microscopy and machine learning. Opt. Express. 2020;28:23916–23927. doi: 10.1364/OE.397029. [DOI] [PubMed] [Google Scholar]
- 13.Aida S., Okugawa J., Fujisaka S., Kasai T., Kameda H., Sugiyama T. Deep Learning of Cancer Stem Cell Morphology Using Conditional Generative Adversarial Networks. Biomolecules. 2020;10:931. doi: 10.3390/biom10060931. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Moen E., Bannon D., Kudo T., Graf W., Covert M., Van Valen D. Deep learning for cellular image analysis. Nat. Methods. 2019;16:1233–1246. doi: 10.1038/s41592-019-0403-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15.Scheeder C., Heigwer F., Boutros M. Machine learning and image-based profiling in drug discovery. Curr. Opin. Syst. Biol. 2018;10:43–52. doi: 10.1016/j.coisb.2018.05.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16.Chandrasekaran S.N., Ceulemans H., Boyd J.D., Carpenter A.E. Image-based profiling for drug discovery: Due for a machine-learning upgrade? Nat. Rev. Drug Discov. 2021;20:145–159. doi: 10.1038/s41573-020-00117-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17.Way G.P., Kost-Alimova M., Shibue T., Harrington W.F., Gill S., Piccioni F., Becker T., Shafqat-Abbasi H., Hahn W.C., Carpenter A.E., et al. Predicting cell health phenotypes using image-based morphology profiling. Mol. Biol. Cell. 2021;32:995–1005. doi: 10.1091/mbc.E20-12-0784. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18.Helgadottir S., Midtvedt B., Pineda J., Sabirsh A., Adiels C.B., Romeo S., Midtvedt D., Volpe G. Extracting quantitative biological information from bright-field cell images using deep learning. Biophys. Rev. 2021;2:031401. doi: 10.1063/5.0044782. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.Belting M., Wittrup A. Nanotubes, exosomes, and nucleic acid-binding peptides provide novel mechanisms of intercellular communication in eukaryotic cells: Implications in health and disease. J. Cell Biol. 2008;183:1187–1191. doi: 10.1083/jcb.200810038. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20.Bobrie A., Colombo M., Raposo G., Thery C. Exosome secretion: Molecular mechanisms and roles in immune responses. Traffic. 2011;12:1659–1668. doi: 10.1111/j.1600-0854.2011.01225.x. [DOI] [PubMed] [Google Scholar]
- 21.Zhang H.G., Grizzle W.E. Exosomes and cancer: A newly described pathway of immune suppression. Clin. Cancer Res. 2011;17:959–964. doi: 10.1158/1078-0432.CCR-10-1489. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 22.Guescini M. Microvesicle and tunneling nanotube mediated intercellular transfer of g-protein coupled receptors in cell cultures. Exp. Cell Res. 2012;318:603–613. doi: 10.1016/j.yexcr.2012.01.005. [DOI] [PubMed] [Google Scholar]
- 23.Demory Beckler M., Higginbotham J.N., Franklin J.L., Ham A.J., Halvey P.J., Imasuen I.E., Whitwell C., Li M., Liebler D.C., Coffey R.J. Proteomic analysis of exosomes from mutant KRAS colon cancer cells identifies intercellular transfer of mutant KRAS. Mol. Cell. Proteom. 2013;12:343–355. doi: 10.1074/mcp.M112.022806. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24.Fedele C., Singh A., Zerlanko B.J., Iozzo R.V., Languino L.R. The alphavbeta6 integrin is transferred intercellularly via exosomes. J. Biol. Chem. 2015;290:4545–4551. doi: 10.1074/jbc.C114.617662. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.Kalluri R. The biology and function of exosomes in cancer. J. Clin. Investig. 2016;126:1208–1215. doi: 10.1172/JCI81135. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26.Eugenin E.A., Gaskill P.J., Berman J.W. Tunneling nanotubes (TNT) are induced by HIV-infection of macrophages: A potential mechanism for intercellular HIV trafficking. Cell. Immunol. 2009;254:142–148. doi: 10.1016/j.cellimm.2008.08.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Wang X., Veruki M.L., Bukoreshtliev N.V., Hartveit E., Gerdes H.H. Animal cells connected by nanotubes can be electrically coupled through interposed gap-junction channels. Proc. Natl. Acad. Sci. USA. 2010;107:17194–17199. doi: 10.1073/pnas.1006785107. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.Wang Y., Cui J., Sun X., Zhang Y. Tunneling-nanotube development in astrocytes depends on p53 activation. Cell Death Differ. 2011;18:732–742. doi: 10.1038/cdd.2010.147. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29.Ady J.W., Desir S., Thayanithy V., Vogel R.I., Moreira A.L., Downey R.J., Fong Y., Manova-Todorova K., Moore M.A., Lou E. Intercellular communication in malignant pleural mesothelioma: Properties of tunneling nanotubes. Front. Physiol. 2014;5:400. doi: 10.3389/fphys.2014.00400. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30.Osswald M., Jung E., Sahm F., Solecki G., Venkataramani V., Blaes J., Weil S., Horstmann H., Wiestler B., Syed M., et al. Brain tumour cells interconnect to a functional and resistant network. Nature. 2015;528:93–98. doi: 10.1038/nature16071. [DOI] [PubMed] [Google Scholar]
- 31.Buszczak M., Inaba M., Yamashita Y.M. Signaling by Cellular Protrusions: Keeping the Conversation Private. Trends Cell Biol. 2016;26:526–534. doi: 10.1016/j.tcb.2016.03.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32.Lou E. Intercellular conduits in tumours: The new social network. Trends Cancer. 2016;2:3–5. doi: 10.1016/j.trecan.2015.12.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 33.Malik S., Eugenin E.A. Mechanisms of HIV Neuropathogenesis: Role of Cellular Communication Systems. Curr. HIV Res. 2016;14:400–411. doi: 10.2174/1570162X14666160324124558. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34. Osswald M., Solecki G., Wick W., Winkler F. A malignant cellular network in gliomas: Potential clinical implications. Neuro-Oncology. 2016;18:479–485. doi: 10.1093/neuonc/now014.
- 35. Ariazi J., Benowitz A., De Biasi V., Den Boer M.L., Cherqui S., Cui H., Douillet N., Eugenin E.A., Favre D., Goodman S., et al. Tunneling Nanotubes and Gap Junctions-Their Role in Long-Range Intercellular Communication during Development, Health, and Disease Conditions. Front. Mol. Neurosci. 2017;10:333. doi: 10.3389/fnmol.2017.00333.
- 36. Jung E., Osswald M., Blaes J., Wiestler B., Sahm F., Schmenger T., Solecki G., Deumelandt K., Kurz F.T., Xie R., et al. Tweety-Homolog 1 Drives Brain Colonization of Gliomas. J. Neurosci. 2017;37:6837–6850. doi: 10.1523/JNEUROSCI.3532-16.2017.
- 37. Lou E., Gholami S., Romin Y., Thayanithy V., Fujisawa S., Desir S., Steer C.J., Subramanian S., Fong Y., Manova-Todorova K., et al. Imaging Tunneling Membrane Tubes Elucidates Cell Communication in Tumors. Trends Cancer. 2017;3:678–685. doi: 10.1016/j.trecan.2017.08.001.
- 38. Thayanithy V., O’Hare P., Wong P., Zhao X., Steer C.J., Subramanian S., Lou E. A transwell assay that excludes exosomes for assessment of tunneling nanotube-mediated intercellular communication. Cell Commun. Signal. 2017;15:46. doi: 10.1186/s12964-017-0201-2.
- 39. Weil S., Osswald M., Solecki G., Grosch J., Jung E., Lemke D., Ratliff M., Hanggi D., Wick W., Winkler F. Tumor microtubes convey resistance to surgical lesions and chemotherapy in gliomas. Neuro-Oncology. 2017;19:1316–1326. doi: 10.1093/neuonc/nox070.
- 40. Lou E., Zhai E., Sarkari A., Desir S., Wong P., Iizuka Y., Yang J., Subramanian S., McCarthy J., Bazzaro M., et al. Cellular and Molecular Networking within the Ecosystem of Cancer Cell Communication via Tunneling Nanotubes. Front. Cell Dev. Biol. 2018;6:95. doi: 10.3389/fcell.2018.00095.
- 41. Valdebenito S., Lou E., Baldoni J., Okafo G., Eugenin E. The Novel Roles of Connexin Channels and Tunneling Nanotubes in Cancer Pathogenesis. Int. J. Mol. Sci. 2018;19:1270. doi: 10.3390/ijms19051270.
- 42. Lou E. A Ticket to Ride: The Implications of Direct Intercellular Communication via Tunneling Nanotubes in Peritoneal and Other Invasive Malignancies. Front. Oncol. 2020;10:559548. doi: 10.3389/fonc.2020.559548.
- 43. Venkataramani V., Schneider M., Giordano F.A., Kuner T., Wick W., Herrlinger U., Winkler F. Disconnecting multicellular networks in brain tumours. Nat. Rev. Cancer. 2022;22:481–491. doi: 10.1038/s41568-022-00475-0.
- 44. Rustom A., Saffrich R., Markovic I., Walther P., Gerdes H.H. Nanotubular highways for intercellular organelle transport. Science. 2004;303:1007–1010. doi: 10.1126/science.1093133.
- 45. Desir S., Wong P., Turbyville T., Chen D., Shetty M., Clark C., Zhai E., Romin Y., Manova-Todorova K., Starr T.K., et al. Intercellular Transfer of Oncogenic KRAS via Tunneling Nanotubes Introduces Intracellular Mutational Heterogeneity in Colon Cancer Cells. Cancers. 2019;11:892. doi: 10.3390/cancers11070892.
- 46. Sartori-Rupp A., Cordero Cervantes D., Pepe A., Gousset K., Delage E., Corroyer-Dulmont S., Schmitt C., Krijnse-Locker J., Zurzolo C. Correlative cryo-electron microscopy reveals the structure of TNTs in neuronal cells. Nat. Commun. 2019;10:342. doi: 10.1038/s41467-018-08178-7.
- 47. Antanaviciute I., Rysevaite K., Liutkevicius V., Marandykina A., Rimkute L., Sveikatiene R., Uloza V., Skeberdis V.A. Long-Distance Communication between Laryngeal Carcinoma Cells. PLoS ONE. 2014;9:e99196. doi: 10.1371/journal.pone.0099196.
- 48. Onfelt B., Nedvetzki S., Yanagi K., Davis D.M. Cutting edge: Membrane nanotubes connect immune cells. J. Immunol. 2004;173:1511–1513. doi: 10.4049/jimmunol.173.3.1511.
- 49. Rudnicka D., Feldmann J., Porrot F., Wietgrefe S., Guadagnini S., Prevost M.C., Estaquier J., Haase A.T., Sol-Foulon N., Schwartz O. Simultaneous cell-to-cell transmission of human immunodeficiency virus to multiple targets through polysynapses. J. Virol. 2009;83:6234–6246. doi: 10.1128/JVI.00282-09.
- 50. Naphade S. Brief reports: Lysosomal cross-correction by hematopoietic stem cell-derived macrophages via tunneling nanotubes. Stem Cells. 2015;33:301–309. doi: 10.1002/stem.1835.
- 51. Desir S., Dickson E.L., Vogel R.I., Thayanithy V., Wong P., Teoh D., Geller M.A., Steer C.J., Subramanian S., Lou E. Tunneling nanotube formation is stimulated by hypoxia in ovarian cancer cells. Oncotarget. 2016;7:43150–43161. doi: 10.18632/oncotarget.9504.
- 52. Omsland M., Bruserud O., Gjertsen B.T., Andresen V. Tunneling nanotube (TNT) formation is downregulated by cytarabine and NF-kappaB inhibition in acute myeloid leukemia (AML). Oncotarget. 2017;8:7946–7963. doi: 10.18632/oncotarget.13853.
- 53. Desir S., O’Hare P., Vogel R.I., Sperduto W., Sarkari A., Dickson E.L., Wong P., Nelson A.C., Fong Y., Steer C.J., et al. Chemotherapy-Induced Tunneling Nanotubes Mediate Intercellular Drug Efflux in Pancreatic Cancer. Sci. Rep. 2018;8:9484. doi: 10.1038/s41598-018-27649-x.
- 54. Lou E., Fujisawa S., Morozov A., Barlas A., Romin Y., Dogan Y., Gholami S., Moreira A.L., Manova-Todorova K., Moore M.A. Tunneling nanotubes provide a unique conduit for intercellular transfer of cellular contents in human malignant pleural mesothelioma. PLoS ONE. 2012;7:e33093. doi: 10.1371/journal.pone.0033093.
- 55. Azorin D.D., Winkler F. Two routes of direct intercellular communication in brain cancer. Biochem. J. 2021;478:1283–1286. doi: 10.1042/BCJ20200990.
- 56. Chinnery H.R., Pearlman E., McMenamin P.G. Cutting edge: Membrane nanotubes in vivo: A feature of MHC class II+ cells in the mouse cornea. J. Immunol. 2008;180:5779–5783. doi: 10.4049/jimmunol.180.9.5779.
- 57. Hase K., Kimura S., Takatsu H., Ohmae M., Kawano S., Kitamura H., Ito M., Watarai H., Hazelett C.C., Yeaman C., et al. M-Sec promotes membrane nanotube formation by interacting with Ral and the exocyst complex. Nat. Cell Biol. 2009;11:1427–1432. doi: 10.1038/ncb1990.
- 58. Islam M.N., Das S.R., Emin M.T., Wei M., Sun L., Westphalen K., Rowlands D.J., Quadri S.K., Bhattacharya S., Bhattacharya J. Mitochondrial transfer from bone-marrow-derived stromal cells to pulmonary alveoli protects against acute lung injury. Nat. Med. 2012;18:759–765. doi: 10.1038/nm.2736.
- 59. Pasquier J., Galas L., Boulange-Lecomte C., Rioult D., Bultelle F., Magal P., Webb G., Le Foll F. Different modalities of intercellular membrane exchanges mediate cell-to-cell p-glycoprotein transfers in MCF-7 breast cancer cells. J. Biol. Chem. 2012;287:7374–7387. doi: 10.1074/jbc.M111.312157.
- 60. Pasquier J., Guerrouahen B.S., Al Thawadi H., Ghiabi P., Maleki M., Abu-Kaoud N., Jacob A., Mirshahi M., Galas L., Rafii S., et al. Preferential transfer of mitochondria from endothelial to cancer cells through tunneling nanotubes modulates chemoresistance. J. Transl. Med. 2013;11:94. doi: 10.1186/1479-5876-11-94.
- 61. Guo L., Zhang Y., Yang Z., Peng H., Wei R., Wang C., Feng M. Tunneling Nanotubular Expressways for Ultrafast and Accurate M1 Macrophage Delivery of Anticancer Drugs to Metastatic Ovarian Carcinoma. ACS Nano. 2019;13:1078–1096. doi: 10.1021/acsnano.8b08872.
- 62. Hurtig J., Chiu D.T., Onfelt B. Intercellular nanotubes: Insights from imaging studies and beyond. WIREs Nanomed. Nanobiotechnol. 2010;2:260–276. doi: 10.1002/wnan.80.
- 63. Ady J., Thayanithy V., Mojica K., Wong P., Carson J., Rao P., Fong Y., Lou E. Tunneling nanotubes: An alternate route for propagation of the bystander effect following oncolytic viral infection. Mol. Ther. Oncolytics. 2016;3:16029. doi: 10.1038/mto.2016.29.
- 64. Lou E., Subramanian S. Tunneling Nanotubes: Intercellular Conduits for Direct Cell-to-Cell Communication in Cancer. Springer; Berlin/Heidelberg, Germany; New York, NY, USA: 2016.
- 65. Dilsizoglu Senol A., Pepe A., Grudina C., Sassoon N., Reiko U., Bousset L., Melki R., Piel J., Gugger M., Zurzolo C. Effect of tolytoxin on tunneling nanotube formation and function. Sci. Rep. 2019;9:5741. doi: 10.1038/s41598-019-42161-6.
- 66. Kolba M.D., Dudka W., Zareba-Koziol M., Kominek A., Ronchi P., Turos L., Chroscicki P., Wlodarczyk J., Schwab Y., Klejman A., et al. Tunneling nanotube-mediated intercellular vesicle and protein transfer in the stroma-provided imatinib resistance in chronic myeloid leukemia cells. Cell Death Dis. 2019;10:817. doi: 10.1038/s41419-019-2045-8.
- 67. Jana A., Ladner K., Lou E., Nain A.S. Tunneling Nanotubes between Cells Migrating in ECM Mimicking Fibrous Environments. Cancers. 2022;14:1989. doi: 10.3390/cancers14081989.
- 68. Vignjevic D., Yarar D., Welch M.D., Peloquin J., Svitkina T., Borisy G.G. Formation of filopodia-like bundles in vitro from a dendritic network. J. Cell Biol. 2003;160:951–962. doi: 10.1083/jcb.200208059.
- 69. Vignjevic D., Kojima S., Aratyn Y., Danciu O., Svitkina T., Borisy G.G. Role of fascin in filopodial protrusion. J. Cell Biol. 2006;174:863–875. doi: 10.1083/jcb.200603013.
- 70. Thayanithy V., Babatunde V., Dickson E.L., Wong P., Oh S., Ke X., Barlas A., Fujisawa S., Romin Y., Moreira A.L., et al. Tumor exosomes induce tunneling nanotubes in lipid raft-enriched regions of human mesothelioma cells. Exp. Cell Res. 2014;323:178–188. doi: 10.1016/j.yexcr.2014.01.014.
- 71. Schindelin J., Arganda-Carreras I., Frise E., Kaynig V., Longair M., Pietzsch T., Preibisch S., Rueden C., Saalfeld S., Schmid B., et al. Fiji: An open-source platform for biological-image analysis. Nat. Methods. 2012;9:676–682. doi: 10.1038/nmeth.2019.
- 72. Veranic P., Lokar M., Schutz G.J., Weghuber J., Wieser S., Hagerstrand H., Kralj-Iglic V., Iglic A. Different types of cell-to-cell connections mediated by nanotubular structures. Biophys. J. 2008;95:4416–4425. doi: 10.1529/biophysj.108.131375.
- 73. Peng T., Thorn K., Schroeder T., Wang L., Theis F.J., Marr C., Navab N. A BaSiC tool for background and shading correction of optical microscopy images. Nat. Commun. 2017;8:14836. doi: 10.1038/ncomms14836.
- 74. Ingaramo M., York A.G., Hoogendoorn E., Postma M., Shroff H., Patterson G.H. Richardson-Lucy deconvolution as a general tool for combining images with complementary strengths. ChemPhysChem. 2014;15:794–800. doi: 10.1002/cphc.201300831.
- 75. Laasmaa M., Vendelin M., Peterson P. Application of regularized Richardson-Lucy algorithm for deconvolution of confocal microscopy images. J. Microsc. 2011;243:124–140. doi: 10.1111/j.1365-2818.2011.03486.x.
- 76. Hodneland E., Lundervold A., Gurke S., Tai X.C., Rustom A., Gerdes H.H. Automated detection of tunneling nanotubes in 3D images. Cytom. A. 2006;69:961–972. doi: 10.1002/cyto.a.20302.
- 77. Deng J., Dong W., Socher R., Li L.-J., Li K., Fei-Fei L. ImageNet: A large-scale hierarchical image database; Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition; Miami, FL, USA. 20–25 June 2009; pp. 248–255.
- 78. Long J., Shelhamer E., Darrell T. Fully convolutional networks for semantic segmentation; Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Boston, MA, USA. 7–12 June 2015; pp. 3431–3440.
- 79. Lee C.-Y., Xie S., Gallagher P., Zhang Z., Tu Z. Deeply-Supervised Nets; Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics; San Diego, CA, USA. 10–12 May 2015; pp. 562–570.
- 80. Ronneberger O., Fischer P., Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation; Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015; Munich, Germany. 5–9 October 2015; pp. 234–241.
- 81. Oktay O., Schlemper J., Folgoc L.L., Lee M., Heinrich M., Misawa K., Mori K., McDonagh S., Hammerla N.Y., Kainz B. Attention U-Net: Learning where to look for the pancreas. arXiv. 2018. arXiv:1804.03999.
- 82. Cohen E., Uhlmann V. AURA-net: Robust segmentation of phase-contrast microscopy images with few annotations; Proceedings of the 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI); Nice, France. 13–16 April 2021; pp. 640–644.
- 83. Rectified Linear Units. Available online: https://paperswithcode.com/method/relu (accessed on 20 August 2022).
- 84. Max Pooling. Available online: https://paperswithcode.com/method/max-pooling (accessed on 20 August 2022).
- 85. Yosinski J., Clune J., Bengio Y., Lipson H. How transferable are features in deep neural networks? In: Advances in Neural Information Processing Systems, Volume 27. Curran Associates, Inc.; Red Hook, NY, USA: 2014.
- 86. He K., Zhang X., Ren S., Sun J. Deep residual learning for image recognition; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Las Vegas, NV, USA. 27–30 June 2016; pp. 770–778.
- 87. Vaswani A., Shazeer N., Parmar N., Uszkoreit J., Jones L., Gomez A.N., Kaiser Ł., Polosukhin I. Attention is all you need. In: Advances in Neural Information Processing Systems, Volume 30. Curran Associates, Inc.; Red Hook, NY, USA: 2017.
- 88. Goodfellow I., Bengio Y., Courville A. Deep Learning. MIT Press; Cambridge, MA, USA: 2016.
- 89. Milletari F., Navab N., Ahmadi S.-A. V-Net: Fully convolutional neural networks for volumetric medical image segmentation; Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV); Stanford, CA, USA. 25–28 October 2016; pp. 565–571.
- 90. Chen X., Williams B.M., Vallabhaneni S.R., Czanner G., Williams R., Zheng Y. Learning Active Contour Models for Medical Image Segmentation; Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); Long Beach, CA, USA. 15–20 June 2019; pp. 11624–11632.
- 91. Stringer C., Pachitariu M. Cellpose 2.0: How to train your own model. bioRxiv. 2022. doi: 10.1101/2022.04.01.486764.
- 92. Contours: Getting Started. Available online: https://docs.opencv.org/4.x/d4/d73/tutorial_py_contours_begin.html (accessed on 20 August 2022).
- 93. Meijering E. Cell Segmentation: 50 Years Down the Road [Life Sciences]. IEEE Signal Process. Mag. 2012;29:140–145. doi: 10.1109/MSP.2012.2204190.
- 94. Vembadi A., Menachery A., Qasaimeh M.A. Cell Cytometry: Review and Perspective on Biotechnological Advances. Front. Bioeng. Biotechnol. 2019;7:147. doi: 10.3389/fbioe.2019.00147.
- 95. Zhao X., May A., Lou E., Subramanian S. Genotypic and phenotypic signatures to predict immune checkpoint blockade therapy response in patients with colorectal cancer. Transl. Res. 2018;196:62–70. doi: 10.1016/j.trsl.2018.02.001.
- 96. Vignjevic D., Peloquin J., Borisy G.G. In vitro assembly of filopodia-like bundles. Methods Enzymol. 2006;406:727–739. doi: 10.1016/S0076-6879(06)06057-5.
- 97. Bailey K.M., Airik M., Krook M.A., Pedersen E.A., Lawlor E.R. Micro-Environmental Stress Induces Src-Dependent Activation of Invadopodia and Cell Migration in Ewing Sarcoma. Neoplasia. 2016;18:480–488. doi: 10.1016/j.neo.2016.06.008.
- 98. Schoumacher M., Goldman R.D., Louvard D., Vignjevic D.M. Actin, microtubules, and vimentin intermediate filaments cooperate for elongation of invadopodia. J. Cell Biol. 2010;189:541–556. doi: 10.1083/jcb.200909113.
- 99. Perestrelo T., Chen W., Correia M., Le C., Pereira S., Rodrigues A.S., Sousa M.I., Ramalho-Santos J., Wirtz D. Pluri-IQ: Quantification of Embryonic Stem Cell Pluripotency through an Image-Based Analysis Software. Stem Cell Rep. 2017;9:697–709. doi: 10.1016/j.stemcr.2017.06.006.
- 100. Nilufar S., Morrow A.A., Lee J.M., Perkins T.J. FiloDetect: Automatic detection of filopodia from fluorescence microscopy images. BMC Syst. Biol. 2013;7:66. doi: 10.1186/1752-0509-7-66.
- 101. Jacquemet G., Paatero I., Carisey A.F., Padzik A., Orange J.S., Hamidi H., Ivaska J. FiloQuant reveals increased filopodia density during breast cancer progression. J. Cell Biol. 2017;216:3387–3403. doi: 10.1083/jcb.201704045.
- 102. Tsygankov D., Bilancia C.G., Vitriol E.A., Hahn K.M., Peifer M., Elston T.C. CellGeo: A computational platform for the analysis of shape changes in cells with complex geometries. J. Cell Biol. 2014;204:443–460. doi: 10.1083/jcb.201306067.
- 103. Barry D.J., Durkin C.H., Abella J.V., Way M. Open source software for quantification of cell migration, protrusions, and fluorescence intensities. J. Cell Biol. 2015;209:163–180. doi: 10.1083/jcb.201501081.
- 104. Seo J.H., Yang S., Kang M.S., Her N.G., Nam D.H., Choi J.H., Kim M.H. Automated stitching of microscope images of fluorescence in cells with minimal overlap. Micron. 2019;126:102718. doi: 10.1016/j.micron.2019.102718.
Data Availability Statement
The data that support the findings of this study are available from the corresponding authors, E.L. and C.B.P., upon reasonable request.