Author manuscript; available in PMC 2021 Apr 21.
Published in final edited form as: Conf Comput Vis Pattern Recognit Workshops. 2020 Jul 28;2020:4262–4271. doi: 10.1109/cvprw50498.2020.00503

A topological encoding convolutional neural network for segmentation of 3D multiphoton images of brain vasculature using persistent homology

Mohammad Haft-Javaherian 1,2,*, Martin Villiger 1, Chris B Schaffer 3, Nozomi Nishimura 3, Polina Golland 2, Brett E Bouma 1,4
PMCID: PMC8059194  NIHMSID: NIHMS1689459  PMID: 33889437

Abstract

Clinical evidence suggests that cognitive disorders are associated with vascular dysfunction and decreased blood flow in the brain. Hence, an understanding of the linkage between brain function and the vascular network is essential. However, methods to systematically and quantitatively describe and compare structures as complex as brain blood vessels are lacking. 3D imaging modalities such as multiphoton microscopy enable researchers to capture the network of brain vasculature at high spatial resolution. Nonetheless, image processing and inference remain bottlenecks for biomedical research involving imaging, and any advancement in this area benefits many research groups. Here, we propose a topological encoding convolutional neural network based on persistent homology to segment 3D multiphoton images of brain vasculature. We demonstrate that our model outperforms state-of-the-art models in terms of the Dice coefficient and is comparable in terms of other metrics such as sensitivity. Additionally, the topological characteristics of our model's segmentation results closely match the manual ground truth. Our code and model are open source at https://github.com/mhaft/DeepVess.

1. Introduction

The health of the brain and the heart, the two most critical organs, is interconnected through the brain vasculature. Since the brain depends exclusively on the supply of oxygen and glucose through the bloodstream and its energy reserve is very limited, it requires steady and reliable blood perfusion [24]. Therefore, any blood flow interruption causes temporary or permanent functional impairments [38]. Researchers and physicians utilize various imaging modalities, such as multiphoton microscopy (MPM) [4] and optical coherence tomography [39], to study and examine the 3D geometrical, topological, and fluid dynamics characteristics of the brain vasculature, including capillaries, in live animal models.

Accurate analysis of brain vasculature over large 3D volumes and multiple subjects across several control groups constitutes a considerable bottleneck, in terms of time and personnel allocation, in biomedical research and medicine. For instance, researchers studied the effects of brain capillary stalling on Alzheimer's disease in animal models and reported that the image analysis required an order of magnitude more time than the actual image acquisition [6, 5, 11, 13, 20]. Notably, vessel segmentation and vectorization are the most prevalent of the required image analysis tasks [26]. Researchers often cast segmentation as a classification task and have proposed supervised, unsupervised, and semi-supervised solutions using classical computer vision methods [26] and, more recently, deep learning (DL) methods [27].

In this work, we present a novel method for the segmentation of 3D in vivo MPM images of brain vasculature based on a topological encoding convolutional neural network (CNN) using persistent homology (PH) as an unsupervised loss term in addition to two other supervised loss terms. Our work is motivated by the fact that state-of-the-art MPM segmentation methods such as DeepVess [21] suffer from topological errors in regions with a low signal-to-noise ratio (SNR), and by the recent successful integration of algebraic topology metrics such as persistent homology into differentiable DL layers [7]. Even though persistent homology has been introduced and utilized in various fields such as computational topology, its application in computer vision has been limited to preprocessed feature selection [23]. Our algorithm significantly improves on the accuracy of DeepVess, specifically in low-SNR regions, and produces qualitatively more plausible results.

2. Related Work

2.1. Vessel segmentation

The segmentation of vascular networks or of particular vessel segments, e.g. the aorta, is an essential task in many biomedical research applications and medical settings because performing it manually requires enormous time and personnel allocations. The vessel segmentation problem has been tackled with established computer vision methods, which commonly require feature engineering and accept/reject rules, as well as with DL methods, which are formulated as supervised, unsupervised, or semi-supervised learning problems [14, 29, 42]. For instance, Yi et al. [45] proposed a 3D method based on the region growing algorithm, exploiting its computational and implementation efficiency. Similarly, Mille et al. [29] represented the segmentation in terms of a deformable parametric model that accurately matches the fractal shapes of vessels.

On the other hand, owing to the recent success of DL in various research fields, DL models have become popular and yield higher accuracy levels. In recent years, researchers, practitioners, and challenge participants have almost exclusively used variants of DL models to tackle biomedical computer vision problems. For example, a model based on transfer learning of the Google Inception V3 network [40] surpassed the combined accuracy of seven ophthalmologists in the diabetic retinopathy detection task on color fundus images [18]. CNN models, such as the one by Wu et al. [44], can segment vascular networks successfully, and in other settings recurrent neural network (RNN) models have been proposed in place of post-processing conditional random field (CRF) models [16, 15]. The most common DL architecture theme for biomedical image segmentation is the U-Net [35], a fully convolutional architecture [28] with encoding and decoding layers in addition to skip connections at different spatial scales.

2.2. MPM segmentation

MPM is the standard imaging modality for studying live animal models, enabling longitudinal studies (e.g. over weeks and months) while providing high temporal resolution (e.g. 2–30 Hz) and sub-micron spatial resolution with minimally invasive procedures. In comparison to imaging modalities based on linear processes, the intensity of the non-linear processes in MPM imaging (e.g. two-photon excited fluorescence) is proportional to the square or cube of the average laser power. This non-linearity increases imaging depth and decreases photobleaching [25].

Tsai et al. [42] introduced the first comprehensive MPM image processing software suite, known as VIDA, based on traditional computer vision methods and able to handle large datasets. Teikari et al. [41] utilized a 2.5D CNN architecture with CRF post-processing to segment MPM vessel images, while Bates et al. [3] used an RNN architecture to extract vascular graph representations. Recently, DeepVess [21] was proposed: an optimized CNN architecture with a customized loss function and accuracy comparable to that of expert human annotators for both vessel segmentation and centerline extraction. Gur et al. [19] developed an unsupervised neural network with an active-contours-mimicking loss term.

2.3. Topological data analysis

Topological data analysis (TDA) is a mathematical field that studies complex datasets from the algebraic topology point of view. PH, as a TDA tool, is able to characterize high-dimensional manifolds within datasets, with applications such as graph-based characterization, unsupervised feature engineering, and image manifold analysis [12, 33, 36].

In contrast to models with no regularization or with preset boundary topologies (e.g. multivariate regression models and support vector machines), Chen et al. [8] used PH to regularize and simplify decision boundaries. Similarly, Rieck et al. [34] used PH as a complexity metric for DL architectures by modeling the network as a weighted stratified graph, facilitating the underlying learning optimization problem by mimicking the effects of regularization techniques (e.g. dropout and batch normalization) and of optimization practices such as validation-based early stopping.

Feature engineering based on the input image has been the primary use of PH for segmentation tasks. Qaiser et al. [32] trained a CNN on topological features extracted as enhanced persistent homology profiles to segment tumors in hematoxylin and eosin stained histology images. In contrast, Assaf et al. [2] segmented cells by selecting, among the candidate objects, the target object with the largest life span using the PH of the cubical complex. Similarly, Gao et al. [17] used PH to segment high-resolution CT images of the left ventricle, including the papillary muscles and the trabeculae, by detecting salient objects with the desired topological characteristics, i.e. handles, to be restored in the final results. Furthermore, Clough et al. [10] tackled the same problem, but used PH to provide gradient signals to the learning optimizer.

3. Method

3.1. Persistent homology

Homology, in the context of abstract algebra, uses invariant theory to associate algebraic objects such as groups with a topological space, for example a Euclidean space. If we describe the manifolds in a $D$-dimensional Euclidean space $\mathbb{X}^D$ as a simplicial complex $K$, which is a generalization of graphs to higher dimensions, simplicial homology represents the connectivity of $K$ in terms of homology groups computed with matrix reduction algorithms. A $k$-dimensional simplex is the convex combination of $k+1$ vertices. The vertices correspond to points in a sufficiently nice ambient $D$-dimensional space, which guarantees the existence of a geometrical realization of the simplex within $\mathbb{X}$. A collection of simplices forms a simplicial complex $X$ if, for every simplex $\tau \in X$, every face $\sigma \subseteq \tau$ is also in $X$, and, for any two simplices $\tau_i$ and $\tau_j$, $\sigma = \tau_i \cap \tau_j$ is a face of both $\tau_i$ and $\tau_j$.

Homology groups $\{H_k(K) : k \in \{0, \ldots, D-1\}\}$ describe the $k$th-order topological features, i.e. holes of the $k$th dimension. For instance, $H_0(K)$, $H_1(K)$, and $H_2(K)$ correspond to connected components, 2D holes or 3D tunnels, and 3D voids, respectively. The Betti numbers ($\beta_i$) summarize the homology group information by measuring the rank of $H_i(K)$. For example, a 2D ring has Betti numbers of (1, 1), a filled 2D square has Betti numbers of (1, 0), a torus has Betti numbers of (1, 2, 1), and a sphere has Betti numbers of (1, 0, 1).
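To make these definitions concrete, the following minimal NumPy/SciPy sketch computes the Betti numbers $(\beta_0, \beta_1)$ of a small 2D binary mask viewed as a cubical complex of filled unit squares. The function name and the Euler-characteristic shortcut $\beta_1 = \beta_0 - \chi$ (valid in 2D, where there are no voids) are illustrative choices of ours and are not part of the paper's pipeline.

```python
import numpy as np
from scipy import ndimage

def betti_numbers_2d(mask):
    """Betti numbers (beta0, beta1) of a 2D binary mask viewed as a cubical complex
    built from filled unit squares (one 2-cell per foreground pixel)."""
    mask = np.asarray(mask, dtype=bool)
    # beta0: connected components; diagonal neighbours share a vertex, so use 8-connectivity.
    beta0 = ndimage.label(mask, structure=np.ones((3, 3)))[1]
    faces = int(mask.sum())
    # Vertices: corners of foreground pixels, deduplicated on the (H+1) x (W+1) corner grid.
    corners = np.zeros((mask.shape[0] + 1, mask.shape[1] + 1), dtype=bool)
    corners[:-1, :-1] |= mask; corners[:-1, 1:] |= mask
    corners[1:, :-1] |= mask;  corners[1:, 1:] |= mask
    vertices = int(corners.sum())
    # Horizontal edges live on an (H+1) x W grid, vertical edges on an H x (W+1) grid.
    h_edges = np.zeros((mask.shape[0] + 1, mask.shape[1]), dtype=bool)
    h_edges[:-1, :] |= mask; h_edges[1:, :] |= mask
    v_edges = np.zeros((mask.shape[0], mask.shape[1] + 1), dtype=bool)
    v_edges[:, :-1] |= mask; v_edges[:, 1:] |= mask
    edges = int(h_edges.sum()) + int(v_edges.sum())
    # Euler characteristic chi = V - E + F; in 2D, chi = beta0 - beta1.
    chi = vertices - edges + faces
    return beta0, beta0 - chi

ring = np.ones((3, 3), dtype=bool); ring[1, 1] = False
print(betti_numbers_2d(np.ones((3, 3))))  # (1, 0): filled square
print(betti_numbers_2d(ring))             # (1, 1): one component, one hole
```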

Betti numbers are a very abstract representation of the homology groups, and they are not stable in noisy datasets. Persistent homology resolves this shortcoming by measuring the topological features of $X$ at different resolutions, promoting the detection of the true features embedded in $X$ and the exclusion of features arising from quantization and noise errors. The simplicial complexes can be ordered into a nested sequence by considering a non-decreasing function $f : K \to \mathbb{R}$ that induces a homomorphism on the simplicial homology groups for each dimension. For every $a \in \mathbb{R}$, there is a subcomplex of $K$ given by the sublevel set $K(a) = f^{-1}(-\infty, a]$, and a set of scale weights $a_0 \le a_1 \le \cdots \le a_n$ (a.k.a. filtration parameters) puts $K$ in a filtration based on the ordering of $f$, representing the growth of $K$ as the filtration parameter $a_i$ changes,

$\emptyset = K_0 \subseteq K_1 \subseteq \cdots \subseteq K_n = K.$ (1)

Over the course of the growth of $K$ through the filtration, topological features are born and die at different scales. Persistent homology maps the birth and death of features into $\mathbb{R}^2$ as points $(b_i, d_i)$, where $b_i, d_i \in [0, 1]$ or any other range. The collection of all points corresponding to $k$-dimensional topological features generates the $k$th persistence diagram,

$PD_k : (X, f) \mapsto \{(b_i, d_i) : i \in I_k\}.$ (2)

If homology is computed over a field, the homology of a filtration admits an interval decomposition, i.e. a sum of rank-one elements supported on intervals $[b_i, d_i)$. The persistence of each point in $PD_k$ is the lifetime of the corresponding feature, defined as $\mathrm{pers}(b_i, d_i) = |d_i - b_i|$. Furthermore, the points in each $PD_k$ are indexed in descending order of their persistence value. Therefore, the first point in $I_k$ is the most persistent feature and the last one is most likely an artifact.

To implement PH as a differentiable DL layer, the complete inverse map from $PD_k$ to the input dataset $\mathcal{D} = (X, Y)$ and a loss function are required. The complete inverse map has two steps: mapping $PD_k$ to $X$ and then mapping $X$ to $\mathcal{D}$. The inverse map between $PD_k$ and $X$ can be defined by searching for the pair of simplices $\tau$ and $\tau'$ such that $b_i = f(\tau)$ and $d_i = f(\tau')$,

$\pi(PD_k, i \in I_k; f) : (b_i, d_i) \mapsto (\tau, \tau').$ (3)

The second step of the complete inverse map connects $X$ to the input dataset $\mathcal{D}$. Since in most computer vision problems $\mathcal{D} = (X, Y)$ consists of digitized 3D images on voxel grids, we define the filtration function on the voxel intensity values and the simplices based on the voxel neighborhood. Therefore, the filtration is equivalent to level sets at different intensity thresholds, and the mapping associates each simplex $\tau$ with the indices of the voxels $\sigma \subseteq \tau$ whose intensity values are equal to the filtration cutoff value $a_i$,

$\omega(\tau) = \operatorname{arg\,max}_{\sigma \subseteq \tau} f(\sigma).$ (4)
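For intuition, the following self-contained NumPy sketch computes the zero-dimensional persistence pairs of the sublevel-set filtration of a small 2D grayscale image using a union-find structure and the elder rule. The 4-connectivity and the convention of reporting the essential component with a death value equal to the maximum intensity are our choices; in practice, the differentiable computation is delegated to the topology layer [7].

```python
import numpy as np

def ph0_sublevel(image):
    """0-dimensional persistence pairs (birth, death) of the sublevel-set filtration
    of a 2D grayscale image, sorted by descending persistence."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    order = np.argsort(img, axis=None, kind="stable")   # pixels by increasing intensity
    parent = -np.ones(h * w, dtype=int)                 # -1: pixel not yet in the filtration
    birth = np.full(h * w, np.inf)

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]               # path halving
            x = parent[x]
        return x

    pairs = []
    for idx in order:
        i, j = divmod(int(idx), w)
        parent[idx] = idx
        birth[idx] = img[i, j]
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # 4-neighbours
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and parent[ni * w + nj] != -1:
                ra, rb = find(int(idx)), find(ni * w + nj)
                if ra != rb:
                    if birth[ra] > birth[rb]:
                        ra, rb = rb, ra
                    pairs.append((birth[rb], img[i, j]))  # elder rule: younger component dies
                    parent[rb] = ra
    root = find(int(order[-1]))
    pairs.append((birth[root], float(img.max())))         # essential component never dies
    return sorted(pairs, key=lambda p: p[1] - p[0], reverse=True)
```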

The PH cost function should represent the distance between the target persistence diagram $PD$ and the predicted persistence diagram $\widehat{PD}$. We use the Wasserstein-1 distance $W_1(p, q)$, a.k.a. the earth mover's distance, which is the minimum cost of transporting a stockpile from configuration $p$ to configuration $q$, where the transportation cost is equal to the mass times the transportation distance. Note that under a few mild assumptions, $W_1$ is continuous and differentiable almost everywhere [31]. The objective is to minimize $W_1$ between $\widehat{PD}$ and $PD$. First, the cardinalities of $PD$ and $\widehat{PD}$ are matched by projecting the additional points in $\widehat{PD}$ onto the diagonal (5); then the loss function measures the Wasserstein-1 distance between counterpart topological features, so as to equate the number of highly persistent topological features between $PD$ and $\widehat{PD}$ and push the remaining features toward zero persistence,

$\mathrm{proj}(b_i, d_i) = \frac{b_i + d_i}{2},$ (5)
$E_T(\beta_k; \widehat{PD}_k) = \sum_{i=1}^{\beta_k} \left\| (b_i, d_i) - (0, 1) \right\|_1 + \sum_{i=\beta_k+1}^{|I_k|} \left\| (b_i, d_i) - \mathrm{proj}(b_i, d_i) \right\|_1 = \sum_{i=1}^{\beta_k} |1 - d_i + b_i| + \sum_{i=\beta_k+1}^{|I_k|} |d_i - b_i|.$ (6)

The gradient of the topological loss function (6) with respect to the input image can be computed through the inverse maps (3) and (4), enabling end-to-end optimization of the whole network.
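As a concrete reading of Eq. (6), the following NumPy sketch evaluates the penalty for a single persistence diagram that is already sorted by descending persistence. The function name is illustrative, and the differentiable propagation of this penalty back to the image through the inverse maps (3)–(4) is handled by the topology layer [7] and is not reproduced here.

```python
import numpy as np

def topological_penalty(diagram, target_betti):
    """Eq. (6): pull the target_betti most persistent features toward (birth, death) = (0, 1)
    and collapse the remaining features onto the diagonal.
    `diagram` is an (n, 2) array of (birth, death) pairs sorted by descending persistence."""
    diagram = np.asarray(diagram, dtype=float)
    births, deaths = diagram[:, 0], diagram[:, 1]
    keep = np.abs(1.0 - deaths[:target_betti] + births[:target_betti]).sum()
    kill = np.abs(deaths[target_betti:] - births[target_betti:]).sum()
    return keep + kill

# One connected component should survive (beta0 = 1); spurious components are penalized.
pd0 = np.array([[0.05, 0.95], [0.40, 0.55], [0.70, 0.72]])
print(topological_penalty(pd0, target_betti=1))  # ~0.27 (= 0.10 + 0.15 + 0.02)
```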

An example of this framework is depicted in Figure 1. Panels A and B in Figure 1 show the target and query images, respectively, with image intensities normalized to the [0, 1] range. The colors blue, red, and dark yellow represent low, medium, and high intensity values such as 0.01, 0.20, and 0.70. In Figure 1.C, green and magenta markers are associated with the target and query images, and square and circle markers correspond to the $H_0$ and $H_1$ homology groups, respectively. The unfilled markers located on the horizontal axis are the elements projected onto the diagonal line to match the cardinalities of the target persistence diagram $PD$ and the query persistence diagram $\widehat{PD}$. The arrows are the distances between the target and query features, which together measure the distance between the two persistence diagrams. For example, when the filtration value reaches the red color (e.g. 0.20), the letter "j" appears, which results in the birth of a hole confined between the letter "C" and the letter "j", in addition to the birth of a new connected component corresponding to the dot of the letter "j". The changes due to the appearance of the letter "j" are reflected in Figure 1.C as the magenta circle and square with mid-range persistence values. The magenta circle and square with low persistence values in the lower right corner correspond to the letter "Q".

Figure 1.

An example of a persistence diagram. Panels A and B are the target and query images, respectively. The image intensities are normalized to the [0, 1] range, and the colors blue, red, and dark yellow represent low, medium, and high intensity values. In panel C, green and magenta markers are associated with the target and query images, while square and circle markers correspond to the H0 and H1 homology groups, respectively. The unfilled green markers located on the horizontal axis are the elements projected onto the diagonal to match the cardinalities of the two persistence diagrams. The arrows in panel C represent the distances between the two persistence diagrams.

3.2. Network architecture

We adopted the network architecture of DeepVess [21], the state-of-the-art architecture optimized for vessel segmentation in MPM images, as our baseline architecture (Fig. 2). Briefly, DeepVess takes an input image that is 7 voxels deep along the third dimension and initially applies three 3D convolution layers with a 3 × 3 × 3 kernel size, 32 features, and no padding, followed by a max-pooling layer with a 2 × 2 kernel size and 2 × 2 strides. Then two 2D convolution layers with a 3 × 3 kernel size and 64 features, followed by a max-pooling layer with a 2 × 2 kernel size and 2 × 2 strides, are applied. Finally, the output of the last max-pooling layer is flattened and passed through a 1024-node fully-connected hidden layer with a dropout layer at training time (with 50% probability), followed by the last fully-connected layer, which is reshaped to the output patch size. LeakyReLU was used as the activation function for all layers. The width of the input image, and consequently the receptive field and the output size, were design hyperparameters optimized based on the new loss function and prior knowledge of the topological characteristics of brain vasculature networks.
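The following Keras sketch assembles the backbone as we read the description above and Figure 2. The valid padding in the 2D stage, the pooling-after-reshape ordering, the 21 × 21 output patch, and the final sigmoid are our assumptions where the text leaves details open, so treat it as an illustration rather than the released implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_model(out_size=21):
    """Sketch of the DeepVess-style backbone described above (cf. Fig. 2).
    Assumes valid padding throughout, LeakyReLU activations, and a sigmoid
    over an out_size x out_size output patch."""
    inp = layers.Input(shape=(7, 98, 98, 1))          # (depth, height, width, channels)
    x = inp
    for _ in range(3):                                # three 3D conv layers, 32 features each
        x = layers.Conv3D(32, 3, padding="valid")(x)
        x = layers.LeakyReLU()(x)
    # After the 3D stage the depth axis is a singleton: (1, 92, 92, 32) -> drop it.
    x = layers.Reshape((92, 92, 32))(x)
    x = layers.MaxPool2D(pool_size=2, strides=2)(x)   # -> (46, 46, 32)
    for _ in range(2):                                # two 2D conv layers, 64 features each
        x = layers.Conv2D(64, 3, padding="valid")(x)
        x = layers.LeakyReLU()(x)
    x = layers.MaxPool2D(pool_size=2, strides=2)(x)   # -> (21, 21, 64)
    x = layers.Flatten()(x)
    x = layers.Dense(1024)(x)                         # 1024-node fully-connected hidden layer
    x = layers.LeakyReLU()(x)
    x = layers.Dropout(0.5)(x)                        # active only at training time
    x = layers.Dense(out_size * out_size, activation="sigmoid")(x)
    out = layers.Reshape((out_size, out_size, 1))(x)  # output patch
    return tf.keras.Model(inp, out)
```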

Figure 2.

Our convolutional neural network architecture. The input is a 98 × 98 × 7 3D image patch with a single input channel. The first three 3D convolution layers with a 3 × 3 × 3 kernel size and 32 features are followed by a max-pooling layer with a 2 × 2 kernel size and 2 × 2 strides. Then two 2D convolution layers with a 3 × 3 kernel size and 64 features, followed by a max-pooling layer with a 2 × 2 kernel size and 2 × 2 strides, are applied. Finally, a dense-layer classifier with a 1024-node fully-connected hidden layer, as well as a dropout layer at training time, is applied. The loss function terms are based on the generalized Dice, cross-entropy, and topological persistence losses.

3.3. Learning and implementation

We preprocessed the 3D in vivo MPM images of mouse brain vasculature networks following [21]. Briefly, physiological motion artifacts were removed using non-rigid non-parametric diffeomorphic demons image registration [43], followed by image intensity normalization and resampling to 1 μm³ spatial resolution.
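A minimal sketch of the normalization and resampling steps, assuming a simple min-max normalization and linear interpolation; the motion-artifact removal with diffeomorphic demons registration [43] is a separate step that is not reproduced here.

```python
import numpy as np
from scipy import ndimage

def preprocess(volume, voxel_size_um):
    """Normalize intensities and resample a motion-corrected stack to an isotropic
    1 um^3 grid. `voxel_size_um` is the (z, y, x) voxel size of the raw stack in um.
    The min-max normalization is our assumption about the exact scheme."""
    vol = volume.astype(np.float32)
    vol = (vol - vol.min()) / (vol.max() - vol.min() + 1e-8)     # intensities in [0, 1]
    zoom = np.asarray(voxel_size_um, dtype=float) / 1.0          # target spacing: 1 um
    return ndimage.zoom(vol, zoom=zoom, order=1)                 # linear interpolation
```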

DeepVess addressed the highly imbalanced class population problem by using a customized cross-entropy loss function (7),

$L_{\mathrm{DeepVess}}(y, \hat{y}) = \sum_{i \in \{\mathrm{TP}, \mathrm{FP}, \mathrm{FN}\}} \mathrm{CE}(y_i, \hat{y}_i),$ (7)

which is measured over all voxels except true negative voxels. Instead, we used generalized Dice loss function (8) due to the successful results in [30],

$L_{\mathrm{Dice}}(y, \hat{y}) = 1 - \frac{2 \sum_i (y_i \times \hat{y}_i) + \epsilon}{\sum_i (y_i^2 + \hat{y}_i^2) + \epsilon}.$ (8)

Additionally, we used the standard cross-entropy loss function ($L_{\mathrm{CE}}$) during the initial optimization warm-up to decrease the required training time,

$\mathrm{CE}(y_i, \hat{y}_i) = -y_i \times \log(\hat{y}_i) - (1 - y_i) \times \log(1 - \hat{y}_i).$ (9)

In this training regime, $L_{\mathrm{CE}}$ is initially the dominant loss term, but after the warm-up phase the Dice term automatically dominates the optimization as a result of the highly imbalanced class populations.
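A sketch of the two supervised loss terms, Eqs. (8) and (9), in TensorFlow; the epsilon values and the mean reduction of the cross-entropy are our choices and may differ from the released implementation.

```python
import tensorflow as tf

def dice_loss(y_true, y_pred, eps=1e-6):
    """Generalized Dice loss, Eq. (8), with a small epsilon for nearly empty patches."""
    inter = tf.reduce_sum(y_true * y_pred)
    denom = tf.reduce_sum(tf.square(y_true) + tf.square(y_pred))
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

def cross_entropy_loss(y_true, y_pred, eps=1e-7):
    """Voxel-wise binary cross-entropy, Eq. (9), averaged over the patch."""
    y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
    ce = -(y_true * tf.math.log(y_pred) + (1.0 - y_true) * tf.math.log(1.0 - y_pred))
    return tf.reduce_mean(ce)
```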

The objective of the topological cost function (6) is to enforce the prior knowledge of the desired topological characteristics in terms of the target Betti numbers ($\beta_k$). Due to the fractal characteristics of venous and arterial networks and their interaction through the grid-like networks of capillaries [37], the topological characteristics of the brain vasculature network are scale-dependent. Therefore, the optimal field of view under the topological constraints is the largest field of view that has similar Betti numbers across different regions. In our case, the mouse brain vasculature network, we observed that there is no large capillary loop within a 21 × 21 μm² region, which translates to a target $\beta_0$ of 1 and $\beta_1$ of 0, equivalent to one connected component without any hole. Therefore, the topological loss function is deterministic, without the need to be inferred from the data, and it can be formulated based on (6) as

$L_T(\widehat{PD}) = \sum_{k=0}^{D-1} E_T(\tilde{\beta}_k; \widehat{PD}_k) = E_T(1; \widehat{PD}_0) + E_T(0; \widehat{PD}_1).$ (10)

The final proposed loss function contains three terms: the Dice, cross-entropy, and topological loss terms. We evaluated the topological loss on 10% of the samples in each mini-batch to reduce the computational cost while maintaining a sufficient gradient signal,

$L(y, \hat{y}) = L_{\mathrm{Dice}}(y, \hat{y}) + \lambda L_{\mathrm{CE}}(y, \hat{y}) + \lambda' L_T(PD(\hat{y}_1, \ldots, \hat{y}_{n/10})).$ (11)

We implemented our model in TensorFlow [1] and measured PH using the "Topology Layer" package [7]. We trained the model using Adam stochastic optimization with a learning rate of 10⁻⁴ for 2000 epochs. We set λ and λ′ to the optimal values of 1 and 0.01 based on the evaluation results on the validation dataset. The division patterns of the image patches within the training dataset were shifted randomly to cover different possible slicing schemes, and the same procedure was used to enhance the classification at the inference phase. Our code and model are open source at https://github.com/mhaft/DeepVess.
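Putting the pieces together, a sketch of the combined objective of Eq. (11) and the optimizer settings reported above; it reuses the dice_loss and cross_entropy_loss sketches given after Eq. (9), and assumes the persistence-based term of Eq. (10) is supplied by the topology layer [7] on roughly 10% of the mini-batch samples.

```python
import tensorflow as tf

LAMBDA, LAMBDA_PRIME = 1.0, 0.01     # lambda and lambda' selected on the validation set

def total_loss(y_true, y_pred, topo_term):
    """Eq. (11): generalized Dice + lambda * cross-entropy + lambda' * topological penalty.
    `topo_term` stands for the persistence-based penalty of Eq. (10), computed elsewhere
    (e.g. by the topology layer [7]) on a subset of the mini-batch.
    dice_loss and cross_entropy_loss are the sketches given after Eq. (9)."""
    return (dice_loss(y_true, y_pred)
            + LAMBDA * cross_entropy_loss(y_true, y_pred)
            + LAMBDA_PRIME * topo_term)

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)   # Adam, 1e-4, 2000 epochs
```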

4. Experiments

4.1. Data

We demonstrate the performance of our algorithm on the DeepVess dataset [21], which is the largest publicly available, manually labeled, in vivo MPM image dataset of mouse brain vasculature, with 24 mice and more than 13 × 10⁶ μm³ of labeled images. We adopted its division into training, validation, and testing sub-datasets (50%, 25%, and 25%). While we fine-tuned and optimized the models based on performance on the validation dataset, the final model accuracy results were measured on the independent test sub-dataset.

4.2. Baseline and evaluation

We compare our proposed model to the state-of-the-art segmentation methods VesselNN [41], DeepVess [21], and UMIS [19], in addition to VIDA [42] and 3D U-Net [9], which are methods commonly used in the analysis of MPM and other biomedical imaging modalities.

We evaluate our model qualitatively with figures, in addition to quantitatively based on accuracy, sensitivity, specificity, the Dice coefficient (a.k.a. F1), and the Jaccard index. These metrics are based on the populations of true positive (TP), true negative (TN), false positive (FP), and false negative (FN) cases,

$\mathrm{Sensitivity} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}},$ (12)
$\mathrm{Specificity} = \frac{\mathrm{TN}}{\mathrm{TN} + \mathrm{FP}},$ (13)
$\mathrm{Accuracy} = \frac{\mathrm{TP} + \mathrm{TN}}{\mathrm{TP} + \mathrm{TN} + \mathrm{FP} + \mathrm{FN}},$ (14)
$\mathrm{Jaccard} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP} + \mathrm{FN}},$ (15)
$\mathrm{Dice} = \frac{2 \times \mathrm{TP}}{2 \times \mathrm{TP} + \mathrm{FP} + \mathrm{FN}}.$ (16)
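The metrics of Eqs. (12)–(16) reduce to simple voxel counts; a NumPy sketch (omitting zero-division guards) is given below for reference.

```python
import numpy as np

def segmentation_metrics(y_true, y_pred):
    """Voxel-wise metrics of Eqs. (12)-(16) for binary masks."""
    y_true = np.asarray(y_true, dtype=bool).ravel()
    y_pred = np.asarray(y_pred, dtype=bool).ravel()
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "jaccard": tp / (tp + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),
    }
```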

4.3. Results and Discussion

We constructed a loss function (11) that enables CNN models to be optimized to segment a highly imbalanced foreground through the Dice term, while encoding prior knowledge of the topological characteristics of the segmentation results into the dense-layer classifier through the topological loss term.

We compared the performance of our proposed model to the state-of-the-art baselines quantitatively in Table 1 and qualitatively in Figure 4. Our model outperforms the other models in terms of the Dice and Jaccard indices. Since foreground voxels make up less than 10% of the population, these two metrics are better indicators of the actual performance of a model than other metrics such as accuracy. A classifier that labels all input voxels as background achieves over 90% accuracy and 100% specificity (or 100% sensitivity if it labels all voxels as foreground); nevertheless, its Dice and Jaccard indices are 0%. Some of the other models produce results with slightly higher accuracy, sensitivity, or specificity, but with lower Dice and Jaccard indices, in addition to exhibiting topological errors (Figure 4). These remaining shortcomings indicate the capacity for further improvement, which matters both because this tool is frequently used in different branches of biomedical research and because downstream image analysis tasks, such as centerline extraction, depend on the quality of the segmentation results.

Table 1.

Results of our model in comparison to the baseline models. Baseline models are VIDA [42], VesselNN [41], U-Net [9], UMIS [19], and DeepVess [21]. The measured performance metrics are accuracy, sensitivity, specificity, the Dice coefficient, and the Jaccard index, with the last two metrics being correlated. Our model achieves higher Dice and Jaccard indices while maintaining comparable performance in terms of the other metrics.

Model Accuracy Sensitivity Specificity Dice Jaccard
VIDA [42, 19] 60.2% 99.8% 56.4% 30.4% 17.9%
VesselNN [41, 21] 95.1% 62.4% 98.7% 69.7% 55.1%
3D U-Net [9, 22] 95.6% 70.0% 98.2% 72.7% 59.4%
UMIS [19] 98.6% 99.2% 98.6% 82.9% 70.8%
DeepVess [21] 96.4% 90.0% 97.0% 81.6% 69.2%
Ours 96.8% 96.4% 98.2% 86.5% 76.2%

Figure 4.

Qualitative comparison between our model and DeepVess [21]. A–B. Cross-sections spaced through the depth are plotted. The first column is the image intensity of the 2D slice in gray-scale. The second and third columns are the results of DeepVess and our model overlaid on the intensity images. The last column of each panel is the entropy, with cooler and warmer colors representing lower and higher uncertainty levels, respectively. C. The results of DeepVess and our model applied to the last sample. The cyan and yellow arrows indicate the discontinuity and hole artifacts, respectively. Scale bar: 100 μm.

The use of infrared light to excite fluorophores with high absorption cross-sections enables deeper imaging in light-scattering tissues with low energy requirements and high resolution. However, the main sources of image processing difficulty are the rapid loss of signal and the background signal from the superficial layers in images acquired deeper within the tissue. Due to this intrinsic limitation, studying the model performance as a function of depth is essential. Figure 3 shows the box plot of the Dice coefficient measured at each slice for all slices in the 3D image. Our model shows lower Dice variation than DeepVess while maintaining a similar median performance, indicating that our model is less sensitive to the SNR variation over the imaging depth and produces more consistent results.

Figure 3.

2D slice-based Dice coefficient variation as a function of imaging depth based on our model and baselines. On each box, the central red line indicates the median, while the bottom and top limits of the box indicate the first and third quartiles, respectively. The whiskers do not include outliers, which are plotted using the plus sign.

Figure 4 illustrates the results of our model and DeepVess overlaid on the input images. In Figure 4.A–B, cross-sections spaced through the depth are plotted. The first column is the image intensity of the 2D slice in grayscale. The second and third columns are the results of DeepVess and our model overlaid on the intensity images. The input images were divided into input patches using different splitting schemes to measure our model's uncertainty via Shannon's entropy. Higher entropy values correlate with higher segmentation uncertainty. The last column shows the entropy, with cooler and warmer colors representing lower and higher uncertainty levels, respectively.
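A sketch of the uncertainty map described above; averaging the foreground probabilities over the shifted splitting schemes before computing the binary Shannon entropy is our assumption about the exact aggregation.

```python
import numpy as np

def prediction_entropy(probability_maps, eps=1e-8):
    """Binary Shannon entropy of the mean foreground probability across segmentations of
    the same volume obtained with shifted patch-splitting schemes; higher values indicate
    higher segmentation uncertainty."""
    p = np.mean(np.stack(probability_maps, axis=0), axis=0)      # average over splitting schemes
    return -(p * np.log2(p + eps) + (1 - p) * np.log2(1 - p + eps))
```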

Although the deeper slices, e.g. Figure 4.B, have more voxels with uncertainty, the level of uncertainty is evidently lower than in the results of [21], and the topological features of the segmentation results are close to the ground truth and to the topological priors in our application. In Figure 4.C, the cyan and yellow arrows in the DeepVess result indicate discontinuity and hole artifacts, respectively. The segmentation results of our model do not contain any holes, and the integrity and continuity of the vessels are very well maintained. These two characteristics are essential for downstream image analysis tasks, including vessel centerline extraction.

5. Conclusion

We proposed a convolutional neural network architecture with a loss function consisting of three terms, including a topological loss term. The topological loss term is based on persistent homology and encodes prior knowledge of the topological features and their characteristics into the model. We demonstrated the application of this model to the segmentation of brain vasculature networks in multiphoton microscopy images of mice. Our model outperforms the state-of-the-art models in terms of the Dice and Jaccard indices, has comparable accuracy, sensitivity, and specificity, and produces segmentation results with fewer topological errors than the state-of-the-art models.

Acknowledgment

This work was supported by the National Institutes of Health NIBIB under Grant P41EB-015903 and Grant P41EB015902. In addition, MH was supported by the Bullock Postdoctoral Fellowship.

References

  • [1].Abadi Martín, Agarwal Ashish, Barham Paul, Brevdo Eugene, Chen Zhifeng, Citro Craig, Corrado Greg S, Davis Andy, Dean Jeffrey, Devin Matthieu, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016. [Google Scholar]
  • [2].Assaf Rabih, Goupil Alban, Kacim Mohammad, and Vrabie Valeriu. Topological persistence based on pixels for object segmentation in biomedical images. In 2017 Fourth International Conference on Advances in Biomedical Engineering (ICABME), pages 1–4. IEEE, 2017. [Google Scholar]
  • [3].Bates Russell, Irving Benjamin, Markelc Bostjan, Kaeppler Jakob, Muschel Ruth, Grau Vicente, and Schnabel Julia A. Extracting 3d vascular structures from microscopy images using convolutional recurrent networks. arXiv preprint arXiv:1705.09597, 2017. [Google Scholar]
  • [4].Blinder Pablo, Tsai Philbert S, Kaufhold John P, Knutsen Per M, Suhl Harry, and Kleinfeld David. The cortical angiome: an interconnected vascular network with noncolumnar patterns of blood flow. Nature neuroscience, 16(7):889, 2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [5].Bracko Oliver, Njiru Brendah N, Swallow Madisen, Ali Muhammad, Haft-Javaherian Mohammad, and Schaffer Chris B. Increasing cerebral blood flow improves cognition into late stages in alzheimer’s disease mice. Journal of Cerebral Blood Flow & Metabolism, page 0271678X19873658, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [6].Bracko OLIVER, Vinarcsik Lindsay K, Cruz Hernandez Jean C, Ruiz-Uribe Nancy E, Haft-Javaherian Mohammad, Falkenhain Kaja, Ramanauskaite Egle M, Ali Muhammad, Mohapatra Aditi, Swallow Madisen A, et al. High fat diet worsens pathology and impairment in an alzheimers mouse model, but not by synergistically decreasing cerebral blood flow. bioRxiv, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [7].Brüel-Gabrielsson Rickard, Nelson Bradley J, Dwaraknath Anjan, Skraba Primoz, Guibas Leonidas J, and Carlsson Gunnar. A topology layer for machine learning. arXiv preprint arXiv:1905.12200, 2019. [Google Scholar]
  • [8].Chen Chao, Ni Xiuyan, Bai Qinxun, and Wang Yusu. A topological regularizer for classifiers via persistent homology. arXiv preprint arXiv:1806.10714, 2018. [Google Scholar]
  • [9].Çiçek Özgün, Abdulkadir Ahmed, Lienkamp Soeren S, Brox Thomas, and Ronneberger Olaf. 3d u-net: learning dense volumetric segmentation from sparse annotation. In International conference on medical image computing and computer-assisted intervention, pages 424–432. Springer, 2016. [Google Scholar]
  • [10].Clough James R, Oksuz Ilkay, Byrne Nicholas, Zimmer Veronika A, Schnabel Julia A, and King Andrew P. A topological loss function for deep-learning based image segmentation using persistent homology. arXiv preprint arXiv:1910.01877, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [11].Cruz Hernández Jean C, Bracko Oliver, Kersbergen Calvin J, Muse Victorine, Haft-Javaherian Mohammad, Berg Maxime, Park Laibaik, Vinarcsik Lindsay K, Ivasyk Iryna, Rivera Daniel A, et al. Neutrophil adhesion in brain capillaries reduces cortical blood flow and impairs memory function in alzheimer’s disease mouse models. Nature neuroscience, 22(3):413–420, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [12].Edelsbrunner Herbert and Harer John. Computational topology: an introduction. American Mathematical Soc., 2010. [Google Scholar]
  • [13].Falkenhain Kaja, Ruiz-Uribe Nancy E, Haft-Javaherian Mohammad, Ali Muhammad, Michelucci Pietro E, Schaffer Chris B, Bracko OLIVER, et al. Voluntary running does not increase capillary blood flow but promotes neurogenesis and short-term memory in the app/ps1 mouse model of alzheimers disease. bioRxiv, 2020. [Google Scholar]
  • [14].Fraz Muhammad Moazam, Remagnino Paolo, Hoppe Andreas, Uyyanonvara Bunyarit, Rudnicka Alicja R, Owen Christopher G, and Barman Sarah A. Blood vessel segmentation methodologies in retinal images–a survey. Computer methods and programs in biomedicine, 108(1):407–433, 2012. [DOI] [PubMed] [Google Scholar]
  • [15].Fu Huazhu, Xu Yanwu, Lin Stephen, Wong Damon Wing Kee, and Liu Jiang. Deepvessel: Retinal vessel segmentation via deep learning and conditional random field. In International conference on medical image computing and computer-assisted intervention, pages 132–139. Springer, 2016. [Google Scholar]
  • [16].Fu Huazhu, Xu Yanwu, Wong Damon Wing Kee, and Liu Jiang. Retinal vessel segmentation via deep learning network and fully-connected conditional random fields. In 2016 IEEE 13th international symposium on biomedical imaging (ISBI), pages 698–701. IEEE, 2016. [Google Scholar]
  • [17].Gao Mingchen, Chen Chao, Zhang Shaoting, Qian Zhen, Metaxas Dimitris, and Axel Leon. Segmenting the papillary muscles and the trabeculae from high resolution cardiac ct through restoration of topological handles. In International Conference on Information Processing in Medical Imaging, pages 184–195. Springer, 2013. [DOI] [PubMed] [Google Scholar]
  • [18].Gulshan Varun, Peng Lily, Coram Marc, Martin C Stumpe Derek Wu, Narayanaswamy Arunachalam, Venugopalan Subhashini, Widner Kasumi, Madams Tom, Cuadros Jorge, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. Jama, 316(22):2402–2410, 2016. [DOI] [PubMed] [Google Scholar]
  • [19].Gur Shir, Wolf Lior, Golgher Lior, and Blinder Pablo. Unsupervised microvascular image segmentation using an active contours mimicking neural network. In Proceedings of the IEEE International Conference on Computer Vision, pages 10722–10731, 2019. [Google Scholar]
  • [20].Haft Javaherian Mohammad. Quantitative assessment of cerebral microvasculature using machine learning and network analysis. 2019.
  • [21].Haft-Javaherian Mohammad, Fang Linjing, Muse Victorine, Schaffer Chris B, Nishimura Nozomi, and Sabuncu Mert R. Deep convolutional neural networks for segmenting 3d in vivo multiphoton images of vasculature in alzheimer disease mouse models. PloS one, 14(3), 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [22].Haft-Javaherian Mohammad, Schaffer Chris B, Nishimura Nozomi, and Sabuncu Mert R. Comparison of convolutional neural and fully convolutional networks for segmentation of 3d in vivo multiphoton microscopy images of brain vasculature. In Optics and the Brain, pages BT2A-4. Optical Society of America, 2019. [Google Scholar]
  • [23].Hofer Christoph, Kwitt Roland, Niethammer Marc, and Uhl Andreas. Deep learning with topological signatures. In Advances in Neural Information Processing Systems, pages 1634–1644, 2017. [Google Scholar]
  • [24].Hossmann KA. Viability thresholds and the penumbra of focal ischemia. Annals of neurology, 36(4):557–565, 1994. [DOI] [PubMed] [Google Scholar]
  • [25].Larson Adam M. Multiphoton microscopy. Nature Photonics, 5(1):1, 2010. [Google Scholar]
  • [26].Lesage David, Angelini Elsa D, Bloch Isabelle, and Funka-Lea Gareth. A review of 3d vessel lumen segmentation techniques: Models, features and extraction schemes. Medical image analysis, 13(6):819–845, 2009. [DOI] [PubMed] [Google Scholar]
  • [27].Litjens Geert, Kooi Thijs, Bejnordi Babak Ehteshami, Setio Arnaud Arindra Adiyoso, Ciompi Francesco, Ghafoorian Mohsen, Van Der Laak Jeroen Awm, Van Ginneken Bram, and Sánchez Clara I. A survey on deep learning in medical image analysis. Medical image analysis, 42:60–88, 2017. [DOI] [PubMed] [Google Scholar]
  • [28].Long Jonathan, Shelhamer Evan, and Darrell Trevor. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3431–3440, 2015. [DOI] [PubMed] [Google Scholar]
  • [29].Mille Julien and Cohen Laurent D. Deformable tree models for 2d and 3d branching structures extraction. In 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pages 149–156. IEEE, 2009. [Google Scholar]
  • [30].Milletari Fausto, Navab Nassir, and Ahmadi Seyed-Ahmad. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In 2016 Fourth International Conference on 3D Vision (3DV), pages 565–571. IEEE, 2016. [Google Scholar]
  • [31].Peyré Gabriel, Cuturi Marco, et al. Computational optimal transport. Foundations and Trends® in Machine Learning, 11(5–6):355–607, 2019. [Google Scholar]
  • [32].Qaiser Talha, Tsang Yee-Wah, Epstein David, and Rajpoot Nasir. Tumor segmentation in whole slide images using persistent homology and deep convolutional features. In Annual Conference on Medical Image Understanding and Analysis, pages 320–329. Springer, 2017. [DOI] [PubMed] [Google Scholar]
  • [33].Rieck Bastian, Fugacci Ulderico, Lukasczyk Jonas, and Leitte Heike. Clique community persistence: A topological visual analysis approach for complex networks. IEEE transactions on visualization and computer graphics, 24(1):822–831, 2017. [DOI] [PubMed] [Google Scholar]
  • [34].Rieck Bastian, Togninalli Matteo, Bock Christian, Moor Michael, Horn Max, Gumbsch Thomas, and Borgwardt Karsten. Neural persistence: A complexity measure for deep neural networks using algebraic topology. In International Conference on Learning Representations (ICLR), 2019. [Google Scholar]
  • [35].Ronneberger Olaf, Fischer Philipp, and Brox Thomas. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234–241. Springer, 2015. [Google Scholar]
  • [36].Sizemore Ann, Giusti Chad, and Bassett Danielle S. Classification of weighted networks through mesoscale homological features. Journal of Complex Networks, 5(2):245–273, 2017. [Google Scholar]
  • [37].Smith Amy F, Doyeux Vincent, Berg Maxime, Peyrounette Myriam, Haft-Javaherian Mohammad, Larue Anne-Edith, Slater John H, Lauwers Frédéric, Blinder Pablo, Tsai Philbert, et al. Brain capillary networks across species: a few simple organizational requirements are sufficient to reproduce both structure and function. Frontiers in physiology, 10:233, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [38].Sonntag William E, Eckman Delrae M, Ingraham Jeremy, and Riddle David R. Regulation of cerebrovascular aging. In Riddle David R, editor, Brain aging: models, methods, and mechanisms, pages 279–304. CRC Press/Taylor & Francis, Boca Raton, FL, 2007. [PubMed] [Google Scholar]
  • [39].Srinivasan Vivek J, Radhakrishnan Harsha, Lo Eng H, Mandeville Emiri T, Jiang James Y, Barry Scott, and Cable Alex E. Oct methods for capillary velocimetry. Biomedical optics express, 3(3):612–629, 2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [40].Szegedy Christian, Vanhoucke Vincent, Ioffe Sergey, Shlens Jon, and Wojna Zbigniew. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818–2826, 2016. [Google Scholar]
  • [41].Teikari Petteri, Santos Marc, Poon Charissa, and Hynynen Kullervo. Deep learning convolutional networks for multiphoton microscopy vasculature segmentation. arXiv preprint arXiv:1606.02382, 2016. [Google Scholar]
  • [42].Tsai Philbert S, Kaufhold John P, Blinder Pablo, Friedman Beth, Drew Patrick J, Karten Harvey J, Lyden Patrick D, and Kleinfeld David. Correlations of neuronal and microvascular densities in murine cortex revealed by direct counting and colocalization of nuclei and vessels. Journal of Neuroscience, 29(46):14553–14570, 2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [43].Vercauteren Tom, Pennec Xavier, Perchant Aymeric, and Ayache Nicholas. Diffeomorphic demons: Efficient non-parametric image registration. NeuroImage, 45(1):S61–S72, 2009. [DOI] [PubMed] [Google Scholar]
  • [44].Wu Aaron, Xu Ziyue, Gao Mingchen, Buty Mario, and Mollura Daniel J. Deep vessel tracking: A generalized probabilistic approach via deep learning. In 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), pages 1363–1367. IEEE, 2016. [Google Scholar]
  • [45].Yi Jaeyoun and Ra Jong Beom. A locally adaptive region growing algorithm for vascular segmentation. International Journal of Imaging Systems and Technology, 13(4):208–214, 2003. [Google Scholar]
