J Clin Med. 2019 Sep 11;8(9):1446. doi: 10.3390/jcm8091446

Table 1. Comparison between Vess-Net and previous methods for vessel segmentation.

| Type | Method | Strength | Weakness |
|---|---|---|---|
| Using handcrafted local features | Vessel segmentation using thresholding [23,24,28,29,31,33,34,35,36] | Simple method for approximating vessel pixels | False points are detected when vessel pixel values are close to the background |
| | Fuzzy-based segmentation [25] | Performs well with uniform pixel values | Intensive pre-processing is required to intensify the blood vessels' response |
| | Active contours [26,30] | Better approximation of real boundaries | Iterative, time-consuming process |
| | Vessel tubular properties-based method [32] | Good estimation of vessel-like structures | Limited by pixel discontinuities |
| | Line detection-based method [27] | Removing the background helps reduce false skin-like pixels | |
| Using features based on machine learning or deep learning | Random forest classifier-based method [37] | Lightweight method for classifying pixels | Various transformations are needed before classification to form features |
| | Patch-based CNN [38,42] | Better classification | Training and testing require long processing times |
| | SVM-based method [41] | Lower training time | Pre-processing schemes over several images are needed to produce the feature vector |
| | Extreme learning machine [39] | Machine learning with many discriminative features | Morphology and other conventional approaches are needed to produce discriminative features |
| | Mahalanobis distance classifier [40] | Simple training procedure | Pre-processing overhead is still required to compute relevant features |
| | U-Net-based CNN for semantic segmentation [43] | U-Net structure preserves boundaries well | Grayscale pre-processing is required |
| | Multi-scale CNN [44,47] | Better learning due to multiple receptive fields | Tiny vessels are not detected in certain cases |
| | CNN with CRFs [45] | CNN with few layers provides faster segmentation | CRFs are computationally complex |
| | SegNet-inspired method [46] | Encoder-decoder architecture provides a uniform structure of network layers | PCA is used to prepare data for training |
| | CNN with visual codebook [48] | 10-layer CNN correlates with the ground-truth representation | No end-to-end system for training and testing |
| | CNN with quantization and pruning [49] | Pruned convolutions increase network efficiency | Fully connected layers increase the number of trainable parameters |
| | Three-stage CNN-based deep-learning method [50] | Fusion of multi-feature images provides a powerful representation | Using three CNNs requires more computational power and cost |
| | Modified U-Net with Dice loss [51] | Dice loss gives good results with unbalanced classes | PCA is used to prepare data for training |
| | Deformable U-Net-based method [52] | Deformable networks can adequately accommodate geometric variations in the data | Patch-based training and testing is time-consuming |
| | PixelBNN [53] | Pixel CNN is well known for predicting pixels with spatial dimensions | CLAHE is used for pre-processing |
| | Dense U-Net-based method [54] | Dense blocks help alleviate the vanishing-gradient problem | Patch-based training and testing is time-consuming |
| | Cross-connected CNN (CcNet) [55] | Cross-connections between layers strengthen features | Complex architecture with pre-processing |
| | Vess-Net (this work) | Robust segmentation with fewer layers | Augmented data are necessary to fully train the network |
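The first row of the table can be illustrated with a minimal sketch of threshold-based vessel segmentation. This is a generic NumPy illustration, not the implementation used by any of the cited methods: retinal vessels typically appear darker than the surrounding tissue, so pixels below a chosen intensity threshold are labeled as vessel. The synthetic intensity values and the threshold of 0.5 are assumptions for demonstration only.

```python
import numpy as np

def threshold_vessels(gray, thresh=0.5):
    """Label pixels darker than `thresh` as vessel pixels.

    `gray` is a float array of intensities in [0, 1]; vessels are
    assumed to be darker than the background. Returns a boolean mask.
    """
    return gray < thresh

# Synthetic 1-D scanline: background around 0.8, a dark vessel near 0.3,
# and one vessel pixel (0.55) whose value is close to the background.
row = np.array([0.8, 0.8, 0.3, 0.35, 0.8, 0.55, 0.8])
mask = threshold_vessels(row, thresh=0.5)
```

The 0.55 pixel is not picked up by the mask, which illustrates the weakness noted in the table: when vessel pixel values are close to the background, simple thresholding misses them (or, with a looser threshold, produces false positives).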