Sensors. 2023 Mar 30;23(7):3618. doi: 10.3390/s23073618

Table 6.

Summary of studies on the semantic segmentation of chronic wounds.

Reference | Subject and Classes (Each Bullet Represents a Different Model) | Methodology | Original Images (Training Samples) | Results
García-Zapirain et al. (2018)
[26]
Pressure ulcer
  • Granulation/Slough/Necrotic

  • A CNN with two parallel branches with four convolutional layers was used to extract ROIs from HSI color space and Gaussian-smoothed images

  • Morphological filters were applied

  • ROIs were further processed by a CNN with four branches with eight convolutional layers; a prior model and LCDG modalities were used to make up the four input modalities

193 (193)
  • F1: 0.92

Li et al. (2018)
[22]
Wound
  • Wound/Non-Wound

  • Manual skin proportion evaluation

  • YCbCr color space; the Cr channel was used to segment skin regions

  • Images were normalized

  • Augmentation was applied

  • Thirteen layers taken from MobileNet were used for skin region segmentation

  • Semantic correction was applied

  • Ambiguous background removal

950 (57,000)
  • IoU: 0.8588
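
To make the Cr-channel step in Li et al. (2018) concrete, the following is a minimal OpenCV sketch, not the authors' code, of masking candidate skin pixels in YCbCr space; the Cr thresholds and the morphological cleanup are illustrative assumptions.

```python
import cv2
import numpy as np

def skin_mask_cr(bgr_image: np.ndarray,
                 cr_low: int = 133, cr_high: int = 173) -> np.ndarray:
    """Binary mask of candidate skin pixels based on the Cr channel (thresholds assumed)."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    cr = ycrcb[:, :, 1]                                      # OpenCV orders channels Y, Cr, Cb
    mask = ((cr >= cr_low) & (cr <= cr_high)).astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # drop isolated false positives
```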

Zahia et al. (2018)
[36]
Pressure ulcer
  • Granulation/Slough/Eschar

  • Flashlight reflection removal

  • Image cropping to 5 × 5 patches

  • Patch classification was performed with a CNN with three convolutional layers

  • Image quantification was performed by using classified patches

Granulation: 22 (270,762)
Necrotic: 22 (37,146)
Slough: 22 (80,636)
Granulation:
  • Accuracy: 0.9201

  • F1: 0.9731

Necrotic:
  • Accuracy: 0.9201

  • F1: 0.9659

Slough:
  • Accuracy: 0.9201

  • F1: 0.779
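
A PyTorch sketch of the patch-classification idea described for Zahia et al. (2018): tile the image into 5 × 5 patches and classify each patch into one of the three tissue classes. The layer widths and the tiling helper are assumptions, not the published architecture, and the flashlight-reflection removal step is omitted.

```python
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """Tiny CNN that classifies 5 x 5 RGB patches into tissue classes (widths assumed)."""
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(64 * 5 * 5, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

def image_to_patches(image: torch.Tensor, patch: int = 5) -> torch.Tensor:
    """Split a (3, H, W) image into non-overlapping (N, 3, patch, patch) tiles."""
    c, h, w = image.shape
    image = image[:, : h - h % patch, : w - w % patch]           # drop ragged borders
    tiles = image.unfold(1, patch, patch).unfold(2, patch, patch)
    return tiles.permute(1, 2, 0, 3, 4).reshape(-1, c, patch, patch)

# logits = PatchCNN()(image_to_patches(torch.rand(3, 480, 640)))  # one prediction per patch
```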

Jiao et al. (2019)
[93]
Burn wound
  • Superficial/Superficial Thickness/Deep Partial Thickness/Full-Thickness

  • Semantic segmentation was performed by using Mask R-CNN with the ResNet101FA backbone

1150
  • F1: 0.8451
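
For the Mask R-CNN approaches in this table (Jiao et al. (2019) here, and similar entries below), a hedged torchvision sketch follows. Recent torchvision only ships a ResNet50-FPN backbone out of the box, so it stands in for the ResNet101FA backbone of the paper, and the class count is an assumption.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# 4 burn-depth classes + background (class count assumed for illustration)
model = maskrcnn_resnet50_fpn(weights=None, num_classes=5)
model.eval()

with torch.no_grad():
    images = [torch.rand(3, 512, 512)]      # list of CHW float images in [0, 1]
    outputs = model(images)                 # per-image dict: boxes, labels, scores, masks
    masks = outputs[0]["masks"]             # (num_detections, 1, 512, 512) soft masks
```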

Khalil et al. (2019)
[16]
Wound
  • Necrotic/Granulation/Slough/Epithelial

  • Image preprocessing was performed with CLAHE

  • Statistical methods were used for color and texture feature generation

  • NMF was used for feature map reduction

  • GBT was used for pixel semantic segmentation

377
  • Accuracy: 0.96

  • F1: 0.9225
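
A minimal sketch of the CLAHE preprocessing and gradient-boosted-tree classification steps described for Khalil et al. (2019). Applying CLAHE to the L channel in Lab space and the scikit-learn hyperparameters are assumptions; the color/texture feature generation and NMF reduction are only indicated.

```python
import cv2
from sklearn.ensemble import GradientBoostingClassifier

def clahe_enhance(bgr_image, clip_limit: float = 2.0, tile_grid=(8, 8)):
    """Equalize contrast on the L channel in Lab space so colors are preserved."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

# Per-pixel color/texture features (after reduction) can then be classified with boosted trees:
gbt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
# gbt.fit(pixel_features, pixel_labels)   # shapes: (num_pixels, num_features), (num_pixels,)
```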

Li et al. (2019)
[23]
Wound
  • Wound/Non-Wound

  • Pixel locations were encoded

  • Images were concatenated with location maps

  • A MobileNet backbone was used for feature extraction

  • A depth-wise convolutional layer was used to eliminate tiny wounds and holes

950
  • IoU: 0.86468
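
A sketch of encoding pixel locations and concatenating them to the image as extra input channels, as described for Li et al. (2019); the normalized-coordinate encoding is an assumption.

```python
import torch

def add_coordinate_channels(images: torch.Tensor) -> torch.Tensor:
    """images: (N, C, H, W) -> (N, C + 2, H, W) with normalized x/y coordinate maps."""
    n, _, h, w = images.shape
    ys = torch.linspace(-1.0, 1.0, h, device=images.device).view(1, 1, h, 1).expand(n, 1, h, w)
    xs = torch.linspace(-1.0, 1.0, w, device=images.device).view(1, 1, 1, w).expand(n, 1, h, w)
    return torch.cat([images, xs, ys], dim=1)
```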

Rajathi et al. (2019)
[57]
Varicose ulcer
  • Granulation/Slough/Necrosis/Epithelial

  • Flashlight reflection removal

  • An active contour model was used for image segmentation

  • Wound segments were cropped into 5 × 5 patches

  • A CNN with two convolutional layers was used for patch classification

  • Image quantification was performed by using classified patches

1250
  • Accuracy: 0.9955

Şevik et al. (2019)
[94]
Burn wound
  • Skin/Wound/Background

  • Images were split into 64 × 64 patches

  • SegNet was used for semantic patch segmentation

105
  • F1: 0.805

Blanco et al. (2020)
[59]
Dermatological ulcer
  • Granulation/Fibrin/Necrosis/Non-Wound

  • Augmentation was applied

  • Images were segmented by using super-pixels

  • Super-pixels were classified by using the ResNet architecture

  • Classified super-pixels were used for wound quantification

217 (179,572)
  • F1: 0.971
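
A hedged sketch of super-pixel-based quantification in the spirit of Blanco et al. (2020): oversegment the image with SLIC, crop a box around each super-pixel, and classify it (a ResNet in the paper; `classify_patch` below is a placeholder callable).

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_labels(rgb_image: np.ndarray, classify_patch, n_segments: int = 400):
    """Return a per-pixel label map built from per-super-pixel class predictions."""
    segments = slic(rgb_image, n_segments=n_segments, compactness=10, start_label=0)
    label_map = np.zeros(segments.shape, dtype=np.int64)
    for seg_id in np.unique(segments):
        ys, xs = np.nonzero(segments == seg_id)
        y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
        label_map[segments == seg_id] = classify_patch(rgb_image[y0:y1, x0:x1])
    return label_map
```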

Chino et al. (2020)
[66]
Wound
  • Wound/Non-Wound

  • Augmentation was applied

  • UNet was used for semantic segmentation (backbone not specified)

  • Manual input for pixel density estimation

  • Wound measurements were returned

446 (1784)
  • F1: 0.90

Muñoz et al. (2020)
[95]
Diabetic foot ulcer
  • Wound/Non-Wound

  • Semantic segmentation was performed by using Mask R-CNN

520
  • F1: 0.964

Wagh et al. (2020)
[55]
Wound
  • Wound/Skin/Background

  • Probabilistic image augmentation was applied

  • DeepLabV3 based on ResNet101 was used for segmentation

  • CRF was applied to improve segmentation accuracy

1442
  • F1: 0.8554
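
A minimal sketch of DeepLabV3 with a ResNet101 backbone as packaged in torchvision; the probabilistic augmentation and the CRF refinement step from Wagh et al. (2020) are not shown, and the class count simply mirrors the wound/skin/background split.

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet101

model = deeplabv3_resnet101(weights=None, num_classes=3)   # wound / skin / background
model.eval()

with torch.no_grad():
    batch = torch.rand(1, 3, 512, 512)          # normalized RGB batch
    logits = model(batch)["out"]                # (1, 3, 512, 512)
    prediction = logits.argmax(dim=1)           # per-pixel class indices
```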

Wang et al. (2020)
[32]
Foot ulcer
  • Wound/Non-Wound

  • Cropping and zero padding

  • Augmentation was applied

  • The MobileNetV2 architecture with the VGG16 backbone was used to perform semantic segmentation.

  • Connected component labeling was applied to close small holes.

1109 (4050)
  • F1: 0.9405
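
A sketch of connected-component post-processing on a binary wound mask, dropping tiny components and closing small holes; the size thresholds are illustrative, not the values used by Wang et al. (2020).

```python
import numpy as np
from skimage.morphology import remove_small_objects, remove_small_holes

def clean_mask(binary_mask: np.ndarray, min_size: int = 64) -> np.ndarray:
    mask = binary_mask.astype(bool)
    mask = remove_small_objects(mask, min_size=min_size)        # drop small components
    mask = remove_small_holes(mask, area_threshold=min_size)    # close small holes
    return mask.astype(np.uint8)
```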

Zahia et al. (2020)
[28]
Pressure ulcer
  • Wound/Non-Wound

  • Mask R-CNN with the ResNet50 backbone was used for image semantic segmentation

  • A 3D mesh was used to generate the wound top view and a face index matrix

  • Matching blocks were used for wound pose correction

  • Face indices and wounds with corrected poses were used to measure wound parameters

210
  • F1: 0.83

Chang et al. (2021)
[96]
Burn wound
  • Superficial/Deep Partial/Full Thickness

  • Mask R-CNN based on ResNet101 was used for semantic segmentation

2591
  • Accuracy: 0.913

  • F1: 0.9496

Chauhan et al. (2021)
[64]
Burn wound
  • Wound/Non-Wound

  • Augmentation was applied

  • ResNet101 was used for feature extraction

  • A custom encoder created two new feature vectors using different stages of ResNet101 feature maps

  • A decoder with two convolutional layers, upsampling, and translation returned the final predictions.

449
  • Accuracy: 0.9336

  • F1: 0.8142

  • MCC: 0.7757
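
A hedged sketch of pulling feature maps from two stages of a ResNet101 encoder, in the spirit of the custom encoder in Chauhan et al. (2021); the chosen stages ("layer2", "layer4") are assumptions, and the two-convolution decoder is not shown.

```python
import torch
from torchvision.models import resnet101
from torchvision.models.feature_extraction import create_feature_extractor

backbone = resnet101(weights=None)
encoder = create_feature_extractor(backbone, return_nodes={"layer2": "mid", "layer4": "deep"})

with torch.no_grad():
    feats = encoder(torch.rand(1, 3, 448, 448))
    mid, deep = feats["mid"], feats["deep"]     # (1, 512, 56, 56) and (1, 2048, 14, 14)
```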

Dai et al. (2021)
[68]
Burn wound
  • Wound/Non-Wound

  • StyleGAN was used to generate synthetic wound images

  • CASC was used to fuse synthetic wound images with human skin textures

  • Burn-CNN was used for semantic segmentation

1150
  • F1: 0.893

Liu et al. (2021)
[31]
Burn wound
  • Superficial/Superficial Partial Thickness/Deep Partial Thickness/Full Thickness/Undebrided Burn/Background

  • Augmentation was applied

  • HRNetV2 was used as an encoder, and one convolutional layer was used as a decoder

1200
  • F1: 0.917

Pabitha et al. (2021)
[56]
Burn wound
  • Superficial Dermal/Deep Dermal/Full-Thickness

  • Image noise removal

  • Customized Mask R-CNN based on the ResNet backbone was used for semantic segmentation

1800
Segmentation:
  • Accuracy: 0.8663

  • F1: 0.869

Classification:
  • Accuracy: 0.8663

  • F1: 0.8594

Sarp et al. (2021)
[49]
Wound
  • Wound Border/Granulation/Slough/Necrotic

  • A cGAN was used for semantic segmentation (a UNet in the generator was used for resolution enhancement)

13,000
  • F1: 0.93

Cao et al. (2022)
[18]
Diabetic foot ulcer
  • Wagner Grades 0–5

  • Mask R-CNN based on ResNet101 was used for semantic segmentation

1426
  • Accuracy: 0.9842

  • F1: 0.7696

  • mAP: 0.857

Chang et al. (2022)
[60]
Pressure ulcer
  • Wound/Re-Epithelization

  • Granulation/Slough/Eschar

  • SLIC was used for tissue labeling

  • Augmentation was applied

  • DeepLabV3 based on ResNet101 was used for both segmentation tasks

Wound and Re-Epithelization: 755 (2893)
Tissue: 755 (2836)
Wound and Re-Epithelization:
  • Accuracy: 0.9925

  • F1: 0.9887

Tissue:
  • Accuracy: 0.9957

  • F1: 0.9915

Chang et al. (2022)
[50]
Burn wound
  • Wound/Non-Wound/Deep Wound

  • Augmentation was applied

  • The DeepLabV3+ algorithm with the ResNet101 backbone was used

4991
  • Accuracy: 0.9888

  • F1: 0.9018

Lien et al. (2022)
[58]
Diabetic foot ulcer
  • Granulation Tissue/Non-Granulation Tissue/Non-Wound

  • Generation of 32 × 32 image patches

  • RGB and gray-channel 32 × 32 patches were classified with ResNet18

  • Classified image patches were used for wound quantification

219
  • Precision: 0.91

Ramachandram et al. (2022)
[48]
Wound
  • Epithelial/Granulation/Slough/Necrotic

  • UNet based on a non-conventional backbone was used for wound area segmentation

  • Wound areas were cropped and resized

  • An encoder–decoder with the EfficientNetB0 encoder and a simplified decoder was used for wound semantic segmentation

Wound: 465,187
Tissue: 17,000
Wound:
  • F1: 0.61

Tissue:
  • F1: 0.61
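
A sketch of the crop-and-resize handoff between the two stages described for Ramachandram et al. (2022): take the bounding box of the stage-one wound mask, pad it by a margin, and resize the crop for the tissue segmenter. The margin and output size are assumptions.

```python
import numpy as np
import cv2

def crop_to_mask(rgb_image: np.ndarray, wound_mask: np.ndarray,
                 out_size=(256, 256), margin: int = 16) -> np.ndarray:
    ys, xs = np.nonzero(wound_mask > 0)
    if ys.size == 0:
        return cv2.resize(rgb_image, out_size)          # no wound found: use full image
    y0 = max(int(ys.min()) - margin, 0)
    x0 = max(int(xs.min()) - margin, 0)
    y1 = min(int(ys.max()) + margin + 1, rgb_image.shape[0])
    x1 = min(int(xs.max()) + margin + 1, rgb_image.shape[1])
    return cv2.resize(rgb_image[y0:y1, x0:x1], out_size)
```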

Scebba et al. (2022)
[27]
Wound
  • Wound/Non-Wound

  • Wound areas were detected by using MobileNet CNN.

  • UNet was used for binary segmentation of detected areas.

1330
  • MCC: 0.85
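
Several entries in this table (e.g., Chino et al. (2020) and Scebba et al. (2022)) rely on a UNet for binary wound segmentation. Below is a compact UNet-style encoder-decoder sketch for that stage; it is an illustrative architecture, not the network used in either study, and the MobileNet detection stage is omitted.

```python
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Two-level UNet-style network producing per-pixel wound logits."""
    def __init__(self, num_classes: int = 1):
        super().__init__()
        self.enc1 = conv_block(3, 32)
        self.enc2 = conv_block(32, 64)
        self.bottleneck = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                       # full resolution
        e2 = self.enc2(self.pool(e1))           # 1/2 resolution
        b = self.bottleneck(self.pool(e2))      # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# model = TinyUNet(); logits = model(torch.rand(1, 3, 256, 256))  # -> (1, 1, 256, 256)
```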