García-Zapirain et al. (2018) [26] |
Pressure ulcer
|
A CNN with two parallel branches with four convolutional layers was used to extract ROIs from HSI color space and Gaussian-smoothed images
Morphological filters were applied
ROIs were further processed by a CNN with four branches and eight convolutional layers; a prior model and LCDG modalities made up the four input modalities
|
193 (193) |
|
Li et al. (2018) [22] |
Wound
|
Manual skin proportion evaluation
YCbCr color space; the Cr channel was used to segment skin regions
Images were normalized
Augmentation was applied
The 13 layers taken from MobileNet were used for skin region segmentation
Semantic correction was applied
Ambiguous background removal
|
950 (57,000) |
|
Zahia et al. (2018) [36] |
Pressure ulcer
|
Flashlight reflection removal
Image cropping to 5 × 5 patches
Patch classification was performed with a CNN with three convolutional layers
Image quantification was performed by using classified patches
|
Granulation: 22 (270,762) Necrotic: 22 (37,146) Slough: 22 (80,636) |
Granulation:
Accuracy: 0.9201
F1: 0.9731
Necrotic:
Accuracy: 0.9201
F1: 0.9659
Slough:
Accuracy: 0.9201
F1: 0.779
|
Jiao et al. (2019) [93] |
Burn wound
|
|
1150 |
|
Khalil et al. (2019) [16] |
Wound
|
Image preprocessing was performed with CLAHE
Statistical methods were used for color and texture feature generation
NMF was used for feature map reduction
GBT was used for pixel semantic segmentation
|
377 |
Accuracy: 0.96
F1: 0.9225
|
Li et al. (2019) [23] |
Wound
|
Pixel locations were encoded
Images were concatenated with location maps
A MobileNet backbone was used for feature extraction
A depth-wise convolutional layer was used to eliminate tiny wounds and holes
|
950 |
|
Rajathi et al. (2019) [57] |
Varicose ulcer
|
Flashlight reflection removal
Active contours were used for image segmentation
Wound segments were cropped into 5 × 5 patches
A CNN with two convolutional layers was used for patch classification
Image quantification was performed by using classified patches
|
1250 |
|
Şevik et al. (2019) [94] |
Burn wound
|
|
105 |
|
Blanco et al. (2020) [59] |
Dermatological ulcer
|
Augmentation was applied
Images were segmented by using super-pixels
Super-pixels were classified by using the ResNet architecture
Classified super-pixels were used for wound quantification
|
217 (179,572) |
|
Chino et al. (2020) [66] |
Wound
|
Augmentation was applied
UNet was used for semantic segmentation (backbone not specified)
Manual input for pixel density estimation
Wound measurements were returned
|
446 (1784) |
|
Muñoz et al. (2020) [95] |
Diabetic foot ulcer
|
|
520 |
|
Wagh et al. (2020) [55] |
Wound
|
Probabilistic image augmentation was applied
DeepLabV3 based on ResNet101 was used for segmentation
CRF was applied to improve segmentation accuracy
|
1442 |
|
Wang et al. (2020) [32] |
Foot ulcer
|
Cropping and zero padding
Augmentation was applied
The MobileNetV2 architecture with the VGG16 backbone was used to perform semantic segmentation
Connected component labeling was applied to close small holes
|
1109 (4050) |
|
Zahia et al. (2020) [28] |
Pressure ulcer
|
Mask R-CNN with the ResNet50 backbone was used for image semantic segmentation
A 3D mesh was used to generate the wound top view and a face index matrix
Matching blocks were used for wound pose correction
Face indices and wounds with corrected poses were used to measure wound parameters
|
210 |
|
Chang et al. (2021) [96] |
Burn wound
|
|
2591 |
Accuracy: 0.913
F1: 0.9496
|
Chauhan et al. (2021) [64] |
Burn wound
|
Augmentation was applied
ResNet101 was used for feature extraction
A custom encoder created two new feature vectors using different stages of ResNet101 feature maps
A decoder with two convolutional layers, upsampling, and translation returned the final predictions
|
449 |
Accuracy: 0.9336
F1: 0.8142
MCC: 0.7757
|
Dai et al. (2021) [68] |
Burn wound
|
StyleGAN was used to generate synthetic wound images
CASC was used to fuse synthetic wound images with human skin textures
Burn-CNN was used for semantic segmentation
|
1150 |
|
Liu et al. (2021) [31] |
Burn wound
|
|
1200 |
|
Pabitha et al. (2021) [56] |
Burn wound
|
|
1800 |
Segmentation:
Accuracy: 0.8663
F1: 0.869
Classification:
Accuracy: 0.8663
F1: 0.8594
|
Sarp et al. (2021) [49] |
Wound
|
|
13,000 |
|
Cao et al. (2022) [18] |
Diabetic foot ulcer
|
|
1426 |
Accuracy: 0.9842
F1: 0.7696
mAP: 0.857
|
Chang et al. (2022) [60] |
Pressure ulcer
|
|
Wound and reepithelization: 755 (2893) Tissue: 755 (2836) |
Wound and reepithelization:
Accuracy: 0.9925
F1: 0.9887
Tissue:
Accuracy: 0.9957
F1: 0.9915
|
Chang et al. (2022) [50] |
Burn wound
|
|
4991 |
Accuracy: 0.9888
F1: 0.9018
|
Lien et al. (2022) [58] |
Diabetic foot ulcer
|
Generation of 32 × 32 image patches
RGB and gray-channel 32 × 32 patches were classified with ResNet18
Classified image patches were used for wound quantification
|
219 |
|
Ramachandram et al. (2022) [48] |
Wound
|
UNet based on a non-conventional backbone was used for wound area segmentation
Wound areas were cropped and resized
An encoder–decoder with the EfficientNetB0 encoder and a simplified decoder was used for wound semantic segmentation
|
Wound: 465,187 Tissue: 17,000 |
|
Scebba et al. (2022) [27] |
Wound
|
|
1330 |
|
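Several of the patch-based pipelines in the table (e.g., Zahia et al. [36], Rajathi et al. [57], Lien et al. [58]) share the same skeleton: crop the image into small fixed-size patches, classify each patch with a CNN, and quantify the wound from the classified patches. The following is a minimal sketch of the cropping and quantification steps only, with the CNN classifier abstracted away; the patch size and the label convention (`wound_label=1`) are illustrative assumptions, not values taken from any of the cited papers.

```python
import numpy as np

def extract_patches(img, size=5):
    """Split an H x W x C image into non-overlapping size x size patches.

    Returns the stacked patches and the top-left (y, x) coordinate of each.
    """
    h, w = img.shape[:2]
    patches, coords = [], []
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            patches.append(img[y:y + size, x:x + size])
            coords.append((y, x))
    return np.stack(patches), coords

def quantify(patch_labels, coords, shape, size=5, wound_label=1):
    """Rebuild a coarse label map from per-patch predictions and return
    the fraction of the covered area classified as wound.

    patch_labels would come from the CNN classifier, which is omitted here.
    """
    label_map = np.zeros(shape, dtype=np.uint8)
    for lbl, (y, x) in zip(patch_labels, coords):
        label_map[y:y + size, x:x + size] = lbl
    wound_fraction = float(np.mean(np.asarray(patch_labels) == wound_label))
    return label_map, wound_fraction
```

In the cited pipelines the per-patch classifier is a small CNN (two to three convolutional layers in [36] and [57], ResNet18 in [58]); the quantification step then converts the reassembled label map into wound measurements.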