J Clin Med. 2021 Mar 30;10(7):1391. doi: 10.3390/jcm10071391

Table 2.

Deep-learning-based AI studies for view identification and quality assessment. MAE: mean absolute error.

| Study | Task | DL Model | Data/Validation | Performance |
|---|---|---|---|---|
| Zhang et al. [16,17] | Classification of 23 standard echo views | Customized 13-layer CNN | 5-fold cross validation; 7168 cine clips from 277 studies | Overall accuracy: 84% at the individual image level |
| Madani et al. [19] | Classification of 15 standard echo views | VGG [26] | Training: 180,294 images from 213 studies; Testing: 21,747 images from 27 studies | Overall accuracy: 97.8% at the individual image level and 91.7% at the cine-clip level |
| Akkus et al. [20] | Classification of 24 Doppler image classes | Inception-ResNet [27] | Training: 5544 images from 140 studies; Testing: 1737 images from 40 studies | Overall accuracy: 97% |
| Abdi et al. [21,22] | Quality rating of apical 4-chamber views (scores 0–5) | Customized fully connected CNN | 3-fold cross validation; 6196 images | MAE: 0.71 ± 0.58 |
| Abdi et al. [23] | Quality assessment of five standard view planes | CNN regression architecture | Total dataset: 2435 cine clips; Training: 80%; Testing: 20% | Average accuracy: 85% |
| Dong et al. [24] | Quality control of fetal ultrasound cardiac four-chamber planes | Ensemble of three CNN models | 5-fold cross validation; 7032 images | Mean average precision: 93.52% |
| Labs et al. [25] | Quality assessment of apical 4-chamber views | Hybrid model with CNN and LSTM layers | Training/validation/testing split (60/20/20%) of 1039 images in total | Average accuracy: 86% on the test set |
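As a point of reference for the MAE figure reported by Abdi et al. [21,22], the metric is simply the average absolute difference between predicted and ground-truth quality scores. The snippet below is an illustrative sketch, not code from any of the cited studies; the example score values are hypothetical.

```python
# Mean absolute error (MAE): average of |prediction - ground truth|
# over all samples. Illustrative only; values are hypothetical.
def mean_absolute_error(y_true, y_pred):
    """Return the mean of absolute per-sample errors."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical expert quality scores vs. model predictions on a 0-5 scale.
expert = [4.0, 3.0, 5.0, 2.0]
model  = [3.5, 3.0, 4.0, 2.5]
print(mean_absolute_error(expert, model))  # 0.5
```

On a 0–5 quality scale, an MAE of 0.71 (as in the Abdi et al. row above) means the model's predicted score is, on average, about 0.7 points away from the expert rating.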