TABLE 2:
Quantitative comparison of automated methods for echo quality assessment. A2C (apical two-chamber view), A3C (apical three-chamber view), A4C (apical four-chamber view), PLAX (parasternal long-axis view), PLAXA (parasternal long-axis view, aortic valve level), PLAXPM (parasternal long-axis view, papillary muscle level), ROI (region of interest), GHT (generalized Hough transform), CNN (convolutional neural network), TPR (true positive rate), CC (correlation coefficient), MAE (mean absolute error).
Work | ROI Method | Mode & View | Method | System & Data | Train & Test | Ground Truth | Performance |
---|---|---|---|---|---|---|---|
[27] | NA | B-mode: A4C | Model-based: B-splines model the four chambers; quality scored by goodness of fit | GE Vivid E9 system; 95 videos | Train: 4 patients, 35 cases; Test: 2 patients, 60 cases | Scores by 2 cardiologists: good, fair, and poor | TPR (Section 3.1): good quality 22%, fair quality 20%, poor quality 15% |
[28] | NA | B-mode: PLAX | Model-based: GHT applied to the input image and compared with an atlas built from manually segmented images | GE Vivid 7 system; 133 images from 35 patients | Train: 89 images to build the PLAX atlas; Test: 44 images | Scores by expert sonographer: good (score 3) to poor (score 0) | CC (Section 3): 0.84 between manual and automated scores |
[29] | NA | B-mode: A4C | Deep learning: customized regression CNN | System: NA; 2,904 A4C images | Train: 80% (2,345 images); Test: 20% (560 images) | Scores by expert cardiologist: good and poor | MAE: 0.87 ± 0.72 |
[30] | NA | B-mode: A2C, A3C, A4C, PLAXA, PLAXPM | Deep learning: customized regression CNN | Various GE and Philips systems; 2,450 cines: A2C (478), A3C (455), A4C (575), PLAXA (480), PLAXPM (462) | Train: 80%, 935 videos per view (4,675 total); Test: 20%, 228 videos per view (1,144 total); 20 frames per video | Scores by physicians: A2C (0–8), A3C (0–7), A4C (0–10), PLAXA (0–4), PLAXPM (0–5); scores normalized | Accuracy of automated vs. manual scores per view (%): A2C (86±9), A3C (89±9), A4C (83±14), PLAXA (84±12), PLAXPM (83±13) |
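
To make the regression-CNN approach of [29] and [30] concrete, the sketch below shows a minimal frame-level quality regressor trained with the L1 (MAE) objective reported in [29]. This is not the published network: the layer configuration, 128×128 input size, and sigmoid-normalized score target are illustrative assumptions.

```python
# A minimal sketch (not the authors' published networks) of a regression
# CNN for frame-level echo quality scoring, as in [29] and [30].
# Layer sizes, the 128x128 input, and the sigmoid-normalized score
# target are illustrative assumptions.
import torch
import torch.nn as nn

class QualityRegressionCNN(nn.Module):
    """Maps a single-channel echo frame to a quality score in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # 64 -> 32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                  # global average pooling
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 1),
            nn.Sigmoid(),                             # normalized quality score
        )

    def forward(self, x):
        return self.regressor(self.features(x))

# One training step with an L1 objective, matching the MAE metric in [29].
model = QualityRegressionCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()                      # mean absolute error

frames = torch.randn(8, 1, 128, 128)       # dummy batch of echo frames
scores = torch.rand(8, 1)                  # expert scores normalized to [0, 1]

optimizer.zero_grad()
loss = loss_fn(model(frames), scores)
loss.backward()
optimizer.step()
```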
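
The agreement metrics in the Performance column can likewise be computed in a few lines. The sketch below evaluates the Pearson correlation coefficient (CC, as reported in [28]) and the MAE (as reported in [29]) between manual and automated scores; the score values are hypothetical.

```python
# A minimal sketch of the two agreement metrics reported in Table 2:
# Pearson correlation coefficient (CC) and mean absolute error (MAE).
# The manual and automated scores below are hypothetical.
import numpy as np

manual = np.array([3, 0, 2, 3, 1, 2], dtype=float)  # expert quality scores
auto = np.array([2.6, 0.4, 2.1, 2.8, 1.3, 1.7])     # automated scores

cc = np.corrcoef(manual, auto)[0, 1]     # Pearson correlation coefficient
mae = np.abs(manual - auto).mean()       # mean absolute error

print(f"CC = {cc:.2f}, MAE = {mae:.2f}")
```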