
Figure 6.

Comparison of architecture and of model- and individual patient-level explainability for the novel inherently explainable approach versus post hoc heatmap-based explainability in the detection of reduced ejection fraction. The conventional ‘black box’ deep neural network contains only a single encoder to interpret the electrocardiogram. Afterwards, Guided Grad-CAM is applied to show which segments of the electrocardiogram were important for the prediction at the patient level; model-level explainability is not possible. The novel explainable pipeline adds a generative part to the architecture, which allows for precise visualization of morphological electrocardiogram features. Combining factor Shapley Additive exPlanations importance scores with factor traversals yields model-level explainability, while individual patient-level explainability is achieved using individual Shapley Additive exPlanations importance scores.
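
To make the contrast concrete, the sketch below outlines, in PyTorch, the kind of encoder–decoder–classifier pipeline the caption describes: an encoder compresses the electrocardiogram into a small set of latent factors, a generative decoder reconstructs the signal from those factors (enabling factor traversals), and a classifier predicts reduced ejection fraction from the factors so that Shapley Additive exPlanations importance scores can be attributed to them. All layer sizes, the number of factors, and every name here are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

N_SAMPLES = 600   # assumed ECG segment length (illustrative)
N_FACTORS = 8     # assumed number of explainable latent factors (illustrative)


class ExplainableECGPipeline(nn.Module):
    """Hypothetical encoder + generative decoder + factor-level classifier."""

    def __init__(self, n_samples: int = N_SAMPLES, n_factors: int = N_FACTORS):
        super().__init__()
        # Encoder: ECG -> latent factors
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * (n_samples // 4), n_factors),
        )
        # Generative decoder: latent factors -> reconstructed ECG
        # (the "generative part" that makes factor traversals possible)
        self.decoder = nn.Sequential(
            nn.Linear(n_factors, 32 * (n_samples // 4)), nn.ReLU(),
            nn.Unflatten(1, (32, n_samples // 4)),
            nn.ConvTranspose1d(32, 16, kernel_size=8, stride=2, padding=3), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=8, stride=2, padding=3),
        )
        # Classifier on the factors: probability of reduced ejection fraction.
        # SHAP importance scores would be computed over these factor inputs.
        self.classifier = nn.Sequential(nn.Linear(n_factors, 1), nn.Sigmoid())

    def forward(self, ecg: torch.Tensor):
        factors = self.encoder(ecg)             # (batch, n_factors)
        reconstruction = self.decoder(factors)  # (batch, 1, n_samples)
        prediction = self.classifier(factors)   # (batch, 1)
        return factors, reconstruction, prediction


def factor_traversal(model: ExplainableECGPipeline,
                     factors: torch.Tensor,
                     idx: int,
                     values: torch.Tensor) -> torch.Tensor:
    """Vary one latent factor over a range of values and decode each setting,
    producing the morphological ECG visualizations used for model-level
    explanation."""
    traces = []
    for v in values:
        z = factors.clone()
        z[:, idx] = v
        traces.append(model.decoder(z))
    return torch.stack(traces)


if __name__ == "__main__":
    model = ExplainableECGPipeline()
    ecg = torch.randn(4, 1, N_SAMPLES)  # dummy batch of single-lead ECG segments
    factors, recon, pred = model(ecg)
    traces = factor_traversal(model, factors, idx=0,
                              values=torch.linspace(-3, 3, 5))
    print(factors.shape, recon.shape, pred.shape, traces.shape)
```

In this sketch, patient-level explanation would come from SHAP values of the classifier with respect to that patient's factors, and model-level explanation from combining average factor importances with the decoded traversal traces; the specific factor count and network depth are placeholders.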