Author manuscript; available in PMC: 2021 Feb 1.
Published in final edited form as: Circ Arrhythm Electrophysiol. 2020 Jul 6;13(8):e007952. doi: 10.1161/CIRCEP.119.007952

Figure 2. Architectures of an artificial neural network vs deep learning in ECG interpretation.


A, An example of an artificial neural network used to predict whether a patient will experience cardiovascular mortality, using 4 clinical features and 132 resting ECG features (intervals and amplitudes of various ECG segments). Together these form a 1×136 feature vector that is input to the network as neurons (x1, x2, x3, …, x136). The input neurons connect to a single fully connected hidden layer of 70 neurons (h1, h2, h3, …, h70), which in turn connects to the output node (y), yielding a prediction score for cardiovascular mortality. The black lines between nodes represent weights, which are iteratively adjusted during training to minimize output prediction error.

B, An example of a deep learning convolutional neural network, based on the network used to predict whether a patient has left ventricular dysfunction from the waveforms of a 10-s 12-lead ECG. The input is the entire 12-lead ECG signal, formatted as a 12×1024 sample matrix. The network first learns temporal features within each lead, extracting feature maps via 6 iterations of 1-dimensional convolution along the temporal axis, each followed by 1-dimensional pooling. Next, it learns how the temporal features are distributed across the leads via spatial feature learning, convolving across the 12 ECG leads. The resulting feature maps are flattened and passed to 2 fully connected layers (ha,1, ha,2, ha,3, …, ha,64) and (hb,1, hb,2, hb,3, …, hb,32), which use the learned temporal and spatial features to classify whether the patient has left ventricular dysfunction, as predicted in output node y.
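To make the two architectures concrete, the following is a minimal NumPy sketch: a forward pass through the Part A network (136 inputs, one 70-neuron hidden layer, one output node), plus the shape arithmetic implied by Part B. The layer sizes come from the figure; the activation functions (ReLU, sigmoid), the random stand-in weights, and the pooling factor of 2 are illustrative assumptions, since the figure does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Part A: shallow artificial neural network (136 -> 70 -> 1) ---
# Random stand-in weights; in practice these are learned by iteratively
# adjusting them to minimize output prediction error, as the caption notes.
W1 = rng.normal(scale=0.1, size=(136, 70))  # input features -> hidden layer h1..h70
b1 = np.zeros(70)
W2 = rng.normal(scale=0.1, size=(70, 1))    # hidden layer -> output node y
b2 = np.zeros(1)

def predict_mortality(x):
    """Forward pass: 1x136 feature vector -> prediction score in (0, 1)."""
    h = np.maximum(x @ W1 + b1, 0.0)             # ReLU hidden layer (assumed)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output (assumed)

x = rng.normal(size=(1, 136))  # 4 clinical + 132 resting ECG features
score = predict_mortality(x)
print(score.shape)  # (1, 1): one prediction score per patient

# --- Part B: shape arithmetic for the convolutional network ---
# Assuming each of the 6 pooling steps halves the temporal axis (pool size 2),
# the 1024 samples per lead shrink to 1024 / 2**6 = 16 before the spatial
# convolution across the 12 leads and the fully connected layers (64 -> 32 -> 1).
time_samples = 1024
for _ in range(6):
    time_samples //= 2  # 1-D pooling along the temporal axis
print(time_samples)  # 16
```

The shallow network in Part A consumes hand-engineered ECG measurements, whereas the deep network in Part B learns its own temporal and spatial features directly from the raw waveform matrix; the sketch above only traces the resulting tensor shapes, not the learned feature extraction itself.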