Journal of the American Heart Association: Cardiovascular and Cerebrovascular Disease. 2023 Mar 21;12(8):e026974. doi: 10.1161/JAHA.122.026974

Convolution Neural Network Algorithm for Shockable Arrhythmia Classification Within a Digitally Connected Automated External Defibrillator

Christine P Shen 1, Benjamin C Freed 2, David P Walter 3, James C Perry 4, Amr F Barakat 5, Ahmad Ramy A Elashery 6, Kevin S Shah 7, Shelby Kutty 8, Michael McGillion 9, Fu Siong Ng 10, Rola Khedraki 1, Keshav R Nayak 11, John D Rogers 1, Sanjeev P Bhavnani 1,
PMCID: PMC10227259  PMID: 36942628

Abstract

Background

Diagnosis of shockable rhythms leading to defibrillation remains integral to improving out‐of‐hospital cardiac arrest outcomes. New machine learning techniques have emerged to diagnose arrhythmias on ECGs. In out‐of‐hospital cardiac arrest, an algorithm within an automated external defibrillator is the major determinant to deliver defibrillation. This study developed and validated the performance of a convolution neural network (CNN) to diagnose shockable arrhythmias within a novel, miniaturized automated external defibrillator.

Methods and Results

There were 26 464 single‐lead ECGs that comprised the study data set. ECGs of 7‐s duration were retrospectively adjudicated by 3 physician readers (N=18 total readers). After exclusions (N=1582), ECGs were divided into training (N=23 156), validation (N=721), and test data sets (N=1005). CNN performance in diagnosing shockable and nonshockable rhythms was reported with area under the receiver operating characteristic curve analysis, F1, and sensitivity and specificity calculations. The time for the CNN to generate an output was measured with the algorithm running within the automated external defibrillator. Internal and external validation analyses included CNN performance among arrhythmias often mistaken for shockable rhythms and among ECGs modified with noise to mimic artifacts. The CNN algorithm achieved an area under the receiver operating characteristic curve of 0.995 (95% CI, 0.990–1.0), sensitivity of 98%, and specificity of 100% to diagnose shockable rhythms. The F1 scores were 0.990 and 0.995 for shockable and nonshockable rhythms, respectively. After input of a 7‐s ECG, the CNN generated an output in 383±29 ms (total time of 7.383 s). The CNN outperformed adjudicators in classifying atrial arrhythmias as nonshockable (specificity of 99.3% for atrial fibrillation/flutter and 98.1% for supraventricular tachycardia) and was robust against noise artifacts (area under the receiver operating characteristic curve range, 0.871–0.999).

Conclusions

We demonstrate high diagnostic performance of a CNN algorithm for shockable and nonshockable rhythm arrhythmia classifications within a digitally connected automated external defibrillator.

Registration

URL: https://clinicaltrials.gov/ct2/show/NCT03662802; Unique identifier: NCT03662802

Keywords: automated external defibrillator, convolution neural network, ECG, machine learning, ventricular arrhythmias

Subject Categories: Sudden Cardiac Death, Cardiopulmonary Arrest, Cardiopulmonary Resuscitation and Emergency Cardiac Care


Nonstandard Abbreviations and Acronyms

AFL: atrial flutter
CNN: convolution neural network
ML: machine learning
OHCA: out‐of‐hospital cardiac arrest
SVT: supraventricular tachycardia

Clinical Perspective.

What Is New?

  • This machine learning algorithm tested within an automated external defibrillator demonstrated high performance in classifying a shockable rhythm resulting from ventricular tachycardia/ventricular fibrillation.

  • The algorithm was designed to function within a constrained time frame and within the hardware of a miniaturized automated external defibrillator.

What Are the Clinical Implications?

  • The algorithm will be able to perform with high diagnostic accuracy in distinguishing shockable rhythms, even in the presence of noise artifacts during cardiac arrest.

  • By using a convolution neural network algorithm, this automated external defibrillator enters the challenging field of improving bystander defibrillation, detecting a shockable rhythm, and delivering defibrillation with the aim of improving outcomes in out‐of‐hospital cardiac arrest.

Despite advances in out‐of‐hospital cardiac arrest (OHCA), mortality remains high, with a rate of survival <10% and even poorer outcomes in lower socioeconomic communities. 1 , 2 Although public health initiatives over the past decade have resulted in greater rates of bystander cardiopulmonary resuscitation (CPR), rates of defibrillation have largely remained unchanged. 3 Successful treatment of OHCA emphasizes a chain of survival, starting with early access to treatment, activation of emergency medical response systems (EMS), and immediate basic life support by laypeople. These upstream interventions are termed prearrival care, beginning with bystander CPR and defibrillation before the arrival of EMS. 4 Data from the Cardiac Arrest Registry to Enhance Survival 5 and the Public Access Defibrillation Trial 6 demonstrated that bystander CPR and automated external defibrillator (AED) use have significant and direct impacts on survival. Currently, bystander AEDs are applied to 4% of patients with OHCA. 2 Predictive modeling suggests that if a greater proportion of OHCAs had AED use, survival would increase from 9% to 29%. 7 However, significant performance heterogeneity exists between commercial AEDs, particularly in differentiating ventricular tachycardia (VT) from supraventricular tachycardia (SVT), and operator‐related circumstances may preclude successful defibrillation. 8 , 9 Recent technologic advances, including geo‐location activation through smartphone apps that digitally coordinate bystander‐enacted CPR, 10 , 11 and those that instruct CPR through EMS personnel available virtually through telephone or video communication, are innovative approaches to provide the right person the right information at the right time. 12 Such developments aim to augment prearrival care and to improve downstream survival in OHCA.

Machine learning (ML) and new computational approaches are emerging as important tools to accurately analyze vast amounts of digitally collected health data. 13 ML methods, including convolution neural networks 14 (CNN) and deep learning, 15 have been developed to process high‐fidelity data sets, such as ECGs, and to transform waveform data (P/QRS/T waves) into extractable features used to train algorithms to diagnose arrhythmias. 15 , 16 Neural networks trained with ECG data have recently been used to detect asymptomatic cardiovascular diseases such as left ventricular dysfunction, silent atrial fibrillation, and hypertrophic cardiomyopathy. 17 , 18 , 19 In general, neural networks sample, filter, and map features from input data to predict a binary classification of a disease state (ie, whether a disease is present or not), and they use raw, time‐series ECG data as inputs in iterative learning to produce an output that is predictive and clinically relevant.

Many CNN algorithms are trained and validated on optimal data sets of 12‐lead ECGs and often exclude ECGs with baseline waveform variations caused by noise or with disagreement on expert adjudication of a specific arrhythmia. In the context of OHCA, the function of a CNN algorithm is different. It must accept single‐lead ECGs of short duration, including ECGs with baseline noise artifacts, ECGs with variable adjudication results, and ECGs in which the binary classification of whether to shock is unclear. 20 , 21 , 22 Although it is important for any new CNN algorithm to achieve high diagnostic accuracy, it must also produce a decision in a rapid time frame if it is to impact survival. This combination of ECG inputs, accuracy, and rapid determination imposes significant design constraints on the architecture of the CNN as well as on the AED hardware in which the CNN software is embedded. In the aggregate, understanding CNN behavior and algorithmic performance is essential, given that the CNN decides whether to shock.

Considering the aforementioned design factors, in the present investigation we developed a CNN algorithm to diagnose shockable rhythms and tested its accuracy within a miniaturized, internet‐connected AED.

METHODS

See Data S1 for Supplemental Methods.

Data Availability

Pending an internal scientific committee review, data access will be considered on a per request basis and upon execution of an appropriate data use agreement. The test data set rhythms sourced from publicly available institutions will be made available via GitHub upon such request.

Trial Design

In this retrospective study, we constructed a large ECG data set that underwent expert physician annotation for a broad range of cardiac rhythms, and we developed a CNN for use in a new AED (Avive, San Francisco, CA; Figure S1). This AED received Food and Drug Administration premarket approval (P210015) 23 and has previously demonstrated high defibrillation efficacy in an animal model. 24 The present study conformed to the Standards for Reporting Diagnostic Accuracy Studies reporting guidelines for prediction models in health care, received institutional review board exemption (Advarra institutional review board number 00032604), and was registered on clinicaltrials.gov (NCT number 03662802), with registration completed before algorithm development. Consent was not required, because the study database was constructed from deidentified data.

Primary Outcome

The primary outcome was to develop a ML CNN algorithm to diagnose a shockable rhythm caused by VT/ventricular fibrillation (VF) on a single‐lead ECG.

Secondary Outcomes

The secondary outcomes were to:

  1. Demonstrate the performance of the CNN algorithm according to the American Heart Association (AHA) criteria for classification of shockable and nonshockable rhythms. 25

  2. Determine the time duration for CNN output within the AED it is designed for.

  3. Demonstrate the CNN's accuracy on nonshockable rhythms commonly mistaken for shockable rhythms, specifically atrial fibrillation/atrial flutter (AF/AFL) and SVT.

  4. Perform an internal validation analysis of CNN accuracy on conflict ECGs that adjudicators disagreed upon.

  5. Evaluate CNN behavior in false negatives, where adjudicators agreed on a shockable rhythm but the CNN classified the rhythm as nonshockable.

  6. Perform an external validation analysis of CNN performance on ECGs modified with varying levels of noise to mimic artifacts during OHCA.

  7. Provide an explanation of CNN behavior with heatmap data visualizations in shockable and nonshockable ECGs.

  8. Determine the generalizability of the CNN algorithm with a comparison of performance between the CNN and physician readers.

Source of the Data

Study ECGs

The study database contained an aggregation of single‐lead ECGs from adult and pediatric patients from 13 data sources spanning publicly available ECG databases and real‐world sources, including health care institutions and cardiac telemetry devices (Table S1), to represent a variety of out‐of‐hospital and in‐hospital ECG rhythms detected in patient care. The exact sample size of each cardiac arrhythmia in the test data set was determined a priori, as defined by the AHA benchmark for minimum sample size in each arrhythmia category, and therefore constitutes the prevalence of each cardiac rhythm and of shockable and nonshockable classifications (Table S2). 25 The AHA sample size recommendations are based upon a balance of known disease prevalence and higher quantities of certain rhythms important to demonstrate the safety and effectiveness of an AED device.

ECG Data Acquisition: Extraction and Sampling Procedures

ECG lead II was used as the most representative lead for AED electrode pad placement. Data were sampled at 300 Hz to match the sampling frequency of the AED for which the algorithm is designed.
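The resampling procedure is not described beyond the 300‑Hz target; the sketch below (Python) shows one common approach, with the source sampling rate of 500 Hz and the use of polyphase resampling both being illustrative assumptions rather than details reported in the study.

# Illustrative only: bring a single-lead ECG to the AED's 300-Hz rate.
from math import gcd
import numpy as np
from scipy.signal import resample_poly

def to_300hz(ecg: np.ndarray, source_hz: int) -> np.ndarray:
    """Resample an ECG segment to 300 Hz using polyphase filtering."""
    g = gcd(300, source_hz)
    return resample_poly(ecg, up=300 // g, down=source_hz // g)

ecg_500 = np.random.randn(7 * 500)   # synthetic 7-s lead II strip recorded at 500 Hz
ecg_300 = to_300hz(ecg_500, 500)     # 2100 samples at 300 Hz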

ECG Annotation, Adjudication, and Interreader Agreement

ECG Adjudication

All ECGs were either previously adjudicated or underwent independent adjudication by consensus of a group of 3 physicians (Figure S2). Annotators included board‐certified electrophysiologists, cardiologists, or internal medicine physicians in at least their third year of cardiology fellowship.

The test data set required the highest reliability; therefore, each ECG in the test data set was annotated by 3 board‐certified cardiologists or electrophysiologists who labeled the ECG rhythm. Three board‐certified pediatric electrophysiologists annotated all pediatric ECGs. All ECGs were randomly assigned for review. Each physician reader was blinded to other annotations.

Physician readers selected a single rhythm classification from 16 predefined AHA rhythms (Table S3). 25 Adjudicators were not provided with any preannotations or patient information. No modification to the source ECG data was permitted after upload.

Interreader Agreement

We defined interreader agreement of ECGs as (1) unanimous 3 out of 3 agreements on the type of rhythm or (2) 2 out of 3 agreements on the type of rhythm and 3 out of 3 agreements on shockability (Table S4). If the annotation did not meet this criterion, the ECG was adjudicated as a conflict and excluded from the primary analysis.
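As an illustration of this agreement criterion, the following sketch (Python) encodes the consensus rule; the rhythm label strings and the mapping to shockability are assumptions made for readability, not the study's actual data structures.

# Sketch of the interreader agreement rule described above.
from typing import List, Optional

SHOCKABLE = {"VF", "VT"}  # assumed shockable label subset

def is_shockable(rhythm: str) -> bool:
    return rhythm in SHOCKABLE

def consensus(rhythms: List[str]) -> Optional[str]:
    """Return the consensus rhythm of 3 annotations, or None for a conflict ECG."""
    assert len(rhythms) == 3
    shock_unanimous = len({is_shockable(r) for r in rhythms}) == 1
    for r in set(rhythms):
        votes = rhythms.count(r)
        if votes == 3:                       # 3/3 agreement on rhythm type
            return r
        if votes == 2 and shock_unanimous:   # 2/3 rhythm, 3/3 shockability
            return r
    return None                              # conflict ECG: excluded from primary analysis

consensus(["VF", "VF", "VT"])    # -> "VF" (2/3 rhythm agreement, 3/3 shockable)
consensus(["AF", "SVT", "VT"])   # -> None (conflict)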

Algorithm Development

Training, Validation, and Test Data Sets

ECGs were divided between training, validation, and test data sets (Table S5). The test data set comprised 1005 ECGs. The sample sizes of individual rhythms in the validation and test data sets were proportional to the frequency of rhythms expected to be seen in the field, which, for example, included a higher frequency of VT/VF. No test data set ECGs were included in training or validation; therefore, the test data set is considered blinded. Algorithmic performance was evaluated and reported on this test data set with the ML algorithm and all 1005 ECGs run within the hardware of the AED. Within the deidentified test data set, age and sex were available for 52% of ECGs (552/1055), with an age range of 8 to 96 years, and ECGs from women and men in 52% and 48%, respectively. Metadata of age and sex were not used in the development of the CNN algorithm.

CNN Architecture and Output

We developed a 6‐layer CNN to detect shockable rhythms resulting from VT/VF, taking as input only the single‐lead ECG data without other patient metadata and outputting a shockable or nonshockable determination. The CNN consists of 6 main layers: 5 convolutional layers and 1 fully connected layer. After each of the convolutional layers, a leaky rectified linear activation and dropout were applied. After the final fully connected layer, a SoftMax activation was applied to produce the output (Figure 1, Figures S3 through S5).

Figure 1. Architecture of the convolution neural network.


ReLU indicates rectified linear unit.
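A minimal PyTorch sketch of a comparable 6‑layer 1‑D CNN follows. The filter counts, kernel sizes, strides, and dropout rate are illustrative assumptions; only the layer types and activations are described above.

# Sketch of a 6-layer 1-D CNN (5 convolutional layers + 1 fully connected layer)
# for a 7-s, 300-Hz single-lead ECG (2100 samples). Hyperparameters are assumed.
import torch
import torch.nn as nn

class ShockNet(nn.Module):
    def __init__(self, n_classes: int = 2, dropout: float = 0.2):
        super().__init__()
        channels = [1, 16, 32, 32, 64, 64]              # assumed filter counts
        blocks = []
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            blocks += [
                nn.Conv1d(c_in, c_out, kernel_size=7, stride=2, padding=3),
                nn.LeakyReLU(0.01),                      # leaky rectified linear activation
                nn.Dropout(dropout),
            ]
        self.features = nn.Sequential(*blocks)
        # Five stride-2 convolutions reduce 2100 samples to 66 time steps.
        self.head = nn.Linear(channels[-1] * 66, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x)                             # (batch, 64, 66)
        logits = self.head(z.flatten(start_dim=1))
        return torch.softmax(logits, dim=-1)             # nonshockable vs shockable

ecg = torch.randn(1, 1, 2100)                            # one 7-s lead-II strip
probs = ShockNet()(ecg)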

The network was trained with a random Gaussian initialization of the weights. The learning rate was decreased every epoch using a geometric range, down to 5% of the initial learning rate in the final epoch.
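A sketch of such a geometric (per‑epoch exponential) decay schedule is shown below, reusing the ShockNet sketch above; the optimizer choice, initial learning rate, and epoch count are assumptions.

# The rate is multiplied by a fixed factor each epoch so it reaches 5% of its
# initial value in the final epoch, as described above.
import torch

model = ShockNet()                       # from the architecture sketch above
epochs, lr0 = 50, 1e-3                   # illustrative values
optimizer = torch.optim.Adam(model.parameters(), lr=lr0)
gamma = 0.05 ** (1.0 / (epochs - 1))     # lr0 * gamma**(epochs - 1) = 0.05 * lr0
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=gamma)

for epoch in range(epochs):
    # ... one pass over the training ECGs ...
    scheduler.step()                     # decay once per epoch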

To evaluate the algorithm as it will be used, performance on the test data set was tested within the AED device itself, as opposed to an external computer program. By running the algorithm on the hardware of the AED for which it is designed (a 32‐bit MIPS [Microprocessor without Interlocked Pipelined Stages] core running at 200 MHz with on‐chip 2‐MB flash and 512‐kB random access memory), all embedded system constraints were included in the CNN evaluation of a shockable or nonshockable rhythm.

Statistical Analysis

Algorithm performance was assessed with area under the receiver operating characteristic curve (AUC‐ROC) analyses, F1, and sensitivity and specificity calculations. In addition, precision‐recall curves, as well as positive and negative predictive values, were calculated. CNN performance was evaluated in an external validation noise test, evaluating its ability to classify ECGs in the presence of noise artifacts such as baseline wander, muscle artifact, and electrode motion that occur during OHCA. 26 , 27 Where appropriate, 2‐sided 95% CIs were included (Microsoft Excel and Python). Three types of noise were sourced from the Massachusetts Institute of Technology‐Boston's Beth Israel Hospital Noise Stress Test Database and used for testing: baseline wander (usually caused by motion of the subject or the leads), muscle artifact, and electrode motion artifact (usually caused by intermittent mechanical forces acting on the electrodes). 12 Saliency maps were constructed as a visual model to determine which areas in an ECG influence CNN classification. 28 Saliency analyses commonly use the final convolution layer of the model to generate a map, plotted in 1 dimension, for visualizing the spatial and temporal features driving model output.
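A minimal sketch of the noise‑injection step follows (Python): a noise record is scaled to a target signal‑to‑noise ratio (SNR) and added to a test ECG. The specific SNR values corresponding to low, medium, and high noise are assumptions, as they are not stated in this section.

# Scale a Noise Stress Test record ('bw', 'ma', or 'em') to a target SNR and
# add it to an ECG; both signals are assumed to be at 300 Hz.
import numpy as np

def add_noise(ecg: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Return the ECG corrupted with `noise` at the requested SNR (in dB)."""
    noise = noise[: ecg.shape[0]]
    p_signal = np.mean(ecg ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_signal / (p_noise * 10 ** (snr_db / 10)))
    return ecg + scale * noise

# Example: degrade one test ECG with "high" muscle-artifact noise (assumed 0 dB)
# noisy = add_noise(test_ecg, muscle_artifact, snr_db=0.0)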

RESULTS

See Data S2 for Supplemental Results.

ECG Database and Distribution

There were 26 464 ECGs that comprised the overall ECG database (Figure 2); 1582 (6.0%) ECGs were excluded based on interreader disagreement. The remaining 24 882 ECGs (23 428 adult and 1454 pediatric) formed the study ECG data set and were divided into training (N=23 156 ECGs, cumulative total of 3 895 762.4 seconds of ECG data), validation (N=721 ECGs), and test data sets (N=1005 ECGs).

Figure 2. Overall ECG study database broken down by number of ECGs in the training/validation/test data sets by rhythm type (A), as well as adjudication results by rhythm type (B).


Adjudication agreement represents the percentage that adjudicators selected the same rhythm type (eg, adjudicators both selecting AF is agreement, whereas 1 selecting AF and the other selecting SVT is disagreement). Shockability agreement represents the percentage that adjudicators selected rhythms that were the same shockable/nonshockable classification (eg, 1 adjudicator selecting sinus rhythm and the other selecting sinus bradycardia is agreement, because both are nonshockable, whereas 1 adjudicator selecting sinus rhythm and the other selecting VF is disagreement, because 1 is nonshockable and the other is shockable). AF indicates atrial fibrillation; AV, atrioventricular; PVCs, premature ventricular contractions; SVT, supraventricular tachycardia; and VF, ventricular fibrillation.

Interreader Agreement

Our independent adjudication method included 18 physicians annotating 21 985 ECGs. There were 4479 ECGs that were previously annotated and did not require reannotation. The interreader agreement was calculated for 2 categories, namely adjudicated rhythm type and shockability. Overall, 65 955 adjudications were performed; 16.7% did not match the finalized rhythm adjudication, and 2.4% disagreed on shockability.

Primary Outcome: CNN Accuracy for Classification of Shockable and Nonshockable Rhythms

Diagnostic accuracy of the CNN algorithm for the prediction of shockable ECGs in the blind test data set demonstrated an AUC‐ROC of 0.995 (95% CI, 0.990–1.0; Figure 3A), with a corresponding sensitivity of 98%, specificity of 100%, positive predictive value of 1.0, negative predictive value of 0.989, and an F1 score of 0.990 and 0.995 for shockable and nonshockable rhythms, respectively.

Figure 3. Primary outcome of CNN accuracy for classification of shockable and nonshockable rhythms, including overall receiver operating characteristic curve (A) with AUC, F1, sensitivity, specificity.


Receiver operating characteristic curve with AUC and F1 for the adult (B) and pediatric subsets (C) of the ECG test data set. AUC indicates area under the curve; and CNN, convolution neural network.
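For reference, the summary metrics reported above can be computed as in the following sketch; the labels and probabilities are illustrative, and scikit‑learn is an assumed tool (the study states only that Microsoft Excel and Python were used).

# Compute AUC-ROC, sensitivity, specificity, and per-class F1 for a binary
# shockable (1) vs nonshockable (0) classification.
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score, confusion_matrix

y_true = np.array([1, 0, 0, 1, 0, 1, 0, 0])                 # adjudicated labels
p_shock = np.array([0.97, 0.02, 0.10, 0.88, 0.01, 0.95, 0.03, 0.20])
y_pred = (p_shock >= 0.5).astype(int)                        # assumed 0.5 threshold

auc = roc_auc_score(y_true, p_shock)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
f1_shockable = f1_score(y_true, y_pred, pos_label=1)
f1_nonshockable = f1_score(y_true, y_pred, pos_label=0)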

Secondary Outcomes

Performance of the CNN Algorithm for Adult and Pediatric Rhythms

When the test data sets were stratified into adult (N=932) and pediatric ECGs (N=73), the CNN algorithm demonstrated an AUC‐ROC of 0.999 (95% CI, 0.998–1.0; Figure 3B) and an AUC‐ROC of 0.978 (95% CI, 0.941–1.0; Figure 3C), respectively. The corresponding F1 scores for shockable and nonshockable rhythms were 0.995 and 0.997 for adult ECGs, respectively, and 0.923 and 0.983 for pediatric ECGs, respectively.

Performance of the CNN Algorithm According to the AHA Criteria for the Classification of Shockable and Nonshockable Cardiac Rhythms 25

Stratified by individual AHA rhythm classifications, 25 the CNN algorithm exceeded the minimum diagnostic performance for each rhythm (sensitivity or specificity) and exceeded the recommended minimums for sample size for each rhythm (Table S2).

Time Duration for CNN Output

After input of a 7‐s ECG, the CNN generated an output in 383±29 ms (total time of 7.383 s). By running the algorithm on the actual AED device, all embedded system constraints were included in the CNN performance evaluation.
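As an illustration, inference latency can be measured by timing the forward pass, as in the sketch below; run on a workstation rather than the AED's embedded MIPS processor, the absolute numbers would differ from the 383 ms reported.

# Time a single forward pass of the (illustrative) ShockNet sketch from above.
import time
import torch

model = ShockNet().eval()
ecg = torch.randn(1, 1, 2100)                 # one 7-s, 300-Hz strip
with torch.no_grad():
    t0 = time.perf_counter()
    decision = model(ecg).argmax(dim=-1)      # assumed ordering: 0 = no shock, 1 = shock
    elapsed_ms = (time.perf_counter() - t0) * 1e3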

Accuracy of the CNN on Nonshockable Rhythms That Can Commonly Be Mistaken for VT/VF

Among rhythms commonly mistaken for VT/VF, including AF/AFL (N=4232 ECGs) and SVT (N=1557 ECGs), the CNN demonstrated a specificity of 99.3% (4202/4232) and 98.1% (1527/1557), respectively, for classifying AF/AFL and SVT correctly as nonshockable. The CNN outperformed the specificity observed from adjudicators on AF/AFL (99.3% versus 97.5%, P<0.001). ECG examples of nonshockable rhythms commonly mistaken for VT/VF that the CNN correctly classified are provided in Figure 4A through 4C.

Figure 4. Examples of ECG recordings with agreement between the CNN and adjudicators.


A, Supraventricular tachycardia (nonshockable) example correctly classified by CNN that all adjudicators correctly labeled. B, Atrial fibrillation (nonshockable) example correctly classified by CNN that all adjudicators correctly labeled. C, Atrial flutter (nonshockable) example correctly classified by CNN that all adjudicators correctly labeled. CNN indicates convolution neural network.

Internal Validation Analysis of CNN Performance Among Conflict ECGs

There were 1582 (6.0%) ECGs that were excluded from the study data set as conflict ECGs. Among these, the CNN agreed with the 2 out of 3 majority classification in 84.4% (1336/1582) of cases, compared with an average of 66.6% among the adjudicators. Examples of conflict ECGs for which the adjudicators disagreed on shockability and for which the CNN agreed with the 2 out of 3 majority classification are provided in Figure 5A through 5C.

Figure 5. Examples of ECG recordings with disagreement.


A, SVT (nonshockable) example correctly classified by the CNN that 1 out of 3 adjudicators labeled a shockable VF/VT rhythm. B, AF (nonshockable) example correctly classified by the CNN that 1 out of 3 adjudicators labeled a shockable VF/VT rhythm. C, VF/VT (shockable) rhythm example correctly classified by the CNN that 1 out of 3 adjudicators labeled as a nonshockable rhythm. AF indicates atrial fibrillation; CNN, convolution neural network; SVT, supraventricular tachycardia; VF, ventricular fibrillation; and VT, ventricular tachycardia.

Evaluation of CNN False Positives and False Negatives and CNN Behavior

A false‐positive result by the CNN involves a rhythm that is categorized as nonshockable, with 3 out of 3 adjudicator agreement, but is designated by the CNN as shockable. There were no observed false positives in the test data set among adult or pediatric ECGs. A false‐negative result by the CNN involves a rhythm that is categorized as shockable, with 3 out of 3 adjudicator agreement, but is designated by the CNN as nonshockable. All false negatives observed from the CNN are provided in Figure 6, with saliency maps overlaid on each ECG waveform to demarcate the regions of the ECG having the greatest influence on CNN behavior and model decisions.

Figure 6. All false‐negative ECG recordings in the ECG test data set with a heatmap superimposed highlighting the regions in the ECG important for the CNN's prediction.


A darker red hue indicates more importance being placed on that portion of the ECG recording for the CNN's decision. A, Rhythm agreed by adjudicators to be VF, classified as nonshockable by the CNN. B, Rhythm agreed by adjudicators to be VF, classified as nonshockable by the CNN. C, Rhythm agreed by adjudicators to be rapid VT, classified as nonshockable by the CNN. D, Rhythm agreed by adjudicators to be rapid VT, classified as nonshockable by the CNN. E, Rhythm agreed by adjudicators to be rapid VT, classified as nonshockable by the CNN. F, Rhythm agreed by adjudicators to be rapid VT, classified as nonshockable by the CNN. G, Rhythm agreed by adjudicators to be rapid VT, classified as nonshockable by the CNN. CNN indicates convolution neural network; VF, ventricular fibrillation; and VT, ventricular tachycardia.

External Validation Analysis for CNN Performance on ECGs Modified With Varying Levels of Noise

Representative examples of low, medium, and high levels of noise applied to ECGs in the test data set can be found in Figure 7. For shockable rhythms, the sensitivity of the CNN algorithm across high‐low noise levels was 92.4% to 100% (baseline wander artifact), 38.7% to 96.6% (muscle artifact), and 99.2% to 100% (electrode motion artifact), with a specificity of 93% to 100% across all levels of noise in these artifact categories. Muscle artifact had the largest impact on the sensitivity of the CNN algorithm, whereas electrode motion artifact demonstrated the largest impact on the specificity. The range of AUC‐ROC was 0.993 to 0.999, 0.871 to 0.997, and 0.998 to 0.999 across high‐low noise levels of baseline wander, muscle artifact, and electrode motion artifact, respectively.

Figure 7. Examples of low, medium, and high levels of ECG‐specific noise applied to ECG records from the test data set, with ROC curves resulting from evaluation of the CNN on the entire test data set modified with the noise.


CNN indicates convolution neural network; ROC, receiver operating characteristic; and EMG, electromyography.

Explainability of CNN Behavior

Figure 8 provides a heatmap visualization of CNN behavior in a shockable (Figure 8A) and nonshockable ECG (Figure 8D). The region of interest is highlighted for where the CNN is analyzing the ECG waveform for both shockable and nonshockable features within each ECG (Figure 8B and 8E). The aggregate of designations is provided leading to a final shock (Figure 8C) or no shock decision (Figure 8F).

Figure 8. Heatmap visualization of CNN behavior.


A and D, Example of shockable and nonshockable rhythms undergoing CNN analysis. B and E, Heatmap of CNN analysis corresponding to regions of the ECG. C and F, Aggregate of designations provides a final shock or no shock decision. The purple lines highlight example similar regions of interest in the ECG that have similar heatmaps (blue rectangles). CNN indicates convolution neural network.
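A simple input‑gradient saliency sketch is given below; the study's heatmaps are described as using the final convolution layer, so this is a simplified stand‑in rather than the exact method used for Figure 8.

# Input-gradient saliency for the (illustrative) ShockNet sketch from above:
# the absolute gradient of the shockable-class probability with respect to
# each ECG sample indicates that sample's influence on the decision.
import torch

model = ShockNet().eval()
ecg = torch.randn(1, 1, 2100, requires_grad=True)
shock_prob = model(ecg)[0, 1]                 # probability of the shockable class
shock_prob.backward()
saliency = ecg.grad.abs().squeeze()           # per-sample influence, length 2100
# `saliency` can be overlaid on the ECG trace as a 1-D heatmap (cf. Figure 8).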

Comparison of CNN Performance to Physician ECG Readers

The CNN was evaluated against each physician annotator within the complete data set, including the test, validation, and training data sets. Adjudication results of a shockable or nonshockable rhythm for individual physicians were overlaid on the resulting receiver operating characteristic curve (Figure 9). The CNN demonstrated a greater sensitivity (97.9% versus 90.0%) and similar specificity (99.0% versus 98.8%) when compared with individual physician annotators. Overall, the physician readers had low false positive rates (range, 0.5%–2.1%) with variability in their true positive rates (range, 41%–97%).

Figure 9. Comparison of overall CNN performance to overall adjudicator performance across the complete ECG data set.


AUC indicates area under the curve; and CNN, convolution neural network.

DISCUSSION

Our primary results demonstrated a high accuracy of the CNN for the diagnosis of shockable rhythms. We postulate 3 main reasons for this finding. First, we used robust methodologies to develop a CNN algorithm 15 , 16 , 29 , 30 , 31 , 32 with features extracted from ECG waveforms associated with shockable rhythms. Such features plausibly represent structural or genetic pathologies associated with ventricular arrhythmias. 33 , 34 Second, we trained and validated algorithmic performance on a sufficiently large ECG data set that underwent independent and multiple physician adjudication. This approach to adjudication can limit heterogeneity among unique ECG rhythms and can maximize the accuracy of shockable rhythm classifications. Third, we included ECGs from real‐world data sets and tested within the AED hardware to mitigate common ML biases such as measurement bias (when data for training differs from the real world) and sample bias (when a data set does not reflect the realities of the environment in which a model will run), enabling the generalization of our results to the OHCA setting in which the CNN and AED will ultimately be used.

In the present study, we demonstrate a finding in which an ML algorithm functions as a major determinant of a critical clinical outcome. Unlike other ECG algorithms designed to improve physician workflows, preannotate ECGs with relevant findings, or generate new phenotypes of chronic disease states, 19 a shockable rhythm ML algorithm used within an AED decides whether life‐saving defibrillation will be delivered or not. Although ML algorithms for identification of arrhythmias have been developed, 14 , 15 , 16 such algorithms commonly exclude ECGs without unanimous adjudication and do not analyze performance among conflict ECGs to determine broader algorithmic accuracy. Over 1500 ECGs in our data set were considered conflict ECGs. When evaluated in an analysis against the consensus of 3 physician readers, the CNN agreed with the 2 out of 3 majority in 84% of cases versus 66% agreement among physicians in such cases. Similarly, the CNN demonstrated higher accuracy when compared with individual physician readers for the determination of shockability. We observed that physicians erred toward not indicating a shockable rhythm (ie, false negative), resulting in a lower true positive rate than the CNN.

Although incorrect predictions are not unexpected from CNNs, they are valuable in understanding the behavior of the CNN and crucial when considering the CNN's application in the real world. When considering false positives resulting from arrhythmias such as AF/AFL/SVT that can be mistaken for VT/VF, the CNN outperformed physicians in identifying such rhythms as nonshockable. Furthermore, the CNN demonstrated no false positives. However, the CNN demonstrated 7 false negatives in the test data set (false‐negative rate of 1.9% [7/375]).

We hypothesize several reasons that may explain CNN false negatives. First, it is important to understand that the CNN is trained, validated, and tested in the overall study data set upon the adjudication results of a consensus of the same group of physician readers. Although there are many permutations for grouping physician readers (in our case randomly grouped), any variation in adjudication within training may result in error during validation and testing, especially because these data sets have a significantly smaller number of ECGs compared with the training data set. Second, the nature of a CNN algorithm is to take input‐weighted values from an ECG and derive features determined through convolutional extraction of patterns in the ECG time‐series signal. Therefore, the CNN may not allow for a clear explanation for instances in which the algorithm makes an error. When visualizing these rhythms using saliency maps, we can recognize certain ECG features where the CNN is focused, such as the angles and frequency of positive and negative ECG deflections (Figure 6). Two false negatives (Figure 6A and 6B) show significant artifacts and were adjudicated as shockable by readers. It appears that the CNN correctly focused on waveform variations (particularly in Figure 6A) that would be consistent with VF but nonetheless produced an output of a nonshockable rhythm. In contrast, the other 5 false negatives (Figure 6C through 6G) show narrow and wide‐complex tachyarrhythmias, which can be categorized as VT or a supraventricular rhythm with aberrancy, a challenging distinction on a single‐lead ECG. It is likely that the CNN is extracting waveform features in greater number and definition than human readers can determine through single‐lead interpretation. Although this may be true, when the features were applied, the CNN generated an error. Third, it is plausible that the CNN was correct in adjudicating a nonshockable rhythm and the adjudicators were incorrect. While the significance of these explanations is unknown, they are provided in our effort to generate a greater understanding of the CNN model when considering the clinical translation of our results.

When interpreting the generalizability of the CNN to the OHCA process of care, a noise stress test using standardized methods to introduce waveform artifacts commonly observed in OHCA demonstrated that the CNN remained able to correctly identify shockable ECGs. The algorithm was resilient to low noise signals, with more varied impacts as noise levels increased. Low‐frequency baseline wander noise is the most common noise type that would be expected for an AED, and it had minimal impact on the algorithm's performance across any rhythm. To further understand CNN behavior, we provided an explainability analysis and heatmap to visualize CNN regions of interest on shockable and nonshockable ECGs. In contrast to the clinical scenario in which a physician may interpret ECG features such as QRS width, rate, and regularity of the rhythm, in this type of explainability analysis CNN behavior is modeled as an aggregation of multiple designations in these regions of interest leading to a shock or no‐shock decision, which may explain why the CNN demonstrated higher accuracy than individual physician readers for the determination of shockability in VT/VF rhythms.

The combination of the algorithm and AED must provide 3 vital functions: an expedient decision, an accurate decision, and AED hardware miniaturized for use by the public. In this context, the CNN has tight constraints among processing bandwidth, accuracy, and time to output, requirements that are unique for any ML algorithm and particularly for one embedded within a medical device. In our case, this constitutes a device with multiple functions 35 (ie, the software [CNN] and hardware [AED] both having medical device function as an artificial intelligence/ML‐enabled medical device). 36 Compared with cloud computing platforms commonly used in ML algorithm development and testing, the processing hardware for this algorithm has limited memory space and runs on embedded processors. Recent progress has been made with newly developed ML algorithms for shockable rhythm identification. 37 , 38 , 39 Although ours is not the first CNN algorithm to be applied to a device with limited hardware resources, hardware constraints limited our CNN to 5 convolutional layers and 1 fully connected layer. After input of the 7‐s ECG rhythm strip, running on an AED device, the algorithm averaged 383 ms to reach a shock or no‐shock decision (total time of 7.383 s). Higher complexity networks (ie, a greater number of layers) with marginally better results on the validation data set (ie, AUC‐ROC >0.995) were created during development, but they were not selected because the present results provide adequate system performance. By combining this algorithm with the hardware of the AED device, we believe that our methodology to test the algorithm within the AED is essential both for the regulatory process of new AED designs and for the verification of algorithm performance as new devices become available for real‐world use.

New innovations are urgently required to improve OHCA process of care, particularly upstream measures. 2 To address bystander activation, Andelius et al 10 and Ringh et al 11 demonstrated the effectiveness of a new system of care using digital vectors of communication through smartphone apps that incorporate geo‐localization to alert citizen responders of nearby OHCA with feedback loops that include EMS to instruct CPR and defibrillation. In population‐wide studies that included 500 to 800 OHCAs, citizen geo‐activation was associated with a 2‐ to 3‐fold increased likelihood of bystander CPR, with 50% of citizens applying an AED and 10% performing defibrillation. As EMS response time increased, the proportion of bystander CPR and/or defibrillation before EMS also increased. While we await the results of ongoing randomized survival studies, 40 citizen activation through smartphone apps represents a powerful mechanism to transform the community response in OHCA. 6 , 41 Our study enters this field and adds to a new armament of digital tools to address a major public health challenge that continues to claim >1 million lives annually. 42 , 43

LIMITATIONS

We conformed to AHA requirements and met the minimum sample sizes for each cardiac rhythm. Accordingly, we did not pursue separate sample size estimations for the minimum number of individual rhythms needed to demonstrate the required sensitivity or specificity performance targets. Our results of high diagnostic accuracy of the CNN were consistent across various internal and external validation analyses and support that the risk of a type I error was mitigated. Rhythms such as VT/VF are challenging to acquire in a digital manner suitable for analysis. The rarity of such rhythms can affect prevalence‐dependent analyses such as the F1 score. Similarly, sample sizes of pediatric rhythms, particularly shockable VT/VF pediatric rhythms, are extraordinarily limited. Although the design of this study is retrospective, prospective development and performance testing would not be feasible in patients with OHCA. We reported results of a CNN within this specific AED tailored to its hardware resources. As such, we have not compared the algorithm's performance side by side with other commercially available AEDs or with other CNNs developed for AED‐based rhythm classification. Therefore, our methodology has limitations affecting the generalizability and reproducibility of the findings.

CONCLUSIONS

We developed a CNN algorithm that serves as the sole determinant of a critical clinical outcome, distinguishing between shockable and nonshockable rhythms on single‐lead ECGs within a novel, miniaturized AED device, and demonstrated high diagnostic accuracy. These results have important implications that could lead to improved outcomes in OHCA and advance the field with a new ML medical device.

Sources of Funding

Avive Solutions provided funding for this study. F.S.N. received a Programme Grant to Imperial College London from the British Heart Foundation and grant funding from the National Institute for Health Research. J.C.P. received a National Institutes of Health R01 grant.

Disclosures

B.C.F. and D.P.W. were issued patent number US 11089989 B2 (45) with Avive Solutions. J.C.P. receives consultant fees from Protaryx and Alta Thera. M.M. is on the executive committee for the Society for Perioperative Care and receives patient monitoring equipment from Philips. F.S.N. receives funding from a Programme Grant to Imperial College London and National Institute for Health and Care Research grant funding. S.P.B. received consulting fees from Bristol Myers Squibb, Pfizer, and Infinion; participated on the advisory board for Proteus Digital; and had a leadership role in the American College of Cardiology, American Society of Echocardiography, and Biocom (all positions unpaid and voluntary).

Supporting information

Data S1–S2

Tables S1–S6

Figures S1–S7


REFERENCES

1. Sasson C, Magid DJ, Chan P, Root ED, McNally BF, Kellermann AL, Haukoos JS. Association of neighborhood characteristics with bystander‐initiated CPR. N Engl J Med. 2012;367:1607–1615. doi: 10.1056/NEJMoa1110700
2. Merchant RM, Topjian AA, Panchal AR, Cheng A, Aziz K, Berg KM, Lavonas EJ, Magid DJ. Part 1: executive summary: 2020 American Heart Association guidelines for cardiopulmonary resuscitation and emergency cardiovascular care. Circulation. 2020;142:S337–S357. doi: 10.1161/CIR.0000000000000918
3. Fordyce CB, Hansen CM, Kragholm K, Dupre ME, Jollis JG, Roettig ML, Becker LB, Hansen SM, Hinohara TT, Corbett CC, et al. Association of public health initiatives with outcomes for out‐of‐hospital cardiac arrest at home and in public locations. JAMA Cardiol. 2017;2:1226–1235. doi: 10.1001/jamacardio.2017.3471
4. Berg KM, Cheng A, Panchal AR, Topjian AA, Aziz K, Bhanji F, Bigham BL, Hirsch KG, Hoover AV, Kurz MC, et al. Part 7: systems of care: 2020 American Heart Association guidelines for cardiopulmonary resuscitation and emergency cardiovascular care. Circulation. 2020;142:S580–S604. doi: 10.1161/CIR.0000000000000899
5. McNally B, Stokes A, Crouch A, Kellermann AL; CARES Surveillance Group. CARES: Cardiac Arrest Registry to Enhance Survival. Ann Emerg Med. 2009;54:674–683.e2. doi: 10.1016/j.annemergmed.2009.03.018
6. Hallstrom AP, Ornato JP, Weisfeldt M, Travers A, Christenson J, McBurnie MA, Zalenski R, Becker LB, Schron EB, Proschan M; Public Access Defibrillation Trial Investigators. Public‐access defibrillation and survival after out‐of‐hospital cardiac arrest. N Engl J Med. 2004;351:637–646. doi: 10.1056/NEJMoa040566
7. Abrams HC, McNally B, Ong M, Moyer PH, Dyer KS. A composite model of survival from out‐of‐hospital cardiac arrest using the Cardiac Arrest Registry to Enhance Survival (CARES). Resuscitation. 2013;84:1093–1098. doi: 10.1016/j.resuscitation.2013.03.030
8. Nishiyama T, Nishiyama A, Negishi M, Kashimura S, Katsumata Y, Kimura T, Nishiyama N, Tanimoto Y, Aizawa Y, Mitamura H, et al. Diagnostic accuracy of commercially available automated external defibrillators. J Am Heart Assoc. 2015;4:e002465. doi: 10.1161/JAHA.115.002465
9. Zijlstra JA, Bekkers LE, Hulleman M, Beesems SG, Koster RW. Automated external defibrillator and operator performance in out‐of‐hospital cardiac arrest. Resuscitation. 2017;118:140–146. doi: 10.1016/j.resuscitation.2017.05.017
10. Andelius L, Malta Hansen C, Lippert FK, Karlsson L, Torp‐Pedersen C, Kjær Ersbøll A, Køber L, Collatz Christensen H, Blomberg SN, Gislason GH, et al. Smartphone activation of citizen responders to facilitate defibrillation in out‐of‐hospital cardiac arrest. J Am Coll Cardiol. 2020;76:43–53. doi: 10.1016/j.jacc.2020.04.073
11. Ringh M, Rosenqvist M, Hollenberg J, Jonsson M, Fredman D, Nordberg P, Järnbert‐Pettersson H, Hasselqvist‐Ax I, Riva G, Svensson L. Mobile‐phone dispatch of laypersons for CPR in out‐of‐hospital cardiac arrest. N Engl J Med. 2015;372:2316–2325. doi: 10.1056/NEJMoa1406038
12. Kuschner CE, Becker LB. Recent advances in personalizing cardiac arrest resuscitation. F1000Research. 2019;8:915. doi: 10.12688/f1000research.17554.1
13. Bhavnani SP, Parakh K, Atreja A, Druz R, Graham GN, Hayek SS, Krumholz HM, Maddox TM, Majmudar MD, Rumsfeld JS, et al. 2017 roadmap for innovation‐ACC health policy statement on healthcare transformation in the era of digital health, big data, and precision health: a report of the American College of Cardiology Task Force on Health Policy Statements and Systems of Care. J Am Coll Cardiol. 2017;70:2696–2718. doi: 10.1016/j.jacc.2017.10.018
14. Hughes JW, Olgin JE, Avram R, Abreau SA, Sittler T, Radia K, Hsia H, Walters T, Lee B, Gonzalez JE, et al. Performance of a convolutional neural network and explainability technique for 12‐lead electrocardiogram interpretation. JAMA Cardiol. 2021;6:1285–1295. doi: 10.1001/jamacardio.2021.2746
15. Hannun AY, Rajpurkar P, Haghpanahi M, Tison GH, Bourn C, Turakhia MP, Ng AY. Cardiologist‐level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. Nat Med. 2019;25:65–69. doi: 10.1038/s41591-018-0268-3
16. van de Leur RR, Blom LJ, Gavves E, Hof IE, van der Heijden JF, Clappers NC, Doevendans PA, Hassink RJ, van Es R. Automatic triage of 12‐lead ECGs using deep convolutional neural networks. J Am Heart Assoc. 2020;9:e015138. doi: 10.1161/JAHA.119.015138
17. Khurshid S, Friedman S, Pirruccello JP, Di Achille P, Diamant N, Anderson CD, Ellinor PT, Batra P, Ho JE, Philippakis AA, et al. Deep learning to predict cardiac magnetic resonance–derived left ventricular mass and hypertrophy from 12‐lead ECGs. Circ Cardiovasc Imaging. 2021;14:14. doi: 10.1161/CIRCIMAGING.120.012281
18. Khurshid S, Friedman S, Reeder C, Di Achille P, Diamant N, Singh P, Harrington LX, Wang X, Al‐Alusi MA, Sarma G, et al. ECG‐based deep learning and clinical risk factors to predict atrial fibrillation. Circulation. 2022;145:122–133. doi: 10.1161/CIRCULATIONAHA.121.057480
19. Siontis KC, Noseworthy PA, Attia ZI, Friedman PA. Artificial intelligence‐enhanced electrocardiography in cardiovascular disease management. Nat Rev Cardiol. 2021;18:465–478. doi: 10.1038/s41569-020-00503-2
20. Picon A, Irusta U, Álvarez‐Gila A, Aramendi E, Alonso‐Atienza F, Figuera C, Ayala U, Garrote E, Wik L, Kramer‐Johansen J, et al. Mixed convolutional and long short‐term memory network for the detection of lethal ventricular arrhythmia. PLoS ONE. 2019;14:e0216756. doi: 10.1371/journal.pone.0216756
21. Krasteva V, Ménétré S, Didon J‐P, Jekova I. Fully convolutional deep neural networks with optimized hyperparameters for detection of shockable and non‐shockable rhythms. Sensors. 2020;20:2875. doi: 10.3390/s20102875
22. Jekova I, Krasteva V. Optimization of end‐to‐end convolutional neural networks for analysis of out‐of‐hospital cardiac arrest rhythms during cardiopulmonary resuscitation. Sensors. 2021;21:4105. doi: 10.3390/s21124105
23. Avive automated external defibrillator (AED), Avive AED pad cartridge, Avive AED training cartridge, Avive USB charging cable, Avive USB power adaptor – P210015. FDA. 2022. Accessed January 5, 2023. https://www.fda.gov/medical‐devices/recently‐approved‐devices/avive‐automated‐external‐defibrillator‐aed‐avive‐aed‐pad‐cartridge‐avive‐aed‐training‐cartridge
24. Shen C, Rogers J, Bhavnani SP. Innovations in resuscitation science: assessment of defibrillation efficacy of a next‐generation miniaturized automated external defibrillator. J Am Coll Cardiol. 2020;75:3473. doi: 10.1016/S0735-1097(20)34100-0
25. Kerber RE, Becker LB, Bourland JD, Cummins RO, Hallstrom AP, Michos MB, Nichol G, Ornato JP, Thies WH, White RD, et al. Automatic external defibrillators for public access defibrillation: recommendations for specifying and reporting arrhythmia analysis algorithm performance, incorporating new waveforms, and enhancing safety. Circulation. 1997;95:1677–1682. doi: 10.1161/01.CIR.95.6.1677
26. Moody GB, Muldrow W, Mark RG. A noise stress test for arrhythmia detectors. Comput Cardiol. 1984;11:381–384. doi: 10.13026/C2HS3T
27. Goldberger AL, Amaral LA, Glass L, Hausdorff JM, Ivanov PC, Mark RG, Mietus JE, Moody GB, Peng CK, Stanley HE. PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals. Circulation. 2000;101:E215–E220. doi: 10.1161/01.CIR.101.23.e215
28. Jones Y, Deligianni F, Dalton J. Improving ECG classification interpretability using saliency maps. 2020 IEEE 20th International Conference on Bioinformatics and Bioengineering (BIBE). Cincinnati, OH: IEEE; 2020:675–682. doi: 10.1109/BIBE50027.2020.00114
29. Maas AL, Hannun AY, Ng AY. Rectifier nonlinearities improve neural network acoustic models. ICML Workshop on Deep Learning for Audio, Speech and Language Processing. W&CP volume 28. Atlanta, GA: JMLR; 2013.
30. Goodfellow I, Bengio Y, Courville A. Deep Learning. MIT Press; 2016. Accessed September 28, 2022. www.deeplearningbook.org
31. Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res. 2014;15:1929–1958.
32. Giryes R, Sapiro G, Bronstein AM. Deep neural networks with random Gaussian weights: a universal classification strategy? Trans Sig Proc. 2016;64:3444–3457. doi: 10.1109/TSP.2016.2546221
33. Ko W‐Y, Siontis KC, Attia ZI, Carter RE, Kapa S, Ommen SR, Demuth SJ, Ackerman MJ, Gersh BJ, Arruda‐Olson AM, et al. Detection of hypertrophic cardiomyopathy using a convolutional neural network‐enabled electrocardiogram. J Am Coll Cardiol. 2020;75:722–733. doi: 10.1016/j.jacc.2019.12.030
34. Attia ZI, Kapa S, Lopez‐Jimenez F, McKie PM, Ladewig DJ, Satam G, Pellikka PA, Enriquez‐Sarano M, Noseworthy PA, Munger TM, et al. Screening for cardiac contractile dysfunction using an artificial intelligence–enabled electrocardiogram. Nat Med. 2019;25:70–74. doi: 10.1038/s41591-018-0240-2
35. Multiple function device products: policy and considerations. U.S. Food and Drug Administration. 2020. Accessed September 28, 2022. https://www.fda.gov/regulatory‐information/search‐fda‐guidance‐documents/multiple‐function‐device‐products‐policy‐and‐considerations
36. Artificial intelligence and machine learning (AI/ML)‐enabled medical devices. U.S. Food and Drug Administration. Accessed September 28, 2022. https://www.fda.gov/medical‐devices/software‐medical‐device‐samd/artificial‐intelligence‐and‐machine‐learning‐aiml‐enabled‐medical‐devices
37. Figuera C, Irusta U, Morgado E, Aramendi E, Ayala U, Wik L, Kramer‐Johansen J, Eftestøl T, Alonso‐Atienza F. Machine learning techniques for the detection of shockable rhythms in automated external defibrillators. PLoS One. 2016;11:11. doi: 10.1371/journal.pone.0159654
38. Nguyen MT, Nguyen BV, Kim K. Deep feature learning for sudden cardiac arrest detection in automated external defibrillators. Sci Rep. 2018;8:17196. doi: 10.1038/s41598-018-33424-9
39. Nasimi F, Yazdchi M. LDIAED: a lightweight deep learning algorithm implementable on automated external defibrillators. PLoS One. 2022;17:e0264405. doi: 10.1371/journal.pone.0264405
40. Folke F. Public access defibrillation by activated citizen first‐responders ‐ The HeartRunner Trial. clinicaltrials.gov; 2020. Accessed December 20, 2021. https://clinicaltrials.gov/ct2/show/NCT03835403
41. Sasson C, Rogers MAM, Dahl J, Kellermann AL. Predictors of survival from out‐of‐hospital cardiac arrest: a systematic review and meta‐analysis. Circ Cardiovasc Qual Outcomes. 2010;3:63–81. doi: 10.1161/CIRCOUTCOMES.109.889576
42. Berdowski J, Berg RA, Tijssen JGP, Koster RW. Global incidences of out‐of‐hospital cardiac arrest and survival rates: systematic review of 67 prospective studies. Resuscitation. 2010;81:1479–1487. doi: 10.1016/j.resuscitation.2010.08.006
43. Milan M, Perman SM. Out of hospital cardiac arrest: a current review of the literature that informed the 2015 American Heart Association guidelines update. Curr Emerg Hosp Med Rep. 2016;4:164–171. doi: 10.1007/s40138-016-0118-x
