Author manuscript; published in final edited form as: Acad Emerg Med. 2015 Jan 29;22(2):204–211. doi: 10.1111/acem.12577

Classification of Cardiopulmonary Resuscitation Chest Compression Patterns: Manual Versus Automated Approaches

Henry E Wang 1, Robert H Schmicker 1, Heather Herren 1, Siobhan Brown 1, John P Donnelly 1, Randal Gray 1, Sally Ragsdale 1, Andrew Gleeson 1, Adam Byers 1, Jamie Jasti 1, Christina Aguirre 1, Pam Owens 1, Joe Condle 1, Brian Leroux 1
PMCID: PMC4329029  NIHMSID: NIHMS657652  PMID: 25639554

Abstract

Objectives

New chest compression detection technology allows for the recording and graphical depiction of clinical cardiopulmonary resuscitation (CPR) chest compressions. The authors sought to determine the inter-rater reliability of chest compression pattern classification by human raters and to evaluate the agreement between manual classifications and an automated computer classification.

Methods

This was an analysis of chest compression patterns from cardiac arrest patients enrolled in the ongoing Resuscitation Outcomes Consortium (ROC) Continuous Chest Compressions Trial. Thirty CPR process files from patients in the trial were selected. Using written guidelines, research coordinators from each of eight participating ROC sites classified each chest compression pattern as 30:2 chest compressions, continuous chest compressions (CCC), or indeterminate. An automated computer classification algorithm was also developed and applied to each case. Inter-rater agreement between manual classifications was tested using Fleiss’s kappa. The criterion standard was defined as the classification assigned by the majority of manual raters. Agreement between the automated classification and the criterion standard manual classifications was also tested.

Results

The majority of the eight raters classified 12 chest compression patterns as 30:2, 12 as CCC, and six as indeterminate. Inter-rater agreement between manual classifications of chest compression patterns was κ = 0.62 (95% confidence interval [CI] = 0.49 to 0.74). The automated computer algorithm classified chest compression patterns as 30:2 (n = 15), CCC (n = 12), and indeterminate (n = 3). Agreement between automated and criterion standard manual classifications was κ = 0.84 (95% CI = 0.59 to 0.95).

Conclusions

In this study, good inter-rater agreement in the manual classification of CPR chest compression patterns was observed. Automated classification showed strong agreement with human ratings. These observations support the consistency of manual CPR pattern classification as well as the use of automated approaches to chest compression pattern analysis.


The advent of cardiopulmonary resuscitation (CPR) chest compression detection technology is one of the most important advances in resuscitation science and practice. Using accelerometer or electrical impedance sensors, this technology has enabled characterization of CPR chest compression delivery during clinical resuscitation efforts. Prior studies have used CPR process data to describe interruptions in chest compressions, as well as the associations between chest compression fraction and out-of-hospital cardiac arrest outcomes.1-6

Recent studies have promoted novel strategies for CPR using continuous chest compressions (CCC) with few or no pauses for ventilation.7-9 However, these prior studies relied on rescuer self-reports to characterize the patterns of delivered chest compressions, without the use of CPR measurement technology. The classification of observed chest compression patterns (e.g., CCC vs. 30:2 vs. other) requires manual interpretation of CPR process data, a process that is arduous and time-consuming and has unknown inter-rater agreement. Automated computer analysis could potentially improve the efficiency of CPR pattern classification, but no studies have described this technique or compared its accuracy with manual CPR pattern classifications.

In this study of CPR delivered in the Resuscitation Outcomes Consortium (ROC) CCC Trial, we determined the inter-rater reliability of CPR chest compression patterns classified by manual data review. We also compared manual and automated computer approaches to chest compression pattern classification.

METHODS

Study Design

We conducted an analysis of chest compression patterns from cardiac arrest patients enrolled in the ongoing ROC CCC Trial. The ROC CCC Trial (www.clinicaltrials.gov NCT01372748) is conducted under US regulations for exception from informed consent for emergency research (21 CFR 50.24) and the Canadian Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans. Additional reviews and approvals were obtained from the Office of Human Research Protection and Health Canada, as well as the institutional review boards and research ethics boards in the communities where the research was conducted.

Study Setting and Population

The ROC is a North American multicenter clinical trial network designed to conduct out-of-hospital interventional and clinical research in the areas of cardiac arrest and traumatic injury.10,11 Of the 264 emergency medical services (EMS) agencies in ROC, 101 from eight ROC regional sites (Alabama; Dallas, Texas; King County, Washington; Milwaukee, Wisconsin; Pittsburgh, Pennsylvania; British Columbia, Canada; and Ottawa and Toronto, Ontario, Canada) are participating in the ROC CCC Trial.12-14

The aim of the ROC CCC Trial is to compare survival to hospital discharge between adult out-of-hospital cardiac arrests randomized to a strategy of 30:2 CPR chest compressions versus a strategy of CCC. The protocol entails three consecutive 2-minute bouts of chest compressions. The 30:2 chest compressions arm consists of 30 chest compressions alternating with a full pause for the delivery of two ventilations by bag-valve-mask device. CCC consists of continuously delivered chest compressions with a single, brief ventilation after every 10th compression, without chest compression interruptions.

Study Protocol

Procurement of CPR Chest Compression Process Data

All ROC EMS agencies record CPR chest compressions using state-of-the-art portable cardiac monitors with chest compression detection technology, including the Zoll M and X series (Zoll, Inc., Chelmsford, MA), Philips MRX (Philips, Inc., Amsterdam, The Netherlands), and Physio-Control LifePak 12 and 15 (Physio-Control, Inc., Redmond, WA). The Philips device measures chest compressions through an accelerometer-based sternal detector. The Zoll device similarly uses a sternal detector, but the sensor is physically fixed between the two cardiac defibrillation pads. The Physio-Control device detects chest compressions through changes in electrical impedance between chest electrodes; no additional hardware is required. Independent studies have verified the accuracy of these detection technologies.15 While minimal formal training is required for operation of the CPR detection technology, technical points reinforced during CPR training typically include (where applicable) proper midsternal placement of the detector, application of chest compressions over the detector, and strategies for maintaining proper detector placement.

All manufacturers provided commercially available processing software for translating the chest compression data into analyzable format. The software identified the timing of each individual chest compression, from which one can determine CPR performance parameters such as compression rate and the segment durations of chest compression application and interruptions. Because they use a sternal accelerometer, only the Philips and Zoll devices are able to indicate chest compression depth. The Physio-Control device uses thoracic impedance changes and cannot ascertain chest compression depth.
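The vendor processing software itself is proprietary and not described here, but the derivation of performance parameters from compression timing can be illustrated. The following minimal R sketch (R is the analysis language named under Data Analysis) computes segment durations and compression rate from a hypothetical vector of compression event times; the example times and the 2-second pause threshold are our assumptions, not vendor code.

```r
# Illustrative sketch, not vendor software: derive CPR performance parameters
# from the times (in seconds) of individual detected chest compressions.
# The event times and the 2-second pause threshold are assumptions.
compression_times <- c(seq(0, 17, by = 0.5), seq(25, 42, by = 0.5))

gaps <- diff(compression_times)
pause_threshold <- 2                       # seconds; a longer gap ends a segment
segment_id <- cumsum(c(1, gaps > pause_threshold))

# Duration of each uninterrupted chest compression segment
segment_durations <- tapply(compression_times, segment_id,
                            function(t) max(t) - min(t))

# Mean compression rate within segments (compressions per minute)
within_segment_gaps <- gaps[gaps <= pause_threshold]
rate_per_min <- 60 / mean(within_segment_gaps)

segment_durations  # 17 and 17 seconds for this example
rate_per_min       # 120/min for 0.5-second compression spacing
```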

Selection of CPR Chest Compression Process Cases

For this analysis we assembled a test set of 30 CPR chest compression process files selected from cardiac arrest cases enrolled in the trial. Based on a projected kappa of 0.6 and at least six raters, we estimated needing CPR pattern ratings on at least 30 cases to achieve a lower one-sided 95% confidence interval (CI) no lower than 0.48.16,17 Although we planned to solicit ratings from eight raters, we designed the study for six raters to allow for potential subject dropout.

The study team selected candidate chest compression files to encompass the spectrum of chest compression patterns observed during the trial. The final selection of images included examples from both study intervention arms (CCC and 30:2) and seven of the eight ROC sites (Alabama, five; Dallas, five; Milwaukee, three; Pittsburgh, six; British Columbia, five; Ottawa, three; and Toronto, three). The candidate chest compression image set also included examples from the three brands of cardiac monitors used by participating EMS agencies (Zoll, 11; Philips, six; and Physio-Control, 13). We did not systematically sample images by site or manufacturer.

We provided CPR process images for the first 8 minutes of resuscitation, defined from the time of first reported EMS chest compressions and reflecting the 8-minute duration of the trial protocol (Figure 1 and Data Supplement S1, available as supporting information in the online version of this paper). For the Physio-Control and Zoll files, we provided per-minute summary statistics available from the respective software processing programs. We placed the chest compression images in random order without repeats. To minimize subject burden, we did not test intra-rater agreement, which would have required repeat assessment of 10 to 15 additional CPR process files.

Figure 1. Sample chest compression process file and summary report from a Physio-Control cardiac monitor. (Top) Compression process file; (bottom) summary report. Additional sample chest compression process files for Philips and Zoll monitors are provided in Data Supplement S1.

Manual Classification of Chest Compression Patterns

A trained research coordinator from each of the eight participating ROC regional coordinating centers reviewed the sample CPR process images. All raters possessed extensive CPR process analysis experience from prior ROC studies. Most raters had prior out-of-hospital or in-hospital clinical experience as doctors, nurses, or paramedics. Using structured guidelines, raters classified the chest compression pattern as CCC, 30:2, or indeterminate (Table 1). The guidelines for manual classification were developed by the lead study investigators and approved by the cardiac workgroup of the Consortium.

Table 1.

Guidelines for Manual Classification of Chest Compression Patterns

Both classifications apply to the first 8 minutes of resuscitation and to at least 60% of the available and analyzable CPR chest compression image.

30:2 chest compressions:
- There are chest compression periods of at least 20 seconds duration interspersed with regular chest compression pauses (presumptively for two ventilations).
- The chest compression pauses occur in a periodic fashion.
- There are at least three chest compression pauses per cycle of CPR. A "cycle" of CPR refers to an approximately 2-minute period of compressions performed preceding a rhythm analysis.

Continuous chest compressions:
- There are chest compression periods of at least 90 seconds duration without regular chest compression pauses.
- Any observed chest compression pauses during a CPR cycle are brief and do not occur in a periodic fashion.
- There are fewer than three chest compression pauses per cycle of CPR, where cycle of CPR is defined as above.

Patterns fulfilling neither (or both) sets of criteria were classified as indeterminate.

CPR = cardiopulmonary resuscitation.

Pilot efforts suggested that CCC may be characterized by: 1) the presence of approximately 2-minute periods of uninterrupted chest compressions without pauses; 2) the presence of only brief, nonperiodic chest compression pauses; and 3) the presence of no more than two chest compression pauses per 2-minute cycle of CPR. Similarly, 30:2 appeared to be characterized by: 1) the presence of approximately 20- to 30-second segments of chest compression interspersed with regular deliberate pauses and 2) the presence of at least three chest compression pauses per 2-minute cycle of chest compressions. Preliminary efforts also confirmed that the chest compression pattern in select cases may be indeterminate; for example, if an EMS crew started with 30:2 chest compressions but switched to CCC.

Automated Classification of Chest Compression Patterns

Research coordinators identified the start and end clock times for each chest compression segment appearing on the CPR process file (Data Supplement S2, available as supporting information in the online version of this paper). Using these parameters, one of the authors (RHS) developed automated computer algorithms identifying mean chest compression fraction, median chest compression time segment length, and mean number of chest compression interruptions (Table 2). The algorithms defined chest compression fraction as the number of seconds with chest compressions divided by the number of seconds of analyzable data. The program defined chest compression time segment length as the duration of each section of uninterrupted chest compressions, with an interruption defined as a chest compression pause of greater than 2 seconds.
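To make these three summary measures concrete, here is a minimal R sketch under stated assumptions: segment start and end times (as extracted by the coordinators per Data Supplement S2) are given in seconds of analyzable data, and the example values and variable names are invented for illustration.

```r
# Illustrative sketch of the three automated metrics; the segment times and
# the total analyzable duration below are invented example values.
segments <- data.frame(start = c(0, 35, 70, 105),
                       end   = c(30, 65, 100, 135))
analyzable_seconds <- 140

seg_lengths <- segments$end - segments$start

# Chest compression fraction: seconds with compressions / analyzable seconds
cc_fraction <- sum(seg_lengths) / analyzable_seconds

# Median chest compression segment length (seconds)
median_seg_length <- median(seg_lengths)

# Interruptions: between-segment pauses longer than 2 seconds,
# expressed per minute of analyzable data
pauses <- segments$start[-1] - segments$end[-nrow(segments)]
pauses_per_min <- sum(pauses > 2) / (analyzable_seconds / 60)
```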

Table 2.

Criteria Used for Automated Classification of Chest Compression Patterns

Measure | Criteria for Continuous Chest Compressions | Criteria for 30:2 Chest Compressions
Mean chest compression fraction | >0.80 | 0.60–0.80
Median chest compression segment length (seconds) | 60–150 | <20
Mean chest compression pauses (n per minute) | <1 | 2–4

Chest compression patterns were classified as continuous or 30:2 chest compressions if the pattern fulfilled two of the three criteria. Patterns fulfilling neither (or both) sets of criteria were classified as indeterminate.

The automated analysis included available CPR data for the first 8 minutes of resuscitation, excluding time epochs with missing or unanalyzable data. The automated analysis required fulfillment of two of three criteria for classification as 30:2 or CCC (Table 2). If neither (or both) sets of criteria were fulfilled, the program classified the case as “indeterminate.” (A detailed description of and rationale for the automated classification criteria are provided in Data Supplement S3, available as supporting information in the online version of this paper.)
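The two-of-three rule can be expressed compactly. The cutoffs below are those published in Table 2; the function itself is our illustrative R sketch, not the study's actual program.

```r
# Sketch of the two-of-three classification rule using the Table 2 cutoffs.
classify_pattern <- function(cc_fraction, median_seg_length, pauses_per_min) {
  ccc_votes <- (cc_fraction > 0.80) +
               (median_seg_length >= 60 & median_seg_length <= 150) +
               (pauses_per_min < 1)
  votes_302 <- (cc_fraction >= 0.60 & cc_fraction <= 0.80) +
               (median_seg_length < 20) +
               (pauses_per_min >= 2 & pauses_per_min <= 4)
  is_ccc <- ccc_votes >= 2
  is_302 <- votes_302 >= 2
  # Fulfilling neither (or both) sets of criteria yields "indeterminate"
  if (is_ccc && !is_302) "CCC" else if (is_302 && !is_ccc) "30:2" else "Indeterminate"
}

classify_pattern(0.86, 110, 0.5)  # "CCC"
classify_pattern(0.70, 18, 3)     # "30:2"
classify_pattern(0.50, 40, 5)     # "Indeterminate"
```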

Data Analysis

We determined inter-rater agreement between manual classifications using Fleiss’s kappa for multiple raters.18 Using the Stata module “kapci,” we applied bootstrapping with 100 repetitions to determine the 95% CI for the kappa value. Because the graphical representation of the CPR process varies across cardiac monitors, we repeated the comparisons on a post hoc basis stratified by cardiac monitor manufacturer.
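The study used Stata's kapci module; the mechanics can be sketched with a hand-rolled equivalent in R. The ratings matrix below is simulated for illustration, not the study data.

```r
# Fleiss's kappa for multiple raters, hand-rolled for illustration
# (the study used Stata's "kapci"; this is not the study code).
# 'ratings' is a cases x raters matrix of category labels.
fleiss_kappa <- function(ratings) {
  cats <- sort(unique(as.vector(ratings)))
  counts <- t(apply(ratings, 1,
                    function(r) table(factor(r, levels = cats))))
  n <- ncol(ratings)                               # raters per case
  P_i <- (rowSums(counts^2) - n) / (n * (n - 1))   # per-case agreement
  p_j <- colSums(counts) / sum(counts)             # category proportions
  (mean(P_i) - sum(p_j^2)) / (1 - sum(p_j^2))
}

# Simulated ratings (30 cases, 8 raters) stand in for the study data.
set.seed(1)
ratings <- matrix(sample(c("CCC", "30:2", "Indet"), 30 * 8, replace = TRUE,
                         prob = c(0.4, 0.4, 0.2)), nrow = 30)

fleiss_kappa(ratings)

# Bootstrap 95% CI with 100 repetitions, resampling cases
boot_kappas <- replicate(100,
                         fleiss_kappa(ratings[sample(30, replace = TRUE), ]))
quantile(boot_kappas, c(0.025, 0.975))
```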

To determine the agreement between automated and manual CPR pattern classifications, we first defined the criterion standard manual classification as the chest compression pattern assigned by the majority (at least five of eight) of human raters. We determined agreement between automated and criterion standard manual classifications using Cohen’s kappa. We similarly determined the 95% CIs using bootstrapping with 100 repetitions. We also repeated the comparisons stratified by cardiac monitor manufacturer.
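A corresponding R sketch of the criterion standard and the automated-versus-manual comparison, again hand-rolled for illustration: the five-of-eight majority rule is from the text, while the fallback to indeterminate when no category reaches five votes is our assumption.

```r
# Criterion standard: category chosen by at least five of eight raters;
# falling back to "Indet" when no majority exists is our assumption.
majority_vote <- function(r, threshold = 5) {
  tab <- table(r)
  if (max(tab) >= threshold) names(tab)[which.max(tab)] else "Indet"
}
criterion <- apply(ratings, 1, majority_vote)   # 'ratings' from sketch above

# Cohen's kappa for two raters (criterion standard vs. automated)
cohen_kappa <- function(x, y) {
  cats <- union(x, y)
  tab <- table(factor(x, cats), factor(y, cats)) / length(x)
  p_o <- sum(diag(tab))                         # observed agreement
  p_e <- sum(rowSums(tab) * colSums(tab))       # chance agreement
  (p_o - p_e) / (1 - p_e)
}

# Placeholder automated classifications; in practice these would come from
# classify_pattern() applied to each case's metrics.
automated <- criterion
automated[1:3] <- "30:2"                        # inject some disagreement
cohen_kappa(criterion, automated)
```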

The use of three classification categories (CCC, 30:2, indeterminate) may have biased the analysis toward lower kappa values. Therefore, in a sensitivity analysis, we evaluated the potential range of kappa values with indeterminate manual ratings reclassified as CCC or 30:2. All analyses were conducted using Stata v.12.2 and R v.2.15.1.
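The sensitivity analysis amounts to a simple reclassification; a short sketch continuing from the simulated ratings above:

```r
# Sensitivity analysis sketch: reclassify indeterminate manual ratings and
# recompute Fleiss's kappa (continuing from the simulated 'ratings' above).
ratings_as_ccc <- ratings
ratings_as_ccc[ratings_as_ccc == "Indet"] <- "CCC"
fleiss_kappa(ratings_as_ccc)

ratings_as_302 <- ratings
ratings_as_302[ratings_as_302 == "Indet"] <- "30:2"
fleiss_kappa(ratings_as_302)
```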

RESULTS

Based on the majority of classifications for each case, the eight raters classified 12 CPR patterns as 30:2, 12 as CCC, and six as indeterminate. CPR pattern classification was unanimous for 15 cases, including 30:2 (n = 7) and CCC (n = 8; Table 3). Inter-rater agreement between manual classifications of chest compression patterns was κ = 0.62 (95% CI = 0.49 to 0.74; Table 4). Inter-rater agreement was similar when stratified by cardiac monitor manufacturer.

Table 3.

Summary of Chest Compression Pattern Classifications

Case | Cardiac Monitor Manufacturer | Rater 1 | Rater 2 | Rater 3 | Rater 4 | Rater 5 | Rater 6 | Rater 7 | Rater 8 | Unanimous Agreement | Criterion Standard (Majority Agreement) | Automated Classification
1 | Zoll | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC
2 | Philips | CCC | CCC | CCC | CCC | CCC | CCC | CCC | Indet | | CCC | CCC
3 | Physio-Control | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2
4 | Physio-Control | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC
5 | Physio-Control | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2
6 | Physio-Control | Indet | Indet | CCC | Indet | Indet | Indet | Indet | 30:2 | | Indet | Indet
7 | Zoll | 30:2 | 30:2 | 30:2 | Indet | 30:2 | 30:2 | 30:2 | 30:2 | | 30:2 | 30:2
8 | Physio-Control | Indet | CCC | CCC | CCC | CCC | CCC | CCC | CCC | | CCC | CCC
9 | Physio-Control | Indet | CCC | CCC | Indet | Indet | 30:2 | 30:2 | CCC | | CCC | CCC
10 | Philips | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2
11 | Zoll | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC
12 | Physio-Control | Indet | Indet | Indet | Indet | Indet | 30:2 | 30:2 | 30:2 | | Indet | 30:2
13 | Zoll | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC
14 | Philips | 30:2 | 30:2 | 30:2 | Indet | 30:2 | 30:2 | Indet | 30:2 | | 30:2 | 30:2
15 | Zoll | Indet | Indet | CCC | Indet | Indet | CCC | Indet | CCC | | Indet | Indet
16 | Physio-Control | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC
17 | Zoll | 30:2 | 30:2 | CCC | Indet | 30:2 | 30:2 | 30:2 | 30:2 | | 30:2 | 30:2
18 | Physio-Control | Indet | CCC | CCC | CCC | CCC | CCC | CCC | CCC | | CCC | CCC
19 | Philips | CCC | Indet | Indet | Indet | Indet | 30:2 | Indet | 30:2 | | Indet | 30:2
20 | Physio-Control | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2
21 | Zoll | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC
22 | Physio-Control | Indet | CCC | CCC | Indet | Indet | Indet | Indet | 30:2 | | Indet | Indet
23 | Zoll | Indet | 30:2 | Indet | Indet | Indet | CCC | Indet | CCC | | Indet | 30:2
24 | Philips | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC
25 | Physio-Control | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2
26 | Zoll | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC | CCC
27 | Physio-Control | 30:2 | 30:2 | 30:2 | Indet | Indet | 30:2 | 30:2 | 30:2 | | 30:2 | 30:2
28 | Philips | 30:2 | 30:2 | 30:2 | 30:2 | CCC | 30:2 | 30:2 | 30:2 | | 30:2 | 30:2
29 | Zoll | Indet | 30:2 | 30:2 | Indet | 30:2 | 30:2 | Indet | 30:2 | | 30:2 | 30:2
30 | Zoll | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2 | 30:2

Each chest compression case was classified as CCC, 30:2 chest compressions (30:2), or indeterminate (Indet). Manual (human) classifications were provided by raters from each of eight regional coordinating centers; guidelines for manual classification are listed in Table 1. Unanimous agreement reflects complete agreement among the manual classifications. The criterion standard rating reflects majority agreement among the manual classifications. Automated chest compression pattern classifications were determined by computer analysis of chest compression process files using the rules summarized in Table 2.

CCC = continuous chest compressions.

Table 4.

Inter-rater Agreement of Chest Compression Pattern Classifications

Inter-rater Agreement | Physio-Control (n = 13) | Philips (n = 6) | Zoll (n = 11) | All Cases
Agreement between manual classifications, kappa (95% CI) | 0.61 (0.36–0.78) | 0.59 (0.29–0.91) | 0.64 (0.44–0.83) | 0.62 (0.49–0.74)
Agreement between criterion standard manual classification and automated classification, kappa (95% CI) | 0.88 (0.65–1.00) | 0.70 (0.00–1.00)* | 0.85 (0.60–1.00) | 0.84 (0.59–0.95)

CIs determined by bootstrapping with 100 repetitions.

* The bootstrapped 95% CI is wide because of the small sample size, despite near-complete concordance (only one disagreement among six cases) between criterion standard manual and automated classifications.

The automated algorithm classified 15 chest compression patterns as 30:2, 12 as CCC, and three as indeterminate (Table 3). Classification agreement between automated and criterion standard manual classifications was κ = 0.84 (95% CI = 0.59 to 0.95). There was disagreement in three of 30 cases; the pattern of discordance in all three cases was [criterion standard manual = indeterminate vs. automated = 30:2]. The kappa measure of agreement between automated and criterion standard manual ratings was lowest for Philips CPR process files, despite near-perfect concordance in ratings (there was disagreement in only one of six cases), possibly due to the small sample size (Table 4).

In a sensitivity analysis, when reclassifying all indeterminate ratings as CCC, the kappa measure of inter-rater agreement between manual classifications was 0.69 (95% CI = 0.56 to 0.81). When reclassifying indeterminate ratings as 30:2, kappa was 0.75 (95% CI = 0.59 to 0.86).

DISCUSSION

In this study we observed good inter-rater agreement in the manual classification of CPR chest compression patterns, a finding that sets the foundation for use of CPR process data to characterize chest compression strategies.19 More importantly, we also observed strong agreement between manual and automated classifications, a finding that supports the viability of automated approaches to chest compression pattern identification and categorization.

Our findings have important implications for CPR research. The use of CPR detection technology provides a more rigorous approach to identifying and characterizing chest compressions delivered during clinical care. Prior studies have relied on self-reports to characterize different CPR chest compression strategies. For example, in the comparison of minimally interrupted with traditional 30:2 CPR by Bobrow et al.,7 the type of CPR delivered by EMS personnel was indicated by paramedic reports, without supporting CPR process data. No prior studies have independently confirmed the application of, or adherence to, an intended CPR pattern strategy. The identification of CPR patterns is particularly important in an interventional trial (such as the ROC CCC Trial), where it is essential to determine both the intended and the actual CPR treatment received by a subject.

While inter-rater agreement was good, the raters did not show unanimous agreement in 15 cases; in the majority of these cases, the disagreement pattern was either [CCC vs. indeterminate] or [30:2 vs. indeterminate], and in many instances there was only one dissenting rating. Furthermore, the majority of raters classified the CPR pattern as “indeterminate” in 20% of cases (six of 30); raters indicated that in most of these cases the intended chest compression patterns were simply not discernible. These observations are not unexpected given the complexity of the CPR process images and the natural chest compression variations that may occur from personnel changes, efforts to move the patient, manipulations of the airway, or rescuer fatigue.4 The inclusion of an “indeterminate” chest compression category in this analysis was both necessary and appropriate, as we expected natural variation in CPR delivery. While we provided structured guidelines for chest compression pattern classification, the reviewers affirmed that their exact interpretations likely varied.

Perhaps the most important finding of this study is the agreement of automated with manual chest compression classifications. Manual CPR pattern review and classification is time-intensive and arduous; raters in this study reported spending between 5 and 60 minutes to assess and assign a rating to each case. An automated approach may lend efficiency to this process, bringing clear benefits to scientific and clinical applications. While our automated analysis was based on information that was manually extracted (the time and duration of each CPR segment), we surmise that computer analysis could also automate the latter task. We suspect that the feasibility of computer classification analysis likely depends on the complexity of a given chest compression strategy, and thus independent validation would be appropriate prior to application to a new CPR strategy.

In post hoc analyses, we found that agreement between manual and automated classifications was lower for the Philips CPR process files, an observation that was likely due to the small number of Philips CPR files in the series. However, the raters did comment on graphical distinctions between the CPR process reports that may have influenced chest compression pattern classifications. For example, the Physio-Control device uses bar graphs to depict each discrete compression, while the Zoll and Philips devices use line graphs depicting the vertical displacement of the chest compression sensor (Figure 1 and Data Supplement S1). The time resolution of the CPR reports also varies, with each line of the Physio-Control and Zoll graphical output spanning approximately 15 seconds and each line of the Philips output spanning 2 minutes. Unlike the Philips software, the Physio-Control and Zoll software also offer per-minute summaries of chest compression metrics. Additional study must identify the graphical characteristics most conducive to CPR pattern analysis.

Our study also highlights unexplored strategies for characterizing chest compressions. Prior efforts have focused on individual dimensions of CPR process such as the number and duration of chest compression interruptions or chest compression depth or rate.2-5,20,21 In the automated algorithm, we classified chest compression patterns using combinations of chest compression fraction, segment length, and number of interruptions. There may be other individual metrics, or combinations of metrics, that better characterize the overall pattern of chest compressions and correlate with clinical outcomes. These measures might also improve our understanding of the physiologic mechanisms linking CPR strategy to improved outcomes. Additional study must explore these important unanswered questions.

LIMITATIONS

We used a convenience sample of CPR process images selected by study team consensus rather than by random sampling by site or cardiac monitor manufacturer, and thus selection bias may have influenced the observed results. To minimize subject burden, we did not test intra-rater agreement, which would have required repeat ratings of an additional 10 to 15 chest compression cases. We did not evaluate inter-rater agreement of other chest compression metrics such as compression depth, rate, or chest compression fraction. As suggested in the sensitivity analysis, a smaller number of indeterminate cases would have increased the observed kappa. We did not use audio recordings of the encounters.

In the evaluation of the automated chest compression algorithm, we designated criterion standard classifications using the ratings assigned by the majority of manual raters, which may have resulted in misclassification in a small number of cases. Given the complexity of classifying chest compression patterns, we opted not to designate the criterion standard by study team consensus. Although we observed good point estimates of inter-rater agreement, the bootstrapped CIs indicate that the true agreement may be lower than these estimates.

Given the absence of prior data, we developed definitions for manual and automated classifications based on study team consensus. The development of different metrics and approaches to chest compression pattern classification is possible, but was outside the scope of this analysis. Additional study must explore and identify additional strategies for chest compression pattern classification.

This analysis used chest compression data from out-of-hospital cardiac arrests treated by expert EMS agencies specifically trained in the ROC research protocols. The clarity of chest compression patterns may differ with other EMS agencies or practitioners. Also, the raters in this study possessed extensive experience with CPR process analysis. We evaluated classification agreement for two specific treatment algorithms; the potential inter-rater agreement for alternate chest compression strategies is unclear.

CONCLUSIONS

In this study human raters showed good inter-rater agreement in the classification of cardiopulmonary resuscitation chest compression patterns. Automated classification showed strong agreement with human ratings. These observations support the consistency of manual cardiopulmonary resuscitation pattern classification as well as the use of automated approaches to chest compression pattern analysis.

Supplementary Material

Supp DataS1
Supp DataS2
Supp DataS3

Acknowledgments

The Resuscitation Outcomes Consortium (ROC) is supported by a series of cooperative agreements to nine regional clinical centers and one data coordinating center (5U01 HL077863–University of Washington Data Coordinating Center, HL077866–Medical College of Wisconsin, HL077867–University of Washington, HL077871–University of Pittsburgh, HL077872–St. Michael’s Hospital, HL077873–Oregon Health and Science University, HL077881–University of Alabama at Birmingham, HL077885–Ottawa Health Research Institute, HL077887–University of Texas SW Medical Center/Dallas, HL077908–University of California San Diego) from the National Heart, Lung and Blood Institute in partnership with the National Institute of Neurological Disorders and Stroke, U.S. Army Medical Research & Materiel Command, the Canadian Institutes of Health Research (CIHR)–Institute of Circulatory and Respiratory Health, Defence Research and Development Canada, the Heart and Stroke Foundation of Canada, and the American Heart Association. The authors at these centers received salary support from these grants. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Heart, Lung and Blood Institute or the National Institutes of Health.

JPD received grant support from the Agency for Healthcare Research and Quality Grant T32 HS13852-11.

Footnotes

Please note: Wiley Periodicals Inc. is not responsible for the content or functionality of any supporting information supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article.

Supporting Information:

The following supporting information is available in the online version of this paper:

Data Supplement S1. Sample chest compression process files.

Data Supplement S2. Example of identification of CPR chest compression segments.

Data Supplement S3. Rationale for automated chest compression classification criteria.

The documents are in PDF format.

References

1. Abella BS, Alvarado JP, Myklebust H, et al. Quality of cardiopulmonary resuscitation during in-hospital cardiac arrest. JAMA. 2005;293:305–10. doi: 10.1001/jama.293.3.305.
2. Christenson J, Andrusiek D, Everson-Stewart S, et al. Chest compression fraction determines survival in patients with out-of-hospital ventricular fibrillation. Circulation. 2009;120:1241–7. doi: 10.1161/CIRCULATIONAHA.109.852202.
3. Vaillancourt C, Everson-Stewart S, Christenson J, et al. The impact of increased chest compression fraction on return of spontaneous circulation for out-of-hospital cardiac arrest patients not in ventricular fibrillation. Resuscitation. 2011;82:1501–7. doi: 10.1016/j.resuscitation.2011.07.011.
4. Wang HE, Simeone SJ, Weaver MD, Callaway CW. Interruptions in cardiopulmonary resuscitation from paramedic endotracheal intubation. Ann Emerg Med. 2009;54:645–52. doi: 10.1016/j.annemergmed.2009.05.024.
5. Cheskes S, Schmicker RH, Christenson J, et al. Perishock pause: an independent predictor of survival from out-of-hospital shockable cardiac arrest. Circulation. 2011;124:58–66. doi: 10.1161/CIRCULATIONAHA.110.010736.
6. Wik L, Kramer-Johansen J, Myklebust H, et al. Quality of cardiopulmonary resuscitation during out-of-hospital cardiac arrest. JAMA. 2005;293:299–304. doi: 10.1001/jama.293.3.299.
7. Bobrow BJ, Clark LL, Ewy GA, et al. Minimally interrupted cardiac resuscitation by emergency medical services for out-of-hospital cardiac arrest. JAMA. 2008;299:1158–65. doi: 10.1001/jama.299.10.1158.
8. Kellum MJ, Kennedy KW, Ewy GA. Cardiocerebral resuscitation improves survival of patients with out-of-hospital cardiac arrest. Am J Med. 2006;119:335–40. doi: 10.1016/j.amjmed.2005.11.014.
9. Ewy GA, Kern KB, Sanders AB, et al. Cardiocerebral resuscitation for cardiac arrest. Am J Med. 2006;119:6–9. doi: 10.1016/j.amjmed.2005.06.067.
10. Newgard CD, Sears GK, Rea TD, et al. The Resuscitation Outcomes Consortium epistry-trauma: design, development, and implementation of a North American epidemiologic prehospital trauma registry. Resuscitation. 2008;78:170–8. doi: 10.1016/j.resuscitation.2008.01.029.
11. Morrison LJ, Nichol G, Rea TD, et al. Rationale, development and implementation of the Resuscitation Outcomes Consortium epistry-cardiac arrest. Resuscitation. 2008;78:161–9. doi: 10.1016/j.resuscitation.2008.02.020.
12. Davis DP, Garberson LA, Andrusiek DL, et al. A descriptive analysis of emergency medical service systems participating in the Resuscitation Outcomes Consortium (ROC) network. Prehosp Emerg Care. 2007;11:369–82. doi: 10.1080/10903120701537147.
13. Aufderheide TP, Nichol G, Rea TD, et al. A trial of an impedance threshold device in out-of-hospital cardiac arrest. N Engl J Med. 2011;365:798–806. doi: 10.1056/NEJMoa1010821.
14. Stiell IG, Nichol G, Leroux BG, et al. Early versus later rhythm analysis in patients with out-of-hospital cardiac arrest. N Engl J Med. 2011;365:787–97. doi: 10.1056/NEJMoa1010076.
15. Ayala U, Eftestol T, Alonso E, et al. Automatic detection of chest compressions for the assessment of CPR-quality parameters. Resuscitation. 2014;85:957–63. doi: 10.1016/j.resuscitation.2014.04.007.
16. Donner A, Rotondi MA. Sample size requirements for interval estimation of the kappa statistic for interobserver agreement studies with a binary outcome and multiple raters. Int J Biostat. 2010;6:Article 31. doi: 10.2202/1557-4679.1275.
17. Rotondi MA, Donner A. A confidence interval approach to sample size estimation for interobserver agreement studies with multiple raters and outcomes. J Clin Epidemiol. 2012;65:778–84. doi: 10.1016/j.jclinepi.2011.10.019.
18. Fleiss JL. Measuring nominal scale agreement among many raters. Psychol Bull. 1971;76:378–82.
19. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159–74.
20. Idris AH, Guffey D, Aufderheide TP, et al. Relationship between chest compression rates and outcomes from cardiac arrest. Circulation. 2012;125:3004–12. doi: 10.1161/CIRCULATIONAHA.111.059535.
21. Stiell IG, Brown SP, Christenson J, et al. What is the role of chest compression depth during out-of-hospital cardiac arrest resuscitation? Crit Care Med. 2012;40:1192–8. doi: 10.1097/CCM.0b013e31823bc8bb.
