Resuscitation Plus. 2024 Nov 5;20:100825. doi: 10.1016/j.resplu.2024.100825

Feasibility of real-time compression frequency and compression depth assessment in CPR using a “machine-learning” artificial intelligence tool

Hannes Ecker a,b, Niels-Benjamin Adams a,b, Michael Schmitz c, Wolfgang A Wetsch a,b
PMCID: PMC11570746  PMID: 39559731

Abstract

Background

Video-assisted cardiopulmonary resuscitation (V-CPR) has been demonstrated to improve CPR quality and patient outcomes, as Emergency Medical Service (EMS) dispatchers can use a caller's video stream for diagnostic purposes and give instructions in a CPR scenario. However, the new challenges EMS dispatchers face during V-CPR, such as analyzing the video stream, providing feedback to the caller, and managing stress, demand innovative solutions. This study explores the feasibility of incorporating an open-source “machine-learning” (artificial intelligence, AI) tool and evaluates its accuracy in detecting the actual compression frequency and compression depth in video footage of a simulated CPR.

Design

MediaPipe Pose Landmark Detection (Google LLC, Mountain View, CA, USA), an open-source AI software using “machine-learning” models to detect human bodies in images and videos, was programmed to assess compression frequency and depth in nine videos showing CPR on a resuscitation manikin. Compression frequency and depth were assessed compression by compression with the AI software and compared to the manikin's internal software (QCPR, Laerdal, Stavanger, Norway). After testing for Gaussian distribution, means of non-Gaussian data were compared using the Wilcoxon matched-pairs signed rank test and the Bland-Altman method.

Main results

MediaPipe Pose Landmark Detection successfully identified and tracked the person performing CPR in all nine video sequences. There was a high level of agreement between the compression frequencies derived from the AI and the manikin's software. Compression depth, however, was measured with major inaccuracies and was overall not reliable.

Conclusions

This feasibility study demonstrates the potential of open-source “machine-learning” tools in providing real-time feedback on V-CPR video sequences. In this pilot study, an open-source landmark detection AI software was able to assess CPR compression frequency with high agreement to actual frequency derived from the CPR manikin. For compression depth, its performance was not accurate, suggesting the need for adjustment. Since the software used is currently not intended for medical use, further development is necessary before the technology can be evaluated in real CPR.

Keywords: Cardiopulmonary resuscitation, CPR, Video-assisted cardiopulmonary resuscitation, V-CPR, Telephone assisted cardiopulmonary resuscitation, T-CPR, Artificial Intelligence, AI, Machine Learning, ML, EMS Dispatching, Pose-tracking, Body-tracking

Introduction

Video-assisted cardiopulmonary resuscitation (V-CPR) is an advanced form of the highly effective telephone-assisted CPR (T-CPR), wherein the Emergency Medical Service (EMS) dispatcher sees a live video stream of the actual CPR efforts.1, 2 The dispatcher can use the additional information from the video stream for diagnostic purposes and to instruct laypersons in CPR. V-CPR has been shown to enhance CPR quality in several simulation studies.3, 4, 5, 6, 7 Moreover, retrospective observational trials indicate improvements in survival and outcome of real cardiac arrest victims in the field.8, 9, 10 However, V-CPR can be a stressful situation for dispatchers: besides allocating EMS units, they are burdened with analyzing the video stream and instructing the caller accordingly, while being confronted with sometimes psychologically troubling images, tasks they were initially not trained for.11, 12

The emergence of neural networks based on “machine-learning” will potentially have a fundamental impact on our society, including applications in the medical field. More and more “machine-learning” tools are being developed or already applied to help physicians and medical staff in their work. We hypothesized that video-assisted CPR as used by EMS could potentially benefit from this. Therefore, we adapted an open-source “machine-learning” tool to evaluate certain aspects of CPR, compression frequency and depth, in a video sequence. This could alleviate the burden on dispatchers and potentially improve CPR quality in the future.

Methods

Ethics approval

This study is a technical feasibility trial without participants. Therefore, ethical approval was waived.

Machine learning software

For this study, we used an open-source “machine-learning” based software model called MediaPipe Pose Landmark Detection (Version 0.4; Alphabet Inc., Mountain View, California, USA). The software is customizable and allows landmark detection of a body pose, using one “machine-learning” model to detect the presence of human bodies within an image frame and a second “machine-learning” model to locate landmarks on the bodies. It tracks thirty-three body landmark locations, representing the approximate positions of the body parts, and creates a skeletal representation of the human body.

For this study, the MediaPipe Pose Landmark Detection solution was incorporated into a custom web-based software by NEOANALOG (Schmitz & Rabe GbR, Cologne, Germany) to identify a person providing CPR and give real-time feedback by analyzing the body-tracking data (Fig. 1 and Fig. 2). By tracking body movement, focusing on changes in hand and shoulder position, the software was able to generate values for chest compression frequency (/min), chest compression depth (cm) and the angles of the upper and lower arms. The software was customized to run as a browser application in Google Chrome (Alphabet Inc., Mountain View, California, USA) on a MacBook Pro running macOS Monterey 12.7.1 (Apple, Cupertino, California, USA). The values were compared to a given target range in order to give visual feedback on the quality of the CPR measurements in real time. It must be noted that the MediaPipe software itself is still in an early release preview phase and that Google distributes it as a consumer tool. As it is not approved for medical purposes, this study was designed only as a proof-of-principle study.
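The frequency estimation described above can be illustrated with a minimal, self-contained sketch. The actual NEOANALOG implementation is not public, so everything below is an assumption: we suppose the pose tracker emits one normalized vertical wrist coordinate per video frame (here a synthetic 60-second signal), detect each compression as a local extremum of that coordinate, and convert the mean interval between compressions into a per-minute rate.

```python
import math

def compression_events(wrist_y):
    """Detect compressions as local maxima of the wrist y-coordinate.
    In image coordinates y grows downward, so the deepest point of a
    compression is a local maximum of y. Returns frame indices."""
    events = []
    for i in range(1, len(wrist_y) - 1):
        if wrist_y[i] > wrist_y[i - 1] and wrist_y[i] >= wrist_y[i + 1]:
            events.append(i)
    return events

def mean_frequency_per_min(events, fps):
    """Mean compression frequency (/min) from successive event intervals."""
    if len(events) < 2:
        return 0.0
    intervals = [(b - a) / fps for a, b in zip(events, events[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

# Synthetic 60 s clip at 30 FPS: 110 compressions/min, with the depth
# mapped onto a normalized image coordinate (hypothetical scaling).
fps, rate_hz = 30.0, 110.0 / 60.0
wrist_y = [0.5 + 0.05 * math.sin(2 * math.pi * rate_hz * t / fps)
           for t in range(int(60 * fps))]
events = compression_events(wrist_y)
print(round(mean_frequency_per_min(events, fps), 1))  # ≈ 110
```

Estimating depth the same way would additionally require a pixel-to-centimetre calibration (e.g. from a landmark of known physical size), which is exactly where the study observed the largest inaccuracies.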

Fig. 1. Browser application with the customized CPR detection working on the MediaPipe Solution “machine-learning” algorithm.

Fig. 2. Browser application with the customized CPR detection working on the MediaPipe Solution “machine-learning” algorithm.

Experimental setup

Nine archival videos showing compression-only CPR performed by the same member of the study group (30-year-old male with average physique) on a life-like manikin, taken from a prior study, were used. The videos were recorded at a resolution of 1920 × 1080 pixels and a frame rate of 30 FPS with a smartphone (iPhone 12 mini, Apple Inc., Cupertino, CA, USA). The phone's camera was positioned at a height of 140 cm and a distance of 200 cm from the center of the manikin (Resusci Anne Advanced Skill Trainer, Laerdal, Stavanger, Norway), facing the person performing CPR. All videos were 60 s long, and each showed a different combination of compression depth and frequency, as shown in Table 1. Compression frequency and compression depth were recorded and logged compression by compression using the manikin's internal software, which served as reference.

Table 1.

CPR combinations and corresponding values for compression frequency and depth.

CPR Combination Compression frequency (/min) Compression depth (mm)
1 110 50
2 110 35
3 110 70
4 70 50
5 70 35
6 70 70
7 140 50
8 140 35
9 140 70

Experimental procedure

The nine video files were uploaded into the experimental AI software one at a time. A comma-separated values (CSV) file was then generated and exported for each CPR simulation, containing the registered compression frequency and the corresponding compression depth. This log was compared to an extracted log of the CPR manikin, which registered every compression of its coil. As both the “machine-learning” software and the manikin log registered every compression, the two logs could be matched starting from the first compression for comparison.
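The pairing step above can be sketched in a few lines. The record layout is hypothetical (the actual CSV columns are not specified in the text): each log is assumed to hold one (frequency, depth) tuple per detected compression, and after the first common compression is identified manually, the remaining records are paired index by index.

```python
# Hypothetical per-compression logs: (frequency /min, depth mm) per record.
ai_log = [(112, 28), (109, 31), (111, 30), (110, 29)]
manikin_log = [(110, 50), (111, 52), (110, 51), (109, 50), (110, 51)]

def pair_from_first_compression(ai, manikin, ai_start=0, manikin_start=0):
    """Pair compressions after aligning both logs at their first matched
    compression (indices found manually); trailing unmatched records of
    the longer log are dropped by zip."""
    return list(zip(ai[ai_start:], manikin[manikin_start:]))

pairs = pair_from_first_compression(ai_log, manikin_log)
print(len(pairs))  # 4: the shorter log bounds the comparison
```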

Statistical analysis was performed using GraphPad Prism (10.3.1). Data sets were tested for Gaussian distribution and then analyzed with the Wilcoxon matched-pairs signed rank test for data without Gaussian distribution. p < 0.05 was considered statistically significant. In addition, data sets were compared using the Bland-Altman method to quantify the extent of agreement between the two quantitative measurements.
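The Bland-Altman quantities used in this comparison (bias, standard deviation of the differences, and the ± 1.96 SD limits of agreement) reduce to a short calculation. The study used GraphPad Prism; the sketch below is an illustrative stdlib-only reimplementation with made-up paired depth values mimicking the systematic underestimation reported later.

```python
import statistics

def bland_altman(a, b):
    """Mean difference (bias), SD of differences, and 95% limits of
    agreement (bias ± 1.96 SD) for two paired measurement series."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)

# Made-up paired compression depths (mm): manikin log vs. AI tool.
manikin = [50, 52, 51, 49, 50, 53, 48, 51]
ai      = [36, 38, 35, 33, 37, 40, 31, 36]
bias, sd, (lo, hi) = bland_altman(manikin, ai)
print(round(bias, 2), round(sd, 2))  # bias ≈ 14.75 mm
```

A point outside (lo, hi) would be flagged as exceeding the expected disagreement; a bias that grows with the average of the two readings indicates the proportional bias discussed in the results.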

Results

Landmark detection of the AI software was able to identify and analyze the person performing CPR and their posture and movements in all nine video sequences. This way, n = 9 data sets were available for comparison. Table 2 gives an overview of compression frequency and depth with corresponding mean values recorded by manikin and AI tool.

Table 2.

Overview of Compression Frequency and Compression Depth with corresponding Mean Values Recorded by the Manikin and AI-Tool, including p-Values (Wilcoxon matched-pairs signed rank test).

CPR performed  Freq. mean LOG / AI  Freq. SD LOG / AI  p (freq.)  Depth mean LOG / AI (mm)  Depth SD LOG / AI  p (depth)
70/min − 35 mm  70.33 / 71.08  5.521 / 8.761  0.0032  43.13 / 24.94  2.686 / 6.253  0.0606
70/min − 50 mm  74.77 / 65.00  12.81 / 0.000  0.0002  56.84 / 38.08  1.954 / 6.521  0.1953
70/min − 70 mm  70.38 / 76.03  14.73 / 1.997  0.0002  65.00 / 51.52  0.000 / 14.58  not applicable
110/min − 35 mm  110.3 / 58.31  6.548 / 2.282  0.0259  34.17 / 14.48  1.997 / 4.446  0.0297
110/min − 50 mm  110.0 / 109.9  1.674 / 7.792  0.0164  58.31 / 36.53  2.282 / 6.430  <0.0001
110/min − 70 mm  110.0 / 110.8  1.704 / 5.720  0.0235  65.00 / 35.50  0.000 / 8.905  not applicable
140/min − 35 mm  139.9 / 140.4  4.089 / 10.55  0.0268  37.95 / 11.19  2.183 / 5.528  0.1931
140/min − 50 mm  140.3 / 140.4  4.092 / 11.51  0.0192  59.34 / 28.84  1.689 / 5.021  0.4583
140/min − 70 mm  139.6 / 140.5  6.431 / 13.48  0.0004  64.68 / 22.31  4.098 / 10.92  not applicable

Statistical analysis revealed a significant correlation between the compression frequency recorded by the AI tool and the manikin-registered data for all compression frequencies performed, regardless of compression depth (p < 0.05 each) (Fig. 3).

Fig. 3. Scatter plots visualizing the accuracy of the compression frequency detection for the AI tool (AI, y-axis) and the manikin log (LOG, x-axis) (Wilcoxon matched-pairs signed rank test).

Comparison of the compression depth showed significant correlation only for compression frequencies of 110/min with a compression depth of 35 and 50 mm (p = 0.0297 and < 0.0001). All other compression depths were not accurately determined by the AI tool, which tended to underestimate the compression depth compared to the manikin (p = n.s.). We did not apply statistical testing to the compression depth of 70 mm, as the manikin's maximum compression depth was limited to 65 mm owing to technical limitations of its compression coil. Thus, all manikin-recorded values for the targeted compression depth of 70 mm were identical, making statistical analysis futile for these depths (Fig. 4).

Fig. 4. Scatter plots visualizing the accuracy of the compression depth detection for the AI tool (AI, y-axis) and the manikin log (LOG, x-axis) (Wilcoxon matched-pairs signed rank test). Note: scatter plots for the targeted compression depth of 70 mm were not created because of technical limitations of the manikin; see the results section for more information.

We furthermore performed a Bland-Altman Analysis to compare the agreement of the two methods (Fig. 5, Fig. 6).

Fig. 5. Bland-Altman scatter plots visualizing the accuracy of the compression frequency detection for the AI tool.

Fig. 6. Bland-Altman scatter plots visualizing the accuracy of the compression depth detection for the AI tool. Note: scatter plots for the targeted compression depth of 70 mm were not created because of technical limitations of the manikin; see the results section for more information.

Concerning compression frequency, the Bland-Altman method showed that the mean difference between the manikin and AI-based measurements was relatively small, indicating that both methods yield similar values on average. The standard deviation of the differences was moderate, indicating some variability between the two methods. However, most data points fell within ± 1.96 standard deviations, showing that the differences between the methods were mostly within the expected range. Overall, good agreement between the two measurement methods was found for compression frequency.

Even though the standard deviation of the differences for compression depth was also moderate and only few data points fell outside ± 1.96 standard deviations, the Bland-Altman scatter plots show that the AI-based measurements tended to be lower than the manikin's. There appears to be a proportional bias, as higher compression depths were proportionally more underestimated than lower ones. For compressions with a target of 70 mm, Bland-Altman analysis could not be used, since the manikin's maximum registered compression depth was 65 mm.

Discussion

In this pilot study, we used an adapted version of a free “machine-learning” tool for real-time assessment of CPR compression frequency and depth. Customized for this purpose, the software was able to identify the helper's body posture and, from the resulting motion data, calculate compression frequency and depth.

Artificial intelligence is an emerging technological revolution with the potential to change many aspects of our lives, including medical care. Appropriate application of AI may help to reduce the workload of medical professionals, as well as possibly improve patient outcomes.

To our knowledge, no studies have applied AI to the assessment of video-streamed CPR as it would be used by an EMS dispatcher. There is one study by Birkun et al., which used a large-language model (LLM) to operate as an automated dispatcher assistant for OHCA witnesses, facilitating the recognition of cardiac arrest and giving real-time instructions on CPR.13 However, this modified chatbot reacted only to voice input and could not assess a video picture.

Three simulation studies used a preinstalled application on a smartphone, with the camera activated, placed on the floor next to the CPR manikin. Using a complex algorithm to detect changes either in the pixel density of the grey-scale image or in its spectral density, these trials were able to give live feedback on the compression frequency to the person performing CPR.14, 15, 16

In addition, several simulation studies used motion capturing based on Kinect sensors (Microsoft) either to capture CPR data such as compression frequency, or to give instructions for training purposes.17, 18, 19, 20, 21 The Kinect system is, however, a different kind of technology compared to the “machine-learning” tool: it was primarily designed for gaming and uses infrared light, special camera lenses and additional light sensors to gather its data. In contrast, the AI used in our trial is independent of additional sensor equipment and can theoretically be used on any video format. It should also be noted that Microsoft halted production of the Kinect versions used in those studies in 2017.

Body posture detection was also used in a 2023 study with the software OpenPose, which likewise detects body posture via a neural network.25 However, OpenPose normally needs two synchronized cameras to operate and was used in that study only to detect the arm angle of the person performing CPR; neither compression frequency nor depth was extrapolated.

A very recent study used the large language model (LLM) GPT-4o to score videos of CPR exams for medical students and compared the scores to those of experts.24 One of the items in their score was “chest compressions”, which includes items for compression depth and frequency. This approach, although very interesting, does not give exact values for frequency and depth; it only determines whether a compression was correct. Moreover, it does not allow for horizontal comparative analysis, meaning it does not give a real-time assessment the way body-posture detection does.

We observed that the AI-based workflow detected the compression frequency surprisingly accurately in all CPR combinations investigated in this trial. The compression depth detection, however, was very imprecise and overall crude. The AI was not able to consistently detect the exact compression depth, regardless of which CPR condition was shown, probably because of inaccuracy in the pose tracking. The Bland-Altman scatter plot suggests a proportional bias for compression depth, which is present when the difference between the values from two methods increases or decreases in proportion to their average. We suspect a calibration error of the AI-based workflow, which could explain these discrepancies. Nevertheless, this finding should not be seen as discouraging, as any neural network can (and should) be specifically trained on a subject, such as thorax compression, when it is programmed to do so. We used an open-source AI that was not designed for this purpose, and it still delivered acceptable results in certain aspects. We therefore suggest that a customized, “tailor-made” AI for CPR video assessment could perform more reliably.

Limitations

This was a pilot trial testing the AI under “optimal” conditions: standardized videos performed by only one subject to test its potential. However, some limitations must be pointed out:

We tested the AI in only one camera perspective, directly facing the helper performing CPR. Camera perspectives filmed from other positions toward the manikin were not tested, as we encountered obstacles in a pretest to this study, where the AI's body posture detection, which needs a line of sight to the helper's eye line and arms, could not be established. The AI was also not challenged by additional persons in the frame, which could potentially interfere with its analysis. Moreover, the analyzed videos were recorded with a fixed camera on a tripod; camera movement when filming handheld could also interfere with the analysis.22, 23

The data sets from the manikin and the AI-based software were matched by manually identifying the first registered compression of each set. This is a potential source of inaccuracy.

Moreover, this study uses only a very small sample size (pilot study), which limits its statistical results and its generalizability. Although Bland-Altman analysis is a commonly used statistical method to test the agreement of two measurement techniques, a small sample size such as ours can potentially lead to unreliable limits of agreement.

Conclusion

In conclusion, this feasibility study showed that an open-source “machine-learning” tool can give real-time feedback on a video sequence of CPR. The AI delivered accurate data on compression frequency, but no consistently accurate data on compression depth. Despite these promising results, future software must be designed more specifically for this purpose and must be approved for medical use before being tested in real patients.

Funding

This study was supported by institutional resources only.

CRediT authorship contribution statement

Hannes Ecker: Writing – review & editing, Writing – original draft, Methodology, Formal analysis, Conceptualization. Niels-Benjamin Adams: . Michael Schmitz: Writing – review & editing, Visualization, Software, Methodology, Data curation, Conceptualization. Wolfgang A. Wetsch: Writing – review & editing, Writing – original draft, Project administration, Methodology, Formal analysis, Data curation, Conceptualization.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

We used the DeepL “Write” language correction mode to check for spelling, grammatical and other language errors. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.

Author contributions

All authors contributed to the study conception. The first protocol was designed by Hannes Ecker and Wolfgang Wetsch. The MediaPipe Pose Landmark Detection software is an open source application by Google (Alphabet Inc., Mountain View, California, USA). Michael Schmitz applied it for this purpose. Material preparation, data collection and analysis were performed by Hannes Ecker, Michael Schmitz, Niels Adams and Wolfgang Wetsch. Statistical Analysis was done by Wolfgang Wetsch. The first draft of the manuscript was written by Hannes Ecker and Wolfgang Wetsch. All authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Ethics approval

Since this was a completely technical simulation with no participants, no ethical approval was required.

References

1. Berg K.M., Cheng A., Panchal A.R., et al. Part 7: systems of care: 2020 American Heart Association guidelines for cardiopulmonary resuscitation and emergency cardiovascular care. Circulation. 2020;142(16_suppl_2):S580–S604. doi: 10.1161/CIR.0000000000000899.
2. Lin Y.Y., Chiang W.C., Hsieh M.J., Sun J.T., Chang Y.C., Ma M.H. Quality of audio-assisted versus video-assisted dispatcher-instructed bystander cardiopulmonary resuscitation: A systematic review and meta-analysis. Resuscitation. 2018;123:77–85. doi: 10.1016/j.resuscitation.2017.12.010.
3. Bolle S.R., Johnsen E., Gilbert M. Video calls for dispatcher-assisted cardiopulmonary resuscitation can improve the confidence of lay rescuers–surveys after simulated cardiac arrest. J Telemed Telecare. 2011;17:88–92. doi: 10.1258/jtt.2010.100605.
4. Bolle S.R., Scholl J., Gilbert M. Can video mobile phones improve CPR quality when used for dispatcher assistance during simulated cardiac arrest? Acta Anaesthesiol Scand. 2009;53:116–120. doi: 10.1111/j.1399-6576.2008.01779.x.
5. Stipulante S., Delfosse A.S., Donneau A.F., et al. Interactive videoconferencing versus audio telephone calls for dispatcher-assisted cardiopulmonary resuscitation using the ALERT algorithm: a randomized trial. Eur J Emerg Med. 2016;23:418–424. doi: 10.1097/MEJ.0000000000000338.
6. Ecker H., Lindacher F., Adams N., et al. Video-assisted cardiopulmonary resuscitation via smartphone improves quality of resuscitation: A randomised controlled simulation trial. Eur J Anaesthesiol. 2020;37:294–302. doi: 10.1097/EJA.0000000000001177.
7. Ecker H., Wingen S., Hamacher S., Lindacher F., Bottiger B.W., Wetsch W.A. Evaluation of CPR quality via smartphone with a video livestream - a study in a metropolitan area. Prehosp Emerg Care. 2021;25:76–81. doi: 10.1080/10903127.2020.1734122.
8. Lee H.S., You K., Jeon J.P., Kim C., Kim S. The effect of video-instructed versus audio-instructed dispatcher-assisted cardiopulmonary resuscitation on patient outcomes following out of hospital cardiac arrest in Seoul. Sci Rep. 2021;11:15555. doi: 10.1038/s41598-021-95077-5.
9. Lee S.Y., Song K.J., Shin S.D., Hong K.J., Kim T.H. Comparison of the effects of audio-instructed and video-instructed dispatcher-assisted cardiopulmonary resuscitation on resuscitation outcomes after out-of-hospital cardiac arrest. Resuscitation. 2020;147:12–20. doi: 10.1016/j.resuscitation.2019.12.004.
10. Linderoth G., Rosenkrantz O., Lippert F., et al. Live video from bystanders' smartphones to improve cardiopulmonary resuscitation. Resuscitation. 2021;168:35–43. doi: 10.1016/j.resuscitation.2021.08.048.
11. Ecker H., Wingen S., Hagemeier A., Plata C., Bottiger B.W., Wetsch W.A. Dispatcher self-assessment and attitude toward video assistance as a new tool in simulated cardiopulmonary resuscitation. West J Emerg Med. 2022;23:229–234. doi: 10.5811/westjem.2021.12.53027.
12. Johnsen E., Bolle S.R. To see or not to see–better dispatcher-assisted CPR with video-calls? A qualitative study based on simulated trials. Resuscitation. 2008;78:320–326. doi: 10.1016/j.resuscitation.2008.04.024.
13. Birkun A. Performance of an artificial intelligence-based chatbot when acting as EMS dispatcher in a cardiac arrest scenario. Intern Emerg Med. 2023;18(8):2449–2452. doi: 10.1007/s11739-023-03399-1.
14. Engan K., Hinna T., Ryen T., Birkenes T.S., Myklebust H. Chest compression rate measurement from smartphone video. Biomed Eng Online. 2016;15(1):95. doi: 10.1186/s12938-016-0218-6.
15. Frisch A., Das S., Reynolds J.C., De la Torre F., Hodgins J.K., Carlson J.N. Analysis of smartphone video footage classifies chest compression rate during simulated CPR. Am J Emerg Med. 2014;32(9):1136–1138. doi: 10.1016/j.ajem.2014.05.040.
16. Meinich-Bache Ø., Engan K., Birkenes T.S., Myklebust H. Real-time chest compression quality measurements by smartphone camera. J Healthc Eng. 2018;2018:6241856. doi: 10.1155/2018/6241856.
17. Lins C., Eckhoff D., Klausen A., Hellmers S., Hein A., Fudickar S. Cardiopulmonary resuscitation quality parameters from motion capture data using Differential Evolution fitting of sinusoids. Appl Soft Comput. 2019;79:300–309. doi: 10.1016/j.asoc.2019.03.023.
18. Okamoto S., Iwashiki H., Sato N., Karino K. Evaluation of skills in cardiopulmonary resuscitation (CPR) using Microsoft Kinect. Journal of Mechanics Engineering and Automation. 2018;8:264–272. doi: 10.17265/2159-5275/2018.06.004.
19. Di Mitri D., Schneider J., Specht M., Drachsler H. Detecting mistakes in CPR training with multimodal data and neural networks. Sensors (Basel). 2019;19(14):3099. doi: 10.3390/s19143099.
20. Semeraro F., Marchetti L., Frisoli A., Cerchiari E.L., Perkins G.D. Motion detection technology as a tool for cardiopulmonary resuscitation (CPR) quality improvement. Resuscitation. 2012;83(1):e11–e12. doi: 10.1016/j.resuscitation.2011.07.043.
21. Semeraro F., Frisoli A., Loconsole C., et al. Motion detection technology as a tool for cardiopulmonary resuscitation (CPR) quality training: a randomised crossover mannequin pilot study. Resuscitation. 2013;84(4):501–507. doi: 10.1016/j.resuscitation.2012.12.006.
22. Plata C., Nellessen M., Roth R., et al. Impact of video quality when evaluating video-assisted cardiopulmonary resuscitation: a randomized, controlled simulation trial. BMC Emerg Med. 2021;21(1):96. doi: 10.1186/s12873-021-00486-4.
23. Wetsch W.A., Ecker H.M., Scheu A., Roth R., Böttiger B.W., Plata C. Video-assisted cardiopulmonary resuscitation: Does the camera perspective matter? A randomized, controlled simulation trial. J Telemed Telecare. 2021:1357633X211028490. doi: 10.1177/1357633X211028490.
24. Wang L., Mao Y., Wang L., Sun Y., Song J., Zhang Y. Suitability of GPT-4o as an evaluator of cardiopulmonary resuscitation skills examinations. Resuscitation. 2024:110404. doi: 10.1016/j.resuscitation.2024.110404.
25. Weiss K.E., Kolbe M., Nef A., et al. Data-driven resuscitation training using pose estimation. Adv Simul (Lond). 2023;8(1):12. doi: 10.1186/s41077-023-00251-6.
