To the Editor:
We read with great interest the paper by McCoy et al.1 published in the January 2019 issue of your valuable journal. The authors sought to evaluate the comparative effectiveness of high-fidelity simulation training versus standard manikin training for teaching medical students high-quality cardiopulmonary resuscitation (CPR), as defined by the American Heart Association guidelines. They concluded that high-fidelity simulation training was superior to low-fidelity CPR manikin training.
Although the study was conducted carefully and yielded an interesting result, several methodological issues should be considered to improve its application in practice and in future research:
The participants were all fourth-year medical students, but no baseline data on their characteristics were reported. Would this not be important for establishing whether the two groups were comparable?
There were two comparative groups in the study, but the authors used the Kruskal-Wallis rank sum test without any adjustment. The Mann-Whitney U test is the conventional choice for comparing two independent samples when the distribution is non-normal.2
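It is worth noting that for exactly two groups, the Kruskal-Wallis test reduces to the two-sided Mann-Whitney U test under the normal approximation, so the two tests yield the same p-value. A minimal sketch with SciPy illustrates this equivalence; the scores below are hypothetical illustration data, not values from the study:

```python
from scipy import stats

# Hypothetical compression-depth scores (mm) for two training groups
high_fidelity = [52, 54, 49, 55, 51, 53, 50]
standard = [48, 47, 50, 46, 49, 45, 51]

# Kruskal-Wallis on exactly two groups...
h_stat, p_kw = stats.kruskal(high_fidelity, standard)

# ...is mathematically equivalent to the two-sided Mann-Whitney U test
# using the normal approximation without a continuity correction
u_stat, p_mw = stats.mannwhitneyu(high_fidelity, standard,
                                  alternative='two-sided',
                                  use_continuity=False,
                                  method='asymptotic')

print(p_kw, p_mw)  # the two p-values coincide
```

The practical consequence is that the choice between the two tests does not change the inference for a two-group design, though reporting the test that matches the design is clearer for readers.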
It would have been better to cite a relevant reference justifying the sample size calculation. On what basis was an effect size of five millimeters chosen for comparing the two groups?
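The required sample size is sensitive to the assumed effect size, which is why its justification matters. As a minimal sketch, assuming a 5 mm difference in compression depth and a hypothetical within-group standard deviation of 10 mm (a figure chosen here for illustration, not taken from the paper), a standard two-sample power calculation looks like this:

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical assumption: a 5 mm difference with a within-group
# standard deviation of 10 mm gives a standardized effect (Cohen's d) of 0.5
effect_size = 5 / 10

# Sample size per group for 80% power at a two-sided alpha of 0.05
n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=0.05, power=0.8,
                                          alternative='two-sided')

print(round(n_per_group))  # roughly 64 participants per group
```

Halving the assumed standard deviation quadruples the standardized effect's impact on the required n, which is precisely why the basis for the five-millimeter figure deserves an explicit citation.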
We believe that confounding variables such as previous CPR experience, education, or interest in the field may have affected the results.3
The method of data collection is either missing or not clearly described for the reader.
Real-time feedback can increase average physical and mental workload, and CPR quality improves as reported physical workload rises.4 It would therefore have been better to train both groups using this method and then to evaluate the two groups after a time interval.
We thank the authors for reporting the limitations of their study honestly. In one limitation, the authors state that increasing the number of outcome measures increases the potential for a type I error. However, the analysis involved a two-group comparison, not multiple comparisons.5
Footnotes
Section Editor: Soheil Sadaat, MD, MPH, PhD
Full text available through open access at http://escholarship.org/uc/uciem_westjem
Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. No author has professional or financial relationships with any companies that are relevant to this study. There are no conflicts of interest or sources of funding to declare.
REFERENCES
- 1. McCoy CE, Rahman A, Rendon JC, et al. Randomized controlled trial of simulation vs standard training for teaching medical students high-quality cardiopulmonary resuscitation. West J Emerg Med. 2019;20(1):15–22. doi: 10.5811/westjem.2018.11.39040.
- 2. McElduff F, Cortina-Borja M, Chan SK, et al. When t-tests or Wilcoxon-Mann-Whitney tests won't do. Adv Physiol Educ. 2010;34(3):128–33. doi: 10.1152/advan.00017.2010.
- 3. Heidenreich JW, Berg RA, Higdon TA, et al. Rescuer fatigue: standard versus continuous chest-compression cardiopulmonary resuscitation. Acad Emerg Med. 2006;13(10):1020–6. doi: 10.1197/j.aem.2006.06.049.
- 4. Brown LL, Lin Y, Tofil NM, et al. Impact of a CPR feedback device on healthcare provider workload during simulated cardiac arrest. Resuscitation. 2018;130:111–7. doi: 10.1016/j.resuscitation.2018.06.035.
- 5. Feise RJ. Do multiple outcome measures require p-value adjustment? BMC Med Res Methodol. 2002;2:8. doi: 10.1186/1471-2288-2-8.