The Journal of Education in Perioperative Medicine (JEPM). 2002 Jan 1;4(1):E019.

Learning by Computer Simulation Does Not Lead to Better Test Performance on Advanced Cardiac Life Support Than Textbook Study

Jong Hoon Kim, Won Oak Kim, Kyeong Tae Min, Jong Yoon Yang, Yong Taek Nam
PMCID: PMC4865364  PMID: 27390768

Abstract

Background

To support the effective acquisition and practical application of rapidly increasing amounts of information, computer-based learning has been introduced into medical education. However, few studies have compared this innovative method with traditional learning methods in the study of advanced cardiac life support (ACLS).

Methods

Senior medical students were randomized to computer simulation or textbook study. Each group studied ACLS for 150 minutes. Tests were given one week before, immediately after, and one week after the study period. Each test consisted of 20 questions, all formulated to have a single best answer. Each student also completed a questionnaire designed to assess computer skills as well as satisfaction with, and perceived benefit from, the study materials.

Results

Test scores improved after study in both groups, but the improvement was significantly greater in the textbook group only on the test given immediately after the study period. There was no significant difference between groups in computer skill or satisfaction with the study materials. The textbook group reported greater benefit from the study materials than did the computer simulation group.

Conclusions

Studying ACLS with a hard copy textbook may be more effective than computer simulation for the acquisition of simple information during a brief period. However, the difference in effectiveness is likely transient.

Keywords: Cardiopulmonary resuscitation; computer-assisted instruction; education, medical

Introduction

Until recently, printed material was the main medium for acquiring information by self-study. However, many other teaching and learning modalities have been developed in medicine as a consequence of advances in technology. In particular, the development of computer technology has made it possible to create environments that simulate real practice situations. Screen-based simulations have been introduced into medical education in many areas, including cardiopulmonary resuscitation (CPR).1-4

The ACLS Simulator 3.11 (Anesoft Corporation, Issaquah, WA) is a computer simulation program based on the advanced cardiac life support (ACLS) guidelines published by the American Heart Association for the treatment of cardiac dysrhythmias. It provides an environment in which students can learn and practice managing various dysrhythmias without fear of patient morbidity and mortality. On the other hand, compared with the study of printed materials, learning by computer simulation may be hindered by students’ lack of familiarity with computers.

It remains a matter of debate whether computer-based learning is more effective for acquiring knowledge of the ACLS guidelines and putting them to practical use. We therefore designed this randomized, prospective study to compare the effectiveness of learning ACLS through textbook study with that of participation in a computerized ACLS simulation.

Methods

Study Design

Fifty-seven fourth-year medical students beginning their 2-week anesthesiology clerkship participated in this study. Each clerkship session consisted of 9 to 11 students, who were not aware that they would study ACLS during the clerkship. Students who had attended a lecture on, or read a textbook about, the ACLS guidelines before the clerkship were also tested, but their scores were excluded from the analysis.

Pretests (1st test) were performed on day one of the clerkship without any prior notice to the students. All students were asked not to study ACLS during their clerkship rotation but were informed that sufficient time would be provided to study ACLS. One week after the pretest, the students were randomly divided into two groups. The computer simulation (CS) group (n = 29) used the computer program (ACLS Simulator 3.11, Anesoft Corporation, Issaquah, WA). The program contains 28 cases of cardiac dysrhythmias, such as ventricular fibrillation, ventricular tachycardia, supraventricular tachycardia, wide QRS-complex tachycardia, atrial fibrillation, pulseless electrical activity, asystole, and 3rd degree heart block. The students could determine the next best step to take and see whether they were following the correct course of dysrhythmia treatment according to the ACLS guidelines. A textbook containing the ACLS algorithms, a brief explanation of ACLS, and EKG examples of cardiac dysrhythmias was given to the textbook (TB) group (n = 28). Students studied ACLS by either method for 150 minutes. An additional 15 minutes was provided to the CS group for instruction on how to use the computer program. The program includes basic information about the airway, ventilation and cardiovascular system control, defibrillator application, and administration of medication, together with 9 core cases (cases that follow the ACLS algorithm completely from beginning to end); CS group students were instructed to study these cases first.

The 2nd test was administered immediately after the study period. After the 2nd test, students were asked to rate their computer skill, their satisfaction with the study materials, and the benefit they obtained from them. Computer skill was graded as follows: 0 = “I have never used a computer.”; 1 = “A computer is only a game machine.”; 2 = “I use the computer as a typewriter.”; 3 = “I can only handle Windows.”; 4 = “I am familiar with many programs.”; 5 = “I am actually a computer programmer.” Satisfaction with and benefit from the study materials were graded as follows: 0 = “very bad”; 1 = “bad”; 2 = “not bad, not good”; 3 = “good”; 4 = “very good”; 5 = “excellent”. In this context, satisfaction referred to students’ personal preference for the learning method, and benefit referred to students’ assessment of the educational value of the study materials. One week after the 2nd test, the 3rd test was administered without any prior announcement.

Test scores are presented as means ± SD, and questionnaire grades are summarized as mean ranks. The difference in pretest scores between groups was tested with an unpaired t-test, and the differences between the pretest and the 2nd or 3rd tests were tested with paired t-tests corrected for multiple comparisons (Bonferroni’s method). To eliminate the effect of any difference in pretest performance when comparing the groups’ scores after studying ACLS, an analysis of covariance (ANCOVA) was performed with the pretest (1st test) score as a covariate. Linear regression was used to analyze the correlation between the pretest and 2nd test scores, and between the 2nd and 3rd test scores. To examine whether computer skill affected test scores, the test scores in the CS group were stratified by level of self-reported computer skill and compared using an analysis of variance. The differences in questionnaire grades were examined with the Mann-Whitney U test.
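For readers who wish to reproduce this kind of analysis, the sketch below outlines the main comparisons in Python using SciPy and statsmodels. It is a minimal illustration only: the score arrays are randomly generated placeholders matched to the group sizes and summary statistics reported in the Results, not the study’s actual data.

```python
# Minimal sketch of the analysis pipeline described above (SciPy/statsmodels).
# All arrays are hypothetical placeholders, not the study's data.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
cs_pre, cs_2nd = rng.normal(7.3, 2.0, 29), rng.normal(10.3, 2.9, 29)
tb_pre, tb_2nd = rng.normal(8.0, 2.5, 28), rng.normal(12.2, 3.0, 28)

# Unpaired t-test: pretest scores between groups.
t_pre, p_pre = stats.ttest_ind(cs_pre, tb_pre)

# Paired t-test within a group (pretest vs. 2nd test), Bonferroni-corrected
# for the two within-group comparisons (pretest vs. 2nd, pretest vs. 3rd).
_, p_cs = stats.ttest_rel(cs_pre, cs_2nd)
p_cs_adj = min(p_cs * 2, 1.0)

# ANCOVA: 2nd-test score modeled on group membership with pretest as covariate.
df = pd.DataFrame({
    "group": ["CS"] * 29 + ["TB"] * 28,
    "pre": np.concatenate([cs_pre, tb_pre]),
    "second": np.concatenate([cs_2nd, tb_2nd]),
})
ancova = smf.ols("second ~ C(group) + pre", data=df).fit()
print(ancova.f_test("C(group)[T.TB] = 0"))  # group effect adjusted for pretest

# Linear regression / correlation between successive test scores.
reg = stats.linregress(cs_pre, cs_2nd)
print(f"r = {reg.rvalue:.2f}")

# Mann-Whitney U test on ordinal questionnaire grades (hypothetical values).
u, p_mw = stats.mannwhitneyu([3, 4, 3, 2, 4], [2, 3, 2, 3, 1])
```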

Tests

All tests consisted of 20 questions, each formulated to have a single best answer. The case material used in the tests consisted of a brief history, physical signs, and EKG samples taken from textbooks the students had not studied. The questions addressed the diagnosis of dysrhythmias and the proper treatment at each step of the ACLS algorithm. The three tests covered the same content, but the order of the questions differed, and the brief history, physical signs, and EKG sample differed for each question. The content validity of the test was established through expert review: specialists in cardiovascular anesthesia who were not involved in the testing critiqued the questions, and revisions were made accordingly. After completion of all the tests, the Kuder-Richardson reliability of each test was calculated. This statistic indicates whether examinees score roughly equally on different portions of a test, which would signify high internal consistency.
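As an illustration, the Kuder-Richardson formula 20 (KR-20) for dichotomously scored items can be computed as in the sketch below; the response matrix is a small hypothetical example, not the study’s data.

```python
# KR-20 internal-consistency reliability for dichotomously scored (0/1) items.
# `responses` is a hypothetical examinee-by-item matrix, not study data.
import numpy as np

def kr20(responses: np.ndarray) -> float:
    """KR-20 = (k / (k - 1)) * (1 - sum(p * q) / variance of total scores)."""
    k = responses.shape[1]                   # number of items (20 in this study)
    p = responses.mean(axis=0)               # proportion answering each item correctly
    q = 1.0 - p                              # proportion answering incorrectly
    var_total = responses.sum(axis=1).var(ddof=1)  # variance of examinees' total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / var_total)

# Example: 5 hypothetical examinees on a 4-item test.
responses = np.array([
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
])
print(round(kr20(responses), 2))
```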

Results

Kuder-Richardson reliabilities of the tests were 0.28 (pretest), 0.63 (2nd test), and 0.61 (3rd test).

Pretest (1st test) scores were similar in the two groups (CS: 7.3 ± 2.0 vs. TB: 8.0 ± 2.5). Test scores improved immediately after the study period (2nd test) in both groups, with greater improvement in the TB group (CS: 10.3 ± 2.9 vs. TB: 12.2 ± 3.0) (F = 4.51, P < 0.04). After one week (3rd test), scores were lower than on the 2nd test, and there was no difference between the two groups (CS: 9.9 ± 2.5 vs. TB: 11.1 ± 2.4) (F = 2.36, P = 0.13). In both groups, the scores on the 2nd and 3rd tests were significantly higher than the pretest scores (Fig. 1).

Figure 1. Average scores of the textbook (black bars) and computer simulation (white bars) groups. Values are mean ± SD. *: P < 0.04 versus the computer simulation group. †: P < 0.001 versus the pretest scores.

There was no significant difference between groups in the students’ self-reported computer skill or in their satisfaction with the study materials. The TB group reported greater benefit from the study materials than did the CS group (Table 1). Within the CS group, there was no difference in scores when stratified by reported computer skill (Fig. 2).

Table 1. Results of Evaluation Questionnaires (values are mean ranks)

                            Textbook   Simulation   P-Value
Prior computer experience     3.1         2.5        0.11
Educational benefit           3.2         2.4        0.04
Satisfaction                  2.9         2.7        0.78

Figure 2. Distribution of scores in the computer simulation group according to self-reported computer skill; there was no difference across skill levels. 0, “I have never used a computer.”; 1, “A computer is only a game machine.”; 2, “I use the computer as a typewriter.”; 3, “I can only handle Windows.”; 4, “I am familiar with many programs.”; 5, “I am actually a computer programmer.”

The scores on the 3rd test were closely associated with the scores of the 2nd test, whereas the relationship between the scores of the pretest and 2nd test was not strong (Fig. 3).

Figure 3. (a) Scattergram plotting 2nd-test scores against pretest scores (r = 0.27). (b) Scattergram plotting 3rd-test scores against 2nd-test scores (r = 0.55).

Discussion

Studying ACLS by computer simulation is somewhat different from merely reading and understanding printed materials. Although our students studied the same content, the results differed according to the method used. In our study, medical students’ performance on a written multiple-choice test improved immediately after both computer-aided and textbook study, but the improvement was greater for textbook users. This observation was likely not related to preexisting knowledge, as shown by our analysis of covariance. Our results suggest that textbook study offers learners naïve about ACLS a slight advantage in acquiring information about ACLS.

There are several possible reasons why simulation failed to produce superior learning. First, novices may not benefit as much from learning on a simulator as more advanced learners do; simulation training may be better suited to honing more complex skills such as integration of knowledge and dynamic decision making. Second, the main part of the ACLS textbook consists of treatment algorithms, and studying these algorithms by computer simulation may be a more time-consuming and complicated process than reading a textbook. Many students in the TB group showed boredom after 2 hours, perhaps indicating that they had already acquired the knowledge they thought they needed, whereas almost all students in the CS group studied hard throughout the study period, and some complained that the study period was too short. We believe the CS group needed more time because these students probably acquired knowledge by trial and error and learned whether their interventions were appropriate only after the computer responded. If more time had been given, the results might have been different. Furthermore, although additional instruction on handling the simulation program was given to the CS group, some students did not use either the ‘help’ or the ‘what next’ icon, which would have provided assistance and the opportunity for further learning; some used the icons only after a long period of time had passed. Finally, some students in the CS group may have treated the simulation merely as a computer game and concentrated on winning it (normalizing the dysrhythmia) rather than on understanding the processes and implications of the therapeutics.

Although there was significant improvement in performance on the 2nd test, the results of the 3rd test show that performance deteriorated over time, reaching the same level in both the CS and TB groups one week after the learning exposure. The TB group’s score declined more than the CS group’s over that week. We think that learning by CS leaves a stronger impression,5 so the CS group retained more of what they had initially learned than the TB group did. Our study could assess retention of knowledge for only one week; retention should be assessed over longer periods, and no conclusions about longer-term retention of ACLS knowledge can be drawn from our results.

The method of performance assessment may have affected the apparent effectiveness of computer-aided learning. A performance test using a patient mannequin might have yielded different results. In one study, performance on the standardized Mega Code examination 10 to 11 months after training showed that CS improved retention of the ACLS guidelines better than textbook review did.6 This finding suggests that computer-aided study might be better than textbook review at helping students apply the ACLS guidelines in real situations.

All three tests in our study contained the same material, but the internal consistency of the pretest was relatively low while that of the 2nd and 3rd tests was relatively high. Because students who may have had prior knowledge of the ACLS guidelines were excluded, the participants knew little about the material on the pretest and answered many questions by guessing. The lack of correlation between pretest and posttest scores is therefore likely the result of random guessing on the pretest. After studying the ACLS guidelines by CS or TB, however, the students had gained some knowledge of ACLS, as shown by the improvement in both test scores and reliability on the 2nd and 3rd tests.
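A quick simulation illustrates this point: when examinees guess at random, the items behave as independent chance events, so the expected KR-20 is near zero. The sketch below, under assumed conditions (30 hypothetical examinees, 20 questions, five options per question), is illustrative only.

```python
# Simulated random guessing on a 20-item, five-option single-best-answer test.
# Hypothetical data only; with independent guesses, KR-20 is expected near 0.
import numpy as np

rng = np.random.default_rng(1)
guessed = (rng.integers(0, 5, size=(30, 20)) == 0).astype(int)  # P(correct) = 0.2

k = guessed.shape[1]
p = guessed.mean(axis=0)                          # per-item proportion correct
var_total = guessed.sum(axis=1).var(ddof=1)       # variance of total scores
kr20 = (k / (k - 1)) * (1 - (p * (1 - p)).sum() / var_total)
print(f"KR-20 under pure guessing: {kr20:.2f}")   # typically close to 0
```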

Our results are consistent with those of others who used similar methods of evaluation to compare computer-aided learning with traditional textbooks, seminars, or lectures.5,7-9 In those studies, the computer simulations differed somewhat from the ACLS Simulator, and the quality and nature of the textbooks provided to examinees may have been quite different from the ACLS textbook used in our study. The ACLS guidelines, which consist mainly of algorithms, may be easier to understand and remember than traditional textbook material. We think this is one of the reasons the TB group reported greater benefit from the study materials than the CS group.

We conclude that, for novices, written test performance immediately after a short study period is better after study of an ACLS textbook than after participation in a simulation exercise. Retention of ACLS knowledge after one week, however, is likely to be the same with either method. We speculate that an optimal learning strategy may be to acquire knowledge of the ACLS algorithms from a textbook first and then practice with computer simulation.

Appendix A. Textbooks provided to the students of TB group

  • Cardiopulmonary resuscitation. In: Morgan GE, Mikhail MS, eds. Clinical anesthesiology. 2nd ed. Stamford: Appleton and Lange Inc., 1996; 774-792.

  • RD White. Cardiopulmonary resuscitation: basic and advanced cardiac life support. In: RD Miller ed. Anesthesia. 5th ed. Philadelphia: Churchill Livingstone Inc., 2000: 2538-2547.

Appendix B. References of EKG samples used in the tests

  • Zipes DP. Management of cardiac arrhythmias: pharmacological, electrical and surgical techniques. In: Braunwald E, ed. Heart disease: a textbook of cardiovascular medicine. 5th ed. Philadelphia: Saunders Inc., 1997; 593-639.

  • Zipes DP. Specific arrhythmias: diagnosis and treatment. In: Braunwald E, ed. Heart disease: a textbook of cardiovascular medicine. 5th ed. Philadelphia: Saunders Inc., 1997; 640-704.

  • Myerburg RJ, Kessler KM, Castellanos A. Recognition, clinical assessment and management of arrhythmia and conduction disturbance. In: Alexander RW, Schlant RC, Fuster V eds. Hurst’s The heart, arteries and veins. 9th ed. New York: McGraw-Hill Inc., 1998; 873-941.

  • Bigger TJ. Cardiac arrhythmia. In: Bennett JC, Plum F eds. Cecil: Textbook of medicine. 20th ed. Philadelphia: Saunders Inc., 1996; 231-53.

  • Bennett DH, ed. Cardiac arrhythmia. 4th ed. Oxford: Butterworth-Heinemann Inc., 1993; 24 – 119.

References

  1. Attia RR, Miller EV, Kitz RJ. Teaching effectiveness: evaluation of computer-assisted instruction for cardiopulmonary resuscitation. Anesth Analg. 1975;54:308-11. doi: 10.1213/00000539-197505000-00008.
  2. Garfield JM, Paskin S, Philip JH. An evaluation of the effectiveness of a computer simulation of anaesthetic uptake and distribution as a teaching tool. Med Educ. 1989;23:457-62. doi: 10.1111/j.1365-2923.1989.tb00902.x.
  3. Schwid HA, O’Donnell D. The anesthesia simulator-recorder: a device to train and evaluate anesthesiologists’ response to critical incidents. Anesthesiology. 1990;72:191-7.
  4. Andrews PV, Schwarz J, Helme RD. Students can learn medicine with computers. Med J Australia. 1992;157:693-5.
  5. Öhrn MAK, van Oostrom JH, van Meurs WL. A comparison of traditional textbook and interactive computer learning of neuromuscular block. Anesth Analg. 1997;84:657-61. doi: 10.1097/00000539-199703000-00035.
  6. Schwid HA, Rooke GA, Ross BK, Sivarajan M. Use of computerized advanced cardiac life support simulator improves retention of advanced cardiac life support guidelines better than a textbook review. Crit Care Med. 1999;27:821-4. doi: 10.1097/00003246-199904000-00045.
  7. Fincher RE, Abdulla AM, Sridharan MR, et al. Computer-assisted learning compared with weekly seminars for teaching fundamental electrocardiography to junior medical students. Southern Med J. 1988;81:1291-4. doi: 10.1097/00007611-198810000-00020.
  8. Bridges AJ, Reid JC, Cutts JH, et al. AI/LEARN/Rheumatology. A comparative study of computer-assisted instruction for rheumatology. Arthritis Rheum. 1993;36:577-80. doi: 10.1002/art.1780360501.
  9. D’Alessandro DM, Kreiter CD, Erkonen WE, et al. Longitudinal follow-up comparison of educational interventions: multimedia textbook, traditional lecture, and printed textbook. Acad Radiol. 1997;4:719-23. doi: 10.1016/s1076-6332(97)80074-8.
