AMIA Annual Symposium Proceedings. 2011 Oct 22;2011:1224–1232.

Right Diagnosis, Wrong Care: Patient Management Reasoning Errors in Emergency Care Computer-Based Case Simulations

Guido F Schauer 1, David J Robinson 1, Vimla L Patel 1
PMCID: PMC3243267  PMID: 22195183

Abstract

The pervasiveness of reasoning errors in emergency care (EC) is commonly acknowledged in clinical research. Much of this work has focused on diagnostic errors; yet, in EC, providing a specific diagnosis is generally secondary to managing the patient. To gain insights into non-diagnostic, treatment-related errors, we presented EC residents with computer-based case simulations and recorded their actions and verbalized thoughts. Nearly all participants diagnosed both study cases correctly yet made a variety of patient management errors, some with serious consequences. More substantial errors could be classified as stemming from incorrect patient status and treatment inferences. These EC reasoning errors are discussed within the framework of underlying cognitive processes.

Introduction

Clinical research has depicted emergency care (EC) reasoning errors as prevalent and serious, and thus warranting further study.1 The field, however, has tended to focus more on reasoning toward diagnoses than therapies (e.g., 2,3), as is suggested by such claims as that “[t]he first step to optimal care is making the correct diagnosis.”4 Perhaps this is because diagnostic reasoning studies are easier to design and conduct, as diagnoses can be captured as single assessment responses, whereas patient management is generally more sensibly tracked as successive interventions over time. Yet this emphasis on diagnostics and the relative paucity of patient management reasoning studies may lead some (especially non-clinician researchers) to assume that correct diagnosis necessarily leads to correct patient management.

In emergency medicine, clinicians must often provide care when the diagnosis is uncertain or unknown.5 Moreover, the diagnosis itself may not indicate other aspects of the patient’s status, such as urgency of care, body systems’ status, and responsivity to treatment. Nor can it always speak to the need for certain information or the appropriateness of specific interventions. For example, due to variability in presentation of patients with a “congestive heart failure” diagnosis, determining an EC patient to have this condition does not, in itself, indicate how poorly the patient may be doing or how they should be managed. Clinicians may not articulate non-diagnostic inferences as formally as they do diagnoses, yet formulating such inferences, absent strong patient management implications from the diagnosis, is critical to EC.

EC often involves managing patients’ multiple body systems, the statuses of which often change individually, interdependently, and continuously during the temporal window of EC. From what we know about the constraints of human cognition, such multi-stranded complexity and transience of information should increase vulnerability to error. Mental processes are severely taxed by “multitasking,”6,7 long-term memory is susceptible to forgetting8 and distortion9 effects from co-occurring information, and inferences are likewise adversely impacted (e.g., by anchoring on initial information10).

The purpose of this study is to examine EC patient management errors, given the generation of accurate diagnoses. To gain insights into such non-diagnostic, treatment-related errors, we designed acute care cases that required clinicians to both diagnose and manage the patient and for which the correct primary diagnosis would likely be ascertained. This emphasis on patient management suggested a temporally extended, changing-information approach to case presentation. Toward that aim, we designed computer-based case simulations and recorded EC clinicians’ actions and verbalized thoughts.

We predicted that these errors would reflect failures to formulate various non-diagnostic inferences relevant to patient management, including those regarding the need for specific information, body systems’ status, urgency of action, appropriateness of specific interventions, and responsivity to treatment. These errors were also examined in light of available information and how it was insufficiently or inappropriately exploited in drawing inferences.

Method

Participants

Eight participants (5 female, 3 male) were recruited from a university-based emergency medicine residency program. They were EC clinical residents aged 26 to 41 years (M = 30.4, SD = 4.9), about six months into their first (2), second (3), and third (3) years of training. All were presented with the same cases, though in different orders. IRB approval was obtained prior to the study, and all participants were treated in accordance with APA ethical guidelines.

Design

In this descriptive study, participants were presented computer-based EC cases to solve and instructed to verbalize their thoughts as they reasoned through them. Participants’ actions (chosen from hierarchically arranged “menus” by mouse clicks and drags) were recorded, along with sequence, time (to tenth-of-a-second accuracy), and patient vital signs. Verbalizations were also recorded and matched with concurrent actions.
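To make this recording scheme concrete, the sketch below shows one way such an action log could be structured. It is a minimal illustration only: the class and field names are our assumptions, not the study’s actual implementation.

```python
from dataclasses import dataclass
import time

# Illustrative sketch (not the study's code) of a per-action log record: each
# participant action is stored with its sequence position, elapsed real time
# to tenth-of-a-second accuracy, simulated patient time, the menu path
# clicked, and the vital signs displayed at that moment.
@dataclass
class ActionEvent:
    sequence: int               # ordinal position of the action in the session
    real_time_s: float          # elapsed real time, 0.1 s precision
    patient_time_s: float       # simulated elapsed time for the patient
    menu_path: list[str]        # e.g., ["Order Test(s)", "ECG"]
    vitals: dict[str, float]    # vital signs at the moment of the action

class ActionLog:
    def __init__(self) -> None:
        self._start = time.monotonic()
        self.events: list[ActionEvent] = []

    def record(self, menu_path: list[str], patient_time_s: float,
               vitals: dict[str, float]) -> None:
        elapsed = round(time.monotonic() - self._start, 1)  # tenth of a second
        self.events.append(ActionEvent(len(self.events) + 1, elapsed,
                                       patient_time_s, menu_path, dict(vitals)))
```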

Evaluation Criteria

Nationally recognized, broadly implemented clinical practice guidelines were used as the reference metrics for each study case.11,12 All EC resident physicians are expected to be familiar with these guidelines at their level of training. Core measures of The Joint Commission11 served as the basis for clinical care standards, while ACC/AHA12 guidelines for chest pain (AMI) supplemented these standards. Together, these guidelines defined a standard practice range against which optimal clinical performance was benchmarked. Inappropriate or dangerous variations from these predetermined ranges of acceptable clinical standard practice parameters were recorded as diagnostic or patient management errors. Examples include failing to provide supplemental oxygen or antibiotics to a hypoxic patient with pneumonia, excessive delay in obtaining an ECG, and not providing aspirin therapy in an AMI patient.
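As an illustration, a guideline-derived benchmark such as the ten-minute ECG window can be expressed as a simple rule check over a completed action log. The sketch below is hedged: the ten-minute window comes from the ACC/AHA guideline cited above, while the log format and action names are assumptions for illustration.

```python
# The ACC/AHA guideline indicates an ECG within 10 minutes of patient arrival;
# the (action name, patient time in seconds) log format is an assumed stand-in.
ECG_WINDOW_S = 10 * 60

def check_ecg_timeliness(action_log: list[tuple[str, float]]) -> list[str]:
    """action_log holds (action name, patient time in seconds) pairs."""
    errors = []
    ecg_times = [t for action, t in action_log if action == "Order ECG"]
    if not ecg_times:
        errors.append("omission: no ECG ordered")
    elif min(ecg_times) > ECG_WINDOW_S:
        errors.append("delay: first ECG ordered after the 10-minute window")
    return errors

# Example: an ECG ordered 12 min 37 s into the case is flagged as untimely.
print(check_ecg_timeliness([("Check vitals", 60.0), ("Order ECG", 757.0)]))
```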

Materials and Equipment

Instructions

Specific instructions for the training and study phases were presented just prior to and during cases. The training instructions familiarized participants with the various features of the interactive case simulation program, the procedure for solving the case, and the “think aloud” verbalization task. Similar instructions, with a focus on the verbalization task, were presented prior to and during the study-phase cases.

Emergency Care Cases

Three “chest pain” cases were modeled after single-patient American Board of Emergency Medicine oral examination preparation training cases.13 A tachycardia case was used in the training phase, and anterolateral myocardial infarction (AMI) and Pneumocystis pneumonia (PCP) cases were used in the study phase. The two study cases included features such that a superficial reading might lead to misdiagnosis. For each case, simulated patient information, along with a large array of action and treatment options typically available to emergency physicians, was integrated into a database. Each database effectively set the conditions and available action choices for the computer program to control the play of that case.

Equipment

The study was run on a portable computer, with information presented on a high-resolution 15” LCD screen (1680 × 1050 pixels), participant actions recorded via mouse click/drag inputs, and participant think-aloud verbalizations recorded by a built-in microphone.

Procedure

Each participant was seen individually in a small room. All participants were presented with three emergency medicine cases by computer, the first (tachycardia) to familiarize them with the procedure and two more (AMI and PCP) for the study proper. Participants were assigned to either of two presentation orders (AMI-PCP or PCP-AMI) by fixed rotation.

Each case was preceded by instructions for that case, and then the case was presented, and the participant was instructed to begin. Participants were allowed up to fifteen minutes to work each case and always had to provide a diagnosis before quitting a case and continuing on to the next. After completing the last case, participants were debriefed, given the opportunity to ask questions, and thanked for participating.

During each case, the following elements were continuously displayed on screen (see Figure 1): (a) a brief version of the instructions; (b) a “pie clock” trial timer indicating the proportion of 15 minutes for solving the case that remained (the white pie slice turned red when 1/8 of the time remained); (c) “patient time” (how much time would have elapsed for a real patient, given actions taken so far); (d) a hierarchical menu of information and action options available; and (e) a display area for numeric, text, or image information.
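The trial timer’s behavior can be summarized in a few lines. The sketch below is a minimal illustration of the described logic (the white slice turning red when 1/8 of the time remains); the function and constant names are our assumptions.

```python
# Minimal sketch of the "pie clock" described above: the remaining portion of
# the 15-minute allotment is drawn as a white pie slice that turns red once
# 1/8 of the time remains. Names are illustrative assumptions.
TRIAL_LIMIT_S = 15 * 60

def pie_clock_state(elapsed_s: float) -> tuple[float, str]:
    """Return (fraction of the pie still filled, slice color)."""
    remaining = max(TRIAL_LIMIT_S - elapsed_s, 0.0)
    fraction = remaining / TRIAL_LIMIT_S
    return fraction, ("red" if fraction <= 1 / 8 else "white")

print(pie_clock_state(14 * 60))  # (~0.067, 'red') — final stretch of the case
```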

Figure 1. Opening screen for presentation of a computer-based emergency care case.

Top-level menu items included the following, most of which contained submenus (and lower-level options):

  • Vital Signs

  • History and Physical

  • Physical Examination

  • Order Test(s)

  • Get Test Results

  • Procedures and Actions

  • Advance Time

  • Consultation for Admission

  • Diagnosis and Case Completion

The trial timer and patient time clock were started with the presentation of the case. Participants worked to complete each case by “mousing over” menu hierarchy items (shaped like arrows) and clicking on terminal items (rectangular buttons) to get information and take actions. Some actions were not allowed without first taking other actions (e.g., giving fluids before attaching intravenous lines). Some actions advanced patient time, as would occur in a real case (e.g., patient sent for CT scan). Other actions (e.g., getting a test result) were also time-contingent. To manage time-contingent events, participants could engage in actions that advance time or could manually advance patient time (the real-world equivalent of which would be waiting without taking any actions). Many actions effectively changed available information (e.g., vital signs and ECG images) in a time-dependent manner. Participants could see updated information by reselecting information buttons.
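These time-contingent mechanics can be made concrete with a short sketch. The following is a hedged illustration, not the study’s implementation: the action names, delays, and vitals trajectory are all assumptions.

```python
# Sketch of the time-contingent mechanics described above: some actions
# advance patient time, test results become available only after a delay, and
# time-dependent information (e.g., vital signs) is re-derived whenever
# patient time moves. All names and durations here are assumed.
class CaseEngine:
    TIME_COST_S = {"Send for CT scan": 20 * 60, "Advance time": 5 * 60}  # assumed
    RESULT_DELAY_S = {"ECG": 5 * 60, "Chest X-ray": 15 * 60}             # assumed

    def __init__(self, vitals_at):
        self.patient_time_s = 0.0
        self.pending = {}            # test -> patient time when result is ready
        self.vitals_at = vitals_at   # scripted vitals trajectory for the case
        self.vitals = vitals_at(0.0)

    def order_test(self, test: str) -> None:
        self.pending[test] = self.patient_time_s + self.RESULT_DELAY_S[test]

    def take_action(self, action: str) -> None:
        # Some actions advance patient time, as they would in a real case.
        self._advance(self.TIME_COST_S.get(action, 0.0))

    def advance_time(self, seconds: float) -> None:
        # Real-world equivalent: waiting without taking any actions.
        self._advance(seconds)

    def _advance(self, seconds: float) -> None:
        self.patient_time_s += seconds
        self.vitals = self.vitals_at(self.patient_time_s)  # refresh displayed info

    def result_ready(self, test: str) -> bool:
        return self.pending.get(test, float("inf")) <= self.patient_time_s

# Example: the ECG result appears only after enough patient time has elapsed.
engine = CaseEngine(lambda t: {"SpO2": max(88 - t / 300, 80)})  # assumed decline
engine.order_test("ECG")
engine.advance_time(6 * 60)
print(engine.result_ready("ECG"), engine.vitals)  # True, plus updated vitals
```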

As this study involved clinician participants familiar with emergency medicine, the main goals for these cases—to stabilize the patient and consult an appropriate specialist/department for patient admission—were implicit. Participants also had to provide diagnoses and indicate their likelihood by positioning them on a vertical “likelihood arrow” in order to quit a case and move on. (See Figure 2.) For each diagnosis moved onto the likelihood arrow, its position was indicated by a percent value to its left (e.g., 95% Ventricular Tachycardia). If the patient in the case were to have died (also a possibility on Board exams) or the fifteen minutes allotted for a case were exceeded, only diagnosis options were available. After providing diagnosis information, the participant could then quit that case.
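The mapping from a dropped label’s vertical position on the likelihood arrow to a percent value might be implemented as below. The pixel bounds and the top-equals-100% orientation are our assumptions for illustration.

```python
# Minimal sketch of the likelihood-arrow mapping described above: a diagnosis
# label dropped at vertical pixel position y maps to a percent likelihood.
# Pixel bounds and orientation are illustrative assumptions.
ARROW_TOP_Y, ARROW_BOTTOM_Y = 100, 600   # assumed on-screen pixel bounds

def likelihood_percent(drop_y: int) -> float:
    """Top of the arrow = 100% likely; bottom = 0%."""
    span = ARROW_BOTTOM_Y - ARROW_TOP_Y
    clamped = min(max(drop_y, ARROW_TOP_Y), ARROW_BOTTOM_Y)
    return round(100 * (ARROW_BOTTOM_Y - clamped) / span)

print(likelihood_percent(125))  # 95 — e.g., "95% Ventricular Tachycardia"
```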

Figure 2. Screen options and “dragging” method for providing diagnoses and likelihood judgments.

Results

Errors and Experience

Regarding any relationship between errors and experience, our class-year group sizes were too small to generate meaningful inferential statistics about differences between them. Looking across the various performance criteria, 2nd- and 3rd-year residents committed about as many, and as critical, errors as 1st-years, and 1st-year residents made none of the commission errors (non-indicated actions) that their more experienced colleagues made. The only diagnostic error in this study was committed by a 1st-year resident, although that participant performed most patient management actions correctly before ultimately choosing not to admit the patient.

Diagnostic Errors

Participants included all diagnoses they considered relevant and rated the likelihood of each as a percentage.

Almost no errors were made for the primary diagnosis (93.75% correct). One of eight participants failed to provide the correct primary diagnosis for the AMI case, and all participants gave the correct primary diagnosis for the PCP case. For the AMI case, estimated likelihoods were generally high (M = 92.1%, SD = 5.6%), with half of participants providing no alternate diagnoses and the others giving secondary diagnoses with likelihoods 22% to 73% lower. For the PCP case, estimated likelihoods were also generally high (M = 89.2%, SD = 6.8%), though all participants also provided alternate diagnoses (M = 3.1, SD = 1.4) with relatively high likelihoods. For example, all but one participant provided a secondary diagnosis of “community acquired pneumonia” with an average likelihood of 70.7% (SD = 9.1%), on average only 17.9% lower (SD = 13.3%) than primary diagnosis likelihoods.

Figure 3 shows the diagnoses participants provided and the patterns of their likelihood estimates for both the AMI (upper panel) and PCP (lower panel) cases. As Figure 3 shows, more alternative diagnoses were provided for the PCP case than the AMI case, t(7) = 3.87, p < .01. To summarize, for the AMI case, participants were generally confident in their diagnoses and included relatively few additional diagnoses. In contrast, for the PCP case, although participants rated their primary diagnoses as very likely, they included several other diagnoses as somewhat likely.
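For readers unfamiliar with the statistic, the paired comparison reported above can be reproduced procedurally as follows; the per-participant counts in this sketch are invented placeholders, shown only to make the test’s mechanics explicit.

```python
# Hedged sketch of the paired t-test used above (alternative-diagnosis counts
# per participant, PCP vs. AMI case). The counts are PLACEHOLDERS, not study
# data; only the procedure (paired test, df = n - 1 = 7) mirrors the paper.
from scipy import stats

ami_alternates = [0, 0, 0, 0, 1, 1, 2, 2]   # placeholder counts, 8 participants
pcp_alternates = [2, 2, 3, 3, 3, 4, 4, 5]   # placeholder counts

t, p = stats.ttest_rel(pcp_alternates, ami_alternates)
print(f"t({len(ami_alternates) - 1}) = {t:.2f}, p = {p:.4f}")
```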

Figure 3. Participants’ estimated likelihoods for their diagnoses for AMI (upper panel) and PCP (lower panel) cases. Darker shades indicate higher likelihood estimates. (*The participant chose this diagnosis by dragging the label “Other (state verbally)” onto the likelihood arrow while verbally stating this diagnosis.)

Patient Management Errors

Anterolateral Myocardial Infarction

The one diagnostic error made, misdiagnosis of the AMI case, was directly associated with a patient management error: The patient was discharged instead of being admitted for further care. Yet even with correct diagnosis, patient management errors were made. Among these, failure to provide care in a timely manner was the most serious type of management error. When chest pain is suggestive of myocardial infarction, national guidelines indicate that an electrocardiogram (ECG) be taken within ten minutes of the patient’s arrival.12 For our AMI case, although all participants ordered ECGs, only 37.5% did so within the first ten minutes of patient time. Mean ordering time was 12 min, 36.9 s (SD = 3 min, 56.0 s). Supporting the validity of elapsed patient time as a meaningful measure, in terms of action steps, ordering an ECG was about the 23rd action taken (M = 22.8, SD = 11.0). Mean time to view an ECG (from the beginning of the case) was 26 min, 18.0 s (SD = 11 min, 6.9 s)—on average the 32nd action taken (M = 31.8, SD = 11.0).

Average time to consult for admission to cardiology or cardiac catheterization lab services (the misdiagnosed patient was not admitted) was 36 min, 9.5 s (SD = 13 min, 20.3 s).

Several lesser errors were also found. One participant who correctly diagnosed the patient did not administer either pain control (morphine) or vasodilator medication (nitroglycerine). One quarter (25%) of participants failed to check for allergies before giving medications. Wasteful actions, such as ordering X-rays or lab tests and not viewing them, also occurred.

Pneumocystis Pneumonia

Errors made in PCP case management were of a different sort. Here, the immediate concerns were to bring the patient’s oxygen saturation, fluid level, and temperature to normal. A critical aid in doing so was checking the patient’s vital signs and mental status. Other critical PCP case management involved administering appropriate antibiotics.

For oxygen, 75% of participants administered it at a mean of 8 min, 30.8 s (SD = 3 min, 29.8 s) into the case, in patient time. This was around their 10th action (M = 9.5, SD = 6.9), done either just after or just before checking vital signs. The remaining 25% of participants, however, gave oxygen much later, at an average of 30 min, 5.7 s into the case (SD = 8 min, 55.5 s). For them this was about their 45th action (SD = 4.2), about 29 steps (SD = 15.6) after their first check of the patient’s vital signs. For these participants, patient oxygen levels had fallen from an initial value of 88% saturation to an average of 82% (SD = 1.4%) before they took action.

Initial vital signs for the PCP case were consistent with hypovolemia (initial blood pressure 105/50, heart rate 110). Accordingly, all participants administered fluids (M = 2.4 L, SD = 0.5), initiating administration at an average blood pressure of 94.0/42.9 (SD = 3.1/2.2) and heart rate of 121 (SD = 3.1). 87.5% of participants effectively raised the patient’s blood pressure through administration of fluids. One participant, however, allowed substantial amounts of time to pass before reordering fluids, failing to recognize hypovolemia in time, thus allowing the patient’s blood pressure to drop to dangerously low levels and causing the patient to lose consciousness and slip into a coma. Moreover, this participant administered both norepinephrine and vasopressin (failing to distinguish hypovolemic from cardiogenic shock), which could itself have caused a heart attack.

Additionally, 37.5% of participants (including the participant who mismanaged the hypovolemia above) opted to intubate the patient, when in each case the airway problem had already been resolved. One of these, failing to order relevant sedation, ordered an awake, unsedated intubation of the patient.

For PCP, the indicated antibiotic is co-trimoxazole (sulfamethoxazole/trimethoprim, or Bactrim); yet, despite their correct diagnosis of the case, 25% of participants failed to give this medication. All participants also prescribed other antibiotics.

At the start of the case, the patient had a fever of 102.9° Fahrenheit. One participant failed to provide any antipyretic medication, resulting in a temperature of 106.2° Fahrenheit by the conclusion of the case.

For this case, admitting the patient to respiratory isolation was indicated, yet one participant admitted the patient to internal medicine—an inadequate level of care for this patient.

In all, 62.5% of participants made at least one of the above errors. As with the AMI case, several lesser errors were also found, though there were no failures to check for allergies before giving medications. Wasteful actions, such as ordering lab tests and not viewing them, also occurred. Some participants also missed patient history information helpful to diagnosing the case.

Discussion

Summary of Findings

In this study, EC residents were presented with a series of “chest pain” cases to solve in a computer-based simulator, and the results of their performance were examined for errors. Across both study cases, participants made almost no diagnostic errors, yet they made a large variety and number of patient management errors, some with serious consequences.

Participants generally made fewer management errors on the AMI case than on the PCP case. This may have been due to tighter correlations between the AMI diagnosis, patient body systems status parameters, and indicated treatments. That is, for AMI, knowing the diagnosis more straightforwardly indicates how the patient is doing and what specific actions should be taken. The number of actions required for AMI may also be generally fewer than for PCP: in our study, AMI cases were completed in significantly fewer steps (M = 60.5, SD = 8.3) than PCP cases (M = 70.9, SD = 9.1), t(7) = 3.45, p = .01. For AMI cases, taking certain actions in a timely manner is of prime importance. A diagnosis of PCP, on the other hand, is likely not so tightly correlated with patient status, and status parameters may change considerably as the case unfolds, as would the actions indicated for managing them.

A preliminary examination of the verbal data revealed a variety of diagnostic and patient management inferences, and erroneous inferences could often be directly linked to errors in actions. For example, AMI timing errors sometimes corresponded with incorrect assertions about urgency of care. Similarly, failure to administer appropriate treatments, as with the intubation of the PCP patient by several participants, corresponded to incorrect inferences about a specific body system’s status. Likewise, responsivity to treatment was apparently misconstrued when a participant treating the PCP case failed to recognize the incremental success of fluids and the anticipated response rate, choosing an alternative strategy rather than waiting for evidence of the anticipated outcome or allowing the primary strategy to run its course.

We identified five categories of inferences, based on stage of information acquisition and use: need for specific information, body systems’ status, urgency of action, appropriateness of specific interventions, and responsivity to treatment. From our preliminary analysis, these five groups appeared to encompass all patient management inferences. These inferences could also be further subdivided as referring to general status or any of several individual body systems. As discussed in the introduction, such multi-stranded complexity and transience of information presents serious challenges for cognitive resources and, thus, could potentially lead to patient management errors such as those found in this study.

Limitations

From action errors to inference errors

In general, one can infer reasoning errors by noting the information available to study participants when they take certain actions, as well as from their verbal statements. Yet action errors such as those we found could variously have resulted from lack of knowledge, inappropriate knowledge structures (rendering known information less readily accessible when needed), insufficient cognitive resources (e.g., due to time constraints, problem complexity, etc.), or inadequate use of cognitive resources (i.e., ineffective thinking strategies). As we did not directly assess participants’ clinical knowledge (or their knowledge structures), we cannot unequivocally state that knowledge problems did not contribute to errors. Our cases were designed to be well within the range of knowledge and experience of our participants. Yet, as clinical residents, they likely could not have benefitted from the more efficient information processing that comes with clinical expertise.14

In light of the challenges to cognitive resources already discussed, the errors found could reflect an upper bound to reasoning (insufficient cognitive resources), or they could have resulted from inadequate use of resources for the tasks at hand. When relevant information is available from which an intelligent, well-trained clinician would have inferred the correct action, an incorrect inference by a likewise intelligent, well-trained clinician (as evidenced by an action error) likely results from poor use of cognitive resources. Further analysis of our verbal data may shed light on which of these sources contributed to the errors found.

Simulation versus real situations

There are several ways in which our case simulations differed from real EC situations. In a clinical team within a hospital, some of these errors may have been caught, as team members can often observe clinicians’ actions and must filter their directives through their own thought processes before carrying them out. Moreover, feedback from the patient and monitoring equipment, often in the form of visual and auditory cues (such as when trying to perform an awake, unsedated intubation), can prevent errors before they even get started.

Yet such “safeguards” are often absent in EC. Emergency physicians are trained to function in an independent, team leadership role, and nursing staff are trained to follow clinician directives. Even for EC residents in training, supervision often tends to be in the background, as residents are expected to reason about courses of action before consulting with the attending physician. Patients often arrive unconscious or with altered mental states. Combined with the interruptions, noise, and other distractions typical of emergency departments, the circumstances of real situations may not help recovery from errors like those found in our simulations. Accordingly, it would be ill-advised to dismiss out of hand mistakes made on simulated cases simply because they don’t replicate reality in its entirety.

Diagnosis likelihood versus diagnostic confidence

In this study, instead of asking for confidence judgments for diagnoses, we had participants estimate likelihoods (as percentages) for each diagnosis they considered relevant. This seemed more fitting to the task, given that several diagnoses could be included. Including both likelihood and confidence measures would have made these judgments complex and confusing. There may be some basis for postulating a relationship between likelihood estimates and confidence levels, but the relationship would be complex. Higher likelihood probably corresponds to higher confidence, but a greater differential in likelihood ratings between mutually exclusive diagnoses also probably indicates greater confidence differences.

Patient time versus real time

Regarding our use of patient time to measure the timeliness of events, it could be argued that this construct was too artificial and unfamiliar to participants, rendering assertions about timing suspect. It should be noted, however, that our construct—though independently developed—is not unique, as a very similar manner of advancing time and changing patient status variables is used in the USMLE Step 3 Case Simulator,15,16,17 which can also present emergency care simulations. Our participants were familiarized with patient time and related effects during training, 75% of them used the “advance time” feature on cases, and all had accessed changing patient vital signs on multiple occasions. Moreover, longer patient-time intervals generally coincided with our other measure, the number of action steps taken, supporting our assertion that untimely actions resulted from poor prioritization in patient management.

Conclusion

In emergency care, the primary function of the clinician is to stabilize the patient. Whereas obtaining a diagnosis may be very helpful to patient management, it is often not possible to establish a clear, unequivocal diagnosis within the constraints of EC.5 Yet, as this study makes clear, establishing a correct diagnosis does not necessarily lead to “correct” care. Accordingly, for EC, patient management errors and their cognitive underpinnings represent an important area for future study.

To study both diagnostic and patient management errors, our simulator allowed for a more realistic mode of presentation than paper-based studies and a more efficient, detailed, accurate, and objective data collection process than expert-administered oral examinations. Besides allowing for a more fine-grained exploration of errors, the design features detailed in this study would also be suitable for training and evaluation purposes, as well as for experimentation on methods to improve EC reasoning performance.

Acknowledgments

This project was funded by a research grant, “Cognitive Complexity and Error in Critical Care,” from the James S. McDonnell Foundation (JSMF 220020152).

References

  • 1. Kripalani S, Williams MV, Rask K. Reducing errors in the interpretation of plain radiographs and computed tomography scans. In: Shojania KG, Duncan BW, McDonald KM, Wachter RM, editors. Making Health Care Safer: A Critical Analysis of Patient Safety Practices. Rockville, MD: Agency for Healthcare Research and Quality; 2001.
  • 2. Graber M, Gordon R, Franklin N. Reducing diagnostic errors in medicine: What’s the goal? Acad Med. 2002;77:981–992. doi:10.1097/00001888-200210000-00009.
  • 3. Croskerry P. Clinical cognition and diagnostic error: Applications of a dual process model of reasoning. Adv Health Sci Educ. 2009;14:27–35. doi:10.1007/s10459-009-9182-2.
  • 4. Scott IA. Errors in clinical reasoning: Causes and remedial strategies. BMJ. 2009;339:22–29. doi:10.1136/bmj.b1860.
  • 5. Kovacs G, Croskerry P. Clinical decision making: An emergency medicine perspective. Acad Emerg Med. 1999;6:947–952. doi:10.1111/j.1553-2712.1999.tb01246.x.
  • 6. Rubinstein JS, Meyer DE, Evans JE. Executive control of cognitive processes in task switching. J Exp Psychol Hum Percept Perform. 2001;27:763–797. doi:10.1037//0096-1523.27.4.763.
  • 7. Clapp WC, Rubens MT, Sabharwal J, Gazzaley A. Deficit in switching between functional brain networks underlies the impact of multitasking on working memory in older adults. PNAS. 2011;108:7212–7217. doi:10.1073/pnas.1015297108.
  • 8. Wixted JT. The psychology and neuroscience of forgetting. Annu Rev Psychol. 2004;55:235–269. doi:10.1146/annurev.psych.55.090902.141555.
  • 9. Loftus EF. Planting misinformation in the human mind: A 30-year investigation of the malleability of memory. Learn Mem. 2005;12:361–366. doi:10.1101/lm.94705.
  • 10. Tversky A, Kahneman D. Judgment under uncertainty: Heuristics and biases. Science. 1974;185:1124–1130. doi:10.1126/science.185.4157.1124.
  • 11. The Joint Commission. Comprehensive Accreditation Manual for Hospitals. Oak Brook (IL): Joint Commission Resources; 2010.
  • 12. Antman EM, Anbe DT, Armstrong PW, Bates ER, Green LA, Hand M, et al. ACC/AHA guidelines for the management of patients with ST-elevation myocardial infarction: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Committee to Revise the 1999 Guidelines for the Management of Patients With Acute Myocardial Infarction). 2004. Available at: www.acc.org/clinical/guidelines/stemi/index.pdf.
  • 13. Okuda Y, Nelson BP, editors. Emergency Medicine Oral Board Review Illustrated. New York (NY): Cambridge University Press; 2010.
  • 14. Patel VL, Groen GJ, Patel YC. Cognitive aspects of clinical performance during patient workup: The role of medical expertise. Adv Health Sci Educ. 1997;2:95–114. doi:10.1023/A:1009788531273.
  • 15. U.S. Medical Licensing Examination. 2011 Step 3 Content Description and General Information [Internet]. 2011 [cited 2011 Mar 17]. Available from: http://www.usmle.org/Examinations/step3/2011Step3.pdf
  • 16. Primum Computer-based Case Simulations (CCS) for licensing doctors [Internet]. [cited 2011 Mar 17]. Available from: http://www.jisc.ac.uk/media/documents/projects/higherorderskills.pdf
  • 17. Dillon GF, Boulet JR, Hawkins RE, Swanson DB. Simulations in the United States Medical Licensing Examination (USMLE). Qual Saf Health Care. 2004;13(Suppl 1):i41–i45. doi:10.1136/qshc.2004.010025.
