Abstract
AIMS
Dose calculation errors can cause serious, life-threatening clinical incidents. We designed eDrugCalc as an online self-assessment tool to develop and evaluate calculation skills among medical students.
METHODS
We undertook a prospective uncontrolled study involving 1727 medical students in years 1–5 at the University of Edinburgh. Students had continuous access to eDrugCalc and were encouraged to practise. Voluntary self-assessment was undertaken by answering the 20 questions on six occasions over 30 months. Questions remained fixed but numerical variables changed so each visit required a fresh calculation. Feedback was provided following each answer.
RESULTS
Final-year students had a significantly higher mean score in test 6 compared with test 1 [16.6, 95% confidence interval (CI) 16.2, 17.0 vs. 12.6, 95% CI 11.9, 13.4; n = 173, P < 0.0001 Wilcoxon matched pairs test] and made a median of three vs. seven errors. Performance was highly variable in all tests with 2.7% of final-year students scoring < 10/20 in test 6. Graduating students in 2009 (30 months' exposure) achieved significantly better scores than those in 2007 (only 6 months): mean 16.5, 95% CI 16.0, 17.0, n = 184 vs. 15.1, 95% CI 14.5, 15.6, n = 187; P < 0.0001, Mann–Whitney test. Calculations based on percentage concentrations and infusion rates were poorly performed. Feedback showed that eDrugCalc increased confidence in calculating doses and was highly rated as a learning tool.
CONCLUSIONS
Medical student performance of dose calculations improved significantly after repeated exposure to an online formative dose-calculation package and encouragement to develop their numeracy. Further research is required to establish whether eDrugCalc reduces calculation errors made in clinical practice.
Keywords: education, medical student, medication errors, numeracy, patient safety, prescribing
WHAT IS ALREADY KNOWN ABOUT THIS SUBJECT
Dose calculation errors are an important cause of some of the most serious medication incidents in advanced healthcare systems.
A number of small studies have shown that dose calculations are poorly performed by hospital doctors, nurses and medical students.
THIS PAPER ADDS
The performance of dose calculations was highly variable amongst a large cohort of medical students from a leading UK medical school.
Calculations that involved converting concentrations expressed as percentages or calculating infusion rates were identified as major weaknesses.
The availability of an online self-assessment and education package, coupled with encouragement and twice-yearly formative assessment, led to a significant improvement in performance.
Introduction
Adverse medication incidents are common in National Health Service hospitals and >50 000 are reported to the UK National Patient Safety Agency each year [1]. The majority of prescription errors involve junior doctors and many result from serious dosing errors [1–3]. The Department of Health is committed to reducing such events as part of its strategy to improve patient safety [4].
An important factor in many of the most serious medication errors is incorrect dose calculation [1, 3], which may account for as many as one in six errors [3, 5, 6]. Several studies have questioned the calculation skills of practising doctors [7, 8], and also the numeracy of medical students [9]. Similar concerns have been raised amongst other healthcare groups [10, 11].
It is presumed that doctors, who are commonly required to make calculations in their clinical work, are well equipped to do so, on the grounds that they are highly selected, intelligent and have undertaken scientific training. In fact, only a proportion of medical school entrants have pursued mathematics to the end of their schooling; basic numeracy skills have only recently become a criterion for entry into medical school [12] and barely feature in undergraduate summative assessments. Medical students are typically given limited encouragement to practise and develop numeracy. Moreover, in recent years there has been reduced emphasis on the traditional scientific disciplines that previously fostered such skills, for example in biochemistry or pharmacology practical classes [13].
This study describes the development and implementation of eDrugCalc, a simple web-based formative learning aid and self-assessment tool designed to develop, measure and audit calculation skills across all five undergraduate years of a large British medical school.
Methods
Overview and design
eDrugCalc was designed as a continuously available web-based program hosted within the Edinburgh Electronic Medical Curriculum (EEMeC), a virtual learning environment used to deliver support for the whole of the undergraduate curriculum. eDrugCalc was authored using bespoke software that allows teaching staff to write assessments in a format similar to the Microsoft PowerPoint slide editor. eDrugCalc was made accessible via ‘eDrug’, the medical school's popular online student formulary [14]. IP distribution and transfer are not restricted by external commercial factors, so eDrugCalc could be adapted with relative ease for use as a learning resource by other institutions.
An introductory page highlighted the rationale for the program and provided instructions on how to use it. This was followed by a voluntary self-rating of confidence on a five-point scale from ‘excellent’ to ‘poor’. We designed 20 calculation questions based on common clinical and laboratory scenarios likely to be encountered by junior doctors. Each required a logical approach to tackle the problem, including the need to break it down into components and identify which of the numbers provided in the scenario were irrelevant to the calculation. All of the questions could be completed without a calculator although an on-screen calculator was available. Some questions focused on areas we had identified as weaknesses in pilot studies, including conversions between units (e.g. microlitre and millilitre, microgram and milligram), and converting doses from mass (weight) to percentage concentrations. The basic question remained constant on each visit, but the variables altered within a realistic pre-specified range to ensure that a new calculation was required each time the program was accessed. After submitting each answer, the student was provided with the correct answer and guidance for deriving it before moving on to the next question (Figure 1). After completing all 20 questions, a total mark was provided on the screen. eDrugCalc recorded results in a database to provide the authors with longitudinal performance results across the full MBChB programme and potentially into postgraduate training in Foundation Years 1 and 2.
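The core mechanism described above, a fixed question stem whose numerical variables are regenerated within a realistic pre-specified range on every visit, with instant feedback after each answer, can be sketched as follows. This is a hypothetical illustration, not the actual eDrugCalc source; the ranges and the question itself are invented for the example.

```python
import random

def generate_question(seed=None):
    """Generate one dose-calculation question with fresh numbers.

    The stem stays fixed; only the numerical variables change within a
    pre-specified realistic range, so each visit requires a new calculation.
    (Hypothetical ranges, for illustration only.)
    """
    rng = random.Random(seed)
    dose_per_kg = rng.choice([5, 10, 15])   # mg/kg, drawn from a fixed set
    weight = rng.randint(40, 100)           # patient body weight in kg
    stem = (f"A drug is given at {dose_per_kg} mg/kg. "
            f"What total dose (mg) does a {weight} kg patient receive?")
    answer = dose_per_kg * weight
    return stem, answer

def check_answer(submitted, correct):
    """Instant feedback: the correct answer plus guidance for deriving it."""
    if submitted == correct:
        return "Correct."
    return (f"Incorrect. The answer is {correct} mg: "
            f"multiply the dose per kg by the body weight.")
```

Seeding the generator makes a question reproducible for testing, while unseeded calls give each student a fresh calculation.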
Figure 1.
Sample screen view showing appearance after the answer 10.0 has been typed into the box in response to question 7. A calculator was available on-screen, if required, and instant feedback was provided following each answer
Implementation
eDrugCalc was introduced as a formative self-assessment tool and the students were made aware that it did not contribute to their summative assessment. However, all students were encouraged to use eDrugCalc as part of their learning in the Pharmacology & Therapeutics theme. The introductory page reminded students of the contribution that calculation errors make to clinical incidents and encouraged them to improve calculation skills and achieve a score of 20/20 by graduation. To support implementation, students were requested to answer all 20 questions voluntarily twice a year during designated 2-week ‘test periods’ as part of their personal and professional development programme. This self-test was announced in advance online and two reminders were provided during the test period. Students were asked to answer all 20 questions in one sitting, unlimited by time but with 60 min suggested as a reasonable target, based on findings from a pilot study. An automated reporting system provided the matriculation number, score, duration of session and feedback information for the first use of eDrugCalc during the defined test period. At the end of the test, those who needed further help in performing drug calculations were referred to work books available in the library.
Occasional artefacts arose when web links were temporarily disrupted, causing the test score to be logged prematurely; affected students could e-mail the EEMeC help desk and ask for their score to be reset, then repeat the test. A semi-automated system was established for e-mailing students who had either not performed the self-test during the designated period or scored < 5 and were logged on for < 5 min. These students were asked to reply by e-mailing one explanation from a list of possible reasons for their apparent failure to post a score.
Feedback to and from students
A summary of the class results was displayed on the year electronic notice boards in EEMeC 2 days after the test period ended, as a single-sheet PDF document using the format shown in Table 1 and Figure 2. Students provided anonymous feedback after each session to the statement ‘I think that this P & T eDrugCalc online programme will help me to develop my ability and confidence in calculating drug doses via self-assessed progressive practice during my medical studies’ by selecting one of five options from ‘strongly agree’ (4) to ‘strongly disagree’ (0). Feedback on a five-point scale was expressed as a percentage of the maximum possible score, which was the number of replies multiplied by 4.
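The feedback scoring rule stated above (each response scored 0–4, and the cohort result expressed as a percentage of the maximum possible score, i.e. the number of replies multiplied by 4) amounts to a one-line calculation; a minimal sketch:

```python
def feedback_percentage(responses):
    """Convert five-point feedback responses (0 = 'strongly disagree' to
    4 = 'strongly agree') into a percentage of the maximum possible score,
    where the maximum is the number of replies multiplied by 4."""
    if not responses:
        raise ValueError("no responses")
    return 100 * sum(responses) / (len(responses) * 4)

# A small hypothetical year group in which most students agree or strongly agree
print(round(feedback_percentage([4, 3, 4, 2, 3, 4])))  # -> 83
```

Under this rule, the scores of 73–81% reported in Table 1 correspond to an average reply between ‘agree’ and ‘strongly agree’.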
Table 1.
Summary of results for test 6 (May 2009) from medical students in years 1–5 showing the proportion of the cohort engaging with the voluntary self-test, the percentage of correct answers for each of the 20 questions, the time taken to answer all 20 questions and feedback from the students concerning the usefulness of eDrugCalc
| No. | Calculation problem | Year 1 (% ✓) | Year 2 (% ✓) | Year 3 (% ✓) | Year 4 (% ✓) | Year 5 (% ✓) |
|---|---|---|---|---|---|---|
| 1 | Convert ml to µl | 94 | 97 | 91 | 94 | 96 |
| 2 | Convert stones to kg | 88 | 88 | 91 | 91 | 86 |
| 3 | Convert mass to moles | 80 | 79 | 74 | 71 | 82 |
| 4 | Number of tablets from daily dose | 98 | 100 | 97 | 98 | 98 |
| 5 | Mass of NaCl in saline infusion vol. | 65 | 73 | 68 | 67 | 67 |
| 6 | Body mass index calculation | 95 | 97 | 97 | 98 | 96 |
| 7 | Vol. of distribution of drug | 73 | 75 | 76 | 71 | 81 |
| 8 | Vol. of dilution to achieve conc. | 53 | 48 | 48 | 48 | 53 |
| 9 | Paediatric drug vol. based on dose | 78 | 81 | 76 | 79 | 86 |
| 10 | Saline infusion rate to volume | 94 | 92 | 89 | 90 | 93 |
| 11 | Convert % content to mass | 76 | 78 | 75 | 77 | 83 |
| 12 | Calculate dose from body mass | 90 | 90 | 91 | 86 | 94 |
| 13 | Setting infusion rate based on dose | 50 | 52 | 61 | 64 | 80 |
| 14 | Loading dose vol. based on conc. | 70 | 68 | 73 | 65 | 81 |
| 15 | Dose from % mass and vol. | 72 | 77 | 78 | 72 | 83 |
| 16 | Mass administered from % mass | 62 | 70 | 73 | 74 | 82 |
| 17 | Vol. infusion from mass and dose | 83 | 88 | 80 | 85 | 87 |
| 18 | Calculate dose from body mass | 74 | 81 | 76 | 82 | 81 |
| 19 | Setting infusion rate based on dose | 76 | 77 | 83 | 81 | 84 |
| 20 | Setting infusion rate based on dose | 57 | 60 | 60 | 63 | 74 |
| | Average score (%) | 76 | 79 | 78 | 78 | 83 |
| | Number taking the test/total in year (% compliance) | 226/233 (97) | 216/238 (91) | 210/237 (89) | 193/253 (76) | 184/255 (72) |
| | Median answer time (min) | 48 | 44 | 47 | 49 | 43 |
| | % of cohort scoring 17–20/20 | 42 | 40 | 44 | 44 | 58 |
| | % of cohort scoring 20/20 | 7 | 5 | 5 | 7 | 8 |
| | Fastest 20/20 (min) | 13 | 22 | 24 | 19 | 29 |
| | Feedback from students on usefulness of eDrugCalc in developing numeracy skills, % of max score (n replies) | 79 (206) | 81 (190) | 77 (179) | 73 (172) | 79 (159) |
Figure 2.
(a) Results for eDrugCalc test 1 undertaken by 1101 of 1243 (89%) students across years 1–5 during a 2-week period in October 2006. The median score for first-year students (14/20) was not significantly different from that of the fifth-year cohort (14.5/20; P > 0.05, Kruskal–Wallis test with Dunn's multiple comparison). (b) Results for test 6 undertaken by 1029 of 1205 (85%) students in years 1–5 during a 2-week period in May 2009 (the second test for students in year 1, the fourth test for those in year 2 and the sixth test for those in years 3–5). The median scores for years 1–4 did not differ significantly from each other, but scores in year 5 were significantly higher than those in years 1–4 (P < 0.05; Kruskal–Wallis test with Dunn's multiple comparison). Box plots show the median, lower and upper quartiles and the limits for the values
Statistical analysis
eDrugCalc scores (number of correct answers) posted during the test periods were collected and described as median and interquartile range. Statistical analysis of the significance of the difference in pooled independent results for each year cohort was made using the Mann–Whitney test, or for more than two groups the Kruskal–Wallis test with Dunn's multiple comparison post hoc. Matched data for individual student scores in sequential tests were analysed using the Wilcoxon matched pairs test, or for more than two groups the Friedman's test with Dunn's multiple comparison post hoc.
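The nonparametric tests named above are all available in standard statistical libraries. The sketch below shows how each test maps onto the comparisons in this study, using randomly generated scores, not the study's data (scipy and numpy are assumed to be installed):

```python
# Synthetic illustration of the nonparametric tests used in the analysis.
# The scores are randomly generated stand-ins, not the study's results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
test1 = rng.integers(8, 18, size=173)           # hypothetical test-1 scores
test6 = test1 + rng.integers(0, 6, size=173)    # same students, improved later

# Matched scores for the same individuals: Wilcoxon matched pairs test
w_stat, p_matched = stats.wilcoxon(test1, test6)

# Two independent cohorts (e.g. 2007 vs. 2009 graduates): Mann-Whitney test
u_stat, p_independent = stats.mannwhitneyu(test1, test6)

# More than two independent groups (e.g. years 1-5): Kruskal-Wallis test
h_stat, p_kw = stats.kruskal(test1, test1 + 1, test6)

print(p_matched < 0.05, p_independent < 0.05)
```

Dunn's multiple comparison, used here as the post hoc test after Kruskal–Wallis and Friedman analyses, is not part of scipy itself and would require an additional package such as scikit-posthocs.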
Results
Six eDrugCalc self-assessment tests were voluntarily undertaken by medical students in years 1–5 during the period October 2006 to May 2009. The twice-yearly tests provided data from a total of 1727 students in the three academic years 2006–2007, 2007–2008 and 2008–2009. The overall proportion of students averaged across all 5 years who participated in tests 1–6 was 88, 85, 83, 84, 89 and 85%, respectively.
Cross-section analysis of results
Figure 2 illustrates the results for self-assessment test 6, undertaken by 1025 of 1205 (85%) medical students in years 1–5 during a 2-week period in May 2009. There was considerable variability in performance and 2.7% of final-year students who participated scored < 10 out of 20.
Although the majority of students in years 1–4 failed to achieve an arbitrary target score of 17/20, 58% of those in the final year of study attained that score (see Table 1). Statistical analysis of the pooled year results for test 6 showed that the median score for year 5 was significantly higher than for the other 4 years (P < 0.05; Kruskal–Wallis test with Dunn's multiple comparison post hoc). The feedback provided by the final-year students demonstrated that their confidence in performing calculations increased significantly between tests 1 and 6 (Table 2). This subjective self-analysis by students in year 5 was reinforced by objective evidence showing that the scores in test 6 were positively correlated with the students' confidence rating (P < 0.001, Spearman ρ = 0.32, n = 178).
Table 2.
Confidence of the final-year student cohort in 2009 as reported at the end of their first and sixth self-tests
| Subjective self-evaluation of ability to perform calculations | Test 1 (start of year 3), n = 240 | Test 6 (end of year 5), n = 178 |
|---|---|---|
| Pooled responses: poor–average | 151 (63%) | 60 (34%) |
| Pooled responses: good–excellent | 89 (37%) | 118 (66%) |
The proportion of the cohort responding that their ability to calculate was ‘excellent–good’ increased significantly between test periods 1 and 6 (P < 0.0001, Fisher's exact test; n = number of responders).
Sequential matched analysis of individual results
We addressed the uncertainty introduced by low scores and nonresponders by undertaking a matched analysis. Since final-year students at the point of graduation would be most likely to benefit from a progressive increase in performance, we studied the outcome for the cohort graduating in summer 2009. The scores for those fifth-year students who posted scores in May 2009, just before completing their undergraduate study, were paired with the scores achieved by the same individuals in test 1 at the start of their third year in October 2006 (see Figure 3). The analysis showed that a significantly higher mean score was achieved in test 6 in comparison with test 1 [test 6 score 16.6, 95% confidence interval (CI) 16.2, 17.0; test 1 score 12.6, 95% CI 11.9, 13.4; P < 0.0001, Wilcoxon matched pairs test, n = 173].
Figure 3.
Matched results for students currently in year 5 who submitted valid scores for both tests 1 and 6 (October 2006 and May 2009). The individuals' median score increased significantly between tests 1 and 6 (Wilcoxon signed ranks test, P < 0.0001; n = 173). Overall, 95% of the 255 students in year 5 undertook three or more out of the possible six self-tests during the 30-month period; 5% did just two tests, which was the minimum involvement
Results from students currently in years 3–5 who posted valid scores in all six tests confirmed that individuals' average scores improved progressively and variability was reduced during the 3 years between tests 1 and 6 (test 1: 12.6, 95% CI 11.7, 13.6; test 6: 16.7, 95% CI 16.1, 17.3; P < 0.001 Friedman test with Dunn's multiple comparison, n = 116). Further evidence of sequential improvement in performance amongst the graduating cohorts was obtained by comparing eDrugCalc scores in the final test for those graduating in 2007, who had access to the program for only 1 year, with the scores of those graduating in 2009 who had access over 3 years. The score of the 2009 cohort was significantly higher than for the 2007 class (2009 mean 16.5; 95% CI 16.0, 17.0, n = 184; 2007 mean 15.1, 95% CI 14.5, 15.6, n = 187; P < 0.0001, Mann–Whitney test), with 58% of final-year students in 2009 achieving ≥17 correct answers in their sixth test, compared with only 42% of those who graduated in 2007 after exposure to only two tests (P < 0.002, Fisher's exact test).
To investigate whether the improvement in calculation skills might have occurred by student progression alone (uninfluenced by eDrugCalc), we compared the scores achieved in October 2006 by the new first-year students (n = 205) with those of the final-year students (n = 190) who were also taking the test for the first time, and found no difference (P > 0.05 Kruskal–Wallis test with Dunn's post hoc test).
Confidence
The median subjective score for the confidence of current fifth-year students in performing calculations improved from 3 (average) in test 1 at the start of year 3, to 2 (good) in test 6 at the end of year 5. This enhanced confidence applied to the whole cohort (P < 0.0001, Mann–Whitney test; n = 240 test 1 and 184 test 6).
Analysis of specific calculation problems
Table 1 illustrates the performance by each year across the individual question types during test period 6. Calculations depending on simple one- or two-stage multiplication or division were performed reasonably well, but some question types were poorly answered across all years. These included questions that depended on interpreting multiple parameters and discarding distractors (e.g. calculating infusion rates or volumes), questions involving interpretation of solution concentrations based on percent mass and deriving the volume of a diluent required to achieve a pre-specified concentration.
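The percent-concentration conversions identified here as a weakness rest on a single fact: a percentage (w/v) solution contains that many grams per 100 ml, so a 1% solution holds 10 mg/ml. A worked sketch of the two conversions these question types require (an illustration, not an actual eDrugCalc item):

```python
def mg_per_ml(percent):
    """A percent (w/v) solution contains `percent` grams per 100 ml;
    convert that concentration to mg per ml."""
    return percent * 1000 / 100  # g/100 ml -> mg/ml

def infusion_rate_ml_per_h(dose_mg_per_h, percent):
    """Volume rate needed to deliver a target hourly dose from a
    percent (w/v) solution."""
    return dose_mg_per_h / mg_per_ml(percent)

print(mg_per_ml(1))                      # 1% solution -> 10.0 mg/ml
print(infusion_rate_ml_per_h(120, 0.2))  # 120 mg/h from 0.2% -> 60.0 ml/h
```

The arithmetic is elementary; the recurrent difficulty reported in Table 1 lies in recognizing which conversion the scenario requires and discarding the distractor values.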
Feedback and reasons for nonparticipation
Student feedback on the value of eDrugCalc in sustaining and enhancing their ability to calculate doses was very positive across all 5 years of the curriculum and is summarized in Table 1. eDrugCalc was designed as a formative self-assessment tool, and participation, although encouraged, was on a voluntary basis. Table 3 summarizes the reasons why 175 students in years 1–5 did not participate in test 6. We hypothesized that there may be an association between nonparticipation and poor overall academic performance. During test 4 all 25 top-ranked final-year students posted scores, and only 11 of those ranked in the top half of the class of 267 students failed to do so, compared with 33 in the lower half (P = 0.0004, Fisher's exact test).
Table 3.
Potential options provided to the students as an explanation for failure to participate in eDrugCalc self-assessment test 6
| Reply code | Frequency | % of total | Reason |
|---|---|---|---|
| 1 | 10 | 6 | I didn't know about the test |
| 2 | 60 | 34 | I forgot to do the test |
| 3 | 11 | 6 | I didn't have time to do it |
| 4 | 27 | 15 | I was away on elective – will do it on return |
| 5 | 0 | 0 | I feel proficient at this kind of calculation |
| 6 | 2 | 1 | This test has low priority for me |
| 7 | 1 | 1 | This test is only formative |
| 8 | 20 | 11 | I had technical problems with my computer/link |
| 9 | 13 | 7 | I did the test properly – the computer is wrong |
| 10 | 0 | 0 | I have suspended my studies |
| 11 | 5 | 3 | Other explanation |
| 12 | 26 | 15 | No response |
Summary of reasons from 175 students across years 1–5 who, according to the computer database, did not do self-test 6 during the designated 2-week period in May 2009. Overall, 15% of the 1205 undergraduates in years 1–5 failed to take test 6, with 2% providing no explanation for not complying, despite repeated requests.
Discussion
There are several important findings from this study. First, medical students commonly make errors when undertaking important calculations that might be required in the clinical environment. Second, the provision of an online formative assessment tool enabling students to practise their skills was well received and led to a significant progressive overall improvement in performance. Third, some types of question were poorly performed, even after practice. Fourth, participation rates for a voluntary self-assessment tool testing an important practical skill were high. Finally, calculation skills and voluntary involvement in testing were positively associated with overall academic performance.
General performance
Although basic numeracy is generally assumed amongst high academic achievers, our study suggests that these skills cannot be taken for granted. There was a substantial variation in performance over the 20 questions in eDrugCalc across all year cohorts, largely because of the spread of the lowest quartile. It is disappointing that < 10% of students achieved maximum scores in a test that was not time limited, and that a small number of final-year students scored < 10 out of 20 in their final test before commencing work as junior doctors.
Overall, our findings are similar to those observed in other smaller cohorts of students [9, 15–17] and also amongst qualified doctors [7, 18–20]. There are a number of possible reasons for this poor performance. It might be that lack of appropriate secondary school achievement in numeracy is not a barrier to entry to medical school, although newer assessment procedures have started to address this issue [12]. Alternatively, established skills may atrophy in a busy medical curriculum that is now less focused on basic sciences with few opportunities to sustain numeracy. eDrugCalc was intended to be a simple way of addressing this issue without requiring significant changes in the curriculum or assessment process. Although we do not know how seriously some of the students in the lowest quartile approached the test, there must be some concern that this group includes some who struggle with numeracy. Indeed, the heightened national focus on numeracy in recent years prompted some of our students to express their anxieties over this skill.
Changing performance
It is increasingly accepted that poor numeracy is an issue [1], which raises the important question as to whether this skill can be improved in a medical school setting. We demonstrated a significant improvement in performance over time following the introduction of eDrugCalc, although these observations were uncontrolled. Others have also examined the impact of interventions aimed at improving numeracy amongst medical students. Degnan et al. [15] designed an interactive online tutorial with 12 multiple choice questions and three case studies together with explanatory notes covering pharmacokinetics, adverse drug reactions and calculation of drug doses; 44 students were randomly allocated to use the program or not. Those who used it were able to perform calculations more accurately in a simulated emergency scenario shortly afterwards, although longer-term effects were not reported. Wheeler et al. [16] examined the impact of an online learning package on performance in six questions on calculation skills in medical finals. Although the study was uncontrolled, it demonstrated a positive association between improved scores and the number of web pages visited.
Fast feedback of scores immediately after each test period seemed to be an important factor in the success of eDrugCalc. It enabled individual students to compare their performance anonymously with that of their peers, encouraged intrayear competition, and challenged all users to improve their scores. This fostered engagement with the process and provided regular reminders of the importance of calculation skills for patient safety.
Individual question styles
We found that certain question styles were associated with lower scores, especially those that involved calculations around drug infusions, or required conversions between drug concentrations expressed as mass per unit volume, percentage or ratios (Table 1). This area of concern has been raised by others and led to calls for relabelling of clinical products to reduce the possibility of dosing errors [18, 21, 22]. We piloted eDrugCalc for 3 years initially and, for that reason, fixed the question styles to encourage familiarity with common scenarios and to standardize the assessment across all students. This might have led to narrow improvements for fixed scenarios and we cannot be sure that the improved performance in eDrugCalc implies the same effect when faced with numerical problems of a slightly different construction. In the future we intend to create a bank of more diverse questions that can be accessed at random by the students to avoid boredom and broaden learning, and will specify minimum year-based targets (e.g. Y1–5: 10, 12, 14, 16, 18 out of 20, respectively).
Student participation and acceptance
The participation rates in this voluntary self-assessment were impressively high in all years, and were significantly greater than reported for other voluntary online tools [15, 16]. Of the fifth-year students, 72% posted a score in May 2009 despite being primarily focused on the final MBChB examinations a month later and a fifth of the year undertaking elective periods, mostly overseas. The high overall participation by all years, with 95% of final year undertaking three or more of the six tests, implies a widespread recognition that numeracy is relevant to clinical practice and that calculation errors have potentially serious consequences; eDrugCalc, with its anonymity and immediate feedback, was considered to be a helpful learning tool (Table 1).
We are still concerned that, despite active promotion of eDrugCalc, a minority of medical students did not engage with this self-assessment. The feedback from three-quarters of this group suggested that forgetfulness, technical difficulties or lack of time were the main reasons. Few suggested that established proficiency or lack of importance were disincentives to take part. This cohort may contain the poorest performers who, for whatever reason, may not wish to post scores. This uncertainty could be resolved only by making eDrugCalc a compulsory summative assessment. However, we specifically developed and observed the impact of eDrugCalc as a voluntary exercise because (i) there was little appetite to add around 10 h of further summative assessment into an already crowded assessment calendar, (ii) this regular refreshment simulated the continuous professional development expected in postgraduate life, and (iii) a single examination would not lead to sustained increases in performance. Indeed, others have shown that, although performance can be improved by online practice in the short term, skills are not sustained unless they are reinforced [16]. We believe the strength of eDrugCalc is that it provides this reinforcement and can be introduced with minimal disruption to the rest of the curriculum.
Weaknesses
Our study has some important weaknesses. First, the data presented are uncontrolled and so we do not know how the same cohort of students might have progressed without access to eDrugCalc; however, the initial eDrugCalc test demonstrated no significant difference between years 1 and 5, suggesting the absence of any ‘spontaneous’ improvement in performance with time. Random assignment would have been scientifically desirable but was considered unethical when it involved education in skills so closely related to patient safety. Second, because the assessments were not undertaken in controlled conditions it is impossible to know what proportion of the low scores was due to lack of engagement and motivation, simple artefact or technical problems. Third, it is difficult to ascertain what part of the numeracy skills process we introduced had the most impact on performance. It could be the availability of the program itself, it could be the test periods, which were when the program was mainly used, or it could be the heightened awareness of issues of numeracy rather than any specific effect of eDrugCalc itself. Fourth, although we observed improved performance in a controlled setting, this does not prove that this will translate into a reduction in dosing errors on hospital wards. Providing proof through a longitudinal study will be difficult given the multiple confounding factors contributing to calculation errors [3, 23]. Nevertheless, there is now widespread agreement that proficiency in dose calculations should be an identified outcome of UK undergraduate medical education [24–26]. The future challenge for all medical schools will be to develop and sustain these skills and demonstrate that graduates are competent. eDrugCalc is a very cost-effective learning tool for addressing this issue and introducing the importance of numeracy in the context of undergraduate medical education. This approach may prove valuable for other clinical groups, e.g. pharmacists, nurses and midwives.
Conclusions
We have shown that the calculation skills of the majority of medical students improved significantly after repeated exposure to a simple online formative personal learning and assessment package, in combination with encouragement to develop their numeracy skills in preparation for medical practice. Further research is required to ascertain whether eDrugCalc improves performance in other kinds of mathematical problems and whether it has the potential to influence the number of calculation errors made in clinical practice.
Competing interests
There are no competing interests to declare.
REFERENCES
- 1. National Patient Safety Agency. National Reporting and Learning Service. Safety in Doses. Improving the Use of Medicines in the NHS. London: NPSA; 2009.
- 2. Dean B, Schachter M, Vincent C, Barber N. Prescribing errors in hospital inpatients: their incidence and clinical significance. Qual Saf Health Care. 2002;11:340–4. doi: 10.1136/qhc.11.4.340.
- 3. Aronson JK. Special issue: medication errors. Br J Clin Pharmacol. 2009;67:589–690. doi: 10.1111/j.1365-2125.2009.03420.x.
- 4. Department of Health. Building a Safer NHS for Patients: Improving Medication Safety. A Report by the Chief Pharmaceutical Officer. London: DoH; 2004.
- 5. Lesar TS, Briceland L, Stein DS. Factors related to errors in medication prescribing. JAMA. 1997;277:312–7.
- 6. Bates DW, Cullen DJ, Laird N. Incidence of adverse events and potential adverse drug events: implications for prevention. JAMA. 1995;274:29–34.
- 7. Simpson CM, Keijzers GB, Lind JF. A survey of drug-dose calculation skills of Australian tertiary hospital doctors. Med J Aust. 2009;190:117–20. doi: 10.5694/j.1326-5377.2009.tb02308.x.
- 8. Oldridge GJ, Gray KM, McDermott LM, Kirkpatrick CMJ. Pilot study to determine the ability of healthcare professionals to undertake drug dosage calculations. Intern Med J. 2004;34:316–9. doi: 10.1111/j.1445-5994.2004.00613.x.
- 9. Wheeler DW, Remoundas DD, Whittlestone KD, House TP, Menon DK. Calculation of doses of drugs in solution. Are medical students confused by different means of expressing drug concentrations? Drug Saf. 2004;27:729–34. doi: 10.2165/00002018-200427100-00003.
- 10. Wright K. Student nurses need more than maths to improve their drug calculating skills. Nurse Educ Today. 2007;27:278–85. doi: 10.1016/j.nedt.2006.05.007.
- 11. Hubble MW, Paschal KR, Sanders TA. Medication calculation skills of practicing paramedics. Prehosp Emerg Care. 2000;4:253–60. doi: 10.1080/10903120090941290.
- 12. UKCAT Consortium Ltd. The UK Clinical Aptitude Test (UKCAT) for Medical and Dental Degrees. Manchester, UK. Available at http://www.ukcat.ac.uk/ (last accessed 3 November 2009).
- 13. General Medical Council. Tomorrow's doctors: recommendations on undergraduate medical education. 1993.
- 14. Maxwell SRJ, McQueen DS, Ellaway R. eDrug: a dynamic interactive electronic drug formulary for medical students. Br J Clin Pharmacol. 2006;62:673–81. doi: 10.1111/j.1365-2125.2006.02777.x.
- 15. Degnan BA, Murray LJ, Dunling CP, Whittlestone KD, Standley TD, Gupta AK, Wheeler DW. The effect of additional teaching on medical students' drug administration skills in a simulated emergency scenario. Anaesthesia. 2006;61:1155–60. doi: 10.1111/j.1365-2044.2006.04869.x.
- 16. Wheeler DW, Whittlestone KD, Salvador R, Wood DF, Johnston AJ, Smith HL, Menon DK. Influence of improved teaching on medical students' acquisition and retention of drug administration skills. Br J Anaesth. 2006;96:48–52. doi: 10.1093/bja/aei280.
- 17. Whittenbury KJ, Jersmann HP, Tonkin AL. Remediation required for drug-dose calculation skills in medical students. Med J Aust. 2009;190:655–6. doi: 10.5694/j.1326-5377.2009.tb02612.x.
- 18. Rolfe S, Harper NJ. Ability of hospital doctors to calculate drug doses. Br Med J. 1995;310:1173–4. doi: 10.1136/bmj.310.6988.1173.
- 19. Nelson LS, Gordon PE, Simmons MD, Goldberg WL, Howland MA, Hoffman RS. The benefit of house officer education on proper dose calculation and ordering. Acad Emerg Med. 2000;7:1311–6. doi: 10.1111/j.1553-2712.2000.tb00481.x.
- 20. Kelly DA, Henderson AM. Use of local anaesthetic drugs in hospital practice. Br Med J (Clin Res Ed). 1983;286:1784. doi: 10.1136/bmj.286.6380.1784.
- 21. Scrimshire JA. Safe use of lignocaine. Br Med J. 1989;298:1494. doi: 10.1136/bmj.298.6686.1494.
- 22. Wheeler SJ, Wheeler DW. Dose calculation and medication error – why are we still weakened by strengths? Eur J Anaesthesiol. 2004;21:929–31. doi: 10.1017/s0265021504000298.
- 23. Wheeler DW, Wheeler SJ, Ringrose TR. Factors influencing doctors' ability to calculate drug doses correctly. Int J Clin Pract. 2007;61:189–94. doi: 10.1111/j.1742-1241.2006.01273.x.
- 24. Medical Schools Council. Outcomes of the Medical Schools Council Safe Prescribing Working Group: Statement of Competencies in Relation to Prescribing Required by All Foundation Doctors (11 Feb 2008). Available at http://www.medschools.ac.uk/Publications/Pages/Safe-Prescribing-Working-Group-Outcomes.aspx (last accessed 27 August 2009).
- 25. Maxwell S, Walley T. Teaching safe and effective prescribing in UK medical schools: a core curriculum for tomorrow's doctors. Br J Clin Pharmacol. 2003;55:496–503. doi: 10.1046/j.1365-2125.2003.01878.x.
- 26. General Medical Council. Tomorrow's doctors: recommendations on undergraduate medical education. September 2009.