Abstract
Objective. To describe the redesigned assessment plan for a patient safety and informatics course and assess student pharmacist performance and perceptions.
Methods. The final examination of a patient safety course was redesigned from a traditional multiple choice and short answer format to a team-based, open-ended, case-based format. Faculty for each class session developed higher-level activities, each focused on a key skill or attitude deemed essential for practice, which were combined into a progressive patient case consisting of nine activities. Student performance was analyzed, and student perceptions were assessed with pre- and post-surveys using 5-point scales.
Results. Mean performance on the examination was 93.6%; median scores for each assessed course outcome ranged from 90% to 100%. Eighty-five percent of students completed both surveys. Confidence performing skills and demonstrating attitudes improved for each item on post-survey compared with pre-survey. Eighty-one percent of students indicated the experience of taking the examination was beneficial for their professional development.
Conclusion. A case-based team examination was associated with high student performance and improved self-confidence in performing medication safety-related skills.
Keywords: Patient safety, medication safety, pharmacy informatics, team examination, course assessment
INTRODUCTION
The Accreditation Council for Pharmacy Education (ACPE) Standards 2016 require student pharmacists to be proficient in topics related to health informatics; medication dispensing, distribution, and administration; and patient safety.1 The Center for the Advancement of Pharmacy Education (CAPE) 2013 Outcomes suggest that students should be prepared to optimize the safety and efficacy of medication use systems (2.2); design, implement, and evaluate viable solutions to problems (3.1); and communicate effectively when interacting with an individual, group, or organization (3.6).2 Common student challenges associated with patient safety and related skills include lack of acceptance of a culture of safety, poor ability to communicate about medication safety, and poorly developed skills for promoting systems that reduce the potential for medication errors.3-5 There is an identified need for innovative approaches to teaching and assessing medication safety skills, with an emphasis on team-based collaborative learning.6 Multiple authors and the American Association of Colleges of Pharmacy (AACP) 2006-2007 Argus Commission have recommended that medication safety education in doctor of pharmacy programs be interactive, practice-oriented, and team-based.3,4,7-9 Despite this, most patient safety instruction in pharmacy curricula is lecture-based, without incorporated active or team-based learning strategies.10
In an innovative pharmacy practice laboratory experience, students’ patient safety knowledge and self-confidence improved when active learning was incorporated, although there was no team-based component to this activity.10 Another study evaluated a team-based, root cause analysis (RCA) project as part of a patient safety course in which student teams performed an RCA on a given medication error case.11 The majority of student teams achieved all associated RCA and team-based outcomes. Although there is robust literature on the benefits of team-based learning strategies, there is a paucity of literature on team-based final examinations, both in patient safety-related coursework and in general. Final examinations may be distinct from typical team-based learning activities because of the potential for a more high-stakes, high-stress environment and for more open-ended test items.
The doctor of pharmacy curriculum at Manchester University College of Pharmacy, Natural and Health Sciences, includes a required course dedicated to patient safety and informatics. Previously, student learning was assessed with a cumulative, traditional, multiple choice and short answer examination. Because of concerns about student performance and about whether the examination truly assessed course outcomes centered on medication safety-related skills and attitudes, and because of the importance of teamwork emphasized throughout the course, the final examination was redesigned as a case-based team examination consisting of nine activities. We hypothesized that this case-based, team examination could improve student self-confidence in performing medication safety-related skills and demonstrating positive medication safety-related attitudes. Our objectives were to describe the redesign of the patient safety and informatics course assessment plan and to assess the team final examination in terms of student performance and perceptions.
METHODS
Patient Safety and Informatics is a required, second professional year (P2) course offered in the fall semester. Course content is broadly grouped into three units: medication misadventures, tools for patient safety, and pharmacy informatics. Examples of medication misadventure content include safety culture; prescription and medication order evaluation; and medication error causes, prevention, and assessment. Examples of tools for patient safety content include root cause analysis (RCA), health care failure modes and effects analysis (HFMEA), transitions of care and risk communication, and medication use evaluation (MUE). Examples of pharmacy informatics content include electronic medical records, clinical decision support, technology and automation, and interfaces and interoperability. Course outcomes and topics, illustrated in Table 1, were developed through review of the 2007 and 2016 ACPE Standards,1,12 published textbooks,13,14 and input from clinicians toward an overarching goal that by the end of the course student pharmacists would be prepared to minimize, identify, report, and evaluate medication errors and adverse drug events.
Table 1.
Team Final Examination Activities, Learning Objectives, Course Outcomes, and Points
The course was delivered in a weekly two-hour class session taught in a lecture style that incorporated significant active learning (eg, think-pair-share, practice activities, discussion). Course faculty agreed to a standard practice of including at least one active learning activity per lecture objective. The course was taught by five interdisciplinary faculty members from social and administrative sciences and pharmacy practice, as well as a guest lecturer who practices in pharmacy informatics. Students were grouped into cross-curricular learning teams based on a balance of personality types, learning styles, performance in the program, and StrengthsFinder (Gallup, Washington, DC) assessment data.15,16 There were 16 teams of three to five students per cohort. Students were aware that results from various assessments were used in forming their learning teams, but they did not receive specific instruction on how to interact with the various personality types, learning styles, or strengths. They were required to sit with their teams and collaborate on in-class activities.
In fall 2014, the majority of student course grades (54%; n=39) decreased by at least one-half letter grade based on final examination performance (see Table 2 for an overview of course enrollment and performance). This drop in performance was not anticipated, as students had completed several similar large quizzes throughout the semester, and these quizzes were representative of material that appeared on the final examination. Rather than assume that students were less prepared for the final, course faculty hypothesized that the final examination format, which covered a semester’s worth of detailed, knowledge-based material, may not have been truly representative of the course outcomes, which were broader and more skills-based. Despite generally positive and constructive feedback on student evaluations, the faculty decided to re-evaluate the approach taken to assessment within the course, seeking a method that would allow students to demonstrate their skills through a more meaningful and authentic experience.
Table 2.
Student Characteristics and Performance in 2014 and 2015
Table 3 compares assessments in 2014 to the redesign in 2015. Course projects (ie, Culture of Safety, Case Study Presentation, MUE Presentation) remained unchanged because of their success in evaluating whether students met the following course outcomes: compare and contrast organizational cultures in terms of their ability to protect the safety of patients and other individuals; assess a given medication misadventure in terms of type, cause, and potential preventative strategies; and given a safety, formulary, or quality issue, design an effective medication use evaluation. These projects were well-received by students on course and instructor evaluations and frequently cited as effective aspects of the course. Three large quizzes, designed to measure student performance on lecture objectives, were replaced with five briefer quizzes spread throughout the semester to promote more consistent study habits. This change was a direct result of student feedback on course evaluations. All other course assessments, topics, and the general teaching approach remained unchanged.
Table 3.
Course Assessments and Point Allocation in 2014 Compared to 2015
The most significant change in the assessment redesign was the transformation of the final examination from a traditional, cumulative, individual, multiple choice and short answer examination into a cumulative, team, open-ended, case-based examination. This decision was made to create an examination that better fit the constructivist approach17 taken in the more well-received and effective project-based assessments. A case-based, open-ended examination would provide students with more authentic tasks to complete, allow for greater complexity in the questions and scenarios, and connect to larger, practice-oriented skills. Similarly, incorporating the predefined teams already used in the course for active learning and major projects (ie, Case Study Presentation, MUE Presentation) would provide individual students with extra support and give them practice in considering alternative ideas and viewpoints; this support was deemed essential given the complex scenarios the case-based format was expected to introduce. It was also expected that a team-based examination would more closely mirror actual practice, where patient safety issues are rarely resolved by an independently acting individual. Finally, this approach was better aligned with published recommendations for best practices in teaching and assessing the science of safety in an authentic, practice-based manner.3,4,7-9
To create the redesigned final examination, instructors for each two-hour class session were invited to develop one or two lecture objectives, each focused on a key skill or attitude the instructor deemed essential for practice, and to ensure that each objective mapped closely to a course outcome. Specific learning objectives and mapped course outcomes assessed on the final examination are described in Table 1. All other lecture objectives were assessed during quizzes. Instructors also provided a description of the type of activity they would recommend to assess each objective selected for inclusion on the final examination. The course coordinator then developed a progressive patient case (ie, student teams could not move on to part 2 until part 1 was complete) consisting of six parts, with each part including at least one activity based on instructor recommendations. The patient case was a highly complicated scenario involving a patient who presented with symptomatic hypotension, eventually determined to be due to a medication error. More information was gradually revealed about the patient and the situation as students progressed through the examination. The examination was circulated to and approved by all course faculty. An overview of the examination items, mapped to learning objectives, course outcomes, and topics, is presented in Table 1.
Several strategies were employed to facilitate student success on the team final examination. During each class period, instructors emphasized the agreed-upon key objective(s) and incorporated a related activity similar to those developed for the final examination. Each instructor also agreed to map every instructional slide to a lecture objective to provide transparency for students. During the week of the final examination, students were encouraged to attend a voluntary review session with their teams to practice a set of activities similar to the final examination. Although attendance was not formally recorded, all teams were represented by nearly all of their members.
The examination was administered in a conference-style setting, with each student team seated at a circular table for the duration of the examination. When a team submitted its written answers for one of the six parts to a centrally located examination proctor, the proctor provided the team with the subsequent part. A second proctor moved around the room to ensure academic integrity and answer logistical questions. Students were not allowed outside resources. Teams were given two hours to complete the examination and were provided a suggested time budget for each part; suggested allotments ranged from 10 to 35 minutes per part for a total of 120 minutes. Teams submitted a single answer for each item, and all members received the same score. Individual items were graded by the instructor who taught the content using a collaboratively developed rubric (available from the corresponding author upon request). This was the first time the rubric had been used, and it has not yet been validated.
Students were required to complete blinded evaluations for each course and for all instructors teaching at least four hours in a required course. Because the final examination included several novel components, additional assessment beyond normal course evaluation was deemed essential to investigate student perceptions and to incorporate student input into future iterations of the course. Additionally, because the assessment was conducted on a team rather than an individual basis, it was important to instructors to identify whether students perceived they had achieved the learning objectives.
To gather information regarding students’ perceptions of the examination and their own performance, a two-part, cross-sectional, quasi-experimental study consisting of voluntary electronic surveys was conducted. The pre-survey was administered immediately after fall semester orientation, prior to the beginning of the course; the post-survey was administered after the final examination. All students enrolled in the 2015 course cohort were invited to participate in both surveys. Two reminders to participate were sent, and each survey was closed after one week. Survey items were generally formatted as 5-point rating scales and were developed based on previous studies assessing pharmacy student perceptions of medication safety-related skills and attitudes.5,18 Questions focused on four major areas: direct assessment of student attitudes (since attitudes would be difficult to assess in a team examination), confidence in performing medication safety skills and demonstrating attitudes, perceptions of the validity of the team examination (post-survey only), and perceptions of the team examination compared to a traditional examination (post-survey only). Similar studies describing teaching and learning activities related to medication safety were reviewed to inform survey item development.10,11 Assessed attributes were developed from evidence-based key learning mediators (eg, interest, motivation)19 and other mediators postulated to be affected by the unique testing format (eg, whether it was chaotic or stressful). The complete survey is available from the corresponding author upon request. To improve the response rate, students could earn 2 bonus points added to their course score (0.6% of total course points) for completing each survey. Students were informed that results would be kept confidential and that their names would not be linked to their responses if they chose to earn the bonus points.
Data were analyzed using descriptive statistics, including mean with standard deviation or median with interquartile range (IQR) or range, as appropriate based on the distribution of the data. Differences in student perceptions of their attitudes and skills from the pre- to the post-survey were assessed using the Wilcoxon signed rank and McNemar tests, as appropriate. SPSS Version 21 (IBM, Armonk, NY) was used to conduct inferential statistics. The study was granted expedited approval by the Manchester University Institutional Review Board.
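For illustration, the paired comparisons described above can be reproduced outside of SPSS. The following is a minimal sketch, using hypothetical response data and Python's SciPy and statsmodels libraries, of the Wilcoxon signed rank and McNemar tests; it is not the authors' analysis code.

```python
# Minimal sketch of the paired analyses described above (hypothetical data;
# the study itself used SPSS). Requires numpy, scipy, and statsmodels.
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired 5-point Likert responses (pre vs post) for one item.
pre = np.array([3, 4, 2, 3, 4, 3, 2, 4, 3, 3])
post = np.array([4, 5, 3, 4, 4, 4, 3, 5, 4, 4])

# Wilcoxon signed rank test for paired ordinal data; zero differences
# (no change) are dropped by the default zero_method.
stat, p = wilcoxon(pre, post)
print(f"Wilcoxon signed rank: W={stat}, p={p:.3f}")

# McNemar test for a paired dichotomous outcome, eg, "strongly agree" vs not.
# Rows are pre-survey (yes/no); columns are post-survey (yes/no).
table = np.array([[20, 5],
                  [15, 23]])
result = mcnemar(table, exact=True)  # exact binomial version
print(f"McNemar: statistic={result.statistic}, p={result.pvalue:.3f}")
```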
RESULTS
There were 74 student pharmacists enrolled in 2015. An overview of student demographics, course performance, and final examination performance is provided in Table 2. Demographic data are from the time of matriculation into the program. All students passed the course on the first attempt. The mean (SD) percentage score in the course was 89.4 (4.5). The most common grades earned by the 2015 cohort were A- (36.5%) and B+ (24.3%).
The mean (SD) percentage score on the final examination was 93.6 (4.9). Performance on each part of the examination, mapped to predefined learning objectives and course outcomes, is provided in Table 1; median scores were 90% or higher for each part. Students scored highest on their ability to achieve the outcome “given the medication use process at an institution or community pharmacy, identify and correct error prone workflow and behaviors” (median score 12 of 12 points). Students scored lowest on their ability to achieve the outcomes “describe how pharmacists can use electronic medical records (EMRs) to improve pharmaceutical care” and “explain how pharmacy informatics can be used to improve safety and quality of medication use” (median score 9 of 10 points for each item).
A total of 63 students in the 2015 cohort participated in both the pre- and post-surveys after duplicates were removed (response rate 85%); several students appeared to have forgotten they had already completed a survey, and when duplicates occurred the first response was retained. Student confidence in performing the medication safety skills and demonstrating the attitudes that were part of the final examination improved for each assessed item (Table 4). Similar to performance on the final examination, student confidence in their ability to perform medication safety skills and demonstrate related attitudes was high, with all median scores at 4 or 5 on a 5-point scale. In a direct measurement of student attitudes comparing the post- to the pre-survey, students were more likely to strongly agree that medication errors represent a serious problem within the health care system (81% vs 63%, p=.017) and that pharmacists are responsible for (76% vs 58%, p=.017) and capable of (81% vs 66%, p=.041) improving safe medication use throughout the health care system. They were also more likely to strongly agree that a small number of medication errors will occur despite efforts to reduce risk (48% vs 27%, p=.003). Few students strongly agreed on the post- or pre-survey that highly competent (10% vs 5%, p=.453) or vigilant (10% vs 3%, p=.219) individuals generally will not produce medication errors.
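The duplicate-handling rule described above (retain each student's first response) is straightforward to apply programmatically. Below is a minimal, hypothetical sketch in Python with pandas; the column names and data are illustrative only and are not drawn from the study dataset.

```python
# Hypothetical sketch of the deduplication rule: keep each student's
# first-submitted survey response. Column names are illustrative only.
import pandas as pd

responses = pd.DataFrame({
    "student_id": [101, 102, 102, 103],
    "submitted_at": pd.to_datetime([
        "2015-12-14 09:00", "2015-12-14 09:05",
        "2015-12-14 18:30", "2015-12-14 09:10",
    ]),
    "item_1": [4, 5, 3, 4],  # example 5-point scale item
})

# Sort by submission time, then keep the first response per student.
deduped = (
    responses.sort_values("submitted_at")
             .drop_duplicates(subset="student_id", keep="first")
)
print(deduped)
```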
Table 4.
2015 Cohort (N=63) Self-Confidence Pre-Course vs Post-Course
Table 5 describes student perceptions of the team final examination. When asked to assess their own and their team members’ performance on the examination, the majority indicated that their score and their team members’ scores reflected true performance in the course (76% and 51%, respectively). A substantial minority indicated that their own score was lower than their true performance in the course (25%) or that their team members’ scores were higher than their true performance (30%). The majority of responders indicated that all team members substantially contributed to team performance (62%), but a minority (22%) disagreed with that statement. Most students acknowledged that completing the examination as a team was beneficial for their course performance (78%) and professional development (81%). The majority of students associated the team examination with the following attributes, compared to a traditional examination (Table 6): engaging (76%), open-ended (73%), interesting (71%), preferable (64%), and meaningful (52%). Additionally, although not formally assessed, results from relevant student course evaluations improved from 2014 to 2015 as follows: “class assignments helped me understand the course material,” 55.6% vs 71.6% strongly agree; “assessments covered important course materials and content,” 63.9% vs 73.0% strongly agree; “the course enabled students to be active participants in the teaching and learning process,” 56.9% vs 79.7% strongly agree.
Table 5.
2015 Cohort (N=63) Perceptions of the Team Final Examination
Table 6.
2015 Cohort (N=63) Perceptions of the Team Final Examination Compared to a Traditional Examination
DISCUSSION
The team final examination format is a unique assessment model for a medication safety course.6,7 While team-based learning is commonly employed across health care education, a search of the literature indicated that published team-based assessments generally do not extend beyond group or team readiness assessment tests (GRATs and TRATs). Faculty anticipated that implementing a more summative team assessment would contribute to a deeper understanding of the material and help student pharmacists achieve better performance. A key outcome of the evaluation was improved final examination scores leading to increased overall grades compared with the previous cohort, which took a traditional examination. However, the movement toward a completely case-based approach may have more important implications: it allowed for assessment of more authentic tasks, increased complexity, and a greater connection to real-world application of medication safety skills and concepts. This transition in final examination design was more consistent with the constructivist learning theory integrated throughout the course pedagogy.
Medication errors and other safety events that occur in actual practice often are managed in a team format, and there is a benefit in teaching related skills in a similar way.3,4,9 This novel approach to assessment allowed students to apply their skills as a team and draw on the strengths and viewpoints of various team members. While not all students may have been proficient in all content areas prior to the examination, each student was exposed to the main course concepts through team discussion while addressing the cases on the final examination. Negotiating the most appropriate response, weighing the pros and cons of different ways of addressing the cases, and applying skills as a team represent a far more realistic way of dealing with a medication error than a traditional multiple choice examination. Using only open-ended test items also allowed students to bring forth valid ideas that the instructor may not have considered. Although not formally measured, the use of teams minimized the grading time associated with open-ended test items. Although students were provided the concepts that would be included in the final examination, this is in keeping with the school’s pedagogy of using course and lecture objectives to guide students.
The positive impact of this examination format can be seen in several areas. First, students were able to employ a team approach to a practice-based case regarding medication errors. Student confidence increased in the areas of identifying errors, investigating processes surrounding medication errors, and creating risk reduction strategies, potentially in part as an outcome of taking the examination with peers. Overall, students perceived the team examination to be beneficial in solidifying knowledge, improving medication safety skills, and aiding their professional development.
It is important to note that there were no major changes to how the course was designed outside of the altered assessment plan. Performance on quizzes and projects was, anecdotally, similar between cohorts. Course projects, in particular, were introduced using identical descriptions and assessed with identical rubrics. Finally, although there were minor changes in point allocation as well as size and timing of quizzes, their overall value in the course was similar between cohorts. Students reported that frequent quizzing helped them stay abreast of the course material in light of competing priorities. Although the 2014 and 2015 cohorts cannot be directly compared because the final assessments were different, there were several notable trends. Improvements were noted in the overall course score, and there was a considerable increase in mean performance on the final exam. Table 2 shows that more students achieved a B+ or higher in the 2015 cohort, and no students failed the course in either cohort.
Our results support the inclusion of a team examination to improve outcomes, and this format of the final examination will be continued in subsequent course offerings. However, we identified several limitations. First, there was some discussion among course faculty about whether this format truly holds each student accountable for meeting course outcomes. One concern was that high-performing students were leading their groups to higher scores while low-performing students were not substantially contributing to the team effort and therefore received inappropriately high scores. Similarly, faculty were concerned that some students might not maximize their effort because they could rely on teammates. This was reflected in Table 5, where about 16% of students strongly agreed or agreed that their score was higher than their true performance, but 30% strongly agreed or agreed that their team members’ scores were higher than their true performance. These responses imply that some students may have overestimated their own performance and underestimated that of others, or may highlight that some students relied on their group members. Although we considered assessing student performance and perceptions in a randomized, crossover design, in which each student would take either the individual or the team examination and then the opposite assessment after a “washout” period, we decided against this more rigorous approach in favor of uniformity. Our approach decreased the risk of a carryover effect and mitigated potential student concerns regarding multiple “final” examinations.
Another concern was that students may have been less motivated to prepare for the five knowledge-based quizzes because the final examination would be taken as a team and would account for approximately 29% of the final grade. We originally planned to require students to earn at least a 70% average across all other assessments to be eligible to take the final examination as a team. Because of faculty concerns that barring students from the team final would inadvertently disclose grade information and violate the Family Educational Rights and Privacy Act (FERPA), this restriction was removed. Although no student in the course had less than a 70% average at that point in the semester, the problem of individual accountability remains. Additionally, because of a change in our learning management system, student performance on individual learning outcomes could not be directly compared between the 2014 and 2015 cohorts; however, considering the significant improvement in overall scores, high performance on individual items, and similarity in assessed outcomes, it is likely that improvements were substantial. A final, unavoidable limitation is that this study was conducted at a single school of pharmacy; had multiple institutions used this approach, the results might have been more robust and more generalizable to other programs.
Future directions to minimize these limitations, particularly the finding that nearly 25% of students disagreed that everyone contributed to the team’s final examination performance, were discussed by faculty in preparation for the next offering of the course (fall 2016). First, the final examination could be made a hybrid of an individual section and a team section. While this may be more labor intensive, it would place more emphasis on individual preparation for the examination and would better distinguish students who performed effectively both as individuals and in teams. Students could receive the combined total of the two sections, the average of their individual and team grades, or the team grade contingent on passing the individual section. To address low performers while also respecting FERPA, students earning less than a 70% quiz average throughout the semester could be required to complete an additional individual assignment outside of class. Ultimately, the faculty opted for the latter approach in the 2016 offering of the course, and no students fell below a 70% average. Additionally, the faculty adopted a course policy under which students who failed to attend at least 70% of class sessions (and therefore would not be contributing to the team learning environment) would receive a 2% deduction from their overall course grade. Finally, it was emphasized during the final examination itself that individual points could be deducted for lack of professionalism, which was defined to include disengagement.
There is a need for further research regarding effective teaching techniques and sustained retention of information within the realm of patient and medication safety.6 Course faculty are interested in continuing to investigate both the presence and the impact of patient safety education within our curriculum. Planned areas of study include examining longitudinally where patient safety principles are incorporated into courses, how students are assessed on their ability to interpret and verify the safety of prescriptions and medication orders, and how understanding and appropriate use of MedWatch and the Vaccine Adverse Event Reporting System for error reporting can be enhanced. Additionally, a longitudinal assessment of skills and attitudes could help better describe the impact of the course as students progress through the program and into practice. Finally, courses involving patient safety and informatics are well positioned to benefit from the greater emphasis on interprofessional education taking place throughout the academy,1,2 and we are considering incorporating interprofessional activities into this course. We also hope to continue to develop the informatics aspects of the course in order to provide more value to students and improve student performance in this area, potentially by better integrating the content provided by the guest informatics expert throughout the medication safety-focused aspects of the course.
CONCLUSION
Incorporating a team-based final examination that simulates real-world application of medication safety skills resulted in high student performance in a patient safety course and increased student confidence regarding medication safety skills as compared to the beginning of the course.
ACKNOWLEDGMENTS
The authors wish to thank S. Renee Mickens for her assistance in revising, editing, and proofreading the final manuscript.
REFERENCES
1. Accreditation Council for Pharmacy Education. Accreditation standards and key elements for the professional program in pharmacy leading to the doctor of pharmacy degree. https://www.acpe-accredit.org/pdf/Standards2016/pdf/Standards2016FINAL.pdf. Accessed July 21, 2016.
2. Center for the Advancement of Pharmacy Education. Educational Outcomes 2013. http://www.aacp.org/documents/CAPEoutcomes071213.pdf. Accessed July 21, 2016.
3. Vogt EM, Robinson DC, Chambers-Fox SL. Educating for safety in the pharmacy classroom. Am J Pharm Educ. 2011;75(7):Article 140. doi:10.5688/ajpe757140.
4. Warholak TL, Holdford DA, West D, et al. Perspectives on educating pharmacy students about the science of safety. Am J Pharm Educ. 2011;75(7):Article 142. doi:10.5688/ajpe757142.
5. Rickles NM, Noland CM, Tramontozzi A, Vinci MA. Pharmacy student knowledge and communication of medication errors. Am J Pharm Educ. 2010;74(4):Article 60. doi:10.5688/aj740460.
6. Kiersma ME, Plake KS, Darbishire PL. Patient safety instruction in US health professions education. Am J Pharm Educ. 2011;75(8):Article 162. doi:10.5688/ajpe758162.
7. West-Strum D, Basak R, Bentley JP, et al. The science of safety curriculum in US colleges and schools of pharmacy. Am J Pharm Educ. 2011;75(7):Article 141. doi:10.5688/ajpe757141.
8. Fassett WE. Key performance outcomes of patient safety curricula: root cause analysis, failure mode and effects analysis, and structured communications skills. Am J Pharm Educ. 2011;75(8):Article 164. doi:10.5688/ajpe758164.
9. Nahata MC, Beck DE, Draugalis JR, et al. The academy’s agenda for improving the safety of medication use: report of the 2006-2007 Argus Commission. Am J Pharm Educ. 2007;71(6):Article S18.
10. Kiersma ME, Darbishire PL, Plake KS, Oswald C, Walters BM. Laboratory session to improve first-year pharmacy students’ knowledge and confidence concerning the prevention of medication errors. Am J Pharm Educ. 2009;73(6):Article 99. doi:10.5688/aj730699.
11. Schafer JJ. A root cause analysis project in a medication safety course. Am J Pharm Educ. 2012;76(6):Article 116. doi:10.5688/ajpe766116.
12. Accreditation Council for Pharmacy Education. Accreditation standards and key elements for the professional program in pharmacy leading to the doctor of pharmacy degree. https://www.acpe-accredit.org/pdf/FinalS2007Guidelines2.0.pdf. Accessed July 21, 2016.
13. Cohen MR. Medication Errors. 2nd ed. Washington, DC: American Pharmacists Association; 2007.
14. Dumitru D. The Pharmacy Informatics Primer. Bethesda, MD: American Society of Health-System Pharmacists; 2008.
15. Gryka RJ, Frame TR, Chen AMH, Kiersma ME, et al. Evaluation of team perceptions regarding personality types and learning styles. Poster presented at: Annual Meeting of the American Association of Colleges of Pharmacy; 2013.
16. Gallup Strengths Center. Gallup. https://www.gallupstrengthscenter.com. Accessed December 13, 2016.
17. Simons PRJ, Bolhuis S. Constructivist learning theories and complex learning environments. Ox Stud Compar Educ. 2004;13(1):13-25.
18. Gavaza P, Bui B. Pharmacy students’ attitudes toward reporting serious adverse drug events. Am J Pharm Educ. 2012;76(10):Article 194. doi:10.5688/ajpe7610194.
19. Bye D, Pushkar D, Conway M. Motivation, interest, and positive affect in traditional and nontraditional undergraduate students. Adult Educ Quart. 2007;57(2):141-158.