American Journal of Pharmaceutical Education. 2014 Nov 15;78(9):160. doi: 10.5688/ajpe789160

A Faculty Toolkit for Formative Assessment in Pharmacy Education

Margarita V DiVall a, Greg L Alston b, Eleanora Bird c, Shauna M Buring d, Katherine A Kelley e, Nanci L Murphy f, Lauren S Schlesselman g, Cindy D Stowe h, Julianna E Szilagyi i
PMCID: PMC4453077  PMID: 26056399

Abstract

This paper aims to increase understanding and appreciation of formative assessment and its role in improving student outcomes and the instructional process, while educating faculty on formative techniques readily adaptable to various educational settings. Included are a definition of formative assessment and the distinction between formative and summative assessment. Various formative assessment strategies to evaluate student learning in classroom, laboratory, experiential, and interprofessional education settings are discussed. The role of reflective writing and portfolios, as well as the role of technology in formative assessment, are described. The paper also offers advice for formative assessment of faculty teaching. In conclusion, the authors emphasize the importance of creating a culture of assessment that embraces the concept of 360-degree assessment in both the development of a student’s ability to demonstrate achievement of educational outcomes and a faculty member’s ability to become an effective educator.

Keywords: formative assessment, faculty development, pharmacy education, teaching-learning process

INTRODUCTION

The 2013 revision of the Center for Advancement of Pharmacy Education (CAPE) outcomes once again reminds the academy that the goal of its professional programs is to produce pharmacy graduates capable of achieving specific educational outcomes.1 Current pharmacy education can be described as competency-based, focusing on the achievement of learning outcomes, rather than just completion of requirements. This approach to education addresses what the learners are expected to be able to do at the completion of their education, rather than what they are expected to learn during their education. Whether utilizing the CAPE outcomes or school-specific curricular outcomes, educational goals, in terms of measurable abilities pharmacy graduates should possess, represent the knowledge, attitudes, and behaviors required to successfully perform as a pharmacist. Because competency-based education requires acknowledging that teaching and learning are not synonymous, faculty members are tasked with ensuring an optimal learning environment and measuring student progress toward achieving learning outcomes. To accomplish these tasks, faculty members must utilize a variety of assessment measures in the classroom, including formative and summative evaluations.

In the late 1960s, Michael Scriven first coined the terms “formative” and “summative” in the context of program evaluation.2 In 1968, Benjamin Bloom expanded on this to include formative assessment as a component of the teaching-learning process.3 For both Scriven and Bloom, an assessment was only formative if it was used to alter subsequent educational decisions. According to Bloom, the purpose of formative assessment was to provide feedback and allow for correction at any stage in the learning process.4 Subsequently, Black and Wiliam suggested expansion of the definition of formative assessments to include evidence that student achievement was used by teachers and learners to make decisions pertaining to subsequent steps in instruction that were likely to be better than the decisions they would have taken in the absence of such evidence.5 In her 2005 AJPE series of articles on assessment, Anderson provided this definition: “Formative: an assessment which is used for improvement (individual or program) rather than for making final decisions or accountability. The role of formative assessment is to provide information which can be used to make immediate modifications in teaching and learning and in the program.”6

The primary differences between formative and summative assessments are when they occur in the teaching-learning process and what is done with the information acquired from each. The summative assessment happens at the end of the teaching-learning process, while formative assessment occurs during that process. Information obtained from formative assessment can be used to immediately modify the instructional experience based on how well students are progressing in their achievement of intended outcomes. Because summative assessments occur at the end of the teaching-learning process, information obtained that might improve the process cannot be applied until the next offering of the course, leaving no opportunity for students currently enrolled to benefit from such changes.

To better clarify the definitions of and stress the roles of formative and summative assessments, many educators opt to use the term “assessment for learning” to describe formative assessment and the term “assessment of learning” to describe summative assessment. With this approach, the assessment of learning (summative) is designed to confirm what students have learned and can do at the end of instruction—particularly if they can demonstrate proficiency related to intended curricular outcomes. Summative assessments are typically “high-stakes,” meaning they determine student progression to the next phase of the curriculum or graduation.

In contrast, formative assessments are “low-stakes.” Assessment for learning (formative) activities are typically instructionally embedded in a class activity and are designed to guide instructional decisions. These activities aim to gather information about what students know and can do, what preconceptions and confusions exist, and what educational gaps exist. Because these activities occur while material is still being taught, they are designed to yield diagnostic information to allow students to self-assess their own achievement, to provide faculty with an understanding of student progress toward achievement of outcomes, and to guide instructional decisions. Faculty can utilize the information to identify the learning needs of students and adapt educational strategies to help students move forward in their learning. Key components of assessment for learning are summarized in Table 1.

Table 1.

Key Components for Assessment for Learning (Formative)


The AACP Assessment Special Interest Group leaders have discussed the importance of further promoting the culture of assessment and increasing the proficiency of various assessment techniques among the members of the academy. Formative assessment was identified as a priority because of its powerful impact on learning outcomes and faculty member development. For this paper, rather than targeting individuals specifically involved in assessment at the programmatic or institutional level, the target audience is all faculty members involved in didactic and experiential education. This paper aims to increase understanding of and appreciation for formative assessment and its role in improving student outcomes and the instructional process, while educating faculty on formative techniques readily adaptable to various educational settings. All faculty members, whether responsible for classroom or experiential teaching, can utilize formative assessment activities described in this paper to ensure the achievement of educational outcomes, rather than only using summative assessments to determine the degree to which students have achieved outcomes at the end of the educational experience. The authors relied on their expertise and knowledge of relevant literature rather than conducting an extensive and comprehensive literature evaluation when preparing this paper.

FORMATIVE ASSESSMENT OF STUDENT LEARNING

Classroom Strategies

A variety of classroom assessment techniques are available that can provide formative feedback. These techniques range from simple, nonlabor-intensive tools to complex processes requiring considerable preparation time. The following examples require minimal preparation and analysis yet can provide valuable feedback to students and instructors alike (see Table 2). For a more comprehensive source of classroom formative assessment methods, refer to Angelo and Cross.7

Table 2.

Summary of Formative Assessment Strategies that Can be Used in the Classroom


Prior Knowledge Assessment: Knowing what your students bring with them to a course or class period is valuable in determining a starting point for a lesson.7 To collect information regarding the level of preparedness of students at the beginning of a class, a prior knowledge assessment can be administered. The assessment typically takes the form of a few open-ended, short-answer, or multiple-choice questions. This method requires preparation in advance of the class, time to review the scores or responses, and, possibly, time in class to administer the assessment. Because students realize there are no stakes involved with completing the assessment, they may not feel the need to provide accurate or complete answers.

Minute Paper/Muddiest Point: Both the minute paper and muddiest point methods provide useful feedback and encourage students to listen and reflect on what they have learned in class.7 For the minute paper, it is useful, but not necessary, to focus on a specific topic or concept. For these assessments, time is allotted, usually at the end of class, to allow students to reflect on and write down the most important thing they learned and to acknowledge what questions they have as a result of the class. Along with identifying the most important point(s) from the lecture, students can also explain why they felt it was the most important point. This calls for more thought on the part of the student. With the muddiest point, the instructor simply asks the students to share what they thought was the lecture’s “muddiest” point, that is, what was unclear or confusing.

These methods provide immediate feedback and a manageable amount of information that can be quickly reviewed. Overusing these techniques may cause students to lose interest or not view the exercises as important. Muddiest point feedback and any questions that remain can help instructors identify areas of difficulty for the students and address them in a timely manner, either in class or through the use of technology, such as discussion boards available in learning management systems.

Audience response systems (“clickers”): The use of “clickers” has become common in the classroom.8 Clickers are an example of audience response systems, which are used for a variety of purposes such as quizzes, voting, and active learning. Multiple tools and technologies are available that allow the instructor to project or pose questions to the class and gather answers from students. Each student or group of students uses either a clicker that connects to a receiver or a personal electronic device (eg, smartphone, tablet, or computer) that transmits answers via the Internet. Some instructors then opt to discuss the results, while others have the students discuss the results among themselves and repeat the selection process. Clickers engage all students in the classroom in traditional question-and-answer active-learning activities while providing assessment data. Although this technique is very useful as a formative assessment strategy, it can also be used in a summative way.9

Audience response systems positively affect student engagement, active learning, and learning outcomes, and the use of such systems is well received by students.10-14 This strategy, however, requires moderate preclass preparation on the part of the instructor to prepare the questions and response options in the system and then requires class time to administer, answer, and discuss the questions.
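Whatever system is used, the instructor's task after polling is the same: summarize the response distribution and decide whether the concept needs re-teaching before moving on. The sketch below illustrates this, independent of any specific clicker product; the function name and the 70% mastery threshold are invented for illustration, not drawn from the literature.

```python
from collections import Counter

# Illustrative sketch: summarize the answers collected for one in-class
# multiple-choice question and flag it for re-teaching when too few students
# chose the correct option. The 0.70 threshold is an assumed example value.
def summarize_question(responses, correct_option, threshold=0.70):
    """Return (answer distribution, fraction correct, needs_review flag)."""
    counts = Counter(responses)
    fraction_correct = counts[correct_option] / len(responses)
    return counts, fraction_correct, fraction_correct < threshold

# Example: 10 students answer a question whose key is "B".
responses = ["A", "B", "B", "C", "B", "D", "B", "A", "B", "B"]
dist, frac, review = summarize_question(responses, "B")
print(dist, frac, review)  # 6/10 correct, so the question is flagged
```

An instructor might run this between the first and second vote of a peer-instruction cycle, using the flag to decide whether to have students discuss and revote.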

Case studies: Case studies are useful tools that can integrate principles, concepts, and knowledge.15,16 The process involves providing cases representing patients with one or several diseases, symptoms, or conditions. The cases can include as much information (lab values, physical assessment observations, etc.) as necessary for the level and background of the students. Students, working independently or in groups, are asked case-related questions regarding a diagnosis, drugs of choice, monitoring parameters, counseling points, and long-term consequences. After a period of time to prepare responses, students respond to the questions while the facilitator engages them in discussion. Students participating in this method of assessment develop critical-thinking, problem-solving, and communication skills. The instructor can use this exercise to assess student performance, difficulty of material, and success of instruction. A drawback is the considerable amount of preclass preparation and in-class administration time. However, time outside of lecture, such as in a skills laboratory, can also be used to complement and supplement material presented in lecture and still allow for collection of formative assessment data.

Case studies can be particularly helpful in teaching foundational sciences; however, an instructor’s lack of clinical expertise and/or students’ lack of background knowledge of the subject and drug therapy can make this a daunting undertaking. Fortunately, instructors who lack clinical expertise to develop their own case studies can seek the expertise of their colleagues or available resources such as published books of case studies or online case collections. For example, the National Center for Case Study Teaching in Science provides free access to an award-winning collection of more than 450 peer-reviewed case studies in the sciences appropriate for high school, undergraduate, and graduate education.17

Laboratory and Experiential Settings Strategies

In laboratory or experiential settings, instructors can observe and provide feedback on student performance in all 3 domains of learning: cognitive, affective, and psychomotor.18,19 Students can be assessed at all levels of Bloom’s taxonomy, from recalling information to making judgments. When designing an assessment and evaluation strategy in these settings, it is important to begin by identifying specific goals, objectives, and criteria for performance.18

A wide variety of formative assessment tools and methodologies can be used in laboratory and experiential settings.18-21 One of the most commonly used assessment strategies in the laboratory setting is objective structured clinical examinations (OSCEs). This strategy is designed to assess specific competencies in a planned and structured way with particular attention paid to the objectivity of the examination.22 Typically, OSCEs consist of multiple stations with standardized tasks to assess students’ knowledge, skills, and abilities.22,23 OSCEs are often used as a form of summative and high-stakes assessment; however, they are also useful as a formative assessment strategy to provide feedback for improvement prior to a summative assessment. Developing OSCEs that are valid and reliable is challenging and resource intensive.24

Another common method of assessment in laboratory and experiential settings is an instructor’s direct observation of students. These observations can range from informal and spontaneous, as part of practice activities, to structured and rubric-guided, with predefined criteria for evaluation and feedback. Through direct observation and the feedback that follows, faculty members have the opportunity to help students acquire and improve skills needed for pharmacy practice and patient care. Most institutions and course instructors develop their own evaluation instruments and rubrics, which are based on specific objectives for given activities. The value of the rubric is in both the standardization of criteria for grading and in the definition of an appropriate performance for students.25 The goal is to establish validity and reliability of these instruments and to share them with the academy.

Feedback quality is paramount in further developing our learners. The acronym SMART (Specific, Measurable/Meaningful, Actionable/Accurate, Respectful, Timely), often used for developing goals and objectives,26 can be used as a quick reminder for elements of effective feedback (Table 3). The person directly observing the student should provide the feedback, which should be a 2-way conversation that allows the student to self-assess and reflect on a particular task.18 Feedback sessions should offer a balance of positive observations and recommendations for improvement and lead to a mutually agreed upon action plan, which includes ongoing observation and feedback. Formative feedback in general should avoid placing a judgment or a score on particular performance.

Table 3.

Elements of Effective Feedback – the SMART System


Formative Assessment in Interprofessional Education

Formative assessment is integral to the learning process of interprofessional education and the development of high-performing teams. Course developers must not only take into account the level and experience of the learner, but also prior assumptions about his or her role and other team members’ scope of practice. Traditionally, simulation has been used to build skill performance within a discipline (eg, surgical techniques). Many schools are now using unfolding cases (cases that evolve over time in a manner that is unpredictable to the student), simulation training, and the debrief session to build competency in areas essential to collaborative practice.27

The debrief session is effective in allowing learners to reflect on and analyze what they have experienced as a team. During debrief sessions, students are given an opportunity to not only reflect upon the clinical decisions made, but to also discuss gaps in their knowledge, possible errors they made, and improvements they can make in their performance. An effective debrief session also allows students to express their feelings about stressful and rapidly changing situations (eg, cardiac arrest codes) and to generate solutions as a team. As cases become more difficult, questions addressing what went well, what the team could have done better, and what they would do differently next time become important. Providing comprehensive feedback to novice learners in a safe and supportive environment contributes to a better understanding of the experience and how the learning can be applied to new situations.

Engaging students in peer assessment as part of simulation exercises enhances constructive criticism, collegiality, and accountability within the team. Learning how to assess the performance of a team brings new insights to individual learners about ways they can improve their own performance on teams. The Performance Assessment of Communication and Teamwork (PACT) Tool Set, developed by the University of Washington Macy Assessment Team, includes readily accessible tools designed for use by either a student or faculty rater.28 The Agency for Healthcare Research and Quality has made available on its website TeamSTEPPS, a tool that provides guidelines for conducting a team debrief in a progressive manner, from exploring the roles and responsibilities of the team, using specific skills, situational awareness, and transitions of care, to exploring strategies for raising concerns about clinical decisions and how to ensure appropriate action is taken.29

Reflective Writing and Portfolios

Given that formative assessment occurs during the learning process and is used for improvement, using reflective exercises in learning is consistent with these principles. Reflective learning requires the ability to self-assess, be aware of one’s own learning (metacognition), and develop lifelong learning skills.30 Learning from education and practical experiences, articulating what has been gained, and considering why it is important are essential to the development of a practitioner with reflective capacity. Schon identified reflective practitioners as those who use reflection to purposefully think about their experiences with the goal of learning from them and using that information for future problem-solving in the clinical setting.31 Faculty members cannot possibly teach students all the knowledge and skills they will ever need in their future practice. Instead, the goal is to foster students’ ability to appreciate the uncertainty in knowledge and be able to clinically reason when encountering a problem with no clear solution.

Writing is a tool that can help cultivate thoughts, feelings, and knowledge on a subject.32 Reflective writing promotes self-directed learning and various models may be used. At its most simple, reflective writing may be a description and analysis of a learning experience within a course or clinical experience. It may include a review of what has been learned to a certain point or an analysis of a critical incident. Often, a model of “what/so what/now what” is used to guide the learner’s reflection. Reflective writing can be incorporated throughout the curriculum (didactic, experiential, and service learning) to encourage development of a reflective practitioner.

Reflective portfolios may also be considered a type of formative assessment, as they are used to monitor growth in students’ personal and professional development.33 A reflective portfolio includes artifacts that a student selects over time to show how proficiency has changed, whereas a showcase portfolio is a collection of the student’s best work. For example, a pharmacy student may include a patient counseling video from his or her first professional year and write a reflection on what was learned from the experience and how he or she could improve upon it. For the next patient counseling encounter, the student may review the video, reflection, and instructor feedback. Reflective portfolios turn random assignments with no connection or reflection into a visibly cohesive learning tool for the student. In addition to emphasizing student accountability for their own learning, portfolios facilitate deeper learning by involving students in ongoing, reflective analysis of projects and assignments. This type of portfolio allows students to track their growth and development and, with good scaffolding, demonstrates how they can integrate and apply their learning.

The key to an effective reflective portfolio is the combination of documentation and reflection, along with feedback from faculty members. Instructor-student interaction around portfolio development is important to clarify why the student chose specific artifacts and what the artifacts mean in the context of a course or program. The feedback provided by the instructor helps give the student direction for improvement, along with providing a connection between the instructor and student. Ideally, the development of a reflective portfolio with appropriate structure, support, and instructor feedback will also increase the student’s engagement in learning.

Technology’s Role in Formative Assessment

The major goal of formative assessment is to provide feedback in a timely fashion for improvement. The collection, storage, and dissemination of formative data will likely require the use of computerized technology. While many vendors are competing to provide products and services in this arena, there are a few principles that should guide the use of assessment data: 1) only data that is useful in improving outcomes should be collected and stored; 2) no amount of statistical manipulation can alter the usefulness of bad data; 3) data must be quality-controlled and accurate; 4) faculty members and students must be trained on the proper interpretation and use of the data—even if the correct data is accurately collected, it will not lead to improvement in curricular outcomes if misused; 5) data must be readily available to those who need it when they need it.

With modern computing power and virtually unlimited data storage capacity, the challenge is no longer in collecting and storing data but in using data to create effective improvements. Faculty members should investigate available technologies adopted by their institutions and seek out training to determine whether these can be used in a meaningful way to facilitate formative assessment in the didactic, laboratory, or experiential settings.
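The principles above can be made concrete with a small sketch, regardless of which vendor system a program adopts. The sketch below is purely illustrative: the outcome names, the record format, and the 0.7 benchmark are invented assumptions, but the structure mirrors the principles of storing only outcome-linked scores, rejecting bad data, and surfacing gaps to those who need them.

```python
# Hypothetical sketch of the assessment-data principles: accept only scores
# that are tied to a curricular outcome, reject out-of-range values (quality
# control), and report per-outcome gaps so instructors can act promptly.
def flag_learning_gaps(records, benchmark=0.7):
    """records: list of (student, outcome, score) with score in [0, 1].
    Returns {student: [outcomes whose average score falls below benchmark]}."""
    by_key = {}
    for student, outcome, score in records:
        if not 0.0 <= score <= 1.0:
            # Principle 2/3: bad data cannot be rescued later; reject it now.
            raise ValueError(f"score out of range: {score!r}")
        by_key.setdefault((student, outcome), []).append(score)
    gaps = {}
    for (student, outcome), scores in by_key.items():
        if sum(scores) / len(scores) < benchmark:
            gaps.setdefault(student, []).append(outcome)
    return gaps

# Example with invented students and outcomes:
records = [
    ("Ana", "patient counseling", 0.9),
    ("Ana", "pharmacokinetics", 0.5),
    ("Ben", "pharmacokinetics", 0.8),
]
print(flag_learning_gaps(records))  # {'Ana': ['pharmacokinetics']}
```

A report like this, generated after each formative quiz, keeps the data actionable while the course is still in progress rather than deferring it to an end-of-term review.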

Formative Assessment of Faculty Teaching

Concepts and examples of formative assessment for enhancing student learning and development are also applicable to improvements in teaching effectiveness. Moreover, items that improve student performance such as classroom assessment techniques, student teaching evaluations, portfolios, self-reflections, performance evaluations, and peer assessments create opportunities for faculty members to strengthen their teaching abilities.

During the last several decades, survey research has shown that student teaching evaluations were the primary source institutions of higher education used for evaluating teacher effectiveness.34 Student evaluations normally occur at the end of a class and often serve as summative assessment measures for promotion and tenure decisions. Numerous published articles debate the pros and cons of using summative student teaching evaluations. Faculty members question the validity and reliability of student perceptions as a single source for rating teaching. Although students are able to evaluate certain aspects of teaching, some research points to areas of teaching performance that students are not qualified to assess (ie, content expertise, teaching methods, knowledge, etc.).35 Other research, however, indicates student ratings are “reliable, valid, relatively unbiased, and useful.”36 The challenges are determining, from student ratings, what actions should be taken to improve teaching, and obtaining comprehensive and constructive feedback at the end of a course from students who are unmotivated to provide it because they are unlikely to benefit from any future course improvements.

Elzubeir and Rizk found faculty members paid more attention to written comments than to mean scores when reviewing student teaching evaluations in a medical education environment.37 Thus, formative feedback could be gathered from students midsemester, specifically asking for written comments rather than ratings. Midsemester feedback could be facilitated by assessment personnel, campus staff dedicated to improvement of teaching and learning, faculty members, or student leaders. Such feedback would allow faculty members to make adjustments to instruction or course content and immediately benefit the students providing it. Student ratings alone are not sufficient evidence for evaluating teaching effectiveness.35

Several colleges and schools of pharmacy include faculty member peer reviews of classroom teaching.38-40 When used in conjunction with student teaching evaluations, peer reviews often serve as another summative measure for promotion and tenure decision-making. Turpen et al indicated in preliminary findings that even when institutions used both student teaching evaluations and peer reviews, they tended to rely only on student teaching evaluations for faculty performance evaluations.41 Feedback from peers, however, serves as a powerful tool for formative assessment to enhance the quality of teaching.42

Current research suggests that using multiple sources of feedback is more effective in evaluating teaching and professionalism.43,44 Berk refers to this process as the “360° multisource feedback model to evaluate teaching and professionalism.”43 Successful implementation of formative feedback requires an understanding of the teaching environment and agreed-upon best practices for the institution, as well as student and faculty member development on how to provide effective constructive feedback.

Faculty members often base the assessment of their own teaching effectiveness on student test performance. Choosing formative assessments that provide feedback on how well students are learning can enhance the quality of teaching in the classroom. For example, in the classroom faculty members may notice students not paying attention or displaying a look of confusion on a particular topic. By implementing classroom assessment techniques, such as the minute paper or clickers, faculty members can gauge understanding, obtain immediate feedback, and allow for teaching modifications. Midsemester and end-of-semester student feedback may help enhance effective communication or organization of the course or lecture. Similar techniques can be used in experiential settings. Peer feedback can also be instrumental in further refining instructional strategies, organization of content, student engagement, and assessment methods. Research-based assessments and student performance on examinations, quizzes, or homework can demonstrate attainment of educational outcomes and thus verify teaching effectiveness. Reflective teaching portfolios could be another valuable method for assessing teaching strengths and weaknesses and determining focused development in these areas. Action research, a disciplined process of inquiry for the purpose of reflection and quality improvement, allows faculty to engage in rigorous examination of their teaching practices.37,45 Documentation of evidence of successful teaching and student learning, as well as self-reflection should be included in such portfolios.

CONCLUSION

A variety of formative assessment techniques can be applied in didactic or experiential settings to strengthen student knowledge, skills, and attitudes. Formative feedback aimed at instructors develops faculty members and improves the quality of teaching. We recommend that professional pharmacy programs create a culture that encourages “formative failure” in a safe environment as a key element of learning. Formative failure allows students to make mistakes and learn before being officially graded on an assignment; it likewise allows teachers to experiment in order to further refine their teaching abilities.

Further, we recommend that, for students to achieve optimal learning outcomes, faculty members and preceptors learn to effectively integrate a variety of formative assessment strategies into their teaching. Formative assessment of students and faculty members should employ the 360° multisource feedback approach. Sources of feedback for students include competency benchmarks, instructors, peers, preceptors, advisors, employers, patients, and self-reflection. Teachers should reflect on their performance by triangulating feedback from students and peers and evidence of learning from formative and summative assessments.

The proper use of formative assessment of both student outcomes and teaching skills should liberate the instructor to teach important material, the students to learn from their mistakes, and the culture of the institution to refocus on the development of outcomes. Institutions may want to utilize technology to manage the process of compiling, evaluating, and reporting on student progress effectively to provide timely data for learners and teachers to quickly address learning gaps. A well-structured assessment program that includes a balance of formative and summative assessments will help programs document student achievement of educational outcomes and provide data necessary for the continuous improvement of the curriculum.

REFERENCES

1. Medina MS, Plaza CM, Stowe CD, et al. Center for the Advancement of Pharmacy Education 2013 Educational Outcomes. Am J Pharm Educ. 2013;77(8):Article 162. doi: 10.5688/ajpe778162.
2. Scriven M. The methodology of evaluation. In: Tyler RW, Gagne R, Scriven M, editors. Perspectives of Curriculum Evaluation: American Educational Research Association Monograph Series on Curriculum Evaluation, No. 1. Chicago, IL: Rand McNally; 1967. pp. 39–83.
3. Bloom BS. Learning for Mastery. Instruction and Curriculum. Regional Education Laboratory for the Carolinas and Virginia, Topical Papers and Reprints, No. 1. Los Angeles, CA: University of California Press; 1968. pp. 9–10. http://files.eric.ed.gov/fulltext/ED053419.pdf. Accessed March 3, 2014.
4. Bloom BS. Some theoretical issues relating to educational evaluation. In: Tyler RW, editor. Educational Evaluation: New Roles, New Means. The 63rd yearbook of the National Society for the Study of Education, Part II, 69(2). Chicago, IL: University of Chicago Press; 1969. p. 48.
5. Black PJ, Wiliam D. Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability. 2009;21(1):5–31.
6. Anderson HM. Preface: a methodological series on assessment. Am J Pharm Educ. 2005;69(1):Article 11.
7. Angelo TA, Cross PK. Classroom Assessment Techniques. 2nd ed. San Francisco, CA: Jossey-Bass; 1993.
8. Liu Y, Mauther S, Schwarz L. Using CPS to promote active learning. In: Song H, Kidd T, editors. Handbook of Research on Human Performance and Instructional Technology. Hershey, PA: IGI Global; 2010. pp. 106–117.
9. Kelley KA, Beatty SJ, Legg JE, McAuley JW. A progress assessment to evaluate pharmacy students’ knowledge prior to beginning advanced pharmacy practice experiences. Am J Pharm Educ. 2008;72(4):Article 88. doi: 10.5688/aj720488.
10. Berry J. Technology support in nursing education: clickers in the classroom. Nurs Educ Perspect. 2009;30(5):295–298.
11. Gauci SA, Dantas AM, Williams DA, Kemm RE. Promoting student-centered active learning in lectures with a personal response system. Adv Physiol Educ. 2009;33(1):60–67. doi: 10.1152/advan.00109.2007.
12. Medina MS, Medina PJ, Wanzer DS, Wilson JE, Er N, Britton ML. Use of an audience response system (ARS) in a dual-campus classroom environment. Am J Pharm Educ. 2008;72(2):Article 38. doi: 10.5688/aj720238.
13. Patterson B, Kilpatrick J, Woebkenberg E. Evidence for teaching practice: the impact of clickers in a large classroom environment. Nurse Educ Today. 2010;30(7):603–607. doi: 10.1016/j.nedt.2009.12.008.
14. DiVall MV, Hayney MS, Marsh W, et al. Perceptions of the students, faculty, and administrators on the use of technology in the classroom at six colleges of pharmacy. Am J Pharm Educ. 2013;77(4):Article 75. doi: 10.5688/ajpe77475.
15. Pai VB, Kelley KA, Bellebaum KL. A technology-enhanced patient case workshop. Am J Pharm Educ. 2009;73(5):Article 86. doi: 10.5688/aj730586.
16. Schwarz LA, Eikenburg D. Integrating the basic and clinical sciences in small-group, clinical/pharmacology case-solving sessions. Am J Pharm Educ. 2011;75(5):Article 105.
17. National Science Foundation. National Center for Case Study Teaching in Science. http://sciencecases.lib.buffalo.edu/. Accessed March 3, 2014.
18. Beck DE, Boh LE, O’Sullivan PS. Evaluating student performance in the experiential setting with confidence. Am J Pharm Educ. 1995;59(Fall):236–247.
19. Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65(9):S63–67. doi: 10.1097/00001888-199009000-00045.
20. Epstein RM, Hundert EM. Defining and assessing professional competence. JAMA. 2002;287(2):226–235. doi: 10.1001/jama.287.2.226.
21. Kogan JR, Holmboe ES, Hauer KE. Tools for direct observation and assessment of clinical skills of medical trainees: a systematic review. JAMA. 2009;302(12):1316–1326. doi: 10.1001/jama.2009.1365.
22. Harden RM. What is an OSCE? Med Teach. 1988;10(1):19–22. doi: 10.3109/01421598809019321.
23. McAleer S, Walker R. Objective structured clinical examination (OSCE). Occas Pap R Coll Gen Pract. 1990;46:39–42.
24. Sturpe DA. Objective structured clinical examinations in doctor of pharmacy programs in the United States. Am J Pharm Educ. 2010;74(8):Article 148. doi: 10.5688/aj7408148.
25. Stevens DD, Levi AJ. Introduction to Rubrics: An Assessment Tool to Save Grading Time, Convey Effective Feedback, and Promote Student Learning. Sterling, VA: Stylus Publishing; 2005.
26. Doran GT. There’s a S.M.A.R.T. way to write management’s goals and objectives. Manag Rev. 1981;70(11):35–36.
27. Interprofessional Education Collaborative Expert Panel. Core Competencies for Interprofessional Collaborative Practice: Report of an Expert Panel. 2011. http://www.aacn.nche.edu/education-resources/ipecreport.pdf. Accessed March 3, 2014.
28. Chiu CG, Brock D, Abu-Rish E, et al. Performance Assessment of Communication and Teamwork (PACT) Tool Set. University of Washington Center for Health Science Interprofessional Education, Research, and Practice. http://collaborate.uw.edu/educators-toolkit/tools-for-evaluation/performance-assessment-of-communication-and-teamwork-pact-too. Accessed January 3, 2014.
29. TeamSTEPPS. U.S. Department of Health and Human Services, Agency for Healthcare Research and Quality. http://teamstepps.ahrq.gov/. Accessed March 3, 2014.
30. Hamilton S. Development in reflective thinking. Innovative Educators. http://www.innovativeeducators.org/v/vspfiles/IEfiles/03_21_369_Development_in_Reflective_Thinking.pdf. Accessed March 3, 2014.
31. Schon D. The Reflective Practitioner: How Professionals Think in Action. 1st ed. San Francisco, CA: Jossey-Bass; 1983.
32. Huba ME, Freed JE. Learner-Centered Assessment on College Campuses: Shifting the Focus from Teaching to Learning. 1st ed. Upper Saddle River, NJ: Pearson; 1999.
33. Plaza CM, Draugalis JR, Slack MK, et al. Use of reflective portfolios in health sciences education. Am J Pharm Educ. 2007;71(2):Article 34. doi: 10.5688/aj710234.
34. Seldin P, Hutchings P. Changing Practices in Evaluating Teaching: A Practical Guide to Improved Faculty Performance and Promotion/Tenure Decisions. 1st ed. San Francisco, CA: Jossey-Bass; 1999.
35. Arreola RA. Common questions concerning student ratings: what 80 years of research tells us. In: Arreola RA, editor. Developing a Comprehensive Faculty Evaluation System. Bolton, MA: Anker Publishing Inc; 2000. pp. 79–92.
36. Williams BC, Pillsbury MS, Stern DT, Grum CM. Comparison of resident and medical student evaluation of faculty teaching. Eval Health Prof. 2001;24(1):53–60. doi: 10.1177/01632780122034786.
37. Elzubeir M, Rizk D. Evaluating the quality of teaching in medical education: are we using the evidence for both formative and summative purposes? Med Teach. 2002;24(3):313–319. doi: 10.1080/01421590220134169.
38. Hansen LB, McCollum M, Paulsen SM, et al. Evaluation of an evidence-based peer teaching assessment program. Am J Pharm Educ. 2007;71(3):Article 45. doi: 10.5688/aj710345.
39. Trujillo JM, DiVall MV, Barr J, Gonyeau M, Van Amburgh J, Qualters D. Development of a peer teaching-assessment program and a peer observation and evaluation tool. Am J Pharm Educ. 2008;72(6):Article 147. doi: 10.5688/aj7206147.
40. Barnett CW, Matthews HW. Teaching evaluation practices in colleges and schools of pharmacy. Am J Pharm Educ. 2009;73(6):Article 103. doi: 10.5688/aj7306103.
41. Turpen C, Henderson C, Dancy M. Faculty perspectives about instructor and institutional assessments of teaching effectiveness. AIP Conference Proceedings. 2012;1413(1):371–374.
42. DiVall MV, Barr JT, Gonyeau M, et al. Follow-up assessment of a faculty peer observation and evaluation program. Am J Pharm Educ. 2012;76(4):Article 61. doi: 10.5688/ajpe76461.
43. Berk RA. Using the 360° multisource feedback model to evaluate teaching and professionalism. Med Teach. 2009;31(12):1073–1080. doi: 10.3109/01421590802572775.
44. Berk RA. Top five flashpoints in the assessment of teaching effectiveness. Med Teach. 2013;35(1):15–26. doi: 10.3109/0142159X.2012.732247.
45. Hesketh EA, Bagnall G, Buckley EG, et al. A framework for developing excellence as a clinical educator. Med Educ. 2001;35(6):555–564. doi: 10.1046/j.1365-2923.2001.00920.x.
