American Journal of Pharmaceutical Education. 2011 May 10;75(4):73. doi: 10.5688/ajpe75473

Intergroup Peer Assessment in Problem-Based Learning Tutorials for Undergraduate Pharmacy Students

Vicky S Kritikos 1, Jim Woulfe 1, Maria B Sukkar 1, Bandana Saini 1
PMCID: PMC3138352  PMID: 21769149

Abstract

Objective

To develop, implement, and evaluate a process of intergroup peer assessment and feedback using problem-based learning (PBL) tutorials.

Methods

A peer-assessment process was used in a PBL tutorial setting for an integrated pharmacy practice course in which small groups of students graded each other's PBL case presentations and provided feedback in conjunction with facilitator assessment.

Assessment

Students' quantitative and qualitative perceptions of the peer assessment process were triangulated with facilitator feedback. Students became more engaged, confident, and motivated, and developed a range of self-directed, life-long learning skills. Students had mixed views regarding the fairness of the process and grade descriptors. Facilitators strongly supported the peer assessment process.

Conclusions

Peer assessment is an appropriate method to assess PBL skills and is endorsed by students as appropriate and useful.

Keywords: pharmacy students, peer assessment, problem-based learning

INTRODUCTION

The academic and practice components of pharmacy courses are increasingly geared toward developing therapeutic expertise as well as critical-thinking, problem-solving, teamwork, reflection, and negotiation skills.1 This change in direction is driven largely by a shift in the professional practice of pharmacy over the last 20 years from traditional product supply toward the provision of primary care services, including patient education, medication and lifestyle management, health promotion, disease monitoring, screening, and prevention.2-4 Because the delivery of these services requires effective interdisciplinary cooperation,5 pharmacists must develop skills that foster interprofessional relationships. To adequately equip students with the diverse skills required for pharmacy practice today, traditional models of didactic teaching are being replaced with student-centered and group-based teaching methods, such as problem-based learning.6-8 This changing scope of pharmacy practice is reflected in the Accreditation Standards and Guidelines for the Professional Program in Pharmacy Leading to the Doctor of Pharmacy Degree, which stipulate that students should be encouraged to participate in the education of others, including fellow students and healthcare providers.9

PBL fosters a deep (rather than surface) approach to learning and promotes self-directed, life-long learning skills.10 It encourages active learning and collaboration between students and provides a context designed to promote internal motivation through the provision of pragmatic goals. Because PBL emphasizes the development of proficiency in the “real-time” resolution of clinical problems, it would be appropriate for the assessment of student skills, processes, and attitudes to take place in tutorials at the same time that the problem is presented and solved, rather than by means of formal examinations and tests conducted much later.11 However, assessment of student progress within PBL tutorials has remained a challenge because more traditional forms of assessment are not aligned with and do not readily assess what is being learned in PBL tutorials.11

Peer assessment in higher education is a process whereby students engage with criteria and standards and apply them to evaluate the work of their peers.12-15 The process can be formative or summative and also can include qualitative feedback relating to the grading criteria used rather than a quantitative focus on the actual grade.13 Peer assessment can occur in the context of individual or group work, the latter taking 1 of 3 forms: intragroup, wherein each member of a group rates the performance or contribution of the other individual group members to the shared product; intergroup, wherein 1 or more members in a group rate the performance or product of another group; and extragroup, wherein individuals who are not group members assess the performance or product of 1 of the groups.13

Peer assessment provides a powerful avenue for students to receive feedback on their learning.12-27 In the context of group work, peer assessment improves student learning and increases confidence in future collaborative work by contributing to the development of a variety of skills, such as self-directed learning, critical reasoning, reflection, negotiation, professional judgment, teamwork, and self-awareness.17-24 Peer assessment also can benefit teaching staff members by reducing their workload,13 providing new insights into student learning processes,24 and encouraging staff members to provide greater transparency regarding assessment objectives and grading criteria.25,27 Given that both peer assessment and PBL focus on group collaboration and share key objectives and philosophies, peer assessment seems an appropriate evaluative process for the PBL tutorial setting.

PBL techniques are used in 18 of the 48 credit hours that students must attain during the final year of the bachelor of pharmacy (BPharm) program at the University of Sydney, Australia. Assessing learning in these PBL courses has been a challenge. Problems posed to senior students working in small groups are usually highly complex, often with incomplete data as in real life; involve many interrelated factors, such as pathology results, polypharmacy, psychosocial determinants of medication use, and prescribing or medication use errors; and often have more than 1 reasonable solution or approach. In this cohort, peer assessment was considered an innovative method of assessing higher-order learning in PBL tutorials.

Although peer assessment by small groups has been applied in different settings encompassing a diversity of study designs,28 no previous study has investigated the use of intergroup peer assessment within the PBL setting, particularly in undergraduate pharmacy curricula. Moreover, whereas other studies have examined small groups' assessment of a process, group assessment of a discrete product or performance has yet to be studied.13 This study aimed to apply the peer-assessment process in a PBL tutorial setting in which small groups of students grade each other's PBL case presentations and provide feedback in conjunction with facilitator assessment. The specific objectives of this study were to implement and evaluate a process of intergroup peer assessment and feedback in the PBL tutorial setting. It was hypothesized that students undertaking PBL tutorials would be able to understand and engage in both group peer assessment and the PBL process.

DESIGN

At the University of Sydney, Australia, all BPharm students take Integrated Pharmacy Practice, a 12 credit-hour course, in the first semester of their fourth year. An overview of this course is provided in Table 1. Integrated Pharmacy Practice integrates 3 components: clinical chemistry, experiential learning, and applied therapeutics. Applied therapeutics is delivered through a mix of lectures and PBL tutorials. Within each PBL tutorial, students work in 2 groups of 6 to 8 students and undertake two 2-hour sessions of PBL tutorial time each week throughout a 13-week semester. The structure of the PBL tutorials and cases is described in Figure 1. Working in a collaborative environment within their small groups, students analyze a case, formulate hypotheses, identify issues in the management of the patient's disease, and make recommendations for addressing the identified issues. The whole group carries out the PBL tasks each week and, on an alternating weekly basis, half the group is responsible for giving the corresponding 12- to 15-minute clinical case presentation. Conventionally, the facilitator-assessed clinical case presentations account for 20% of the final grade for the course.

Table 1.

Overview of the Integrated Pharmacy Practice Course


a PBL = problem-based learning

b Clinical placements can be hospital- or community pharmacy-based.

Figure 1. PBL Case Structure.

In recent years, facilitators have observed that students were passive and uninterested in their peers' presentations. Facilitators of the PBL cases felt it was necessary to develop methods to keep students motivated and engaged during clinical case presentations, as these presentations are not only part of the overall assessment but also a vehicle for further learning, especially regarding alternative approaches to case management. The facilitators determined that 1 possible way to reduce students' lack of interest and passivity would be to actively engage them in the process by requiring them to assess the clinical case presentations using established criteria, just as the facilitators do. This would allow immediate and transparent feedback both for the presenters and for the peers assessing the clinical case presentation.

These observations provided the concept for the current study, which was conducted during the first semester of 2009 within the Integrated Pharmacy Practice course. Ethics approval to conduct the study was obtained from the Human Research Ethics Committee of the University of Sydney (HREC Approval Number 11707).

Subgroup clinical case presentations were assessed by all members of the other group. Peer assessment was done in conjunction with facilitator assessment for cases 3 to 8, thus accounting for 15% of each student's final course grade (Table 1).

Assessment Criteria

Assessment criteria and grade descriptors were developed for use in peer evaluations of clinical case presentations based on: (1) an extensive review of the literature on peer assessment and clinical reasoning skills, (2) academic staff and clinical practitioner teachers' experience in conducting PBL in the same course, and (3) input from a panel of experts from the Faculty's education unit and the University of Sydney's Institute of Teaching and Learning. The grading criteria (Appendix 1) assessed all domains of Bloom's Taxonomy of Learning (cognitive, affective, and psychomotor)29 and were framed along 4 key areas of assessment: clinical reasoning skills (cognitive), reflection on practice (cognitive/affective), teamwork (affective), and presentation (psychomotor). The detailed grade descriptors established standards for clinical case presentations at the high distinction (>85%), distinction (75% to 84%), credit (65% to 74%), pass (50% to 64%), and below pass (<50%) levels (Appendix 2).

Training in Peer Assessment

Eleven facilitators who were unfamiliar with peer assessment were trained by Institute of Teaching and Learning experts to facilitate peer assessment using the developed criteria and grade descriptors within the PBL component of the course. Most facilitators were experienced in teaching the PBL component of this course, and all were asked to review and comment on the grading criteria and descriptors prior to implementation. During the first week, the peer-assessment process was explained to students in an introductory lecture, the use of the assessment criteria and grade descriptors was demonstrated, and students were asked during the first PBL tutorial to give their consent to participate. All materials (grade descriptors, assessment criteria, and clinical case presentation examples) were posted on the course e-Learning Web site. The first 2 PBL tutorials were devoted to a practice case (case 0, which did not contribute toward the final assessment for the course), during which students were introduced to the peer-assessment process, given tips by their PBL facilitators on how to provide constructive peer feedback, and provided with a solved template example of what would be expected in their clinical case presentations. The tips on constructive feedback were based on published references, which were made available to students in their course handbook.30,31 Students were instructed on how to use the grading criteria and feedback form and encouraged to write constructive comments in the space provided. Over the following 2 weeks, facilitators led the clinical case presentation assessments for cases 1 and 2, which exemplified the use of the grading criteria and feedback form.

Peer-Assessment Process Design

For cases 3 through 8, students assessed clinical case presentations delivered by members of the other group. To reduce individual bias, students were asked to provide peer assessment as a group rather than as individuals. After each presentation, students were given 10 minutes as a group to negotiate and agree on a final grade for the clinical case presentation given by their peers. Facilitators independently assessed the presentations at the same time. After both presentations were delivered and assessed, facilitators allowed each group an additional 5 minutes to provide reciprocal verbal feedback based on their written comments. This was followed by facilitator feedback and case debriefing. Students were required to express their assessment as a grade on the grading criteria form (ie, peers awarded a grade such as credit or pass rather than a numerical mark such as 1 or 2 out of 5) (Appendix 2). When grades awarded by peers were not consistent with the facilitator's grade, peers were required to justify the grade they assigned and negotiate the final grade with the facilitator. If the students could not clearly justify their grade, the facilitator was allowed to override it with one that was justifiable. A facilitator grade override was considered an extreme measure and had to be brought to the attention of the course coordinator (B.S.). All group members received the same grade, unless otherwise determined by their assessing peers. Facilitators then converted the grades awarded into marks for each presenting member of either group, as sketched below.
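As an illustration only, the reconciliation and conversion steps above could be expressed as follows. The grade bands come from the course grade descriptors (Appendix 2), but the midpoint-based conversion to a mark out of 20 and all names in this sketch are our assumptions; the paper does not state the exact conversion rule facilitators used.

```python
# Hypothetical sketch of the grade reconciliation and grade-to-mark
# conversion described above. The midpoint-based conversion is an
# assumption; the paper does not specify the rule facilitators used.

GRADE_BANDS = {  # grade: (lower %, upper %), per the course grade descriptors
    "high distinction": (85, 100),
    "distinction": (75, 84),
    "credit": (65, 74),
    "pass": (50, 64),
    "fail": (0, 49),
}

def final_grade(peer_grade: str, facilitator_grade: str, justified: bool) -> str:
    """The peer grade stands if it matches the facilitator's grade or the
    peers can justify it; otherwise the facilitator may override (an
    extreme measure, reported to the course coordinator in practice)."""
    if peer_grade == facilitator_grade or justified:
        return peer_grade
    return facilitator_grade

def grade_to_mark(grade: str, component_total: int = 20) -> float:
    """Convert a categorical grade to a mark out of `component_total`,
    using the midpoint of the grade's percentage band (an assumption)."""
    low, high = GRADE_BANDS[grade.lower()]
    return round((low + high) / 2 / 100 * component_total, 1)

# Example: peers award credit, facilitator awards distinction, peers
# cannot justify their grade, so the facilitator's grade stands.
print(grade_to_mark(final_grade("credit", "distinction", justified=False)))  # 15.9
```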

EVALUATION AND ASSESSMENT

All processes were implemented according to plan with 16 PBL groups (total N = 235) and 11 facilitators. No occasions in which facilitators had to override peer-assessed grades were reported to the coordinator, nor were any other complaints made to the coordinator by either students or facilitators during the semester.

Student Peer Assessment Feedback Questionnaire

Students were invited to complete a voluntary, anonymous 7-item questionnaire in the final week of the semester to assess their perceptions of peer assessment (6 items) and satisfaction with the group work (1 item). These 7 items were adapted from a measure originally developed by Gatfield32 and used a 5-point Likert scale to record responses (1 = strongly agree, 5 = strongly disagree; or 1 = extremely satisfied, 5 = extremely dissatisfied). The research team added an item that assessed respondents' perceptions of the grading criteria on a 5-point Likert scale (1 = extremely easy to use, 5 = extremely difficult to use), and 2 open-ended questions that allowed respondents to provide information about the assessment and feedback process. A section at the end of the questionnaire requested respondents' gender, age group, nationality, hours worked in pharmacy per week over the past year, and prior experience in peer assessment of individual and group work.

All data collected were deidentified. Quantitative data analyses were performed using SPSS, Version 17. Mean ratings for each item in the questionnaire were calculated. Likert-scale responses 1 and 2 (strongly agree and agree) and responses 4 and 5 (disagree and strongly disagree) were combined for each item, and the proportion of student responses to each item was calculated. Exploratory analyses were undertaken using Student t tests for continuous variables. A 2-tailed, 5% (0.05) level of significance was used for all statistical procedures.
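Although the published analyses were run in SPSS, the item summaries described above are straightforward to reproduce. The following minimal sketch in Python assumes a hypothetical response table with one column per Likert item; the column name and example data are invented for illustration.

```python
# Minimal sketch of the Likert item summaries: mean rating plus the
# proportions obtained after collapsing the 5-point scale
# (1-2 -> agree, 3 -> neutral, 4-5 -> disagree).
import pandas as pd

def summarize_item(scores: pd.Series) -> dict:
    """Return the mean, SD, and collapsed agreement proportions for one item."""
    n = scores.notna().sum()
    return {
        "mean": scores.mean(),
        "sd": scores.std(),
        "pct_agree": (scores <= 2).sum() / n * 100,
        "pct_neutral": (scores == 3).sum() / n * 100,
        "pct_disagree": (scores >= 4).sum() / n * 100,
    }

# Invented example data for a single item (hypothetical column name):
responses = pd.DataFrame({"item1_understood_process": [1, 2, 2, 1, 3, 2, 1, 4, 2, 1]})
print(summarize_item(responses["item1_understood_process"]))
```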

Of the 235 students invited to participate in the survey, 220 returned a completed questionnaire (94% response rate). Sixty-four percent of the respondents were female, 95% were aged 21 to 25 years, and 90% were born in Australia. Eighty-two percent had previously peer-assessed either individual student work or group work. Participating students had worked a mean of 9.4 ± 6.5 (SD) hours per week in pharmacy over the previous year. No correlations were found between age, gender, nationality, or work experience and perceptions of peer assessment on the questionnaire items. The majority of respondents (96%) indicated that they understood the peer-assessment process (item 1), and more than 70% agreed that it is an appropriate group assessment method (item 2) and that students should assess their peers (item 3). Mean ratings across questionnaire items 1 through 3 (Table 2) show a positive acceptance of the peer-assessment process by students. In contrast, less than 50% agreed that the peer-assessment process is a fair way to divide grades (item 4), that grades are a fair reflection of students' efforts (item 5), and that peers are capable of assessing fairly (item 6). Mean ratings across items 4 to 6 indicate that, on average, students were approximately neutral (ie, a score of 3 on the 5-point Likert scale). Eighty-two percent of students were satisfied with the group work process (mean rating 2.1 ± 0.7). Seventy percent of students found the grading criteria extremely easy or easy to use in grading their peers' clinical case presentations (mean rating 2.3 ± 0.8), with only 18% indicating ambivalence about ease of use.

Peer Assessment Marks Versus Facilitator Marks in Previous Year

Facilitator-assessed average course grades for the 2008 clinical case presentations of students undertaking the same course with the same structure were compared with peer-facilitator coassessed average course grades for this component of the 2009 course. This was the only element of the course that related solely to the applied therapeutics component.

The mean grades (out of 20) for the clinical case presentation component attained by students undertaking the same course with the same structure in 2008 (facilitator assessment: 17.1 ± 2.1, n = 230) and in 2009 (peer-facilitator coassessment: 17.3 ± 1.3, n = 235) showed no significant difference between facilitator- and peer-assessed average class marks for this component of the course (P = 0.22).
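This comparison can be checked from the reported summary statistics alone. A short sketch using scipy reproduces the reported P value, assuming (as the Methods suggest) an equal-variance, 2-tailed Student t test:

```python
# Reproduce the 2008 vs 2009 comparison from the reported means, SDs, and ns.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(
    mean1=17.1, std1=2.1, nobs1=230,  # 2008: facilitator assessment
    mean2=17.3, std2=1.3, nobs2=235,  # 2009: peer-facilitator coassessment
)
print(f"t = {t:.2f}, P = {p:.2f}")  # P ≈ 0.22, matching the reported value
```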

Qualitative Feedback

To gauge levels of satisfaction with the peer-assessment process and to understand whether students felt it had helped them engage with their peers' presentations, qualitative methods were employed to explore the perceptions and observations of both students and facilitators. At the end of the semester, students were invited to attend a focus group session designed to elicit further comments about the peer-assessment activity. The focus group session guide used an open-ended approach, querying the overall peer-assessment experience, what components were liked or disliked, whether the process should be retained in future courses, and, if retained, how it could be improved. Focus groups were facilitated by 1 of the researchers. PBL facilitators were invited to attend a debriefing session facilitated by 2 members of the research team; feedback was sought regarding their overall teaching experience in the PBL tutorials, their specific experiences and observations in implementing the peer-assessment process, and suggestions to change or improve the process.

Thirty students (13% of the total class) participated in 4 focus groups at the end of the semester. Seven of the 16 PBL tutorial classes were represented by these 30 students. Although nonrespondents may have held different views, focus group students provided both positive and negative feedback, and content analyses of the focus group conversations revealed a saturation of ideas and feedback.

The focus group sessions were tape-recorded, transcribed, and thematically summarized. To ensure consistency, coding of the focus group transcripts, session notes from student sessions, and qualitative commentary from the questionnaires was undertaken independently by 2 researchers.

The focus group participants stated that the peer-assessment process gave them a clear understanding of the standards expected of them and made subsequent learning easier. Students stated that peer assessment in PBL tutorials helped them “learn from each other” and become more engaged, attentive, reflective, analytical, critical in reasoning, confident, and self-aware. Consistent with the responses on the feedback questionnaire, the main negative aspects of the assessment process reported by students were ambivalence about the fairness of the process and lack of confidence in their own ability and that of their peers to assess fairly. Some focus group participants stated that the grade descriptors were too detailed and difficult to use and needed to be simplified, while others stated that they were quite useful and did not need to be improved, as “it was just a matter of becoming familiar with them.” All found the grading criteria easier to use than the grade descriptors.

Other themes obtained from focus groups indicated that the peer assessment and feedback process helped students become better team members by improving their skills in professional judgment, assertiveness, negotiation, oral presentation, leadership, engagement, time-management, and delegation. Students valued the feedback they received from peers, explaining that it was a “different kind of feedback” because their peers had researched the same information and were on the “same plane” as the students they were assessing. Negative aspects of the feedback process reported by students included that feedback was initially “taken a bit personally,” resulting in students reacting defensively, and that feedback provided toward the end of the semester became “a bit picky.”

Facilitator Feedback

At the end of the semester, the 11 facilitators participated in a debriefing session. The session was tape-recorded, transcribed, and thematically summarized in the same manner as the student focus group sessions. To ensure accurate coding of the debriefing session transcripts, notes from the facilitator session were reviewed by 2 researchers.

All facilitators rated the training provided to them highly and supported the peer-assessment process. They reported that the process kept students engaged and motivated, resulting in better-quality clinical case presentations, and that there were few episodes of bias, such as reciprocal marking and collusion between student groups. Facilitators revealed that there were few occasions of mismatch between facilitators' and students' assessments and that their grades often coincided exactly with those of the peer assessors. Most facilitators suggested that students should receive more guidance in the tone, style, and wording of the feedback they provided. Other suggestions for improvement included giving students more time to complete the feedback process and limiting peer feedback to qualitative comments without grades. Most facilitators felt that the grading criteria and descriptors had worked well, which may have accounted for their grades often coinciding with the peer-assessed grades. They reported that students mostly referred to the criteria rather than the descriptors when assessing.

DISCUSSION

This study is the first to investigate the use of peer assessment in the PBL tutorial setting for small groups of fourth-year pharmacy students who graded each other's presentations and provided feedback in conjunction with facilitator assessment and feedback. The characteristics of this study have been described using a framework28 originally developed by Topping,25 which covers the full scope of details relating to assessment structure and design. This framework enables researchers and educators to accurately replicate our study, compare it with other studies, and synthesize the results, an important step that should be part of any study reporting on peer-assessment research. The current study, which used a posttest design, is strengthened by the use of items from previously validated questionnaires about peer assessment and by the triangulation of quantitative and qualitative data. Based on facilitator feedback, the intergroup peer assessment and feedback process was effective in reducing student passivity and lack of interest, thus addressing the problem for which it was initiated. Students reported that the peer-assessment process increased their confidence, motivation, satisfaction, and exposure to feedback, and promoted collaboration, teamwork, and a broad range of self-directed, life-long learning skills that are aligned with the PBL method's key objectives.10 No differences were found in average class grades between the facilitator-assessed and peer-facilitator coassessed cohorts for the same component of the course over 2 consecutive years. Peer assessment, therefore, is an appropriate assessment method for skills taught in PBL tutorials and works well with final-year undergraduate pharmacy students. Study participants mostly valued the experience and endorsed the appropriateness of the method; hence, the hypothesis proposed can be reasonably accepted.

We believe that the highly positive feedback from students may be a result of the carefully designed process. In this study, considerable effort was expended in training. The development of the structures to scaffold student learning about how to assess their peers' work was time consuming and intricate, involving didactic descriptions, case examples, and a show-and-tell technique in which facilitators demonstrated the use of grading criteria and descriptors (cases 1 and 2). This effort seems to have been well spent, given that an overwhelming majority of students understood the process well.

A unique feature of the study design was the parallel facilitator coassessment, which balanced the allocation of grades by student assessors. The success of this design feature is illustrated by the few discrepancies between peer and facilitator assessments and by the fact that the peer grade usually stood as the final grade. The easiest way to reconcile mismatches between facilitator and peer grades would be either to average the grades or to have the facilitator grade override the peer grade. However, our study used a collaborative coassessment process that involved negotiation and discussion between the students and facilitators. This provision probably resulted in fewer episodes of collusion or reciprocal marking, as reported by facilitators. Also, students were allowed to self-select into groups, making it more likely that friends would work together in the same group and less likely that students in one group would have influential relationships with students in another group.33

Other key design features of the assessment included intergroup rather than individual assessment. Intergroup grading involves an entire group of students grading another group, making the process less threatening to individuals while providing instant feedback on a delivered product. This approach is valuable because it delivers the benefits of peer assessment without necessitating elaborate methods to ensure anonymity for individuals evaluating their peers. Further, students were aware that their assessment of 6 cases was summative and accounted for 15% of the total grade for the course (facilitator-assessed cases 1 and 2 were worth 5%), which may have led students to engage actively and diligently in their peer assessments.34,35 Knowing that each student's grade would be the same as that of the group enhanced positive interdependence, which has been associated with greater individual accountability and task ownership.36 The use of the clinical case presentation as the product to be peer-assessed was also a good choice. A study of Nigerian medical students learning pathophysiology through group clinical presentations revealed that most students found the presentations to be fun, informative, creative or innovative, and, most importantly, beneficial to their learning.37 The majority of students felt that this exercise improved their understanding of pathophysiology, taught them to research independently, and encouraged better class interactions and group learning.37 In the current study, facilitators remarked on the creativity and innovative presentation methods used by the groups, which made the process more interesting to assess. Thus, students possibly were engaged not only with the peer-assessment process but also with the idea of clinical case presentations.

Benefits of the peer assessment and feedback process reported by students are consistent with those of other related studies supporting the appropriateness of this process in intergroup settings.16-27 However, roughly a third of the students responding to the feedback questionnaire expressed concerns about the fairness of the process. This finding is consistent with those of other studies showing that students are equivocal about the fairness of the peer-assessment process because they often lack confidence in their own ability or the ability of their peers to assess fairly for a number of reasons.17 These reasons include students feeling unqualified to assess others' work,33 finding it difficult to assign grades to their peers' work,12 disliking a cognitively challenging assessment process,24 having difficulty being objective, tending to award higher grades to friends,38 being reluctant to award low grades to peers even when deserved,12 lacking the ability to provide constructive feedback,20 being skeptical about their peers' ability to grade fairly,20 and questioning the value of their peers' comments.33 In our study, inexperience or lack of confidence is the more probable reason for ambivalence or concerns about the fairness of peer assessment. This problem could be addressed by increasing students' confidence in their ability to assess fairly through dedicating more time to the student preparation phase. For studies that already include a thorough preparatory phase, as ours did, another way to boost confidence would be to increase student awareness of the benefits and problems associated with peer assessment identified in this and previous investigations. Another possibility is to introduce peer assessment earlier in the undergraduate pharmacy program so that students have more opportunities over the course of their education to gain experience, master their skills, and build their confidence. Because some students reported that the grade descriptors were rather complicated to use, students could be included in the development of grading criteria.12-14,34 Future research on the peer-assessment exercise should include students' feedback about how to simplify the terminology of the grade descriptors to a less academic style.

There are several potential limitations to the current investigation. This study did not use a control-group design because such a design is difficult to accomplish in naturalistic research settings. To avoid respondent fatigue, the peer-assessment questionnaire was not administered both before and after the peer-assessment exercise.38 Agreement between peer and facilitator grades was assessed only qualitatively. Grading criteria and descriptors were customized specifically for the course, and students were not involved in their development. Because the Integrated Pharmacy Practice course is not a standard pharmacotherapy course, standard criteria for measuring either pharmacotherapeutic knowledge or presentation skills may not be applicable. Further, not all students attended the debriefing focus groups. These possible limitations should be addressed when implementing the peer-assessment process at other institutions. The peer-assessment process highlighted in our study can be used in any course that depends on group work and self-directed learning and in which presentations are part of the course evaluation.

CONCLUSION

A structured, quality-controlled peer-assessment process in a nonthreatening, collaborative PBL tutorial setting is an appropriate and effective assessment method for student-centered teaching approaches in pharmacy. Based on student endorsement of this process and the value of feedback from their peers, peer assessment is an appropriate method for evaluating skills taught in PBL tutorials and works well with final-year undergraduate pharmacy students. Future investigations should address students' perceptions regarding the fairness of their peers' assessments, provide more guidance to students on giving and receiving feedback, and simplify the grade descriptors.

Table 2.

Student Mean Ratings and Level of Agreement to Items Regarding Peer Assessment of Clinical Case Presentations (n=220/235)


a Scores based on a 5-point Likert scale, where 1 = strongly agree, 2 = agree, 3 = neutral, 4 = disagree, and 5 = strongly disagree.

Appendix 1. Grading Criteria Form Used by Student Groups and Problem-Based Learning (PBL) Facilitators

Grade and constructive comments for the group

Clinical Reasoning
- Scope of hypotheses: students show an exploration of a broad range of issues in their hypotheses and apply their pre-existing knowledge and reasoning skills to create and refine their hypotheses
- Problem-solving skills: students demonstrate the ability to identify, prioritize, and resolve patient health-related issues evident in the case
- Research evidence: student recommendations for case management are referenced and supported by current evidence from the scientific literature

Reflective Practice
- Application of knowledge: students are able to identify a practical application of knowledge acquired from their case into their observed or current pharmacy practice experiences

Teamwork
- Cohesiveness: the presentation made by the group appears cohesive in that there is continuity of facts and ideas and agreement in solution, demonstrating that the group has worked effectively together

Presentation Skills
- Summary: students summarize the case and demonstrate an understanding of important issues
- Case organization: students organize the case well, using relevant criteria, eg, chronology, issue type, etc
- Aesthetics: student presentation is visually and graphically appealing and contributes to increased understanding of the case
- Creativity: students use creative and fresh ideas to present their ideas on the PBL case scenario

Overall Grade for the group

For each of the criteria, please refer to the grade descriptors (Appendix 2) and assign a grade for that criterion. The overall grade should be assigned based on an average of the grades assigned to each of the criteria, as sketched below.
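The form does not specify how categorical grades are to be averaged. The following is a hypothetical sketch of one reasonable reading, which maps each grade to an ordinal score, averages, and rounds back to a grade level; the mapping and names are our assumptions.

```python
# Hypothetical reading of the overall-grade rule: average the ordinal
# positions of the criterion grades and round to the nearest grade level.
# Equal spacing between grade levels is an assumption.

GRADES = ["fail", "pass", "credit", "distinction", "high distinction"]

def overall_grade(criterion_grades: list[str]) -> str:
    """Average the ordinal positions of the criterion grades and round
    to the nearest grade level."""
    scores = [GRADES.index(g.lower()) for g in criterion_grades]
    return GRADES[round(sum(scores) / len(scores))]

# Example: one grade per criterion on the Appendix 1 form (invented values).
print(overall_grade(["credit", "distinction", "distinction", "pass", "credit",
                     "credit", "distinction", "credit", "credit"]))  # credit
```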

Appendix 2. Grading Descriptors Used by Student Groups and PBL Facilitators to Assign a Relevant Grade

Grade levels, from highest to lowest: High Distinction, Distinction, Credit, Pass, Fail. Where a criterion has fewer than 5 descriptors, the descriptors are listed from the highest to the lowest grade level, as they appeared in the original table.

CLINICAL REASONING SKILLS

Scope of Hypotheses
- High Distinction: The presentation traces a logical process of hypotheses generation and application of case evidence and literature to refine the hypotheses. Clear and methodical reasoning skills are demonstrated.
- Distinction: The presentation shows clear evidence that a range of issues has been examined in the process of hypotheses generation; pre-existing knowledge and reasoning skills have been methodically applied to refine the hypotheses.
- Credit: The presentation contains clear evidence of hypotheses generation and application of pre-existing knowledge and reasoning skills to refine the hypotheses.
- Pass: The presentation contains some evidence of an attempt to explore a range of issues in generating and logically refining hypotheses.
- Fail: Evidence of hypotheses generation and refinement is limited.

Problem-Solving Skills
- High Distinction: Patient health-related issues evident in the case are prioritised and resolved accurately.
- Distinction: Patient health-related issues evident in the case are identified and prioritised, and some ability to resolve them accurately is demonstrated.
- Credit: Patient health-related issues evident in the case are identified, and some ability to resolve them accurately is demonstrated.
- Pass: The obvious patient health-related issues evident in the case are identified.
- Fail: Patient health-related issues evident in the case are incorrectly or incompletely identified.

Research Evidence
- Recommendations for case management are accurately referenced and supported by current evidence from the scientific literature.
- The evidence base for patient management recommendations is incorrect or inadequately referenced.

APPLICATION

Application of Case
- Practical applications of the learning from the case, and instances where the learning can be transferred into professional practice, are provided.
- Practical applications of the learning from the case are provided.
- A practical application of the learning from the case is provided.
- Practical application of the learning is missing from the presentation.

TEAMWORK

Cohesiveness
- High Distinction: The presentation shows a high level of cohesiveness. Facts, ideas, and arguments contribute consistently to the overall recommendations. The work clearly represents the collated thoughts and ideas of the small group.
- Distinction: The presentation shows a high level of cohesiveness. Facts, ideas, and arguments contribute consistently to the overall recommendations.
- Credit: The presentation appears internally consistent, without contradictions or duplications in the presented material.
- Pass: The presentation shows evidence of disparate individual contributions without leading to contradiction or duplication.
- Fail: The group presentation contains contradictions and/or duplications.

PRESENTATION SKILLS

Summary
- The case summary is a concise and comprehensive picture of the patient's salient issues.
- The case summary provides the salient issues related to the patient.
- The presentation includes a case summary.
- The presentation fails to include a case summary.

Case Organization
- High Distinction: The presenters greatly enhance the audience's understanding of the case with clear organisation and structure.
- Distinction: The presenters facilitate the audience's understanding of the case with clear organisation and structure.
- Credit: Organisation of the case aids the audience's understanding.
- Pass: Some organisation is evident.
- Fail: The organisation of the presentation is not apparent.

Aesthetics
- The presentation is enhanced with appropriate graphical and audio-visual features which add to the understanding of the case.

Creativity
- Use of innovative and fresh ideas makes the PBL case scenario presentation compelling and communicative.

ACKNOWLEDGEMENTS

We acknowledge the Teaching Improvement and Equipment Scheme of the Faculty of Pharmacy, University of Sydney, Australia for rendering financial support for this project. All participating facilitators and students are acknowledged for their time and support.


REFERENCES

1. Blouin RA, Joyner PU, Pollack GM. Preparing for a renaissance in pharmacy education: the need, opportunity, and capacity for change. Am J Pharm Educ. 2008;72(2): Article 42. doi: 10.5688/aj720242.
2. Holland RW, Nimmo CM. Transitions, part 1: beyond pharmaceutical care. Am J Health-Syst Pharm. 1999;56(17):1758–1764. doi: 10.1093/ajhp/56.17.1758.
3. Roberts AS, Benrimoj SI, Chen TF, Williams KA, Aslani P. Implementing cognitive services in community pharmacy: a review of facilitators. Int J Pharm Pract. 2006;14:163–170.
4. Reid LD, Posey LM. The changing face of pharmacy. J Am Pharm Assoc. 2006;46(3):320–321. doi: 10.1331/154434506777069552.
5. Dolovich L, Pottie K, Kaczorowski J, et al. Integrating family medicine and pharmacy to advance primary care therapeutics. Clin Pharmacol Ther. 2008;83(6):913–917. doi: 10.1038/clpt.2008.29.
6. Marriott JL, Nation RL, Roller L, Costelloe M, Galbraith K, Stewart P, Charman WN. Pharmacy education in the context of Australian practice. Am J Pharm Educ. 2008;72(6): Article 131. doi: 10.5688/aj7206131.
7. Hubball H, Burt H. Learning outcomes and program-level evaluation in a four-year undergraduate pharmacy curriculum. Am J Pharm Educ. 2007;71(5): Article 90. doi: 10.5688/aj710590.
8. Beck DE. Where will we be tomorrow? We need a 2020 vision. Am J Pharm Educ. 2002;66(2):208.
9. Accreditation Council for Pharmacy Education. Accreditation Standards and Guidelines for the Professional Program in Pharmacy Leading to the Doctor of Pharmacy Degree. Chicago, IL: Accreditation Council for Pharmacy Education; 2006. http://www.acpe-accredit.org/standards/default.asp. Accessed October 10, 2010.
10. Kelson AC, Distlehorst LH. Groups in problem-based learning (PBL): essential elements in theory and practice. In: Everson DH, Hmelo CE, eds. Problem-Based Learning: A Research Perspective on Learning Interactions. Mahwah, NJ: Lawrence Erlbaum Associates; 2000.
11. Eva KW. Assessing tutorial-based assessment. Adv Health Sci Educ. 2001;6(3):243–257. doi: 10.1023/a:1012743830638.
12. Falchikov N. Peer feedback marking: developing peer assessment. Innovat Educ Teach Int. 1995;32(2):175–187.
13. Falchikov N. Involving students in assessment. Psych Learn Teach. 2003;3(2):102–108.
14. Falchikov N. Group process analysis: self and peer assessment of working together in a group. Educ Tech Train Int. 1993;30:275–284.
15. Boud D, Cohen R, Sampson J. Peer learning and assessment. Assess Eval Higher Educ. 1999;24(4):413–426.
16. Dochy F, Segers M, Sluijsmans D. The use of self-, peer and co-assessment in higher education: a review. Stud Higher Educ. 1999;24(3):331–350.
17. Ballantyne R, Hughes K, Mylonas A. Developing procedures for implementing peer assessment in large classes using an action research process. Assess Eval Higher Educ. 2002;27(5):427–441.
18. Vickerman P. Student perspectives on formative peer assessment: an attempt to deepen learning? Assess Eval Higher Educ. 2008;1:1–10.
19. Papinczak T, Young L, Groves M. Peer assessment in problem-based learning: a qualitative study. Adv Health Sci Educ. 2007;12:169–186. doi: 10.1007/s10459-005-5046-6.
20. McDowell L. The impact of innovative assessment on student learning. Innovat Educ Train Int. 1995;32(4):302–313.
21. Hanrahan SJ, Isaacs G. Assessing self- and peer-assessment: the students' views. Higher Educ Res Dev. 2001;20(1):53–70.
22. Searby M, Ewers T. An evaluation of the use of peer assessment in higher education: a case study in the school of music. Assess Eval Higher Educ. 1997;22(4):371–383.
23. Somervell H. Issues in assessment, enterprise and higher education: the case for self-, peer- and collaborative assessment. Assess Eval Higher Educ. 1993;18(3):221–233.
24. Topping KJ, Smith EF, Swanson I, Elliot A. Formative peer assessment of academic writing between postgraduate students. Assess Eval Higher Educ. 2000;25(2):149–166.
25. Topping K. Peer assessment between students in colleges and universities. Rev Educ Res. 1998;68(3):249–276.
26. Papinczak T, Young L, Groves M, Haynes M. An analysis of peer, self, and tutor assessment in problem-based learning tutorials. Med Teach. 2007;29:122–132. doi: 10.1080/01421590701294323.
27. Boud D. Enhancing Learning Through Self-Assessment. London: Kogan Page; 1995.
28. Gielen S, Dochy F, Onghena P. An inventory of peer assessment diversity. Assess Eval Higher Educ. 2010;1:1–19.
29. Bloom BS. Taxonomy of Educational Objectives. The Classification of Educational Goals. Handbook 1: Cognitive Domain. New York, NY: McKay; 1956.
30. Ende J. Feedback in clinical medical education. JAMA. 1983;250(6):777–781.
31. Westberg J, Jason H. Collaborative Clinical Education: The Foundation of Effective Health Care. New York, NY: Springer Publishing; 1993.
32. Gatfield T. Examining student satisfaction with group projects and peer assessment. Assess Eval Higher Educ. 1999;24(4):365–377.
33. Magin D. Reciprocity as a source of bias in multiple peer assessment of group work. Stud Higher Educ. 2001;26(1):53–63.
34. Orsmond P, Merry S. The importance of marking criteria in the use of peer assessment. Assess Eval Higher Educ. 1996;21(3):239–250.
35. Swanson D, Case S, VanderVleuten C. Strategies for student assessment. In: Boud D, Feletti G, eds. The Challenge of Problem Based Learning. London: Kogan Page; 1991.
36. Prins FJ, Sluijsmans DMA, Kirschner PA, Strijbos J. Formative peer assessment in a CSCL environment: a case study. Assess Eval Higher Educ. 2005;30(4):417–444.
37. Higgins-Opitz SB, Tufts M. Student perceptions of the use of presentations as a method of learning endocrine and gastrointestinal pathophysiology. Adv Physiol Educ. 2010;34(2):75–85. doi: 10.1152/advan.00105.2009.
38. Cheng W, Warren M. Having second thoughts: student perceptions before and after a peer assessment exercise. Stud Higher Educ. 1997;22(2):233–240.
