American Journal of Pharmaceutical Education
. 2016 Aug 25;80(6):103. doi: 10.5688/ajpe806103

Students’ Opinions on Summative Team Assessments in a Three-Year Concentrated Pharmacy Curriculum

Frederick R Tejada,1 Dana R Fasanella,1 Marwa Elfadaly1
PMCID: PMC5023974  PMID: 27667840

Abstract

Objective. To investigate student opinions of team assessment.

Methods. University of Maryland Eastern Shore School of Pharmacy first-year (P1) to third-year (P3) students (n=125) completed an online survey regarding team assessments. Students rated their opinions on a Likert scale. Responses were examined using the Mann-Whitney U test with respect to academic performance and class.

Results. One hundred twenty-five students (75%) completed the survey. A majority of students (90%) agreed that team assessment was beneficial. Additionally, 78% of the students perceived that the discussion helped clarify misconceptions. Students were not in agreement on the occurrence of free riders (51%) and the use of peer evaluation (38%). Overall, students ranked the benefits of team assessment as improving individual score first, then promoting collaboration, followed by enhancing understanding of material.

Conclusion. Students had favorable opinions regarding team assessment. Educational benefits of team assessments include enhanced understanding of the material, a meaningful activity for promoting collaboration, and development of communication skills.

Keywords: team assessment, three-year pharmacy program

INTRODUCTION

Team assessment is intended to address professional competencies in the Accreditation Council for Pharmacy Education (ACPE) Standards, including knowledge acquisition and application, communication, interpersonal skills, and teamwork skills.1 Additionally, the 2016 ACPE Standards stress that students should be “team-ready” and prepared to contribute directly to patient care in collaboration with other health care providers.2 Educators have the responsibility to teach pharmacy students teamwork and collaboration. Literature is available on the team-based learning and cooperative learning instructional models.3-6 In contrast, little has been published on summative team assessment as a determinant of course grades.

Literature on team assessment, or collaborative testing, within health-related and non-health-related fields primarily focuses on the use of team assessment during a course, with effectiveness measured by an individual final examination or other individual assessment.7-16 Bacon evaluated the effectiveness of team testing in an undergraduate marketing program using direct and indirect measures.17 Among the undergraduate marketing students studied, team testing did not increase learning, but it did enhance students’ perception of learning. Stark’s study within a college of business program described the use of team assessment as a form of postassessment review.18 According to the study, team assessments made postexamination feedback a more student-directed and student-centered activity. Additionally, team assessment provided an experiential exercise for teaching the value of teams.18 In a review of nine studies of collaborative testing in nursing education, Sandahl found that students exposed to team assessments consistently performed better on measures of short-term retention; results of measures for long-term retention were mixed.14

In foundational science, Rosenberg et al discussed implementation and experience with team testing in a biochemistry and molecular biology course in a block curriculum within a 4-year doctor of pharmacy (PharmD) curriculum.19 They asserted that the process of engaging in team assessment provided necessary skills for students, including oral communication, critical thinking, and deductive reasoning, while providing a deeper understanding of the material and helping students develop confidence in expressing views and opinions.

The current study was conducted at the University of Maryland Eastern Shore School of Pharmacy (UMES-SOP). The school offers an accelerated 3-year PharmD program that uses a mastery-learning model and a modular, block system of curricular design. Assessment blocks typically span a 2-week period. At the end of each assessment block, student mastery of the material is assessed through a 2-part examination process consisting of an individual assessment followed by a team assessment. This is essentially a final examination covering the material presented during that assessment block. Overall, there are 18 assessments during the first year and 18 during the second year. These assessments cover all modular courses offered at the school.

The primary objective of this study was to investigate the opinions of students regarding team assessment in the didactic curriculum of a 3-year pharmacy program. The secondary objective was to assess the association of students’ opinions of team assessment with academic performance.

METHODS

The study population comprised 167 students from the classes of 2013, 2014, and 2015, who were enrolled in the PharmD program and had taken team assessments. At the start of the first year, the Office of Student Affairs assigns teams based on Pharmacy College Admission Test (PCAT) performance in biology, chemistry, and composite scores, as well as on gender, race, and results of a standardized personality test. Typically, a team is composed of 5-7 students. The summative assessment occurs every other Friday, during which students take the assessment first individually and then as part of a team. The individual assessment is usually a 2-hour, multiple-choice question examination but can also include short-answer, fill-in-the-blank, true/false, or select-all-that-apply questions. After the individual assessment, students proceed to their preassigned breakout rooms to take the same assessment in teams. They are given an hour and a half to complete the assessment. If a team scores at least 95%, team members get an additional 5% added to the individual assessment score. Students who fail to achieve an 85% on the individual assessment, after addition of group points if obtained, are encouraged to attend a faculty-led review on the same day. The following Monday, these students are required to take a different but equivalent reassessment examination. There is no team examination for reassessment; however, group points obtained on the Friday carry over to the reassessment. Students must achieve a score of 85% on their reassessment examination to pass.
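The grading rules above can be expressed as a short sketch. This is an illustration only: the function and variable names are hypothetical, and the cap at 100% is an assumption the policy does not state.

```python
def final_individual_score(individual_pct, team_pct):
    """Apply the team bonus: if the team scores at least 95%, each
    member gains 5 percentage points on the individual assessment.
    Capping at 100% is an assumption; the policy does not address it."""
    bonus = 5.0 if team_pct >= 95.0 else 0.0
    return min(individual_pct + bonus, 100.0)

def needs_reassessment(score_after_bonus, passing=85.0):
    """Students below 85% after any team bonus take the Monday
    reassessment (team points carry over, but there is no team exam)."""
    return score_after_bonus < passing
```

For example, under these rules a student scoring 82% individually on a team that earned 96% would finish at 87% and avoid reassessment.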

Fourteen Likert-type questions were used to measure students’ opinions and perceptions of team assessment. E*Value (MedHub, Minneapolis, MN) was used to collect responses from students. The survey was sent to students at the end of March. At that time, first-year (P1) students had taken 14 of the 18 team assessments; second-year (P2) students, 28 of 36; and third-year (P3) students, all 36. Students were informed that their participation in the study was voluntary. No identifying subject information was included on the survey. The study also collected demographic characteristics of participants. Surveys with incomplete data were excluded.

Responses were based on the following Likert scale: strongly agree, agree, neutral, disagree, and strongly disagree (1=strongly disagree and 5=strongly agree). Responses to the Likert-type items were combined into three categories for analysis: strongly disagree/disagree, neutral, and agree/strongly agree. Agree/strongly agree responses equal to or greater than 75% were considered a desired and acceptable level of agreement. Students were also asked to rank seven benefits of team assessments using a 7-point scale (1=greatest benefit and 7=least benefit). A ranking average was used to determine the most beneficial aspect of team assessment overall. The ranking average was calculated as (x1w1 + x2w2 + … + xnwn) ÷ (total response count), where wi is the weight assigned to rank position i and xi is the number of responses placing the item in that position.

The most beneficial aspect of team assessment, which respondents ranked as #1, had the largest weight, and the least beneficial aspect of team assessment (ranked in the last position) had a weight of 1. The aspect with the highest average ranking or score was the most beneficial aspect.20
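With seven ranked aspects, the top position therefore carries a weight of 7 and the last a weight of 1. The weighted ranking average can be sketched as follows (the rank counts shown are illustrative, not study data):

```python
def ranking_average(rank_counts):
    """rank_counts[i] is the number of respondents who placed the
    aspect at rank position i+1 (position 1 = greatest benefit).
    With n positions, position 1 carries weight n and position n
    carries weight 1, so a higher average means a greater benefit."""
    n = len(rank_counts)
    total = sum(rank_counts)
    weighted = sum(count * (n - i) for i, count in enumerate(rank_counts))
    return weighted / total

# Hypothetical counts for one aspect across the 7 rank positions:
print(ranking_average([50, 30, 20, 10, 8, 4, 3]))  # 5.64
```

An aspect most respondents place near rank 1 accumulates large weights and so scores highest, matching the interpretation in the text.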

Responses between high-performing and low-performing students and between P1 and P2 students were compared. Students were categorized as high performing or low performing based on the median cumulative (P1 and P2) assessment score. High performers were defined as students with a cumulative assessment score above the median, and low performers as students with a cumulative assessment score at or below the median. For this study, the students’ individual assessment scores were used. This study received approval from the UMES Institutional Review Board.
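The median split described above is straightforward to reproduce; a minimal sketch, assuming cumulative scores are keyed by an anonymized student identifier (the names are illustrative):

```python
import statistics

def split_by_median(cumulative_scores):
    """Partition students into high performers (cumulative assessment
    score above the median) and low performers (at or below the
    median), as defined in the text. Keys are anonymized student IDs."""
    median = statistics.median(cumulative_scores.values())
    high = {s for s, v in cumulative_scores.items() if v > median}
    low = {s for s, v in cumulative_scores.items() if v <= median}
    return high, low
```

Because ties at the median fall into the low-performing group, the two groups need not be equal in size, consistent with the 61 versus 64 split reported in the Results.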

Student data were initially entered into an Excel spreadsheet and then converted for analysis using GraphPad Prism, v6 (GraphPad Software Inc., San Diego, CA). Descriptive statistics, such as means and standard deviations, were computed for all study variables. The analyses of nonparametric variables (ie, the Mann-Whitney U test) were conducted at the 95% confidence level.
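The Mann-Whitney U statistic underlying these group comparisons can be computed from ranks alone. The sketch below is a pure-Python illustration using average ranks for ties; in practice a library routine such as scipy.stats.mannwhitneyu, which also returns the p value, would be used.

```python
def rank_with_ties(values):
    """1-based ranks for `values`; tied values share the average of
    the rank positions they occupy."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j across the run of values tied with values[order[i]].
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j + 2) / 2  # mean of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_u(a, b):
    """Return the smaller of U_a and U_b for samples a and b, e.g.
    Likert responses from high- and low-performing groups."""
    ranks = rank_with_ties(list(a) + list(b))
    r_a = sum(ranks[:len(a)])
    u_a = r_a - len(a) * (len(a) + 1) / 2
    return min(u_a, len(a) * len(b) - u_a)
```

Average ranks for ties matter here because Likert responses produce many tied values.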

RESULTS

One hundred sixty-seven surveys were distributed, and a 75% response rate (125 students) was achieved (Table 1). Of the respondents, 32% were P1 students, 36% P2 students, and 32% P3 students. The mean age of the students was 25 (SD=5) years (range=18 to 44 years). Fifty-six percent of the students were female, 62% had a bachelor’s degree, 48% had previous pharmacy work experience, and 45% were African-American.

Table 1.

Demographic Data of Students Participating in the Survey on Team Assessments


As shown in Table 2, 90% of the students agreed that team assessment was beneficial overall, and 98% agreed that students discussed the material during team assessments. Eighty-six percent agreed that getting team points depended on the composition of the team. Seventy-eight percent of students perceived that the discussion helped clarify misconceptions. Only 42% of students agreed/strongly agreed with the item “Members who exert less effort unfairly get a good grade based on the efforts of others.” Fifty percent of students did not favor peer evaluation.

Table 2.

Students’ Opinions on Team Assessment by Student Performance


The students were further categorized into high-performing and low-performing students (Table 2). Based on cumulative assessment scores, 61 (49%) respondents were categorized as low performers, and 64 (51%) were categorized as high performers. High and low performers were in agreement on five of the eight items. For example, both groups reported that students discussed the material during team assessments and that getting team points depended on group composition. Eighty-nine percent of the low performers believed that all members contributed in a significant manner during discussion of the team assessment in contrast to 70% of high performers (p=0.01). Neither high nor low performers reached a level of agreement (range=39% to 52%) on two items: (1) peer evaluation and (2) members unfairly getting good grades.

First-year and P2 students had the same level of agreement on five of the eight items (Table 3). Both groups agreed that students discussed the material during team assessments and that getting the team points depended on group composition. Seventy-five percent of second-year students agreed that all members contributed in a significant manner during the discussion, compared with 73% of P1 students. Similarly, P1 and P2 students did not reach a level of agreement (38% and 51%, respectively) on the items regarding peer evaluation and members unfairly getting good grades. Additionally, 89% of the P2 students preferred to keep the same teams throughout the two years compared with only 65% of the P1 students (data not shown).

Table 3.

Students’ Opinions on Team Assessment by Class


Of the seven aspects of team assessments, all students ranked “improves individual score” as the most beneficial aspect (rank=1), followed by “promotes collaboration” (rank=2), then “enhances understanding of the material” (rank=3) (Table 4). Similar rankings were observed for high-performing (n=64) and low-performing (n=61) students. By class, P1 students (n=40) ranked “improves individual score” as the most beneficial aspect of team assessment (rank=1), followed by “enhances material understanding” (rank=2), then “promotes collaboration” (rank=3). The P2 students (n=44) ranked “improves individual score,” “promotes collaboration,” then “develops communication skills” as 1, 2, and 3, respectively. For individual and team scores, graphs representing the variability within teams and between teams are shown in Figure 1.

Table 4.

Student Ranking of the Benefits of Team Assessment


Figure 1.

Representative Graphs of Individual Cumulative Score (mean) and Cumulative Team Score (mean) within a Team.

DISCUSSION

The majority of students were in agreement that they engaged each other in discussion of the questions and possible answers during team assessment. However, high performers, in contrast to low performers, reported feeling that not all members contributed in a significant manner. It can be difficult to ensure balanced communication among team members, as students may assume familiar roles of undercontributing or overcontributing. Reinig et al found that students’ satisfaction with team assessment groups in a graduate-level tax accounting course was strongly influenced by discordance regarding answer choice.21 The negative effect of discordance in the group was mitigated, however, by the number of correct answers the group ultimately chose.21 Lack of student preparedness may also contribute significantly to a lack of participation in the discussion. By class, P2 students were more engaged during team assessments than P1 students. This is not surprising, as this may have been the first time P1 students participated in team assessments; some students may have had difficulty integrating into their team. P1 students were still adjusting to the concept of team assessment and typically become more comfortable with their team in the second year as team skills develop. Therefore, faculty members may need to motivate students new to the process.

Team assessment shares similarities with the fundamental principles of team-based learning, such as team permanence, learning and team development, peer evaluation and feedback, and individual and team accountability.22 Because team assessments were given shortly after the individual assessments, students were able to provide immediate feedback to each other. Regardless of academic performance and class, students indicated that team assessments helped clarify misconceptions. This may be attributable to the feedback mechanism, which may have helped those who did not understand the question or concept during the individual assessment but then had it explained to them during the team assessment. Even students with the highest individual assessment score generally do not score the best on every question. Weaknesses of high performers can be complemented by other team members’ strengths.

In terms of accountability, individual accountability was accomplished by administering the individual assessment first. Teams were rewarded as each member received additional points toward individual scores based on performance on the team assessment. At UMES-SOP, the teams stay the same throughout the two years of the didactic curriculum. This allows them to develop into an effective self-managed team. The school does not use a peer-evaluation process.

Although teamwork has positive aspects, a potential problem is “free riding,” in which one member does not bear a proportional share of the work yet shares the benefits of the team.23 In team assessment, these are members who exert less effort but unfairly get better grades because of the team. However, UMES-SOP students in general were neutral on the occurrence of free riding during team assessment. Peer evaluation may prevent free riding by providing helpful feedback to members and ensuring accountability. Peer evaluation is recommended as a reliable method of professional assessment.24 Levine et al reported that peer evaluation, as part of team-based learning, improved individual preparedness and team contribution and correlated modestly with medical students’ performance.25 However, students are often dissatisfied with peer evaluation because they fear retribution from members and its negative effects on team relationships.5 These are possible reasons why only half the students surveyed agreed with peer evaluation. Eighty-six percent agreed that getting the extra points depended on the composition of the group. Because UMES-SOP teams are assigned based on PCAT scores, gender, race, and results of a standardized personality test, a diversity of resources (education and experience) is ensured, and students work as a team to communicate and solve problems.

Both high and low performers recognized the overall benefits of team assessment. Given the high stakes nature of the UMES-SOP assessment process, it was not surprising that both groups ranked improvement of individual score as the most beneficial aspect of team assessment. Reinig et al found there was no significant difference in satisfaction with collaborative testing groups among high and low performing students in a graduate-level tax accounting course.21 They further indicated that groups more successful at coming to a consensus and with a larger number of correct answer choices were more likely to report satisfaction with their groups.21

Furthermore, both high performers and low performers recognized that team assessments promoted collaboration, enhanced understanding of the material, and developed communication skills. Communication can take various forms during team assessments, such as peer teaching, facilitating the session, or justifying answers. Enhancement of material understanding may be attributed to peer-level teaching and learning. This process can have a significant impact on student performance, particularly with low-performing students. Koles et al reported an association between team-based learning and the performance of low-performing students on examination questions.26 The “teachers” in this type of peer learning may also benefit by increasing their own understanding as they present their explanations to their team members. Moreover, being tested on material can be as effective for learning as time spent reviewing or studying.27,28 This concept also holds for retesting.29 Additionally, when working in teams, students can explore and share different paths of reasoning with team members to develop a sound justification for answering a question. Our results indicated that the process of team assessment motivates low performers (rank=fifth) more than high performers (rank=sixth) to perform better.

By class, both P1 and P2 students ranked improvement of individual score as the most beneficial aspect of team assessment. For P1 students, team assessment enhanced understanding of the material and promoted collaboration. This is consistent with the Rosenberg et al study, conducted in a 4-year pharmacy program, in which faculty members and students felt that the biochemistry team examinations helped achieve a deeper understanding of the material.19 Team assessments can have a significant impact on student performance, particularly during the first year. The higher-level science courses in the P1 year and the accelerated nature of the curriculum may pose challenges to students with no prior degree. Furthermore, Schauner et al reported that 44% of poor grades were earned in the first year of pharmacy school and that students tended to struggle in courses such as biochemistry, cellular biology, and physiology.30 The UMES-SOP student population is diverse in educational background and experience; 33% of P1 students had no degree prior to pharmacy school. During team assessments, peer teaching can occur, particularly between those with a prior degree and those without. In a concentrated curriculum, every opportunity that students get to learn the material is important. This is especially true for students who were not successful on the Friday assessment and were then required to take a reassessment examination. On the other hand, P2 students believed that team assessment promoted collaboration and developed communication skills. The P2 assessments were mostly case-based questions, which may have encouraged more discussion among team members.31

This research study had a few limitations. First, as with many surveys, not all eligible students were willing to participate. Second, the study included only students at UMES-SOP, a 3-year PharmD program, so the findings may not be generalizable to students at other universities.

CONCLUSION

In general, UMES-SOP students had favorable opinions about team assessment. Educational benefits of team assessments include enhanced understanding of the material and a meaningful activity for promoting collaboration and developing communication skills. The overall quality of team assessment can be improved to further enhance knowledge-based and behavioral student learning outcomes.

REFERENCES

1. Accreditation Council for Pharmacy Education. Accreditation standards and guidelines for the professional program in pharmacy leading to the doctor of pharmacy degree. Adopted January 15, 2006. https://www.acpe-accredit.org/pdf/s2007guidelines2.0_changesidentifiedinred.pdf. Accessed June 8, 2015.
2. Accreditation Council for Pharmacy Education. Accreditation standards and guidelines for the professional program in pharmacy leading to the doctor of pharmacy degree. Standards 2016. Released February 2, 2015. https://www.acpe-accredit.org/pdf/Standards2016FINAL.pdf. Accessed Sept 2, 2015.
3. Allen R, Copeland J, Franks AS, et al. Team-based learning in US colleges and schools of pharmacy. Am J Pharm Educ. 2013;77(6):Article 115. doi:10.5688/ajpe776115.
4. Bleske B, Remington T, Wells TD, et al. Team-based learning to improve learning outcomes in a therapeutics course sequence. Am J Pharm Educ. 2014;78(1):Article 13. doi:10.5688/ajpe78113.
5. Farland M, Sicat BL, Franks AS, Pater KS, Medina MS, Persky AM. Best practices for implementing team-based learning in pharmacy education. Am J Pharm Educ. 2013;77(8):Article 177. doi:10.5688/ajpe778177.
6. Ofstad W, Brunner LJ. Team-based learning in pharmacy education. Am J Pharm Educ. 2013;77(4):Article 70. doi:10.5688/ajpe77470.
7. Bloom D. Collaborative test taking: benefits for learning and retention. Coll Teach. 2009;57(4):216–220.
8. Cortright RN, Collins HL, Rodenbaugh DW, DiCarlo SE. Student retention of course content is improved by collaborative-group testing. Adv Physiol Educ. 2003;27(1-4):102–108. doi:10.1152/advan.00041.2002.
9. Giuliodori MJ, Lujan HL, DiCarlo SE. Collaborative group testing benefits high- and low-performing students. Adv Physiol Educ. 2008;32(4):274–278. doi:10.1152/advan.00101.2007.
10. Lusk M, Conklin L. Collaborative testing to promote learning. J Nurs Educ. 2003;42(3):121–124. doi:10.3928/0148-4834-20030301-07.
11. Meseke CA, Nafziger R, Meseke JK. Student course performance and collaborative testing: a prospective follow-on study. J Manipulative Physiol Ther. 2008;31(8):611–615. doi:10.1016/j.jmpt.2008.09.004.
12. Meseke JK, Nafziger R, Meseke CA. Facilitating the learning process: a pilot study of collaborative testing vs individualistic testing in the chiropractic college setting. J Manipulative Physiol Ther. 2008;31(4):308–312. doi:10.1016/j.jmpt.2008.03.007.
13. Meseke CA, Bovée ML, Gran DF. Impact of collaborative testing on student performance and satisfaction in a chiropractic science course. J Manipulative Physiol Ther. 2009;32(4):309–314. doi:10.1016/j.jmpt.2009.03.012.
14. Sandahl SS. Collaborative testing as a learning strategy in nursing education. Nurs Educ Perspect. 2010;31(3):142–147.
15. Wiggs CM. Collaborative testing: assessing teamwork and critical thinking behaviors in baccalaureate nursing students. Nurs Educ Today. 2011;31(3):279–282. doi:10.1016/j.nedt.2010.10.027.
16. Zimbardo PG, Butler LD, Wolfe VA. Cooperative college examinations: more gain, less pain when students share information and grades. J Exp Educ. 2003;71(2):101–125.
17. Bacon DR. Comparing direct versus indirect measures of the pedagogical effectiveness of team testing. J Market Educ. 2011;33(3):348–358.
18. Stark G. Stop “going over” exams! The multiple benefits of team exams. J Manage Educ. 2006;30(6):818–827.
19. Rosenberg H, Coffman R, Jafari MF, Prabhu S, Tallian K. New approach to teaching basic science courses: biochemistry and molecular biology in the block system of curricular design. Am J Pharm Educ. 1998;62(1):76–82.
20. SurveyMonkey. https://www.surveymonkey.com/. Accessed June 1, 2015.
21. Reinig BA, Horowitz I, Whittenburg G. Determinants of student attitudes toward team exams. Account Educ. 2014;23(3):244–257.
22. Michaelsen LK, Davidson N, Major CH. Team-based learning practices and principles in comparison with cooperative learning and problem-based learning. JECT. 2014;25(3-4):57–84.
23. Albanese R, Van Fleet DD. Rational behavior in groups: the free-riding tendency. Acad Manage Rev. 1985;10(2):244–255.
24. Ramsey PG, Wenrich MD. Peer ratings: an assessment tool whose time has come. J Gen Intern Med. 1999;14(9):581–582. doi:10.1046/j.1525-1497.1999.07019.x.
25. Levine R, Kelly A, Karakoc T, Haidet P. Peer evaluations in a clinical clerkship: students’ attitudes, experiences, and correlations with traditional assessments. Acad Psychiatry. 2007;31(1):19–24. doi:10.1176/appi.ap.31.1.19.
26. Koles PG, Stolfi A, Borges NJ, Nelson S, Parmelee DX. The impact of team-based learning on medical students’ academic performance. Acad Med. 2010;85(11):1739–1745. doi:10.1097/ACM.0b013e3181f52bed.
27. Cull W. Untangling the benefits of multiple study opportunities and repeated testing for cued recall. Appl Cognit Psychol. 2000;14(3):215–235.
28. Dempster F. Using tests to promote learning: a neglected classroom resource. J Res Develop Educ. 1992;25(4):213–217.
29. Sparzo FJ, Bennett CM, Rohm RA. College student performance under repeated testing and cumulative testing conditions: report on five studies. J Educ Res. 1986;80(2):99–104.
30. Schauner S, Hardinger KL, Graham MR, Garavalia L. Admission variables predictive of academic struggle in a PharmD program. Am J Pharm Educ. 2013;77(1):Article 8. doi:10.5688/ajpe7718.
31. Bick R, Oakes J, Actor J. Interactive teaching: problem solving and integration of basic science concepts into clinical scenarios using team-based learning. J Int Assoc Med Sci Educ. 2009;19(1):26–34.
