Journal of Medical Education and Curricular Development. 2016 Aug 22;3:JMECD.S18930. doi: 10.4137/JMECD.S18930

Group Learning Assessments as a Vital Consideration in the Implementation of New Peer Learning Pedagogies in the Basic Science Curriculum of Health Profession Programs

Charlotte L Briggs 1, Alison F Doubleday 2
Editor: Steven R Myers
PMCID: PMC5736269  PMID: 29349309

Abstract

Inspired by reports of successful outcomes in health profession education literature, peer learning has progressively grown to become a fundamental characteristic of health profession curricula. Many studies, however, are anecdotal or philosophical in nature, particularly when addressing the effectiveness of assessments in the context of peer learning. This commentary provides an overview of the rationale for using group assessments in the basic sciences curriculum of health profession programs and highlights the challenges associated with implementing group assessments in this context. The dearth of appropriate means for measuring group process suggests that professional collaboration competencies need to be more clearly defined. Peer learning educators are advised to enhance their understanding of social psychological research in order to implement best practices in the development of appropriate group assessments for peer learning.

Keywords: peer learning, group assessment, team-based learning, problem-based learning, basic sciences, health professions, group learning

Introduction

Over the past decade, longstanding traditions in higher education have begun to cede ground to more varied instructional approaches. Education scholars and disciplinary associations have voiced strong support for student-centered pedagogies, and approaches such as “flipped classroom,” “blended learning,” and “team-based learning” have gained popularity. Similarly, health profession programs have adopted a wider variety of instructional approaches, including various peer learning pedagogies in which student interaction and cooperation play a vital role in the learning process. Peer learning has been demonstrated to support the development of skills associated with working collaboratively with others, effective communication, critical and reflective thinking, and lifelong learning, as well as supporting self-regulated learning and encouraging students to take responsibility for their learning.1,2

The terms cooperative learning and collaborative learning are likewise used to describe group learning approaches in which individual learning is thought to be enhanced by interactions with peers who provide explanations, alternative perspectives, critiques and critical thinking challenges, feedback, social and task support, and pooled labor to achieve larger, more complex assignments than might otherwise be possible for individual students working alone. While some proponents of these approaches assert significant distinctions among them, others accept an interchangeability of terms but worry considerably about whether the practices themselves are carried out, under any name, with adequate attention to structure.3 In this paper, we use the terms peer, cooperative, and collaborative learning interchangeably, whenever possible favoring the term used by the authors cited. Group work and team work will also be used interchangeably.

A particular area of concern deals with the structure and role of assessments. While clinical aspects of health profession programs have often included peer collaboration, such practices are less common in the basic science aspects. As basic science faculty plan and implement new peer learning curricula, they and their students will benefit from proactive consideration of the potential options, rewards, and pitfalls of assessment in the context of group learning.

Goals for Peer Learning

The rationale for peer learning is well documented elsewhere, but a brief overview is warranted because the effectiveness of peer learning is considered highly contingent upon the design of assignments and assessments that support rather than undermine cooperative behaviors among group members. In comparison to traditional lecture-based learning, peer learning is intended to give students more frequent and rich opportunities to process course material, whether at a low level for comprehension and retention or at a higher level for application and synthesis. Early proponents of peer teaching in higher education (eg, Whitman and Fife)4 noted that, compared to faculty, student peers are more socially interesting to each other, often more successful at explaining things to each other, and far more available, both in their numbers in the classroom and in their accessibility outside of class. As such, they are a largely untapped instructional resource capable of providing just-in-time tutoring and immediate feedback to each other, both of which are known to enhance learning.5 In addition, considerable research supports the common notion that those who teach others will gain improved organization, clarity, and nuance in their own knowledge as well.4 These benefits come in part from rehearsal and in part from reorganizing one's knowledge to make it accessible to others. Yet, peer teaching comes at a cost: It may be cognitively taxing and is certainly time-consuming, and its benefits may not be obvious to the student who previously has experienced working alone as more effective than learning in groups with less-advanced or less-motivated peers.2,3,6–9

Where group learning is an ongoing and significant feature of instruction, faculty usually include group process skills as goals for student learning, citing their value to future success and their high demand by employers.10 As a subset of group process skills, self- and peer-assessment skills are valued for their contributions to metacognition, skills for giving and receiving feedback, and the internalization of important discipline-specific frameworks for reasoning and evaluation.8,10,11 Again, group processes can be taxing, and a student's prior experiences, mediated by the incentives or punishments created by the assessment system, will influence motivation to contribute to group cohesion and productivity. The contributions of members to the group will, in turn, determine the success of the group and the student's satisfaction with the group experience.

Positive Interdependence and the Paradox of Individual Accountability

Group learning assessments, therefore, must incentivize and reward cooperation, or at least not create barriers or disincentives for it. While cooperative learning methods vary, proponents widely subscribe to two principles: positive interdependence and individual accountability.3,6–8,10,11

Positive interdependence refers to conditions in which group members genuinely need each other's contributions to achieve a task.3,9,10,12 By themselves, instructions to work together, or assessments that cause one student's grade to depend on others, do not create positive interdependence any more than sharing a life boat and the last sea ration would create positive interdependence within a group of castaway sailors. On the other hand, if having more people to row the boat improves everyone's likelihood of survival, motivation to cooperate is genuine and positive interdependence is said to exist, even if some members are stronger rowers than others. Similarly, positive interdependence can exist even if members contribute to the group in very different ways: one rows, one fishes, and one sends mayday signals. The challenge for cooperative educators is developing learning tasks that genuinely require or benefit from contributions from all group members, including the weakest. Generally speaking, the way to achieve this is to increase the complexity and rigor of tasks and assessments, which, when effectively scaffolded, further increases the benefits of cooperative learning.13,14

The principle of positive interdependence (often coupled with grading efficiency) leads some faculty to assert that group efforts should be assessed at the group level, with all group members receiving the same grade. However, the majority of cooperative learning educators strongly advise against undifferentiated group grades, citing the principle of individual accountability as fundamental to both assessment validity and to the ethic of cooperation itself. Assessments that do not reflect individual performance, they say, fail to provide students with accurate feedback to aid their learning and obscure meaning for other audiences making decisions based on those grades.3,7

According to Millis,15 “Individual accountability – probably the most abused principle in less-structured forms of group work – means that students receive the grades they earn. They are not allowed to ‘coast’ on the work of others. Teachers do not ‘rubber stamp’ grades for projects in which the product is assigned a group grade without taking into account the contributions of the individual student.” (p. 5) Proponents of individual accountability argue that undifferentiated group grades undermine motivation to cooperate by rewarding freeloaders and punishing the strongest and hardest working students. Instead, holding students individually accountable for their own grades is paradoxically imperative to gain students’ buy-in and continued commitment to cooperation.6–8,15

One of the most philosophical arguments for individual accountability in peer learning was made by Bruffee.11 Bruffee espoused peer learning to overcome the tradition of faculty as uncontestable knowledge authorities and to build, instead, a culture that views knowledge as socially negotiated and constructed through consensus. Bruffee described his students as working in their small groups to achieve consensus about what is known, which they then compared with other groups to achieve a class consensus, which was ultimately compared with the consensus held by the larger disciplinary community. Despite Bruffee's definition of knowledge as group consensus, he asserted that all evaluation of learning should be at the individual level. He explained that “it is the writing that students produce individually as a result of this process that counts in evaluating them. It is with their writing, after all, that students apply for official membership in the communities – of chemists, lawyers, sociologists, classicists, whatever – that are larger, more inclusive and authoritative than any plenary classroom group, reaching well beyond the confines of any one college or university campus” (p. 48).

Toppins,16 in contrast, offered a persuasive example of giving group grades on examinations in her introductory human resources course, where goals for her students included “a positive attitude toward the development of human resources and to practice the skills needed to reach agreement with other people” (p. 96). She followed a procedure similar to team-based learning (TBL) in which students individually took a test and then retook it as a group after discussion; however, all members of the group received only the group grade, with no contribution from the individual test. On rare occasions when the group grade was lower than either the group's average individual score or the score of the best-performing student, she sent students back to their group to discuss “What happened in the group? How well were the resources of the group used?” (p. 98). Toppins accompanied this system of group learning and assessment with substantial training in collaborative group processes.

Approaches to Grading in Group Learning Contexts

As suggested by our discussion of positive interdependence and individual accountability, grading procedures can vary considerably within the context of peer learning. Practitioners and scholars of peer learning, however, share an unequivocal admonition against norm-referenced grading, often known in the US as grading on a curve, and insist that grading must be criterion referenced, such that any student who meets stated standards can receive a high grade.8,9,12,17 Norm-referenced grading, they argue, inherently places students in competition with each other. Additionally, the majority of cooperative educators advocate either grading students solely on their individual performance or, more commonly, on a combination of individual and group performance.3,10,12 Beyond those two conventions, grading can vary in a number of dimensions as listed below:

  • The context for assessment

  • Who conducts the assessment

  • What is assessed

  • How group and individual performance contribute to a grade.

Context for assessment

Contexts for assessing peer learning include products, observations, and oral examinations, all of which can be used to assess either individual or group performance. Assessments are often traditional examinations and assignments, such as projects, papers, presentations, and portfolios. The examination or other products can be completed cooperatively by the group, or by individual students. Individual students can complete examinations either in preparation for group learning or after group learning has taken place. Individuals can complete assignments either in the context of peer support and feedback (such as peer writing circles), or by themselves after having engaged in cooperative learning in the group. Johnson et al9 recommend using groups to “bookend” individual assessments, by helping group members prepare before the assessment and helping each other review their test results after the assessment. A key consideration when assigning group examinations or assignments is the ease and validity with which individual contributions (either in terms of learning outcomes or in terms of effort) can be distinguished. In most cases, if individual scoring is needed, provisions must be made for additional sources of assessment.

Observations of students performing a task or deliberating within a group may be time-consuming but are important if group process skills are themselves an outcome to be developed and assessed.10 Group observations can also be one of the richest and most valid sources of assessment data regarding individual students' abilities to explain and apply course learning, demonstrate critical reasoning and discourse within disciplinary conventions, or perform clinical skills involving interpersonal communication.10,12 In health profession programs, observational assessments are common in the clinical aspects but less so in the basic science aspects. Yet even within the basic science aspects, observations of student performance within a group context can be a more rigorous and authentic measure of competence than traditional examinations or academic papers, which rely largely on memorization or “regurgitation” of material. Whether group observations are used to assess group process or individual learning, advocates of peer learning recommend using an assessment framework or rubric to improve the reliability of the assessment and its value as feedback to students.10

Oral examinations are likely to be familiar to health profession educators, in more or less structured formats, within clinical training, both to ensure foundational knowledge of health and disease processes and to develop appropriate habits of professional communication for documentation, hand-offs, consultations, and referrals. Perhaps the most structured form of oral examination intended to ensure individual accountability in the context of peer learning is the Triple Jump examination, which will be discussed in more detail below. In addition, where group projects or presentations might result in fragmented knowledge due to divvied-up tasks, oral interviews of individual students have been recommended as an option to assess understanding of the whole assignment.10

Who conducts the assessment

Regardless of instructional format, the faculty member is responsible for the student's grade and therefore also responsible for assessments; however, evaluative input from other parties can be a valuable source of assessment data. In peer learning, self- and peer-assessments are a common source of data for assessing group process skills and contributions to group products.3,8,10,12 Because faculty typically cannot observe project work that takes place outside of class, the student and peers are often the source of this information, although Barkley3 warns that peers may be reluctant to assess each other for a summative grade. Most proponents of self- and peer-assessments emphasize the importance of providing students with assessment criteria, or guiding them as a group to establish their own criteria, early in the learning process, and building one or more points of formative assessment into the learning process before the criteria are employed for grading purposes.8,10 This preparation not only improves students’ competence and comfort with self- and peer-assessment but also helps students internalize the assessment criteria and develop metacognition as learners.

What is assessed

In traditional learning formats, grades typically are intended to reflect the achievement of learning outcomes alone, although they often also reflect other factors, such as attendance, effort, and compliance with deadlines. In peer learning formats, assessments commonly address group and individual learning achievement, group process, and the individual's contribution to the group.2,8,10,12 Many peer learning advocates stress the importance of assessing group process and individual contributions to the group so that students will take these issues seriously.2,8,10,12 Many also argue that if group process is assessed for a grade (rather than formatively only), it is important for both fairness and success that students receive training in it.10 The likelihood that students receive group process training and grades is largely driven by the predominance of group work in the learning process, which in turn is often influenced by the value placed on collaboration skills and self-regulated learning as outcomes. Assessments of group process can be as simple as deducting points from a group grade if someone does not participate,14 but they usually involve more criteria. A review of the literature on peer learning suggests that most advocates use rubrics for assessing group process that are developed either by students themselves or by the instructor based on lay preferences and understandings of group process.

An approach to simultaneously assessing group learning and group process is to grade students based on their group's improvement in learning assessments over time. Kagan6 describes a system based on Robert Slavin's ILE Percentile Improvement Scoring System in which students take weekly quizzes that are compared to their running average score. Individual improvement points are summed to arrive at the group's improvement score that all members of the group receive. Kagan explains that a group-improvement grading system (in combination with individual assessments) supports group cooperation because “an improvement scoring system allows weak students as much chance to receive top scores as strong students. It is motivating for the top students as well, because they must strive to best their own usual performance (which is difficult) rather than trying only to beat other students (which for them is easy)” (p. 16:3).
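To make the arithmetic concrete, the sketch below (in Python) computes each member's improvement points against a running average and sums them into a shared group score. The point thresholds and quiz values are hypothetical illustrations, not Slavin's published scale; a real implementation would substitute an institution's own rules.

```python
# Minimal sketch of improvement-based group scoring.
# The thresholds below are hypothetical, not Slavin's published ILE scale.

def improvement_points(running_avg: float, quiz_score: float) -> int:
    """Award points for beating one's own running average (assumed thresholds)."""
    gain = quiz_score - running_avg
    if gain >= 10:
        return 3   # large personal improvement
    if gain > 0:
        return 2   # modest improvement
    if gain >= -5:
        return 1   # held roughly steady
    return 0       # dropped noticeably

def group_improvement_score(members: dict) -> int:
    """Sum members' improvement points; every member receives the same total."""
    return sum(improvement_points(avg, quiz) for avg, quiz in members.values())

# Example: (running average, this week's quiz) for a hypothetical three-person group
group = {"A": (72.0, 85.0), "B": (90.0, 92.0), "C": (60.0, 58.0)}
print(group_improvement_score(group))  # 3 + 2 + 1 = 6 points shared by all members
```

Note that the weaker student (C) can still earn points by holding near their own average, while the stronger student (B) must beat an already high average, which mirrors Kagan's rationale above.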

How group and individual performance contribute to a grade

Where faculty grade individual students based on group performance, the most obvious approach is to score an assignment or examination that students have completed together and then give the same group grade to all members of the group. An alternative approach is to have each group member individually complete an examination, and then base the group grade on some calculation of individual scores, such as the highest member's score, the lowest member's score, the average score, or a randomly selected member's score.17 A majority of peer learning educators, however, appear to grade their students either entirely on individual assessments or a combination of individual and group assessments. Group product scores are typically combined with some measure of individual contribution to the group effort, such as peer assessment, or an assessment of individual learning achievement such as a project reflection or oral examination.10 Group and individual examination scores can be combined in numerous ways to arrive at grades for individual students. Johnson and Johnson17 suggest four options, all of which assume that students have studied together or compared their individual examination answers before taking the group examination:

  • 1. Individual score plus bonus points based on all members’ reaching a criterion [minimum score]

  • 2. Individual score plus bonus points based on lowest score among group members

  • 3. Individual score plus group average

  • 4. Individual score plus bonus based on improvement scores (p. 120–121).

Weighting of individual and group grades also varies. Some authors describe adding bonus points for individual scores to group scores, while others describe adding bonus points for group scores to individual scores. According to Johnson and Johnson,17 “The way grades are given depends on the type of interdependence the instructor wishes to create among students” (p. 120).
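To illustrate how these four options differ in practice, the Python sketch below implements each as a simple function. The criterion score, bonus amounts, and the equal weighting in option 3 are hypothetical placeholders; Johnson and Johnson describe the schemes but leave the specific values and weights to the instructor.

```python
# Hypothetical parameters; the combination schemes come from Johnson and Johnson,
# but these specific values and weights are illustrative assumptions.
CRITERION = 80   # assumed minimum score every member must reach (option 1)
BONUS = 5        # assumed flat bonus (options 1 and 2)

def option1(individual: float, member_scores: list) -> float:
    """Individual score plus a bonus if all members reach the criterion."""
    return individual + (BONUS if all(s >= CRITERION for s in member_scores) else 0)

def option2(individual: float, member_scores: list) -> float:
    """Individual score plus a bonus scaled to the group's lowest score (assumed scaling)."""
    return individual + BONUS * (min(member_scores) / 100)

def option3(individual: float, member_scores: list) -> float:
    """Individual score combined with the group average (equal weighting assumed here)."""
    group_avg = sum(member_scores) / len(member_scores)
    return 0.5 * individual + 0.5 * group_avg

def option4(individual: float, improvement_bonus: float) -> float:
    """Individual score plus a bonus based on improvement over prior performance."""
    return individual + improvement_bonus

scores = [85, 78, 92]
print(option1(85, scores))  # 85: no bonus, because 78 falls below the criterion of 80
print(option3(85, scores))  # 85.0: this individual score happens to equal the group average
```

The choice among these schemes, and of the weights within them, is itself a signal to students about the kind of interdependence the instructor intends to create.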

Peer Learning and Assessments in Health Profession Programs

Two of the most commonly practiced forms of peer learning in health profession education are team-based learning (TBL) and problem-based learning (PBL). Both depend upon immediate and continuous feedback from peers for student success. TBL and PBL both employ a combination of individual and group assessments, although in distinctly different ways.

TBL and group assessments

In TBL, students study outside class and then take an individual readiness assurance test (IRAT) in class, followed by a group readiness assurance test (GRAT) that requires discussion and consensus. These tests are followed by a mini-lecture on the more challenging topics and, finally, an application problem that is tackled as a group.18 Although TBL grading schemes vary, they typically involve a combination of individual and group scores, and often peer assessment of contributions to the group as well.
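As a minimal illustration of one such scheme, the Python sketch below combines IRAT, GRAT, and peer-assessment scores into a single grade. The 60/30/10 weights are hypothetical; actual TBL weightings vary from course to course.

```python
# Hypothetical TBL grade composition; the weights are illustrative, not prescribed.
WEIGHTS = {"irat": 0.60, "grat": 0.30, "peer": 0.10}  # assumed course-level choice

def tbl_grade(irat: float, grat: float, peer: float) -> float:
    """Weighted combination of individual, team, and peer-assessment scores (0-100 each)."""
    return (WEIGHTS["irat"] * irat
            + WEIGHTS["grat"] * grat
            + WEIGHTS["peer"] * peer)

# A student scoring 70 individually, on a team scoring 95, rated 90 by peers:
print(round(tbl_grade(70, 95, 90), 1))  # 79.5
```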

Many institutions find TBL an attractive alternative to PBL because it is seen as less resource intensive and more cost effective. Even so, TBL requires extensive planning, especially in the design of effective assessments, as the readiness assurance process is the core of TBL and plays the dual role of ensuring individual accountability through the IRAT and positive interdependence through the GRAT.

The aims of assessment in TBL include measuring critical thinking, collaboration skills, and group cohesion. Michaelsen et al19 point out that the process of challenging each other's ideas is what promotes higher-order thinking and fosters team development. Roberson and Franchini20 argue that this type of collaborative decision making is intended to model what health profession students will eventually do in their professional careers. Numerous studies have also suggested that implementation of TBL in the curriculum leads to significantly improved performance on content assessments and on standardized examinations.21

PBL and group assessments

Since its inception in the 1970s, PBL has evolved to include a range of variations, but all possess several central characteristics: student-centered focus, small group format, limited instructor role as facilitator rather than content expert, problems as stimuli for learning, and a collaborative, self-directed learning process.22 Typically, students work in small groups to discuss prompts in a case designed with progressive disclosure of key information. During the case discussion, students explain their existing knowledge to each other and identify knowledge gaps. They then conduct research outside of class before returning to the small group to process this new learning, and then to proceed to the next phase of the case. As in TBL, the heart of the PBL process is the group's ongoing assessment of its own understanding.

The literature on assessment in PBL is more ambiguous than in TBL, and Swanson et al23 note that “there is little agreement on assessment among PBL advocates” (p. 260). However, a common thread is that PBL should employ “process oriented assessment techniques” to assess individual students in a way that mirrors the learning process used by the group, which itself is intended to simulate professional practices of patient assessment, evidence-based practice, and collegial discourse and collaboration. The triple jump examination is one example of this. Developed at McMaster University, the triple jump consists of three steps: (1) written or oral analysis of a clinical case based on existing knowledge and identification of personal knowledge gaps; (2) independent research targeting the knowledge gaps; and (3) an oral assessment in which the student presents the results of research conducted in step 2, along with an analysis of how this new knowledge impacts interpretation of the case. Step 3 also typically includes reflection on knowledge limitations and identification of new knowledge gaps, as well as a self-assessment. The triple jump provides both objective and subjective measures of the individual student's proficiency in the clinically relevant cognitive and communication skills fostered through the PBL process.24

Other examples of individual-level, process-oriented assessments used in PBL curricula of health profession programs include simulation exercises, most notably the objective structured clinical examination (OSCE), a station-to-station examination that tests knowledge and skills in the context of clinical problem solving. In some programs, students take the same or similar OSCE each year of their program to chart their progress. Although a goal of PBL is to guide all students toward established learning objectives, it is unlikely that all groups arrive at the same point at the same time, particularly in programs most committed to allowing students to determine their own goals for learning within any given case. Swanson et al23 argue that PBL poses a challenge for timing tests due to the open-ended nature of group case discussions. Thus, repeated progress tests provide a means to ensure that individual students, and the class as a whole, achieve program objectives within an acceptable time frame, despite variations among groups due to the student-centered nature of PBL.

Many institutions include more conventional content-based assessments in PBL curricula, including progress tests; however, Waters and McCracken25 argue that traditional assessments do not work well in PBL and that any assessment (individual or group) should be consistent with the instructional technique used. Of particular concern are multiple-choice tests of factual recall that can be passed by individual students cramming by rote memorization on their own, which undermine the positive interdependence of the group that is created by rigorous standards of higher-order reasoning.

Commentary

Health profession programs are currently undergoing an exciting but awkward period of transformation away from the traditional “2 + 2” (or analogous) curriculum that arose out of the landmark Flexner Report to models that integrate biomedical and clinical science aspects. Active and collaborative learning have long been part of clinical education, where they are inherently tied to learning and assessment of skills that are essential to professional practice. Largely to offset the cost of clinical instruction, health profession programs have employed seemingly efficient large-class lecture methods for their biomedical aspects that lack key elements of current best practice for learning: active student engagement in authentic problem-solving tasks, peer interaction, and the development of metacognitive skills required for self-regulated lifelong learning. Peer learning methods offer tremendous promise for improving biomedical science education in health profession programs and in meaningfully integrating biomedical and clinical science learning.

At the same time, prior experiences of both faculty and students continue to create assumptions about the relative costs and benefits of learning in groups compared to learning individually that fuel resistance to peer learning pedagogies in the biomedical sciences. This article is not intended to establish the case for peer learning but rather to speak to those faculty and administrators who have already become convinced of its merits, as we have, and wish to implement successful peer learning in their own programs. Far more than in clinical instruction, where collaboration is clearly identified as a professional competency, peer learning in the biomedical sciences must be explicitly promoted with strong scholarly evidence of its benefits and with well-managed processes based on highly intentional designs that necessitate rather than undermine cooperation. Because students commonly experience graded assessments as rewards or punishments that influence their motivation in the learning process, the design of appropriate assessments is especially critical to the success of peer learning in the biomedical science aspects of health profession programs.

Fundamentally, norm-based grading and reporting of class rank confront students with a conflict of interest between cooperating and competing with their peers in the learning process. Equally fundamental is the conflict that remains for many health profession educators between goals for complex professional reasoning competencies fostered by peer learning pedagogies and low-level assessments of content knowledge in certification and licensure examinations. While improvements in professional examinations have been made or are on the horizon in some fields and locations, programs that wish to implement peer learning in the context of more traditional examinations will benefit from proactive attention to ensure students are well prepared for content-heavy, multiple-choice formats, even if this requires a separate track of training in test preparation. In our own experience implementing PBL in a dental curriculum, the transition was eased when, for the first time in its history, the program achieved a 100% first-time pass rate on its national board examination, in part, by devoting a period of dedicated curricular time to examination preparation and by regulating when its students were allowed to challenge the examination based on results of a mock board examination developed and administered locally.

In addition, programs wishing to adopt peer learning approaches in their biomedical science instruction will be aided greatly by a simultaneous integration of biomedical and clinical education, which provides an especially persuasive professional rationale for including group process assessment as part of, and in support of, the peer learning process.

Our review of the literature on peer learning, both in and beyond health profession education, revealed an unfortunate tendency for group process assessments to be based either on student-developed ground rules or on frameworks developed locally by faculty based on their own lay understanding of group dynamics and personal preferences for what makes a good collaborator or discussion participant. Student-developed ground rules for group process may bring a greater sense of ownership and commitment than ground rules that are imposed on students without consultation, and faculty may have valuable wisdom to share about group process from personal experience. Nonetheless, if professional collaboration competencies are genuinely valued as program outcomes, then peer learning educators should deepen their own knowledge of evidence-based practice recommendations for group process and build learning objectives and assessments around them.

Jaques and Salmon10 point out that Alverno College, a pioneer in integrating collaboration and assessment practices into every phase of the learning process, distinguishes between task-oriented groups and interpersonal problem-solving groups, and the competencies associated with each. By way of example, they highlight the differences between the skills required to arrive at a consensus in a discussion group and those required to complete a complex project, which, in addition to communication skills, include organizational and time-management skills, among others. Given common complaints about faculty committee work, it should not be assumed that health profession educators who lack specific scholarly background in these competencies are prepared to teach and assess them at a level that will prepare their graduates for increasingly complex healthcare roles, let alone drive improvements in professional practice.

Indeed, the persistent problem of medical errors in many countries,26,27 which are often related to communication and teamwork failures, suggests that common sense advice and widely shared preferences for group process fail to adequately address risks of miscommunication, bias in group decision processes, and poorly coordinated case team management. Social psychological research on small group cognition, for instance, shows considerable variation in the performance of groups compared to individuals,28,29 suggesting that common claims that peer-learning groups score higher than individual test-takers may belie the complexity of group performance factors in nonacademic settings. Such literature also includes findings to guide evidence-based practices for reducing biases in group decision making: biases that are often rooted in cultural norms, such as majority rule, or professional values, such as flexibility, and therefore might be naively adopted as a group ground rule or unconsciously followed despite instructions to the contrary. Just as individual health profession students are often taught to use metacognitive strategies to overcome personal decision biases, today's team-based healthcare delivery environment creates an analogous need to develop students’ knowledge, skills, and metacognition to avoid group decision biases that could put patients at risk. Peer learning pedagogies are an ideal vehicle for such training, but assessments of the group process aspects must move beyond common sense criteria to reflect more advanced understandings grounded in social psychological research.

Author Contributions

Conceived and designed the experiments: CLB, AFD. Analyzed the data: CLB, AFD. Wrote the first draft of the manuscript: CLB. Contributed to the writing of the manuscript: AFD. Agree with manuscript results and conclusions: CLB, AFD. Jointly developed the structure and arguments for the paper: CLB, AFD. Made critical revisions and approved final version: CLB, AFD. Both authors reviewed and approved of the final manuscript.

Footnotes

Peer Review: Four peer reviewers contributed to the peer review report. Reviewers’ reports totaled 553 words, excluding any confidential comments to the Academic Editor.

Competing Interests: Authors disclose no potential conflicts of interest.

Funding: Authors disclose no external funding sources.

Paper subject to independent expert single-blind peer review. All editorial decisions made by independent Academic Editor. Upon submission manuscript was subject to anti-plagiarism scanning. Prior to publication all authors have given signed confirmation of agreement to article publication and compliance with all applicable ethical and legal requirements, including the accuracy of author and contributor information, disclosure of Competing Interests and Funding sources, compliance with ethical requirements relating to human and animal study participants, and compliance with any copyright requirements of third parties. This journal is a member of the Committee on Publication Ethics (COPE).

References

  • 1. Slavin R.E. Co-Operative Learning: Theory, Research and Practice. Englewood Cliffs, NJ: Prentice Hall; 1990.
  • 2. Boud D., Cohen R., Sampson J. Peer learning and assessment. Assess Eval High Educ. 1999; 24(4): 413–426.
  • 3. Barkley E.F., Major C.H., Cross K.P. Collaborative Learning Techniques: A Handbook for College Faculty. 2nd ed. San Francisco, CA: Wiley; 2005.
  • 4. Whitman N.A., Fife J.D. Peer Teaching: To Teach Is To Learn Twice. ASHE-ERIC Higher Education Report No. 4. Washington, DC: The George Washington University; 1988.
  • 5. Bransford J.D., Brown A.L., Cocking R.R.; National Research Council. How People Learn: Brain, Mind, Experience, and School. Washington, DC: National Academy Press; 1999.
  • 6. Kagan S. Cooperative Learning. San Clemente, CA: Kagan Publishing; 1994.
  • 7. Kagan S. Group grades miss the mark. Educ Leadersh. 1995; 52(8): 68.
  • 8. Millis B.J., Cottell P.G. Cooperative Learning for Higher Education Faculty. Series on Higher Education. Phoenix, AZ: Oryx Press; 1997.
  • 9. Johnson D.W., et al. Cooperative Learning: Increasing College Faculty Instructional Productivity. ASHE-ERIC Higher Education Report No. 4. Washington, DC: George Washington University; 1991.
  • 10. Jaques D., Salmon G. Learning in Groups: A Handbook for Face-to-Face and Online Environments. 4th ed. New York, NY: Routledge; 2007.
  • 11. Bruffee K.A. Collaborative Learning: Higher Education, Interdependence, and the Authority of Knowledge. 2nd ed. Baltimore, MD: Johns Hopkins University Press; 1999.
  • 12. Johnson D.W., Johnson R.T. Assessing Students in Groups: Promoting Group Responsibility and Individual Accountability. Thousand Oaks, CA: Corwin Press; 2003.
  • 13. Cottell P.G. Cooperative learning in accounting. In: Millis B.J., ed. Cooperative Learning in Higher Education: Across the Disciplines, Across the Academy. New Pedagogies and Practices for Teaching in Higher Education Series. Sterling, VA: Stylus; 2010: 11–33.
  • 14. Nelson C.E. Want brighter, harder working students? Change pedagogies! Some examples, mainly from biology. In: Millis B.J., ed. Cooperative Learning in Higher Education: Across the Disciplines, Across the Academy. New Pedagogies and Practices for Teaching in Higher Education Series. Sterling, VA: Stylus; 2010: 119–139.
  • 15. Millis B.J. Why faculty should adopt cooperative learning approaches. In: Millis B.J., ed. Cooperative Learning in Higher Education: Across the Disciplines, Across the Academy. Sterling, VA: Stylus; 2010: 1–9.
  • 16. Toppins A.D. Teaching by testing: a group consensus approach. Coll Teach. 1989; 37(3): 96–99.
  • 17. Johnson D.W., Johnson R.T. Learning Together and Alone: Cooperative, Competitive, and Individualistic Learning. 5th ed. Boston: Allyn and Bacon; 1999.
  • 18. Brame C.J. Team-based learning. Available at: https://cft.vanderbilt.edu/guides-sub-pages/team-based-learning/. Accessed June 16, 2016.
  • 19. Michaelsen L.K., Knight A.B., Fink L.D. Team-Based Learning: A Transformative Use of Small Groups. Santa Barbara, CA: Greenwood Publishing Group; 2002.
  • 20. Roberson B., Franchini B. Effective task design for the TBL classroom. J Excell Coll Teach. 2014; 25: 275–302.
  • 21. Hrynchak P., Batty H. The educational theory basis of team-based learning. Med Teach. 2012; 34(10): 796–801.
  • 22. Barrows H.S. Problem-based learning in medicine and beyond: a brief overview. New Dir Teach Learn. 1996; 1996(68): 3–12.
  • 23. Swanson D.B., Case S.M., van der Vleuten C.P. Strategies for student assessment. In: Boud D., Feletti G., eds. The Challenge of Problem-Based Learning. 1991: 260–273.
  • 24. Navazesh M., Rich S.K., Chopiuk N.B., Keim R.G. Triple jump examinations for dental student assessment. J Dent Educ. 2013; 77(10): 1315–1320.
  • 25. Waters R., McCracken M. Assessment and evaluation in problem-based learning. In: Proceedings of the 27th Annual Frontiers in Education Conference: Teaching and Learning in an Era of Change. Vol 2. New York, NY: IEEE; 1997: 689–693.
  • 26. International Survey: U.S. Leads in Medical Errors. Available at: http://www.commonwealthfund.org/publications/press-releases/2005/nov/international-survey–u-s–leads-in-medical-errors. Accessed June 17, 2016.
  • 27. Garrouste-Orgeas M., Philippart F., Bruel C., Max A., Lau N., Misset B. Overview of medical errors and adverse events. Ann Intensive Care. 2012; 2: 2.
  • 28. Tindale R.S., Meisenhelder H.M., Dykema-Engblade A.A., Hogg M.A. Shared cognition in small groups. In: Hogg M.A., Tindale R.S., eds. Blackwell Handbook of Social Psychology: Group Processes. Oxford: Blackwell Publishers Ltd; 2001: 1–30.
  • 29. Stasser G., Dietz-Uhler B. Collective choice, judgment, and problem solving. In: Hogg M.A., Tindale R.S., eds. Blackwell Handbook of Social Psychology: Group Processes. Oxford: Blackwell Publishers Ltd; 2001: 31–55.
