CBE Life Sciences Education. 2014 Summer;13(2):187–199. doi: 10.1187/cbe.13-12-0235

Feedback about Teaching in Higher Ed: Neglected Opportunities to Promote Change

Cara Gormally, Mara Evans, Peggy Brickman
Editor: Diane K O'Dowd
PMCID: PMC4041498  PMID: 26086652

Most college science, technology, engineering, and mathematics faculty members could benefit from more feedback about implementing evidence-based teaching strategies. The goals of this essay are to summarize best practices for providing feedback, to describe the current state of instructional feedback, to recommend strategies for providing feedback, and to highlight areas for research.

Abstract

Despite ongoing dissemination of evidence-based teaching strategies, science teaching at the university level remains largely unreformed. Most college biology instructors could benefit from more sustained support in implementing these strategies. One-time workshops raise awareness of evidence-based practices, but faculty members are more likely to make significant changes in their teaching practices when supported by coaching and feedback. Currently, most instructional feedback occurs via student evaluations, which typically lack specific suggestions for improvement and focus on teacher-centered practices, or via drop-in classroom observations and peer evaluation by other instructors, which raise issues for promotion, tenure, and evaluation. The goals of this essay are to summarize the best practices for providing instructional feedback, recommend specific strategies for providing feedback, and suggest areas for further research. Missed opportunities for feedback in teaching are highlighted, and the sharing of instructional expertise is encouraged.

INTRODUCTION

Despite widespread dissemination of evidence-based teaching practices and documented improvements in student learning (Ebert-May et al., 1997; Crouch and Mazur, 2001; Udovic et al., 2002; Knight and Wood, 2005; Freeman et al., 2007; Derting and Ebert-May, 2010), university science faculty members have been slow to adopt these practices. In a national survey of new physics faculty members, 25% reported having attended teaching workshops (Henderson, 2008); 87% of these attendees reported knowledge of one or more evidence-based strategies, yet only 50% of those attending reported adopting these practices (Henderson and Dancy, 2009). These faculty members identified several impediments to adoption, including inadequate training, misunderstanding of evidence-based teaching practices, and lack of support for implementation (Dancy and Henderson, 2010). Two separate studies have documented misunderstandings about what is involved in evidence-based teaching. Ebert-May and colleagues (2011) identified a significant discrepancy between the degree to which faculty members reported using active learning and the levels of active learning observable in video recordings of their classrooms. A multi-institution investigation of introductory biology courses also revealed that self-reported use of active-learning instruction was not associated with student learning gains (Andrews et al., 2011). Collectively, this work suggests that one-time workshops raise awareness of evidence-based teaching strategies but are not sufficient for faculty to adopt and successfully use these strategies (National Research Council [NRC], 2012).

We propose that learning to teach, like developing other professional skills, requires acquiring knowledge about performing job-related tasks, but it also must involve feedback and mentoring in order to monitor and improve performance (Hattie and Timperley, 2007; Nielsen, 2011; Finkelstein and Fishbach, 2012). However, college teaching is one of the few vocations that requires neither formal training (Golde and Dore, 2001; Tanner and Allen, 2006; Addy and Blanchard, 2010) nor standard processes for evaluation and supervision (Centra, 1993; Weimer and Lenze, 1994; Johnson and Ryan, 2000). We know that effective dissemination of evidence-based teaching practices requires more intensive training than a one-time workshop can offer (Sunal et al., 2001; Dancy and Henderson, 2010; Singer et al., 2012). Further, when faculty members are given feedback that both motivates and enables them to improve, they are more likely to make significant changes in their teaching practices (Sunal et al., 2001; Henderson et al., 2011). We argue that providing faculty with formative teaching feedback may be the single most underappreciated factor in enhancing science education reform efforts.

In this essay, we argue that models of peer feedback or coaching, rather than peer observation and review, could encourage the adoption and effective use of evidence-based teaching strategies in science (American Association for the Advancement of Science [AAAS], 2011). We begin by considering the purpose of instructional feedback. We then provide a broad review of best practices for giving feedback and describe feedback approaches from several national faculty development programs that feedback recipients might borrow. Finally, we highlight opportunities for research on feedback and pose questions about how providing feedback can affect teaching in higher education, with the aim of encouraging the development of specific feedback strategies. We write for a diverse audience, including individuals who are experienced mentors or consultants involved in faculty development, individuals who consider themselves to be “change agents” leading faculty toward the Vision and Change goals, and faculty members who seek more or higher-quality instructional feedback. To faculty seeking feedback, we offer strategies to help identify and solicit needed instructional feedback.

THE NEED FOR FEEDBACK ABOUT TEACHING

Institutions are beginning to recognize the need to offer more substantive and formative instructional feedback to faculty (Seldin, 1999; Bernstein, 2008; Huston and Weaver, 2008; Ismail et al., 2012), although few agree on how to provide it (Johnson and Ryan, 2000). Safavi and colleagues (2013) report that 96% of faculty surveyed (n = 237) desire more meaningful instructional feedback. Currently, faculty members receive the majority of their teaching feedback through student evaluations (Keig, 2000; Loeher, 2006), with the occasional peer-teaching observation (Seldin, 1999). There are considerable limitations to both feedback mechanisms.

Student evaluations typically focus on gathering data about student perceptions of teacher-centered behaviors, such as instructor enthusiasm, clarity of explanations, rapport, and breadth of coverage, and provide only limited opportunities for students to comment on the use of learner-centered pedagogies (Murray, 1983; Cashin, 1990; Marsh and Roche, 1993). This may partially explain the decline in student evaluation scores often mentioned by faculty members who incorporate active learning into their courses (Walker et al., 2008; Brickman et al., 2009; White et al., 2010). Items on student evaluations typically focus on student satisfaction and didactic teaching rather than measuring learning (d’Apollonia and Abrami, 1997; Aleamoni, 1999; Kolitch and Dean, 1999; Kember et al., 2002). Disciplinary and class-size biases have also been noted as problems in student evaluations: science and mathematics disciplines garner the lowest student evaluation scores (Cashin, 1990; Ramsden, 1991; Aleamoni, 1999); science courses typically have larger enrollments than arts and humanities courses (Cheng, 2011); and student evaluations are lower in larger classes (Aleamoni and Hexner, 1980; McKeachie, 1990; Franklin, 1991).

Faculty members express reservations about the use of student evaluations, particularly for personnel and tenure decisions, and many opposed them outright when they were first introduced (Hills, 1974; Chandler, 1978; Vasta and Sarmiento, 1979; Dowell and Neal, 1982; Menefee, 1983; Zoller, 1992; Goldman, 1993). Faculty members contend that student evaluations lead to lower morale and job satisfaction and, because of their focus on student satisfaction, may even motivate faculty to reduce standards on examinations and assignments in an effort to placate students (Ryan, 1980; Schneider, 2013). Faculty members have also expressed concern over the appropriate role of student evaluations of their teaching effectiveness in personnel decisions such as retention, promotion, tenure, and salary increases (Cashin and Downey, 1992).

Others have repeatedly argued that student evaluations improve teaching effectiveness (Overall and Marsh, 1979; Cohen, 1980; Marsh and Roche, 1993). However, as the sole measure of teaching effectiveness or as an impetus to increase active learning in the college classroom, student evaluations are far from adequate. Student evaluations provide few concrete ideas for improving instructional effectiveness or learning outcomes (Cohen and McKeachie, 1980; Abrami et al., 1990) or for changing curriculum or course objectives (Neal, 1988; Abrami, 1989). Instructors find it difficult to reconcile contradictory opinions expressed in student evaluations (Ryan, 1980; Callahan, 1992). Consequently, only a small percentage of faculty members report making changes to their courses as a result of student evaluations (Spencer and Flyr, 1992; Kember et al., 2002; Richardson, 2005). And, as we later discuss in depth, faculty may have little incentive to use the data from student evaluations (Kember et al., 2002; Mervis, 2013). Researchers have documented that pairing student evaluations with qualitative student interviews or peer consultations is much more effective at influencing faculty behavior (Cohen, 1980; Wilson, 1986; Tiberius, 1989; Seldin, 1993). However, these practices are not currently implemented at most universities and are difficult to implement at the scale required by many institutions.

Peer-review approaches for evaluating teaching have also been studied and found lacking (Hutchings, 1995; Quinlan and Bernstein, 1996; Huston and Weaver, 2008). One-time classroom observations conducted by peer faculty typically focus on content accuracy, while offering little input about curricular alignment or objectives (Malik, 1996), and often lack collaboration and support from colleagues (Bernstein, 2008). One-time classroom observations also suffer from additional problems, including, but not limited to, faculty members' lack of expertise in providing instructional feedback (Kremer, 1990), observer bias toward similar teaching styles (Centra, 2000), reliability issues and conflicts of interest resulting in reluctance to give a peer negative feedback (Marsh, 1984; Feldman, 1988), and power dynamics requiring delicate maneuvering (Keig and Waggoner, 1994). Moreover, one-time observations have been shown to have virtually no impact on faculty teaching, aside from influencing textbook selection (Spencer and Flyr, 1992), and may even lead to erroneous inferences (Weimer, 2002). Faculty members are also resistant to the use of summative peer evaluation, which they feel contributes little to tenure and promotion decisions (Iqbal, 2013).

Having considered the purpose of instructional feedback and current practices, we next provide a broad review of best practices for giving feedback.

CHARACTERISTICS OF EFFECTIVE FEEDBACK

In general, regardless of the task, feedback is meant to provide advice from a mentor or provider to assist a recipient in modifying and improving future performance. The question is how best to provide feedback so that it results in improved performance of a specific task. A host of factors come into play, from the complexity of the task to the method of imparting feedback to the standard used to judge performance. Although the value of feedback is frequently noted in the literature (Brinko, 1993; Hattie and Timperley, 2007; Ismail et al., 2012), there is little research on what makes feedback given to faculty effective for improving undergraduate teaching (Bernstein, 2008; Stes et al., 2010). For the purposes of this review, we define feedback as “information provided by an agent (e.g., teacher, peer, book, parent, self, experience) regarding aspects of one's performance or understanding” (Hattie and Timperley, 2007, p. 81). Feedback does not have to be provided by another person; individuals are capable of acquiring feedback through self-reflection. For example, one may learn tasks simply by observing others’ performance (Bandura, 1977; Green and Osborne, 1985). The observer then modifies his or her own behavior through comparison with others and subsequent self-reflection (Wong, 1985).

We draw on the extensive literature from organizational psychology about the characteristics of feedback that are important for improving workplace performance. For example, Kluger and DeNisi (1996) reviewed the effectiveness of vocational interventions designed to inform recipients about ways to improve their performance on tasks as diverse as typing, test performance, and attendance on the job, excluding feedback related to interpersonal issues. They caution that feedback does not always result in improved performance and can in fact be detrimental, and they conclude that several factors affect the outcome of feedback, including how the task is defined and how feedback is delivered. In work situations, for example, feedback that threatens self-esteem or interferes with the initial stages of learning a new task can have a negative effect on performance (Kluger and DeNisi, 1996). We also draw from literature on the effects of feedback on K–12 student outcomes. Researchers have shown that in testing situations, for example, students do not improve on subsequent tests simply by knowing they missed an item; they also need to know the correct answer (Bangert-Drowns, 1991). Finally, we include substantial evidence from the K–12 teacher education literature that immediate and specific instructional feedback supports continuing growth (Brinko, 1993; Scheeler et al., 2004). We also reference the few empirical studies analyzing the effectiveness of feedback, mentoring, and coaching given as part of university faculty instructional development (Stes et al., 2010).

Through review of these and other studies from K–12 teacher education and workplace performance, we identified characteristics of effective feedback that are described in detail below. Effective feedback: 1) clarifies the task by providing instruction and correction; 2) improves motivation, which can prompt increased effort; and 3) is perceived as valuable by the recipient because it is provided by credible sources (Table 1). We propose that feedback about undergraduate teaching that is characterized by these features can lead to tangible benefits, including instructor growth and accolades, increased instructor motivation, and improved student learning.

Table 1.

Providing effective instructional feedback

Qualities of effective feedback are listed below, each with its characteristics; suggestions for implementation follow each characteristic.

1. Clarifies the task by providing instruction and correction
• Provides instruction. Suggestions: teaching and learning conferences; workshops on innovative teaching practices.
• Defines a clear standard for how the task should be completed. Suggestion: online video resources.
• Concrete and specific. Suggestion: feedback is guided by validated classroom observation protocols.
• Identifies types of errors and provides suggestions for correction.
• Timely (as soon as possible after performance of the task). Suggestion: debrief immediately after the peer observation, rather than months later or at the end of the semester.
• Occurs over multiple occasions. Suggestion: observations occur several times during the semester.
• Consistent; minimizes conflicting messages from students and peers. Suggestions: discuss expectations of the department and methods for dealing with student resistance; use a consistent template for peer-teaching evaluations.
• Self-referenced (compared with an individual's ability and expectations rather than with a peer). Suggestions: discuss the individual's concerns and address specific challenges the instructor wishes to solve; meet before the classroom observation to set expectations and solicit feedback about specific challenges.
• Does not interfere with the initial stages of learning. Suggestion: choose a date after the first instructional opportunity.
• Does not threaten self-esteem. Suggestion: highlight areas of strength and areas for improvement as a formative evaluation that is not part of promotion and tenure decisions.

2. Improves motivation, which can prompt increased effort
• Leads to higher goal setting. Suggestion: focus on student outcomes and on changes that result in gains in student achievement.
• Provides a positive, encouraging message. Suggestion: acknowledge challenges but emphasize solutions.
• Accounts for confidence and experience level. Suggestion: for novices, emphasize what they are doing well; experts are ready for more corrective feedback.

3. Perceived as valuable by the recipient because it is provided by a reputable source
• Encourages seeking feedback voluntarily. Suggestion: unit head implements a peer-coaching model with volunteers.
• Increases perception of the value of feedback for improving job status. Suggestion: unit head rewards feedback seeking in the same way he or she rewards positive student evaluations when evaluating faculty performance.
• Protects the ego and others’ impressions. Suggestions: keep feedback private and developmental rather than public and evaluative; copies of any written materials provided to the department mention that peer evaluation occurred, not the substance of the discussions.
• Respected status of feedback provider. Suggestion: a knowledgeable source of higher status who expresses that the feedback is provided for the well-being and improvement of the recipient and for improved student outcomes.

Effective Feedback Clarifies the Task in a Specific, Timely Manner, with a Consistent Message That Informs Recipients How to Improve

At a fundamental level, feedback provides information useful for measuring performance against expectations (a task standard) and provides suggestions to correct discrepancies between one's performance and that task standard (Hattie and Timperley, 2007). To correct discrepancies, feedback must identify the type and extent of errors and contain suggestions for correcting them (Scheeler et al., 2004). If the task standard the recipient is aiming for is not clear, then feedback is less likely to be effective. For example, physicians in training are able to improve performance when the feedback they receive includes critical incidents that indicate when their performance deviated from the task standard (Wigton et al., 1986). The recipients of this specific feedback understand their evaluations better (Ilgen et al., 1979). However, if there is no clear task-related standard against which to compare performance on a novel task, then it should not be surprising that feedback will have little effect. And if the recipient receives conflicting feedback from different sources in the environment (e.g., peers and students), the discrepancy may make it difficult to resolve how to integrate the feedback (Kluger and DeNisi, 1996).

Feedback must be concrete and specific: not only is concrete, specific feedback preferred by recipients (Liden and Mitchell, 1985), it is also more effective than general feedback. For example, K–12 teachers are more likely to improve their behaviors (e.g., the amount of time spent asking questions of students or other pacing and prompting behaviors) when they are given specific feedback that includes examples of how to improve rather than just general information, for example, telling them the number of questions they asked students (Englert and Sugai, 1983; Hindman and Polsgrove, 1988; Giebelhaus, 1994; O’Reilly and Renzaglia, 1994).

Feedback has been shown to be most effective when it is provided in a timely manner. In the K–12 setting, researchers compared changes in teaching behaviors following feedback delivered immediately or after a delay; delayed feedback was less effective than feedback provided immediately after performance. Immediate feedback involved supervisors interrupting instruction when the teacher incorrectly performed a target behavior, identifying the error for the teacher, asking the teacher how he or she could correct the error, and often providing a more appropriate procedure or modeling the correct behavior (O’Reilly, 1992; O’Reilly and Renzaglia, 1994; Coulter and Grossen, 1997). Similar studies demonstrated that feedback was more effective at changing teaching behaviors beyond an immediate class session if given over multiple, but not overly frequent, occasions (Rezler and Anderson, 1971; Ilgen et al., 1979; Chhokar and Wallin, 1984; Fedor and Buckley, 1987).

Effective feedback provides a consistent message that considers both the recipient's knowledge and other conflicting messages they may be receiving. Both peers and students explicitly compare teaching performance with that of other instructors (Cavanagh, 1996). McColskey and Leary (1985) refer to this comparative feedback as “norm-referenced.” Norm-referenced feedback that conveyed the message of failure (negative feedback) led to lower self-esteem, expectations, and motivation (McColskey and Leary, 1985). In contrast, “self-referenced” feedback, which compared an individual's performance with other measures of his or her ability, produced increased feelings of competence, because the feedback attributed the individual's skills to personal effort and contained higher expectations for future performance (McColskey and Leary, 1985).

One alternative to norm-referenced feedback is Utell's (2013) facilitative feedback model, which seeks to build skills and expose opportunities for growth. The facilitative feedback model shares similarities with the peer-teaching discussion group model proposed by Anderson et al. (2011). Other models also rely on the establishment of a mentoring relationship between the individuals receiving and providing feedback (Showers, 1984; Centra, 1993; Johnson and Ryan, 2000). In these models, the instructor's strengths and weaknesses are explicitly identified before a task is performed. During and after performance of the task, the instructor receives feedback from the mentor, who suggests ways to improve and highlights areas of strength and future potential. Additionally, meeting before the observation may increase buy-in for the process; it opens the door for two-way conversation, shifting the process from evaluation to coaching, and provides opportunities for the instructor to suggest areas of concern or interest to the mentor (Skinner and Welch, 1996). This type of model accounts for individual differences in experience and presents a consistent message, which could help instructors navigate the conflicting and frequently negative feedback given by disparate sources.

Effective Feedback Encourages the Instructor, Improving Motivation and Stimulating Increased Effort

Both the tone of feedback and the context in which it is given have been shown to be important in determining effectiveness. Business management author Michael LeBoeuf's maxim from his 1985 book The Greatest Management Principle in the World (Putnam), “what gets rewarded gets done,” reminds us to consider the factors that motivate someone to want to improve at his or her job. Locke and Latham's (2006) goal-setting theory suggests that providing feedback per se does not improve motivation or performance, but it will do so if it leads to higher goals being set or greater commitment to existing goals. In a meta-analysis of 33 studies, Locke and Latham (1990) report that setting specific, challenging goals, instead of easy or vague goals like “doing your best,” consistently led to better performance.

Feedback should be positively framed but not generically positive. Instructors prefer hearing positive feedback over negative feedback (Jussim et al., 1995). Feedback is more easily recalled when it is accompanied by a positive encouraging message compared with negative messages (Podsakoff and Farh, 1989); and positive feedback is considered more accurate (Podsakoff and Farh, 1989; Jussim et al., 1995). In K–12 settings, researchers have demonstrated that the addition of a positive message to noncorrective feedback (e.g., information on the number of times the teacher exhibited a specific behavior) increases the effectiveness of that feedback as compared with noncorrective feedback alone (Cossairt et al., 1973). However, perpetually receiving only positive feedback leads to complacency (Podsakoff and Farh, 1989); perhaps an instructor begins to think, “I am doing so well, I don't need to improve.”

Feedback providers should consider the confidence and experience of the recipient when choosing the appropriate amount of encouragement. Individuals with lower self-confidence tend to view negative feedback as more accurate (Jussim et al., 1995) and to rely on feedback from external sources rather than from themselves (Ilgen et al., 1979). Novices generally have lower self-esteem, and they indicate a preference for positive feedback. For example, novice learners preferred language instructors who emphasized what students were doing well in the classroom rather than correcting mistakes (Finkelstein and Fishbach, 2012). Experts, however, will seek out negative feedback, indicating more interest in learning what they did wrong and how to correct it (Finkelstein and Fishbach, 2012).

Unfortunately, the common practices for imparting instructional feedback in higher education do not account for differences in instructor self-confidence and experience. Faculty commonly receive negative, or what Utell (2013) refers to as “failure-based,” feedback, which focuses on fault-finding in task performance. Failure-based feedback can be found in the two most common types of teaching feedback. Students’ references to evaluations as an opportunity to “vent” (Marlin, 1987; Lindahl and Unger, 2010) or as a “plot to get back at an instructor” (Jacobs, 1987) are examples of fault-finding feedback. Students can also express failure-based feedback by choosing not to enroll in courses, and this feedback can have devastating consequences. For example, one study documented the termination of a faculty member following rising student attrition rates in a course utilizing evidence-based teaching practices (Silverthorn et al., 2006).

An instructor may therefore be unwilling to take risks, choosing not to adopt evidence-based teaching strategies that are perceived as likely to result in failure-based feedback from students or peers.

Feedback Is More Likely to Be Sought If the Potential Benefit Outweighs the Costs

As we reviewed in the Introduction, the current models for receiving feedback in higher education—end-of-course student ratings and peer reviews—are intended to assess competence using a standardized instrument, are prescribed rather than voluntary, and are not perceived as coming from credible sources. Those interested in improving teaching recommend adopting a more formative, developmental feedback model that endeavors to improve performance on a task (Weimer and Lenze, 1994) and solicits volunteers, who have been shown to be more receptive to receiving feedback (Blumenthal, 1978; Sweeney and Grasha, 1979). “Feedback seeking” better describes this type of situation, because individuals are motivated to voluntarily seek feedback for their own improvement (Ashford et al., 2003).

Organizational psychologists characterize two major competing motives that influence the likelihood that someone will voluntarily seek feedback related to job performance. Ashford and colleagues explain that “individuals are instrumentally motivated to obtain valued information, but are also motivated to protect and/or enhance their ego and to protect others’ impressions of them” (Ashford et al., 2003, p. 774). Perceived benefits and costs are weighed in each decision. For perceived benefits, feedback seekers look for credibility, seeking feedback from individuals who possess relevant and accurate information (Fedor et al., 1992; Finkelstein and Fishbach, 2012). Negative feedback is accepted only if it comes from a high-status source (Ilgen et al., 1979), and status changes both the perception of and the desire to respond to feedback (Ilgen et al., 1979; Greller, 1980). On the other side, costs to one's ego are also considered. For example, researchers find that individuals with longer time on the job seek less feedback, possibly due to reduction in perceived value or increased perception of costs (Ashford, 1986). In addition, feedback is more likely to be sought if the situation is uncertain and the individual perceives the risk to his or her job warrants this sacrifice of his or her ego (Hays and Williams, 2011). Individuals are more likely to seek feedback if the supervisor shows respect and concern (VandeWalle et al., 2000) and if the feedback will be private and developmental rather than public and evaluative (Ashford and Northcraft, 1992).

The organizational context for university faculty bears some similarity to the corporate and K–12 scenarios studied above. Our tiered system of ranks denotes status, and established individuals with tenure have less uncertainty about their future than junior faculty and instructors. One of the major differences may be the particularly low value associated with teaching performance and the associated lack of reward for teaching activities (Hativa, 1995; Walczyk and Ramsey, 2003; Gibbs and Coffey, 2004; AAAS, 2010; Mervis, 2013). Faculty members attribute greater value to feedback if it comes from sources who are knowledgeable, and they also consider the perspective and motivation of the source (Wergin et al., 1976). Applying the principles from an organizational setting, one would predict that junior university faculty would be more likely to voluntarily seek out feedback if it is perceived as providing value—for example, increasing the likelihood of receiving tenure and promotion. Feedback, even negative feedback, would also be accepted and responded to if the source is in a position of greater status. For tenured faculty members, there is less value added from feedback: they are not likely to gain status as a result of improving their teaching, so the cost to their self-image may be too great to warrant voluntarily seeking feedback from peers.

Vision for Feedback in Higher Ed

We summarize these research findings here to formulate specific suggestions for structuring feedback (Table 1), so that feedback best supports a faculty recipient in modifying and improving his or her teaching. If at all possible, feedback should be delivered immediately and on more than one occasion. This could entail reviewing instructional materials and discussing ideas for improvement right before and after a class session, rather than after the long delay common to end-of-semester student evaluations or peer evaluations. Feedback providers need to be perceived as sympathetic, credible, and unbiased. Selecting coaches from outside the tenure-granting department may minimize conflicts, preserve collegiality, and allow senior faculty access to expert role models (Huston and Weaver, 2008). However, research on peer coaching in the K–12 setting, using collaborative teams of teachers of equal status rather than expert supervisors, also showed demonstrable improvements in teaching behavior and student achievement (Showers, 1984). Stes and colleagues (2010), reviewing the handful of studies empirically examining the effects of instructional mentoring or coaching in higher education, noted improvements in teachers' attitudes (Finkelstein, 1995; Gallos et al., 2005; McShannon and Hynes, 2005) and knowledge (Harnish and Wild, 1993) after peer mentoring and coaching. However, none of these studies utilized comparison groups or empirically and specifically tested the effect of the mentor's status. Regardless of their status, providers need to be able to account for individual differences in experience and self-confidence when counseling recipients.

To be most useful, feedback should be voluntarily sought. Newer faculty members, be they tenure-track or not, are more likely to appreciate the benefit of feedback to their advancement. Senior faculty members without the need to achieve promotion may respond better to encouragement and to goals such as documenting improved student learning in their classes. Finally, the most effective feedback identifies errors in a positive manner and provides examples of how to improve. This requires increased openness and visibility, in which it is accepted that faculty regularly observe teaching in the classroom in the same manner used when mastering a new research technique. It also requires better descriptions (task standards) that explain what evidence-based practices look like during implementation (e.g., the taxonomy of observable practices for scientific teaching in development by Swarts et al., 2013).

OVERCOMING EXISTING BARRIERS: STRATEGIES FOR RECIPIENTS OF FEEDBACK

In this section, we identify barriers to implementing best practices for providing effective feedback on undergraduate teaching. Then we highlight strategies that recipients of feedback may borrow from existing programs facilitating pedagogical change and faculty development.

Situational barriers to providing effective feedback are apparent early in faculty careers; in fact, these barriers begin in graduate school. During their graduate training, most faculty members had few opportunities for teacher development: only a third of science graduate students report having access to a one-semester training course in pedagogy (Golde and Dore, 2001; Tanner and Allen, 2006). Given this lack of professional development, many instructors are unaware of pedagogical techniques (Crouch and Mazur, 2001; Handelsman et al., 2004; Pukkila, 2004; DeHaan, 2005). It is therefore unsurprising that effective use of challenging pedagogical techniques is rare (Andrews et al., 2011; Henderson et al., 2012). This lack of training ultimately impacts not only the use of good teaching practices but also the ability to provide instructional feedback. Scientists’ professional identities may also act as a barrier to widespread reform in science education, an idea proposed by Brownell and Tanner (2012); teaching is sometimes an undervalued part of faculty professional identity. Incorporating long-term, ongoing opportunities for pedagogical development for graduate students can address this barrier by promoting innovative ways to seek and give feedback at the earliest stages of faculty careers (Brownell and Tanner, 2012).

Alternatively, faculty members may be aware of evidence-based teaching methods but demonstrate a performance gap between what they are doing (or not doing) and what they should be doing (Andrews et al., 2011; Ebert-May et al., 2011). After exposure to these teaching practices at workshops, faculty may need additional support during implementation (Table 2). While discipline-based science education research continues to grow, there are not necessarily in-house experts to provide feedback in each department, and these individuals may not have sufficient status for their feedback to be valued. Showers’ model (1984) supports the hypothesis that peers can be effectively trained as coaches, and Bernstein (2008) mentions several models for engaging centers for teaching and learning and fellow faculty members in the process (Hutchings, 1995; Chism, 2007). Buy-in to evidence-based teaching practices may be another barrier, however. Faculty may be resistant to change for reasons such as commitment to content coverage (Anderson, 2002), lack of confidence in student ability (Brown et al., 2006; Henderson and Dancy, 2007), employment as adjunct faculty with different expectations and campus involvement (Roney and Ulerick, 2013), or concerns over classroom management (Welch et al., 1981). Consequently, instructional feedback may not be framed from a reformed perspective.

Table 2.

Resources for providing feedback in higher education

Conferences and workshops
• Instructional development workshops (centers for teaching and learning; National Academies Summer Institutes, www.academiessummerinstitute.org)
• Process Oriented Guided Inquiry Learning (POGIL; https://pogil.org)
• Project Kaleidoscope meetings (PKAL, Association of American Colleges and Universities; www.aacu.org/pkal)
• Center for the Integration of Research, Teaching, and Learning (CIRTL; www.cirtl.net)

Online videos
• iBiology education videos from the American Society for Cell Biology (www.ibiology.org/ibioeducation.html)
• Howard Hughes Medical Institute biological demonstrations (www.researchandteaching.bio.uci.edu/lecture_demo.html#ATP)

Classroom observation protocols
• Reformed Teaching Observation Protocol (RTOP; http://physicsed.buffalostate.edu/AZTEC/RTOP/RTOP_full/about_RTOP.html)
• Classroom Observation Protocol for Undergraduate STEM (COPUS; Smith et al., 2013)
• Taxonomy of observable practices for scientific teaching (Swarts et al., 2013)
• Electronic Quality of Inquiry Protocol (EQUIP; Marshall et al., 2010)

Departmental culture
• Discuss expectations of the department and methods for dealing with student resistance (Seidel and Tanner, 2013)
• PULSE Vision & Change Rubrics (Aguirre et al., 2013)

Peer evaluation
• An excellent guide to peer evaluation of teaching: http://tenntlc.utk.edu/ut-peer-evaluation-of-teaching-guide
• Peer Review of Teaching project: www.courseportfolio.org/peer/pages/index.jsp
• Peer Review of Teaching: A Sourcebook, 2nd ed. (Chism, 2007)
• “The role of colleagues in the evaluation of college teaching” (Cohen and McKeachie, 1980)

Moreover, the reward structure at research institutions often undervalues teaching (Hativa, 1995; Walczyk and Ramsey, 2003; Gibbs and Coffey, 2004; AAAS, 2010; Mervis, 2013). Often, there are no formal mechanisms in place for offering peer feedback beyond promotion and tenure evaluations, nor are there rewards for participating in a peer-feedback process. Faculty members may lack incentives for improving teaching while facing high expectations for research productivity (Boyer Commission on Educating Undergraduates in the Research University, 1998; NRC, 2003; DeHaan, 2005). Taken together, these barriers compound over time, so that a sense of community around teaching in higher education may not be the norm.

Given the barriers described above, we recommend that change-makers and faculty development consultants consider the following example. In nursing, researchers have identified a systematic approach to improving productivity and competence (Stolovitch et al., 2000). The stepwise approach involves first analyzing the performance gap to understand the difference between the behavior exhibited and expectations, as well as its significance. Then, the underlying cause of the gap is identified before an appropriate intervention is selected. Finally, subsequent change is measured (Stolovitch et al., 2000). This approach has relevance for higher education, as there may be multiple underlying reasons that faculty fail to adopt evidence-based teaching practices. Feedback providers should use knowledge of the reason(s) why someone is not implementing evidence-based teaching practices to frame and develop appropriate feedback interventions. Change-makers should consider that multiple intertwined causes may prevent effective implementation. This stepwise analysis supports feedback-giving efforts tailored to individuals’ needs and challenges, with room for flexibility, variation, and change.

Both change-makers and feedback recipients might look to strategies that support shifts in professional identity while building a sense of community around teaching, thus changing culture (Table 2). Establishing faculty learning communities for those willing to participate is one avenue for offering and receiving regular feedback beyond student evaluations and drop-in peer evaluations. Peer coaching is another strategy that may support this shift. A peer-feedback model, unlike a one-time classroom observation, is all-encompassing—providing feedback about everything from learning objectives to assessment strategies—rather than evaluating only in-class performance. In this model, instructors regularly observe one another, providing support, feedback, and assistance in order to improve one another's instructional practices (Mallette et al., 1999; Weimer, 2002; Huston and Weaver, 2008). Weimer (2002, p. 197) suggests that this is a way to let peers “function as colleagues and work collaboratively on improvement efforts.” Weimer offers two useful guiding principles: first, practice the “golden rule” in giving feedback, “give unto each other the kind and quality of feedback you would like to receive,” and second, develop an agenda. With a defined agenda, faculty members may learn and reflect together on specific problems. This shifts the feedback giving-and-receiving dynamic from a one-way exchange to more productive two-way communication. Both faculty learning communities and peer coaching may support science, technology, engineering, and mathematics (STEM) faculty grappling with student resistance to evidence-based instructional practices.

Given what we know about best practices for feedback, we recommend that change-makers, feedback providers, and feedback recipients focus on identifying how to make feedback specific, timely, corrective, and positively framed. Both change-makers and feedback recipients might borrow tools from existing faculty development programs to structure higher-quality feedback (Table 2). For example, interested faculty might adopt the feedback practices of the Faculty Institutes for Reforming Science Teaching (FIRST IV; www.msu.edu/∼first4/Index.html). FIRST IV participants watch videotaped classroom sessions and then respond to questions such as: “What are the students doing? What is the instructor doing? How would you go about changing this classroom so it is more student-centered? What is the instructor doing that students themselves should be doing?” Participants discuss and reflect, and then perform self-evaluations of their own videotaped classroom samples in concert with peer and expert review. Faculty may also use rubrics developed by the Partnership for Undergraduate Life Science Education (PULSE; www.pulsecommunity.org). These rubrics are intended to structure department-level discussion and reflection about how program curricula and teaching practices align with Vision and Change goals, and faculty may use them to spark more nuanced discussions about feedback on teaching practices. Extensive additional resources are available through the Center for the Integration of Research, Teaching, and Learning (CIRTL; www.cirtl.net) and the Measures of Effective Teaching (MET) project (www.metproject.org/faq.php).

Feedback recipients may be their own best advocates for receiving more useful feedback (Tables 1 and 2). Feedback recipients could propose a preobservation meeting to discuss class goals, challenges faced, and areas in which a peer observer might suggest specific strategies. This preobservation meeting may set up a framework for recipients to receive more thoughtful, focused, practical feedback; such a framework may also increase recipients’ perception of the value of feedback and give them a voice in the process. Because barriers to accessing locally based learning communities may exist, programs such as PULSE make use of technology to share resources across institutional borders. We encourage feedback recipients to think beyond their department walls and to seek additional feedback from external mentors. From research about highly effective athletic coaches, we know that individuals with strong social networks who discussed their practices with others and dedicated portions of their off-season to studying their sports had better winning records than coaches who did not (Horton and Young, 2010). Essentially, winning coaches were more successful because they actively sought out feedback to improve their performance. Instructors, like coaches, also benefit from discussing their practices and sharing feedback to achieve a winning season as measured by student achievement. This mirrors what we know about how people learn: we continually reconstruct our understanding of the world, and this process is social (Bransford et al., 2000). Likewise, we need to actively seek feedback to revise and improve our teaching practices.

AREAS FOR FURTHER RESEARCH

What we know about best practices for feedback primarily comes from the realm of K–12 teacher education research, as well as organizational psychology research. Research about best practices for instructional feedback in higher education—for college faculty—is uncharted territory. Here, we propose several areas of instructional feedback in need of more research, specifically focusing on instructional feedback for college faculty and potential outcomes related to student experiences.

Many faculty members, including educational researchers, are confused or disagree about what exactly constitutes active learning (Hativa, 1995; Miller et al., 2000; Winter et al., 2001; Hanson and Moser, 2003; Yarnall et al., 2007; Chi, 2009; Allendoerfer et al., 2012). As a result, faculty members struggle to define the standards by which to frame feedback. Few models exist; consequently, even faculty members who have attended workshops about active learning mischaracterize their own performance (Ebert-May et al., 2011). This disconnect between understanding and implementation suggests that feedback must clarify specific expectations while limiting contradictory information. One resource compilation to help instructors better envision and create engaged classroom environments is in development: the iBiology Project at the American Society for Cell Biology is creating and posting videos through its iBiologyEducation YouTube channel that showcase evidence-based classroom practices (iBioEducation, 2013). Research is needed to address questions such as: How does feedback that includes clarification about effectively implementing evidence-based teaching practices impact faculty teaching practices? In other words, to what extent does “clarifying the task” aid instructors? Does this increase the likelihood that faculty members are able to accurately define and effectively implement active-learning strategies?

We know that simply providing instructors with evidence about their teaching practices is not enough to instigate improved teaching (Andrews and Lemons, personal communication). Tools are needed to provide structured feedback for evidence-based teaching practices that will both support implementation and inform a peer-teaching evaluation system. Classroom observation protocols exist (e.g., the Reformed Teaching Observation Protocol; Sawada et al., 2002), but these are used for evaluative research purposes rather than for formative feedback, and the measurement scales are challenging to interpret (Marshall et al., 2010). Moreover, these do not offer strategic feedback for improvement (Marshall et al., 2011). New classroom observation protocols are in development that may be useful for formative instructional feedback (Eddy et al., 2013; Smith et al., 2013; Swarts et al., 2013), as is a feedback tool to improve evidence-based teaching practices (Gormally et al., unpublished data). More work is needed to understand: What are effective means for instructional feedback in higher education? How should this feedback be structured? What types of feedback do instructors report as most engaging them in trying new techniques?

To understand how to motivate faculty to seek and use feedback, we need to clarify the types of feedback desired by faculty in different job settings. First, we need to know more about how faculty members give and receive feedback. Then we can ask whether informal or formal feedback approaches yield different outcomes in terms of how instructors perceive and respond to the feedback. How does an instructor's perception of a feedback provider's value impact his or her response to feedback? How does the manner in which the feedback is conveyed impact instructor morale? How do different types of faculty respond to different ways of conveying instructional feedback? It will also be critical to characterize, measure, and quantify instructional change following feedback. How do faculty behaviors, beliefs, and attitudes change as a result of feedback? How do faculty professional identities shift as a result of feedback? Researchers may explore whether we begin to see a cultural shift and whether “what gets rewarded gets done” will encompass both research and teaching.

Studies show modest but significant improvements in teaching as measured by student perceptions (through student evaluations) of faculty change (Cohen, 1980; Safavi et al., 2013). We need to understand whether receiving feedback ultimately impacts student outcomes. How do students perceive changes in teaching behaviors following feedback? Further, how might end-of-semester course evaluations be revised to be more learner centered? How might the type of feedback elicited from a learner-centered course evaluation differ from a teacher-centered course evaluation? Do faculty members view this feedback as more valuable than traditional teacher-centered course evaluations? Do more faculty members report using this feedback? How might this feedback be used to address or head off student resistance in future courses? How does feedback lead to change that impacts student attitudes about the classroom environment, pedagogy, and learning science? Research to address these questions could substantially affect both faculty and student resistance to adopting evidence-based practices.

People are more likely to increase effort when “the goal is clear, when high commitment is secured for it, and when belief in eventual success is high” (Kluger and DeNisi, 1996). The efforts on the part of STEM instructors to reform instruction and shift the status quo closer to evidence-based teaching practices are heroic and ongoing, but we must match these efforts with improved instructional feedback. More research is needed to understand the outcomes and impacts of offering feedback to faculty. Implementing a reformed instructional feedback protocol, in addition to reformed teaching, may seem daunting. However, our current strategies for providing instructional feedback in STEM are inadequate. Therefore, we must challenge one another to move beyond student evaluations and the typically unproductive drop-in observations. Instead, we must advocate for more research in STEM education that focuses on the outcomes of improved instructional feedback, leading to the development and implementation of successful models of instructional feedback.

ACKNOWLEDGMENTS

The authors acknowledge continuing support and feedback from the University of Georgia Biology Education Research Group. This work was supported by National Science Foundation grant DUE-0942261 to P.B.

REFERENCES

  1. Abrami PC. How should we use student ratings to evaluate teaching? Res High Educ. 1989;30:221–227. [Google Scholar]
  2. Abrami PC, Cohen PA, d’Apollonia S. Validity of student-ratings of instruction—what we know and what we do not. J Educ Psychol. 1990;82:219–231. [Google Scholar]
  3. Addy TM, Blanchard MR. The problem with reform from the bottom up: instructional practises and teacher beliefs of graduate teaching assistants following a reform-minded university teacher certificate programme. Int J Sci Educ. 2010;32:1045–1071. [Google Scholar]
  4. Aguirre KM, Balser TC, Thomas J, Marley KE, Miller KG, Osgood MP, Pape-Lindstrom PA, Romano SL. Letter to the editor: PULSE Vision & Change rubrics. CBE Life Sci Educ. 2013;12:579–581. doi: 10.1187/cbe.13-09-0183. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Aleamoni LM. Student rating myths versus research facts from 1924 to 1998. J Pers Eval Educ. 1999;13:153–166. [Google Scholar]
  6. Aleamoni LM, Hexner PZ. A review of the research on student evaluation and a report on the effect of different sets of instructions on student course and instructor evaluation. Instruct Sci. 1980;9:67–84. [Google Scholar]
  7. Allendoerfer C, Kim MJ, Burpee E, Wilson D, Bates R. 2012. Awareness of and receptiveness to active learning strategies among STEM faculty. Frontiers in Education Conference Proceedings, Seattle, WA, pp. 1–6. [Google Scholar]
  8. American Association for the Advancement of Science (AAAS) Vision and Change: A Call to Action. Washington, DC: 2010. [Google Scholar]
  9. AAAS. Vision and Change in Undergraduate Biology: A Call to Action. Washington, DC: 2011. [Google Scholar]
  10. Anderson R. Reforming science teaching: what research says about inquiry. J Sci Teach Educ. 2002;13:1–2. [Google Scholar]
  11. Anderson WA, et al. Changing the culture of science education at research universities. Science. 2011;331:152–153. doi: 10.1126/science.1198280. [DOI] [PubMed] [Google Scholar]
  12. Andrews TM, Leonard MJ, Colgrove CA, Kalinowski ST. Active learning NOT associated with student learning in a random sample of college biology courses. CBE Life Sci Educ. 2011;10:394–405. doi: 10.1187/cbe.11-07-0061. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Ashford SJ. Feedback-seeking in individual adaptation—a resource perspective. Acad Manage J. 1986;29:465–487. [Google Scholar]
  14. Ashford SJ, Blatt R, VandeWalle D. Reflections on the looking glass: a review of research on feedback-seeking behavior in organizations. J Management. 2003;29:773–799. [Google Scholar]
  15. Ashford SJ, Northcraft GB. Conveying more (or less) than we realize—the role of impression-management in feedback-seeking. Organ Behav Hum Dec. 1992;53:310–334. [Google Scholar]
  16. Bandura A. Social Learning Theory. Englewood Cliffs, NJ: Prentice Hall; 1977. [Google Scholar]
  17. Bangert-Drowns RL. The instructional effect of feedback in test-like events. Rev Educ Res. 1991;61:213–238. [Google Scholar]
  18. Bernstein DJ. Peer review and evaluation of the intellectual work of teaching. Change. 2008;40:48–51. [Google Scholar]
  19. Blumenthal P. Watching ourselves teaching psychology. Teach Psychol. 1978;5:162–163. [Google Scholar]
  20. Boyer Commission on Educating Undergraduates in the Research University. Reinventing Undergraduate Education: A Blueprint for America's Research Universities. Stony Brook: State University of New York: 1998. www.sunysb.edu/boyerreport (accessed 11 June 2013) [Google Scholar]
  21. Bransford JDE, Brown ALE, Cocking RRE. How People Learn: Brain, Mind, Experience, and School, expanded ed. Washington, DC: National Academies Press; 2000. [Google Scholar]
22. Brickman P, Gormally C, Armstrong N, Brittan H. Effects of inquiry-based learning on students’ science literacy skills and confidence. Int J Scholarsh Teach Learn. 2009;3(2):1–22.
23. Brinko KT. The practice of giving feedback to improve teaching: what is effective? J High Educ. 1993;64:574–593.
24. Brown PL, Abell SK, Demir A, Schmidt FJ. College science teachers’ views of classroom inquiry. Sci Educ. 2006;90:784–802.
25. Brownell SE, Tanner KD. Barriers to faculty pedagogical change: lack of training, time, incentives, and … tensions with professional identity? CBE Life Sci Educ. 2012;11:339–346. doi: 10.1187/cbe.12-09-0163.
26. Callahan JP. Faculty attitude towards student evaluation. Coll Student J. 1992;26:98–102.
27. Cashin WE. Students do rate academic fields differently. In: Theall M, Franklin J, editors. New Directions for Teaching and Learning. San Francisco, CA: Jossey-Bass; 1990. pp. 113–121.
28. Cashin WE, Downey RG. Using global student rating items for summative evaluation. J Educ Psychol. 1992;84:563–572.
29. Cavanagh RR. Formative and summative evaluation in the faculty peer review of teaching. Innov High Educ. 1996;20:235–240.
30. Centra JA. Reflective Faculty Evaluation: Enhancing Teaching and Determining Faculty Effectiveness. San Francisco, CA: Jossey-Bass; 1993.
31. Centra JA. Evaluating the teaching portfolio: a role for colleagues. New Direct Teach Learn. 2000;2000(83):87–93.
32. Chandler TA. The questionable status of student evaluations of teaching. Teach Psychol. 1978;5:150–152.
33. Cheng DA. Effects of class size on alternative educational outcomes across disciplines. Econ Educ Rev. 2011;30:980–990.
34. Chhokar JS, Wallin JA. A field study on the effect of feedback frequency on performance. J Appl Psychol. 1984;69:524–530.
35. Chi MTH. Active-constructive-interactive: a conceptual framework for differentiating learning activities. Top Cogn Sci. 2009;1:73–105. doi: 10.1111/j.1756-8765.2008.01005.x.
36. Chism NV. Peer Review of Teaching: A Sourcebook. 2nd ed. Bolton, MA: Anker; 2007.
37. Cohen PA. Effectiveness of student-rating feedback for improving college instruction: a meta-analysis of findings. Res High Educ. 1980;13:321–341.
38. Cohen PA, McKeachie WJ. The role of colleagues in the evaluation of college teaching. Improving Coll Univ Teach. 1980;28:147–154.
39. Cossairt A, Hall RV, Hopkins BL. Effects of experimenters’ instructions, feedback, and praise on teacher praise and student attending behavior. J Appl Behav Anal. 1973;6:89–100. doi: 10.1901/jaba.1973.6-89.
40. Coulter GA, Grossen B. The effectiveness of in-class instructive feedback versus after-class instructive feedback for teachers learning direct instruction teaching behaviors. Effect School Pract. 1997;16:21–35.
41. Crouch CH, Mazur E. Peer instruction: ten years of experience and results. Am J Phys. 2001;69:970–977.
42. Dancy M, Henderson C. Pedagogical practices and instructional change of physics faculty. Am J Phys. 2010;78:1056–1063.
43. d’Apollonia S, Abrami PC. Navigating student ratings of instruction. Am Psychol. 1997;52:1198–1208.
44. DeHaan RL. The impending revolution in undergraduate science education. J Sci Educ Technol. 2005;14:253–269.
45. Derting TL, Ebert-May D. Learner-centered inquiry in undergraduate biology: positive relationships with long-term student achievement. CBE Life Sci Educ. 2010;9:462–472. doi: 10.1187/cbe.10-02-0011.
46. Dowell DA, Neal JA. A selective review of the validity of student ratings of teaching. J Higher Educ. 1982;53:51–62.
47. Ebert-May D, Brewer C, Allred S. Innovation in large lectures—teaching for active learning. BioScience. 1997;47:601–607.
48. Ebert-May D, Derting TL, Hodder J, Momsen JL, Long TM, Jardeleza SE. What we say is not what we do: effective evaluation of faculty professional development programs. BioScience. 2011;61:550–558.
49. Eddy S, Converse M, Abshire E, Longton C. Development and implementation of an instrument to characterize active learning in large lecture classes. In: Wenderoth MP, editor. Conference Proceeding. Minneapolis, MN: Society for the Advancement of Biology Education Research; 2013.
50. Englert CS, Sugai G. Teacher training: improving trainee performance through peer observation and observation system technology. Teach Educ Special Educ. 1983;6:7–17.
51. Fedor DB, Buckley MR. Providing feedback to organizational members. J Bus Psychol. 1987;2:171–181.
52. Fedor DB, Rensvold RB, Adams SM. An investigation of factors expected to affect feedback seeking—a longitudinal field study. Pers Psychol. 1992;45:779–805.
53. Feldman KA. Effective college teaching from the students’ and faculty’s view: matched or mismatched priorities? Res High Educ. 1988;28:291–344.
54. Finkelstein M. Assessing the Teaching and Student Learning Outcomes of the Katz/Henry Faculty Development Model. South Orange: New Jersey Institute for Collegiate Teaching and Learning; 1995.
55. Finkelstein SR, Fishbach A. Tell me what I did wrong: experts seek and respond to negative feedback. J Consumer Res. 2012;39:22–38.
56. Franklin J, Theall M, Ludlow L. Grade inflation and student ratings: a closer look. Paper presented at the annual meeting of the American Educational Research Association. Chicago; 1991.
57. Freeman S, O’Connor E, Parks JW, Cunningham M, Hurley D, Haak D, Dirks C, Wenderoth MP. Prescribed active learning increases performance in introductory biology. CBE Life Sci Educ. 2007;6:132–139. doi: 10.1187/cbe.06-09-0194.
58. Gallos MR, van den Berg E, Treagust DF. The effect of integrated course and faculty development: experiences of a university chemistry department in the Philippines. Int J Sci Educ. 2005;27:985–1006.
59. Gibbs G, Coffey M. The impact of training of university teachers on their teaching skills, their approach to teaching and the approach to learning of their students. Active Learn Higher Educ. 2004;5:87–100.
60. Giebelhaus CR. The mechanical third ear device: a student teaching supervision alternative. J Teach Educ. 1994;45:365–373.
61. Golde CM, Dore TM. At Cross Purposes: What the Experiences of Today’s Doctoral Students Reveal about Doctoral Education. Philadelphia, PA: Pew Charitable Trusts; 2001.
62. Goldman L. On the erosion of education and the eroding foundations of teacher education (or why we should not take student evaluation of faculty seriously). Teacher Educ Q. 1993;20:57–64.
63. Green G, Osborne JG. Does vicarious instigation provide support for observational learning theories? A critical review. Psychol Bull. 1985;97:3–16.
64. Greller MM. Evaluation of feedback sources as a function of role and organizational level. J Appl Psychol. 1980;65:24–27.
65. Handelsman J, et al. Scientific teaching. Science. 2004;304:521–522. doi: 10.1126/science.1096022.
66. Hanson S, Moser S. Reflections on a discipline-wide project: developing active learning modules on the human dimensions of global change. J Geogr Higher Educ. 2003;27:17–38.
67. Harnish D, Wild LA. Peer mentoring in higher education: a professional development strategy for faculty. Commun College J Res Pract. 1993;17:271–282.
68. Hativa N. The department-wide approach to improving faculty instruction in higher education—a qualitative evaluation. Res High Educ. 1995;36:377–413.
69. Hattie J, Timperley H. The power of feedback. Rev Educ Res. 2007;77:81–112.
70. Hays JC, Williams JR. Testing multiple motives in feedback seeking: the interaction of instrumentality and self protection motives. J Vocat Behav. 2011;79:496–504.
71. Henderson C. Promoting instructional change in new faculty: an evaluation of the physics and astronomy new faculty workshop. Am J Phys. 2008;76:179–187.
72. Henderson C, Beach A, Finkelstein N. Facilitating change in undergraduate STEM instructional practices: an analytic review of the literature. J Res Sci Teach. 2011;48:952–984.
73. Henderson C, Dancy MH. Barriers to the use of research-based instructional strategies: the influence of both individual and situational characteristics. Phys Rev ST Phys Educ Res. 2007;3:020102.
74. Henderson C, Dancy MH. Impact of physics education research on the teaching of introductory quantitative physics in the United States. Phys Rev ST Phys Educ Res. 2009;5:020107.
75. Henderson C, Dancy M, Niewiadomska-Bugaj M. Use of research-based instructional strategies in introductory physics: where do faculty leave the innovation-decision process? Phys Rev ST Phys Educ Res. 2012;8:020104.
76. Hills JR. On the use of student ratings of faculty in determination of pay, promotion, and tenure. Res High Educ. 1974;2:317–324.
77. Hindman SE, Polsgrove L. Differential effects of feedback on preservice teacher behavior. Teach Educ Special Educ. 1988;11:25–29.
78. Horton S, Young B. Pedagogical self-improvement methods: lessons from a master coach extrapolated to developing educators. PHENex Journal/Revue phenEPS. 2010;2(2):1–12.
79. Huston T, Weaver CL. Peer coaching: professional development for experienced faculty. Innov High Educ. 2008;33:5–20.
80. Hutchings P. From Idea to Prototype: The Peer Review of Teaching. Sterling, VA: Stylus; 1995.
81. iBioEducation. iBiology Scientific Teaching Series. YouTube; 2013.
82. Ilgen DR, Fisher CD, Taylor MS. Consequences of individual feedback on behavior in organizations. J Appl Psychol. 1979;64:349–371.
83. Iqbal I. Academics’ resistance to summative peer review of teaching: questionable rewards and the importance of student evaluations. Teach High Educ. 2013;18:557–569.
84. Ismail EA, Buskist W, Groccia JE. Peer review of teaching. In: Kite ME, editor. Effective Evaluation of Teaching: A Guide for Faculty and Administrators. Society for the Teaching of Psychology; 2012. p. 95.
85. Jacobs LC. University Faculty and Students’ Opinions of Student Ratings, Indiana Studies in Higher Education no. 55. Bloomington: Bureau of Evaluative Studies and Testing, Indiana University; 1987.
86. Johnson TD, Ryan KE. A comprehensive approach to the evaluation of college teaching. New Dir Teach Learn. 2000;2000(83):109–123.
87. Jussim L, Yen HJ, Aiello JR. Self-consistency, self-enhancement, and accuracy in reactions to feedback. J Exp Soc Psychol. 1995;31:322–356.
88. Keig L. Formative peer review of teaching: attitudes of faculty at liberal arts colleges towards colleague assessment. J Pers Eval Educ. 2000;14:67–87.
89. Keig L, Waggoner MD. Collaborative Peer Review: The Role of Faculty in Improving College Teaching, ASHE-ERIC Higher Education Report no. 2. Washington, DC: ERIC Publications; 1994.
90. Kember D, Leung DYP, Kwan KP. Does the use of student feedback questionnaires improve the overall quality of teaching? Assess Eval High Educ. 2002;27:411–425.
91. Kluger AN, DeNisi A. The effects of feedback interventions on performance: a historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychol Bull. 1996;119:254–284.
92. Knight JK, Wood WB. Teaching more by lecturing less. Cell Biol Educ. 2005;4:298–310. doi: 10.1187/05-06-0082.
93. Kolitch E, Dean AV. Student ratings of instruction in the USA: hidden assumptions and missing conceptions about “good” teaching. Stud High Educ. 1999;24:27–42.
94. Kremer JF. Construct validity of multiple measures in teaching, research, and service and reliability of peer ratings. J Educ Psychol. 1990;82:213–218.
95. Liden RC, Mitchell TR. Reactions to feedback—the role of attributions. Acad Manage J. 1985;28:291–308.
96. Lindahl MW, Unger ML. Cruelty in student teaching evaluations. Coll Teach. 2010;58:71–76.
97. Locke EA, Latham GP. The Theory of Goal Setting and Task Performance. Englewood Cliffs, NJ: Prentice Hall; 1990.
98. Locke EA, Latham GP. New directions in goal-setting theory. Curr Dir Psychol Sci. 2006;15:265–268.
99. Loeher L. An Examination of Research University Faculty Evaluation Policies and Practice. Portland, OR: Professional and Organizational Development; 2006.
100. Malik DJ. Peer review of teaching: external review of course content. Innov High Educ. 1996;20:277–286.
101. Mallette B, Maheady L, Harper GF. The effects of reciprocal peer coaching on preservice general educators’ instruction of students with special learning needs. Teach Educ Special Educ. 1999;22:201–216.
102. Marlin JW. Student perception of end-of-course evaluations. J Higher Educ. 1987;58:704–716.
103. Marsh HW. Students’ evaluations of university teaching: dimensionality, reliability, validity, potential biases, and utility. J Educ Psychol. 1984;76:707–754.
104. Marsh HW, Roche L. The use of students’ evaluations and an individually structured intervention to enhance university teaching effectiveness. Am Educ Res J. 1993;30:217–251.
105. Marshall JC, Smart J, Horton RM. The design and validation of EQUIP: an instrument to assess inquiry-based instruction. Int J Sci Math Educ. 2010;8:299–321.
106. Marshall JC, Smart J, Lotter C, Sirbu C. Comparative analysis of two inquiry observational protocols: striving to better understand the quality of teacher-facilitated inquiry-based instruction. Sch Sci Math. 2011;111:306–315.
107. McColskey W, Leary MR. Differential effects of norm-referenced and self-referenced feedback on performance expectancies, attributions, and motivation. Contemp Educ Psychol. 1985;10:275–284.
108. McKeachie WJ. Research on college teaching: the historical background. J Educ Psychol. 1990;82:189–200.
109. McShannon J, Hynes P. Student achievement and retention: can professional development programs help faculty GRASP it? J Faculty Dev. 2005;20:87–94.
110. Menefee R. The evaluation of science teaching. J Coll Sci Teach. 1983;13:138.
111. Mervis J. Transformation is possible if a university really cares. Science. 2013;340:292–296. doi: 10.1126/science.340.6130.292.
112. Miller JW, Martineau LP, Clark RC. Technology infusion and higher education: changing teaching and learning. Innov High Educ. 2000;24:227–241.
113. Murray HG. Low-inference classroom teaching behaviors and student ratings of college teaching effectiveness. J Educ Psychol. 1983;75:138–149.
114. National Research Council (NRC). Improving Undergraduate Instruction in Science, Technology, Engineering, and Mathematics: Report of a Workshop. Washington, DC: National Academies Press; 2003.
115. NRC. Discipline-Based Education Research: Understanding and Improving Learning in Undergraduate Science and Engineering. Washington, DC: National Academies Press; 2012.
116. Neal JE. Faculty Evaluation: Its Purposes and Effectiveness, ERIC Digest. Washington, DC: ERIC Clearinghouse on Higher Education; 1988.
117. Nielsen N. Promising Practices in Undergraduate Science, Technology, Engineering, and Mathematics Education: Summary of Two Workshops. Washington, DC: National Academies Press; 2011.
118. O’Reilly MF. Teaching systematic instruction competencies to special education student teachers: an applied behavioral supervision model. J Assoc Pers Sev Handicaps. 1992;17(2):104–111.
119. O’Reilly M, Renzaglia A. A systematic approach to curriculum selection and supervision strategies: a preservice practicum supervision model. Teacher Educ Special Educ. 1994;17:170–180.
120. Overall JU, Marsh HW. Midterm feedback from students: its relationship to instructional improvement and students’ cognitive and affective outcomes. J Educ Psychol. 1979;71:856–865.
121. Podsakoff PM, Farh JL. Effects of feedback sign and credibility on goal setting and task performance. Organ Behav Hum Dec. 1989;44:45–67.
122. Pukkila PJ. Introducing student inquiry in large introductory genetics classes. Genetics. 2004;166:11–18. doi: 10.1534/genetics.166.1.11.
123. Quinlan K, Bernstein DJ. Special issue on peer review of teaching. Innov High Educ. 1996;20(4).
124. Ramsden P. A performance indicator of teaching quality in higher education: the Course Experience Questionnaire. Stud High Educ. 1991;16:129–150.
125. Rezler AG, Anderson AS. Focused and unfocused feedback and self-perception. J Educ Res. 1971;65:61.
126. Richardson JTE. Instruments for obtaining student feedback: a review of the literature. Assess Eval High Educ. 2005;30:387–415.
127. Roney K, Ulerick SL. A roadmap to engaging part-time faculty in high-impact practices. Peer Rev. 2013;15(3):24.
128. Ryan JJ. Student evaluation: the faculty responds. Res High Educ. 1980;12:317–333.
129. Safavi SA, Bakar KA, Tarmizi RA, Alwi NH. Faculty perception of improvements to instructional practices in response to student ratings. Educ Assess Eval Accountability. 2013;25:143–153.
130. Sawada D, Piburn MD, Judson E, Turley J, Falconer K, Benford R, Bloom I. Measuring reform practices in science and mathematics classrooms: the Reformed Teaching Observation Protocol. Sch Sci Math. 2002;102:245–253.
131. Scheeler MC, Ruhl KL, McAfee JK. Providing performance feedback to teachers: a review. Teacher Educ Special Educ. 2004;27:396–407.
132. Schneider G. Student evaluations, grade inflation and pluralistic teaching: moving from customer satisfaction to student learning and critical thinking. Forum Soc Econ. 2013;42:122–135.
133. Seidel SB, Tanner KD. “What if students revolt?”—considering student resistance: origins, options, and opportunities for investigation. CBE Life Sci Educ. 2013;12:586–595. doi: 10.1187/cbe.13-09-0190.
134. Seldin P. The use and abuse of student ratings of professors. Chronicle of Higher Education. 1993;39(46):A40.
135. Seldin P. Changing Practices in Evaluating Teaching: A Practical Guide to Improved Faculty Performance and Promotion/Tenure Decisions. Bolton, MA: Anker; 1999.
136. Showers B. Peer Coaching: A Strategy for Facilitating Transfer of Training. A CEPM R&D Report. Eugene: Center for Educational Policy and Management, University of Oregon; 1984.
137. Silverthorn DU, Thorn PM, Svinicki MD. It’s difficult to change the way we teach: lessons from the integrative themes in physiology curriculum module project. Adv Physiol Educ. 2006;30:204–214. doi: 10.1152/advan.00064.2006.
138. Singer SR, Nielsen NR, Schweingruber HA. Discipline-Based Education Research: Understanding and Improving Learning in Undergraduate Science and Engineering. Washington, DC: National Academies Press; 2012.
139. Skinner ME, Welch FC. Peer coaching for better teaching. Coll Teach. 1996;44:153–156.
140. Smith MK, Jones FHM, Gilbert SL, Wieman CE. The Classroom Observation Protocol for Undergraduate STEM (COPUS): a new instrument to characterize university STEM classroom practices. CBE Life Sci Educ. 2013;12:618–627. doi: 10.1187/cbe.13-08-0154.
141. Spencer PA, Flyr ML. The Formal Evaluation as an Impetus to Classroom Change: Myth or Reality? Riverside: University of California Press; 1992.
142. Stes A, Min-Leliveld M, Gijbels D, Van Petegem P. The impact of instructional development in higher education: the state-of-the-art of the research. Educ Res Rev. 2010;5:25–49.
143. Stolovitch HD, Keeps EJ, Finnegan G. Book review: Handbook of Human Performance Technology: Improving Individual and Organizational Performance Worldwide (second edition). Perf Improv. 2000;39(5):38–44.
144. Sunal DW, Hodges J, Sunal CS, Whitaker KW, Freeman LM, Edwards L, Johnston RA, Odell M. Teaching science in higher education: faculty professional development and barriers to change. Sch Sci Math. 2001;101:246–257.
145. Swarts T, Schelpat T, Couch B, Wood B. Defining observable behaviors associated with scientific teaching. In: Wenderoth MP, editor. Conference Proceeding. Minneapolis, MN: Society for the Advancement of Biology Education Research; 2013.
146. Sweeney JM, Grasha AF. Improving teaching through faculty-development triads. Educ Technol. 1979;19(2):54–57.
147. Tanner K, Allen D. Approaches to biology teaching and learning: on integrating pedagogical training into the graduate experiences of future science faculty. Cell Biol Educ. 2006;5:1–6. doi: 10.1187/cbe.05-12-0132.
148. Tiberius RG. The influence of student evaluative feedback on the improvement of clinical teaching. J High Educ. 1989;60:665–681.
149. Udovic D, Morris D, Dickman A, Postlethwait J, Wetherwax P. Workshop biology: demonstrating the effectiveness of active learning in an introductory biology course. BioScience. 2002;52:272–281.
150. Utell J. What the Food Network can teach us about feedback. University of Venus: GenX Women in Higher Ed Writing across the Globe (blog), Inside Higher Ed, January 13, 2013. www.insidehighered.com/blogs/university-venus/what-food-network-can-teach-us-about-feedback (accessed 4 July 2013).
151. VandeWalle D, Ganesan S, Challagalla GN, Brown SP. An integrated model of feedback-seeking behavior: disposition, context, and cognition. J Appl Psychol. 2000;85:996–1003. doi: 10.1037/0021-9010.85.6.996.
152. Vasta R, Sarmiento RF. Liberal grading improves evaluations but not performance. J Educ Psychol. 1979;71:207–211.
153. Walczyk JJ, Ramsey LL. Use of learner-centered instruction in college science and mathematics classrooms. J Res Sci Teach. 2003;40:566–584.
154. Walker JD, Cotner SH, Baepler PM, Decker MD. A delicate balance: integrating active learning into a large lecture course. CBE Life Sci Educ. 2008;7:361–367. doi: 10.1187/cbe.08-02-0004.
155. Weimer M. Learner-Centered Teaching. San Francisco, CA: Jossey-Bass; 2002.
156. Weimer M, Lenze LF. Instructional interventions: a review of the literature on efforts to improve instruction. In: Feldman KA, Paulsen MB, editors. Teaching and Learning in the College Classroom. Needham Heights, MA: Ginn; 1994.
157. Welch WW, Klopfer LE, Aikenhead GS, Robinson JT. The role of inquiry in science education: analysis and recommendations. Sci Educ. 1981;65:33–50.
158. Wergin JF, Mason EJ, Munson PJ. The practice of faculty development: an experience-derived model. J High Educ. 1976;47:289–308.
159. White J, Pinnegar S, Esplin P. When learning and change collide: examining student claims to have “learned nothing.” J Gen Educ. 2010;59:124–140.
160. Wigton RS, Patil KD, Hoellerich VL. The effect of feedback in learning clinical diagnosis. J Med Educ. 1986;61:816–822. doi: 10.1097/00001888-198610000-00006.
161. Wilson RC. Improving faculty teaching: effective use of student evaluations and consultants. J High Educ. 1986;57:196–211.
162. Winter D, Lemons P, Bookman J, Hoese W. Novice instructors and student-centered instruction: identifying and addressing obstacles to learning in the college science laboratory. J Scholarsh Teach Learn. 2001;2(1):14–42.
163. Wong BYL. Self-questioning instructional research: a review. Rev Educ Res. 1985;55:227–268.
164. Yarnall L, Toyama Y, Gong B, Ayers C, Ostrander J. Adapting scenario-based curriculum materials to community college technical courses. Commun College J Res Pract. 2007;31:583–601.
165. Zoller U. Faculty teaching performance evaluation in higher science education: issues and implications (a “cross-cultural” case study). Sci Educ. 1992;76:673–684.
