2011 Apr 27;16(4):505–515. doi: 10.1007/s10459-011-9295-2

Are tutor behaviors in problem-based learning stable? A generalizability study of social congruence, expertise and cognitive congruence

Judith C Williams 1, W A M Alwis 1, Jerome I Rotgans 2
PMCID: PMC3167391  PMID: 21523614

Abstract

The purpose of this study was to investigate the stability of three distinct tutor behaviors in a problem-based learning (PBL) environment: (1) use of subject-matter expertise, (2) social congruence and (3) cognitive congruence. The data comprised the input from 16,047 different students to a survey of 762 tutors administered in three consecutive semesters. Over the three semesters each tutor taught the same course twice and a different course once. A generalizability study was conducted to determine whether the tutor behaviors were generalizable across the three measurement occasions. The results indicate that three semesters are sufficient to make generalizations about all three tutor behaviors. In addition, the results show that individual differences between tutors account for the greatest differences in levels of expertise, social congruence and cognitive congruence. The study concludes that tutor behaviors are fairly consistent in PBL and somewhat impervious to change. Implications of these findings for tutor training are discussed.

Keywords: Problem-based learning, Tutor performance, Expertise, Social congruence, Cognitive congruence

Introduction

The role of the tutor in problem-based learning (PBL) is important (Hmelo-Silver 2004; Albanese 2004). Together with the quality of the problem, the learning resources, and the students’ prior knowledge, what the tutor does in the classroom can determine the quality of the students’ learning experiences, their levels of motivation to learn (Rotgans and Schmidt 2010; Chung and Chow 2004), the functioning of the small groups in which each student works (Dolmans and Wolfhagen 2005; Azer 2009) and the academic achievement of the student (Schmidt and Moust 1995; De Grave et al. 1999; Finch 1999). Much research has been conducted that has dissected the role of the tutor in PBL and described the characteristics and behaviors of those who are effective (as measured by students’ academic achievement) as compared to those who are not. We know that tutors’ subject-matter knowledge and their ability to facilitate the learning process are important factors in students’ knowledge construction (Groves et al. 2005; Hmelo-Silver and Barrows 2006; Neville 1999; Das et al. 2002). What is unclear is whether the behavior of tutors is stable, or whether, for example, it fluctuates depending on time and context.

The literature on students’ learning in PBL suggests that their actions are often situational. The student-centred nature of these classrooms provides students with choices about which learning goals and issues to pursue, and how much and for how long to study (Hmelo-Silver 2004). Fluctuations in student behavior are therefore common. Are tutor behaviors also subject to oscillation as tutors adapt to changing groups of students and different course content, or is tutoring in fact a more steady, habitual act?

One study that looked at consistency in PBL tutor behaviors was conducted by Gijselaers (1997), who measured staff performance in a medical school. He found low levels of stability across successive courses, meaning that the same tutor may behave in different ways even when teaching the same course. Indeed, he concluded that it may take the facilitation of up to 14 six-week courses (all 14 courses being different) before reliable conclusions can be drawn about how a tutor generally behaves in a PBL classroom. Gijselaers (1997) speculated that the fluctuation in performance may be due to the open nature of PBL, where the tutor is expected to modify behaviors as required; for example, in one course a group of students may need more direction from the tutor than another group in a different course. He recommends that when trying to understand tutor behaviors more attention should be paid to wider contextual factors such as group composition and the subject discipline. Dolmans et al. (1996) also investigated the long-term stability of PBL tutor performance. The aim of their study was to determine how many rating occasions were required before the scores could be considered reliable. The researchers concluded that for decisions regarding tenure and promotion, data should be collected over at least two to four occasions.

Both the Gijselaers (1997) and the Dolmans et al. (1996) studies looked at overall or aggregated ratings of tutor performance. Teaching is a complex combination of many factors: for example, knowledge of the topics being studied, an understanding of the learning process, communicating with young people, and appreciating the cultural and organisational constraints of the classroom. An overall rating makes it difficult to identify the tutor’s level of competence in each area, and hence hard to ascertain how performance in each factor contributes to overall performance. The current study differs in that it investigates three specific theory-supported behaviors that describe how a tutor performs in an active learning environment. These behaviors have been identified by Schmidt and Moust (1995) and are known as (1) social congruence, (2) use of subject-matter expertise (from this point referred to simply as expertise), and (3) cognitive congruence. Social congruence is the term given to how well a tutor is socially aligned with the students: whether the tutor is interested in the students’ lives, in what they are doing and what concerns them, and understands the difficulties they are going through—“The tutor showed interest in our personal lives”. Expertise refers to a tutor’s knowledge about a subject domain—“The tutor used his/her content knowledge to help us”. Cognitive congruence measures how well a tutor is able to present the curriculum content in such a way that it is accessible enough to engage the students in learning—“The tutor asked questions we could understand”.

Schmidt and Moust (1995) tested their theoretical model, which linked subject-matter expertise, social congruence and cognitive congruence with the functioning of small student groups, time spent on individual study and student outcomes. The data fit the model well but, unfortunately, the study was not repeated to determine whether the three behaviors are stable across courses and across years.

Social congruence and cognitive congruence have been further examined in a study by Lockspeiser et al. (2008). These researchers looked at the value of both behaviors in a medical programme that used near-peer tutoring (whereby second-year medical students took on the role of tutor to first-year students). They found that when social and cognitive congruence existed between the students and their near-peers, powerful learning experiences occurred. Students in the study felt that the near-peer tutors could anticipate problems they might have in understanding concepts, and the tutors were also able to share strategies that could help the students overcome their learning obstacles. However, the study did not look at the development of these behaviors by the near-peer tutors over a number of courses.

This study is based on the work of Schmidt and Moust (1995) and seeks to determine the level of stability of expertise, social congruence and cognitive congruence across three semesters when tutors are (1) facilitating two different modules in the same academic year and (2) facilitating the same module but across the first semester of two successive academic years.

Context of the study

The study was conducted in a polytechnic in Singapore and involved 762 tutors delivering more than 130 different courses from a broad range of vocational diplomas including diplomas in Biomedical Sciences, Biomedical Electronics, Healthcare Administration, and Health Management and Promotion. Students must complete 30 courses in order to graduate, which in practice equates to five courses per semester for 3 years.

Each course lasts one 15-week semester and comprises 15 problems. The same 15 problems are given to all students enrolled in the course. Each problem represents content that the curriculum writers have determined covers the key concepts that need to be explored in the course. Each course is offered in either the first or the second semester of each academic year. A diploma’s curriculum is fairly stable from year to year, with few changes to the problems across the semesters. Any major changes to the problem objectives must be approved through the polytechnic’s quality review process, and none of the courses in the study had gone through this systematic, formal review during the study period. Hence it is highly likely that a problem delivered in one semester will be very much the same when it is delivered 1 year later.

Tutors are employed to teach using PBL. Before entering the classroom they are required to attend a 5-day PBL orientation programme designed to familiarise them with the PBL structure used at the polytechnic and to introduce them to the rationale behind the polytechnic’s choice of pedagogy. Tutors are then expected to fulfil a further 90 hours of PBL training in their first 18 months of service. This training usually takes the form of workshops exploring a broad range of topics related to their day-to-day work as a tutor. Tutors who do not fulfil this training requirement will be in breach of their contract, which may then not be renewed at the end of the 2-year employment cycle. Tutors work with a class of twenty-five students, who work in groups of five, for a full 7-hour teaching day. On average they teach one or two courses per semester. Whilst the level of tutors’ prior knowledge was not measured, tutors are hired for their knowledge of the subject and experience in a related industry. It is likely that, as graduates in the field of study, they are knowledgeable about the key concepts covered in their courses. In addition, prior to the delivery of each problem, tutors must attend a problem briefing conducted by the curriculum writers. In these briefings key concepts are discussed and likely student learning obstacles are identified. Tutors who are unable to attend a briefing are sent a facilitation sheet that outlines the main points covered.

Every PBL tutorial follows a similar pattern regardless of the course, the tutor, or the grade of students. Each teaching day the tutor meets with the class of 25 students on three occasions, interspersed with two student self-study periods. The structure of each of the three class meetings is similar across the courses and typically involves a meeting to define the problem, a meeting to share what has been learnt thus far and to discuss learning obstacles, and a meeting to present, elaborate and defend solutions to the problem. During each meeting the tutor is expected to take the role of a facilitator of student learning rather than an instructor or transmitter of information.

Method

Participants

The sample consisted of 762 tutors (54% female, 46% male, mean age 35.7 years) who had been evaluated by students on at least three consecutive occasions. This group of 762 tutors represents approximately two-thirds of the total teaching staff at the polytechnic. As the evaluations are a mandatory part of the polytechnic’s quality assurance process for tutors, all participants in the study were teaching at the time. All 762 tutors were evaluated on all three measurement occasions—there were no drop-outs. The PBL experience of the tutors ranged from three to twelve semesters.

Data for the 762 tutors were based on the input of 16,047 different students. The student data set varied with each semester, as only those students who were being taught by one of the 762 tutor participants were included. There were 6,164 students (38.4%) who provided input to all three semesters. However, only 13 students (0.081%) happened to have evaluated the same tutor in all three semesters. The level of prior knowledge and experience held by each student in each course was likely to vary. However, every student undertook the same PBL learning process each day and was therefore familiar with teaching through facilitation rather than direct instruction.

Measures

Tutor evaluation survey

The instrument used by the students to evaluate their tutors was based on the questionnaire developed by Schmidt and Moust (1995), with only minor modifications to suit the polytechnic in the study. For example, the word ‘tutor’ was replaced by ‘facilitator’ and the word ‘course’ was replaced by ‘module’, as these are the terms used in the polytechnic. The questionnaire (called the tutor evaluation questionnaire) contains ten items that measure the three separate constructs: (1) social congruence–four items, (2) expertise–two items, and (3) cognitive congruence–four items. The items included, for example: “The facilitator showed interest in our personal lives”, “The facilitator has a lot of content knowledge about this module” and “The facilitator asked questions we could understand”. Students were asked to respond to the items on a five-point Likert scale, from 1 = strongly disagree to 5 = strongly agree. The construct validity of the modified instrument was tested using confirmatory factor analysis, and the instrument was found to be both valid and reliable. The coefficient H was .74 for social congruence, .70 for expertise and .77 for cognitive congruence, each meeting the cut-off value of .70.

Procedure

The tutor evaluation questionnaire was administered to students on three occasions: the first semester, then the second semester of the same academic year, and again during the first semester of the following academic year. Each time, the survey was conducted during week ten of the 15-week semester. It was administered online and was mandatory. Students could complete the survey at any time during the survey period; however, those who did not submit it by the submission date were denied access to the polytechnic’s intranet until they complied. Only students who were on extended medical leave or other endorsed leave were exempt. Hence the response rate by students was 93%.

Analysis

As a first step in the analysis, students’ ratings were averaged per tutor, so data were analysed at the tutor level. Generalizability studies were conducted to estimate the reliability of the students’ judgments of social congruence, expertise and cognitive congruence. In addition to determining reliability, such an analysis indicates whether the tutor behaviors are generalizable across the three semesters. A generalizability study, which is based on analysis of variance, can recognise multiple sources of error in the data rather than a single source of error: for example, differences among the tutors and differences among the measurement occasions (the semesters). Tutors were the object of measurement, with the measurement occasions forming the universe of generalization. The study included variance-component estimation of three sources: (1) differences between tutors, (2) differences between measurement occasions (semesters), and (3) tutor-occasion interaction together with unidentified sources of error variance. The levels of generalizability are reported through a dependability coefficient. According to Brennan and Kane (1977), a dependability coefficient is similar to a reliability coefficient but involves absolute error variance. An acceptable coefficient is considered to be 0.80 or higher.
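The estimation logic for this fully crossed tutor-by-occasion design can be sketched as follows. This is an illustrative reconstruction of the standard generalizability computation, not the authors' actual analysis code; the function and variable names are ours:

```python
# Sketch of variance-component estimation for a fully crossed
# tutor x occasion (p x o) generalizability study. The mean squares
# would come from a two-way random-effects ANOVA of the tutor-level
# mean ratings; the study's actual estimates appear in Table 1.

def variance_components(ms_tutor, ms_occasion, ms_residual,
                        n_tutors, n_occasions):
    """Recover the three variance components from ANOVA mean squares."""
    var_tutor = (ms_tutor - ms_residual) / n_occasions      # between tutors
    var_occasion = (ms_occasion - ms_residual) / n_tutors   # between semesters
    var_residual = ms_residual  # tutor-occasion interaction + error
    return var_tutor, var_occasion, var_residual

def dependability(var_tutor, var_occasion, var_residual, n_occasions):
    """Brennan-Kane dependability coefficient, which treats occasion
    variance as part of the (absolute) error variance."""
    abs_error = (var_occasion + var_residual) / n_occasions
    return var_tutor / (var_tutor + abs_error)
```

For example, feeding in the expertise mean squares reported in Table 1 (0.1494, 0.3408 and 0.0371, with 762 tutors and 3 occasions) recovers the tabled variance components and a three-occasion dependability coefficient of roughly .75.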

Results

The results of the generalizability studies are presented in Table 1. This table provides a summary of the sources of variability and the estimated variance components. The variance associated with tutors is 60.5% for social congruence, 50.0% for expertise and 56.1% for cognitive congruence. These percentages indicate that the instrument has identified large differences between tutors across the three semesters for all three behaviors, particularly social congruence. In contrast, the variance related to measurement occasions (semesters) is much lower: 0.1% for social congruence, 0.5% for expertise, and 0.1% for cognitive congruence, suggesting that the semester, and by extension the course taught in that semester, has virtually no impact on tutor behaviors. The variance associated with the tutor-occasion interaction is higher, at 39.4% for social congruence, 49.5% for expertise, and 43.8% for cognitive congruence. This interaction effect shows that there is some change in tutor behaviors from semester to semester, most noticeably for expertise. In summary, the results reported in Table 1 suggest that differences in tutor behaviors are largely associated with differences between tutors rather than differences between the courses delivered in the three semesters.

Table 1.

Sources of variability and estimated variance components for three tutor behaviors over three semesters

Source of variability Sum of squares Degrees of freedom Mean square Estimated variance % of total variance
For expertise
 Difference between tutors 113.70 761 0.1494 0.03745 50.0
 Difference between semesters 0.68 2 0.3408 0.00040 0.5
 Tutor-semester interaction and unidentified sources of variance in error 56.43 1522 0.0371 0.03708 49.5
For social congruence
 Difference between tutors 187.75 761 0.2467 0.06755 60.5
 Difference between semesters 0.22 2 0.1119 0.00009 0.1
 Tutor-semester interaction and unidentified sources of variance in error 67.07 1522 0.0441 0.04406 39.4
For cognitive congruence
 Difference between tutors 104.83 761 0.1377 0.03644 56.1
 Difference between semesters 0.16 2 0.0788 0.00007 0.1
 Tutor-semester interaction and unidentified sources of variance in error 43.27 1522 0.0284 0.02843 43.8

Table 2 uses the estimated variance components to determine whether data collected over three semesters are sufficient to enable generalizations to be made about the three behaviors. The dependability coefficients after three semesters are .821 for social congruence, .750 for expertise and .793 for cognitive congruence. Social congruence therefore reaches the acceptable level of .80 after three semesters, while expertise and cognitive congruence fall just short of it. After four semesters all three behaviors reach or exceed the threshold.

Table 2.

Dependability coefficients and standard errors of measurement for three tutor behaviors

No. of occasions Expertise Social congruence Cognitive congruence
Dependability coefficient Standard error of measurement Dependability coefficient Standard error of measurement Dependability coefficient Standard error of measurement
1 0.500 0.1936 0.605 0.2101 0.561 0.1688
2 0.666 0.1369 0.754 0.1486 0.719 0.1194
3 0.750 0.1118 0.821 0.1213 0.793 0.0975
4 0.800 0.0968 0.860 0.1051 0.836 0.0844
5 0.833 0.0866 0.884 0.0940 0.865 0.0755
6 0.857 0.0790 0.902 0.0858 0.885 0.0689
7 0.875 0.0732 0.915 0.0794 0.900 0.0638
8 0.889 0.0684 0.924 0.0743 0.911 0.0597
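Because averaging over n occasions divides the absolute error variance by n, each column of Table 2 follows directly from the corresponding Table 1 variance components. A short illustrative check (our sketch, not the authors' code) reproduces the expertise column:

```python
import math

# Table 1 variance components for expertise
var_tutor = 0.03745      # differences between tutors
var_occasion = 0.00040   # differences between semesters
var_residual = 0.03708   # tutor-semester interaction + error

for n in range(1, 9):
    abs_error = (var_occasion + var_residual) / n  # absolute error variance
    phi = var_tutor / (var_tutor + abs_error)      # dependability coefficient
    sem = math.sqrt(abs_error)                     # standard error of measurement
    print(f"{n} occasions: phi = {phi:.3f}, SEM = {sem:.4f}")
```

Up to rounding, this reproduces the expertise column of Table 2 (phi = .500 at one occasion, .750 at three, .800 at four), confirming that expertise needs four occasions to reach the .80 threshold.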

Discussion

The objective of the study was to investigate whether specific rather than overall tutor qualities are stable across three consecutive semesters. The three behaviors investigated were expertise, social congruence and cognitive congruence. Data for 762 tutors were gathered from students on three separate occasions and analysed in a generalizability study.

Overall, the results suggest that a survey conducted on three successive occasions provides a measure reliable enough to make generalizations about the stability of tutor behaviors. Furthermore, the findings show that any differences in the three specific tutor behaviors are largely accounted for by differences between tutors rather than by measurement occasions or even tutor-occasion interactions. In short, students rate their tutor in similar ways in terms of the tutor’s use of expertise, social congruence and cognitive congruence regardless of the semester. Such a finding is both reassuring and rather disconcerting. It is encouraging to know that tutors’ actions are not erratic and that similar performances can be observed across different courses and in different semesters. However, this result suggests that there is little development of the three behaviors over the three semesters—PBL tutors seem to be obdurate.

The results of this study contradict the findings of Gijselaers (1997), who concluded that tutors’ behaviors were not stable over time. He suggested that the reason could lie in the nature of PBL, which requires tutors to respond in a variety of ways depending upon students’ needs at particular points in their learning. However, his explanation rests on tutors being skilled enough to adapt to these dynamic classrooms. In contrast, the results of this study show that not all tutors are so adept at PBL and that, on the whole, tutors’ behaviors are quite resilient to change.

The findings from this research are more in keeping with Dolmans et al. (1996), in that data gathered over a few courses are sufficient to make generalisations about PBL tutors’ behaviors. Dolmans et al. (1996) reported that if scores across all the items in their questionnaire are aggregated for each tutor, then four measurement occasions yield reliable results; if a single overall judgement of each tutor’s performance is used, then reliability can be established after just two measurement occasions. This study of specific PBL behaviors concurs with such a timeframe. Generalizations about social congruence can be made after three measurement occasions. Generalizations about expertise and cognitive congruence can also be made after three measurement occasions, although the dependability coefficients are just short of the 0.80 target, being 0.750 for expertise and 0.793 for cognitive congruence.

Turning to each of the behaviors, it is clear that although the results for all three are generally similar, differences in expertise are accounted for almost equally by differences between tutors and by tutor-occasion interaction (tutors’ ratings vary slightly when teaching different courses), whereas for social congruence and cognitive congruence the tutor-occasion interaction variance is not as high.

The finding that tutors’ expertise varies across semesters, and hence across courses, is somewhat surprising given that tutors are hired for their subject knowledge and are provided with teaching resources prior to entering the classroom. These practices were designed to ensure that all tutors have the requisite content knowledge to facilitate the curriculum as intended, and hence it was expected that there would be consistency in their level of expertise across the three semesters. Tutors are rarely asked to facilitate courses outside their subject domain, so it is likely that the issue is not so much the amount of subject knowledge a tutor possesses but their skill in determining how and when to apply that knowledge in the classroom. In PBL, a tutor’s use of expertise is linked to the students’ prior knowledge of a problem. When students lack knowledge concerning a subject it is incumbent upon the tutor to intervene; this may require the tutor to share expert knowledge about a topic. Neville (1999) reiterates this point in his review of the role of the tutor in PBL. It could be that the tutors in this current study possess sufficient content knowledge but are not adroit at deciding when it is appropriate to share this knowledge with their students. A teacher development model used by Hativa (2000) could help tutors make competent choices about their use of expertise. Hativa (2000) describes a very intense, personalised programme in which teachers watch videotapes of their classes, write summaries of what they learned about their teaching from the tapes, and then meet in one-on-one discussions with an experienced teacher to talk about strategies that could be employed to improve pedagogical knowledge and skills in the classroom.
A less intense observation and mentoring programme could be adopted for those tutors identified with low levels of expertise as a way of starting a dialogue between new tutors and experienced staff on how to improve the application of expertise in the PBL classroom. Indeed, many institutions already have mentoring programmes; it may just be a matter of targeting the dialogue between those involved and directing conversations towards the identification and resolution of difficulties in using expertise appropriately.

The finding that variance in social congruence is explained by differences between tutors rather than by differences in occasions or in tutor-occasion interaction suggests that some tutors are consistently rated highly on this behavior while others continue to perform poorly. Is social congruence then an innate quality rather than a behavior that can be developed? As a pedagogy, PBL places emphasis on tutors being student-centred in their focus. Relationships in the PBL classroom are important, with the tutor being required to engage students in the problem, generate discussion, empathise with the way learners think and resist any temptation to take over the learning process. Such behavior may in fact require a particular set of beliefs about learners that is at odds with those held by some of the tutors in the study. A study by Chai et al. (2010) into the beliefs held by Singaporean teachers about learners proves illuminating. Using structural equation modelling, they discovered that teachers who believe that successful learners possess an innate ability tended also to believe in traditional teaching approaches, while those holding the view that learning is a process involving effort were more likely to favour constructivist teaching. Changing teachers’ beliefs about learners is considered a difficult process (Richardson 2003); hence it may be unreasonable to expect to see all tutors in the study interact with students in socially congruent ways in just three semesters. Bowman and Hughes (2005) recognise the importance of the emotional relationship between tutors and students in PBL and provide ideas on how this relationship can be improved. They suggest that tutors agree on the roles and boundaries of tutoring, that more experienced tutors meet with new staff to discuss how to handle groups and individual students, and that tutors get involved in social activities with students.
Tutors may also benefit from information about the classes they are taking so that they can consider particular needs that the groups may have beyond curriculum needs. Tutor development programmes might need to look beyond the structure of a PBL course and the dos and don’ts of facilitation. Programmes may benefit from a focus on tutors’ beliefs about learners, the psychological and emotional aspects of learning and communicating with adolescent learners.

Differences between tutors, and to a lesser extent tutor-occasion interaction, account for the variance in levels of cognitive congruence, with semesters playing almost no role. It seems that while tutors may make some modifications to their behavior towards students in different courses and in different semesters, by and large their skills in asking comprehensible questions and being aware of students’ difficulties when grappling with content are very much consistent. Communities of Practice (COPs) may provide a vehicle for tutors to transition towards PBL and to improve the kinds of complex facilitation skills described by cognitive congruence. Spronken-Smith and Harland (2009) studied a group of Geography lecturers as they set up a COP to help them move from traditional teaching methods to PBL. They found that while the COP helped tutors to understand PBL, its aims and its practices, working in groups was problematic for some and marred their development. Hence, if COPs are to be used as part of a tutor development programme, the dynamics and rules of the group need to be considered prior to its establishment.

The study shows that development as a PBL tutor does not come quickly for some. An implication is that the initial 5-day training programme and 90 hours of ongoing tutor development may not be adequate to ensure a shift in the three core PBL behaviors for all staff. It could be that tutors whose views on teaching and learning make it difficult for them to adapt to PBL require programmes that examine their belief systems and challenge their ways of thinking. An example of an alternative to the typical workshop-based tutor development programme is described in a study of 282 pre-service teachers by Askell-Williams et al. (2007). These new teachers were enrolled in an education course that was facilitated using PBL. The results were extremely positive, with the teachers reporting that their experience of PBL led to changes in their mental models about teaching subject content, motivating learning, and connecting theory with practice. Their study, however, relied on self-reports from the pre-service teachers and did not include practicing tutors; hence further evaluations of the effective use of PBL in tutor development would be useful.

This study has a number of limitations. The first is related to the model of PBL adopted at the polytechnic. The tutors in this study are required to facilitate classes that consist of five groups of five students, a model of PBL that differs from many others in which tutors work with a single, larger group and may take on a role resembling that of a group member. It is difficult to ascertain whether the same tutoring skills are required in this setting, and whether tutors engage with students in similar ways, using the same three behaviors, as they might in a more typical PBL setting. A second limitation is also connected to the model of PBL that forms the context of the study. Here, each tutor is evaluated by a large number of students at each measurement occasion. It might be that the number of measurement points needed for an acceptable level of reliability is fewer than when smaller groups of students evaluate the teacher on each occasion. Having the same students rate the teacher over time may also decrease the number of occasions needed to reach a reliable estimate of a tutor’s performance. Thirdly, the data analysis did not differentiate tutors by the department to which they belong; hence differences across faculties cannot be identified. In addition, the study did not take into account years of experience with PBL. It could be that tutors who had been facilitating for many semesters prior to the data collection perform differently from those who have facilitated for just the three semesters covered by the study. Further longitudinal studies that take into account tutor characteristics such as experience, age, qualifications and levels of expertise may yield additional insights into the findings of the study.

In summary, despite PBL being an open and dynamic learning environment, tutors tend to behave in consistent ways regardless of the course or the cohort of students they are teaching. As such, the implications of the study are primarily for tutor development. If, as it seems, PBL behaviors act like personal attributes that are consistent and resilient to change, then tutor development programmes need to shift their focus. Emphasis may need to be placed on examining beliefs about teaching and learning, understanding adolescence, and building pedagogical content knowledge rather than on the actions required to carry out PBL tutoring sessions. Those responsible for tutor development programmes may also want to consider conducting them using PBL.

Conclusion

In conclusion, the study provides evidence that the tutor evaluation questionnaire, if administered on at least three consecutive occasions, will yield results that can be used to make generalizations about tutors’ performance. The study also indicates that a tutor’s performance in PBL is consistent: the differences in behaviors are overwhelmingly a result of differences between tutors rather than of differences between measurement occasions, or even of tutor–occasion interactions. Speculation about the possession of innate abilities to facilitate PBL is raised, as well as the possibility that tutor development programmes may need to be modified to address the underlying beliefs of tutors for whom the facilitation of PBL is not instinctive. Finally, further studies that examine the antecedents of expertise, social congruence and cognitive congruence would be useful to ascertain whether factors exist that may result in some staff being “natural” tutors, or whether such an idea is pure conjecture.

Open Access

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

References

  1. Albanese MA. Treading tactfully on tutor turf: does PBL tutor content expertise make a difference? Medical Education. 2004;38:916–920. doi: 10.1111/j.1365-2929.2004.01965.x. [DOI] [PubMed] [Google Scholar]
  2. Askell-Williams H, Murray-Harvey R, Lawson MJ. Teacher education students’ reflections on how problem-based learning has changed their mental models about teaching and learning. The Teacher Educator. 2007;42(4):237–263. doi: 10.1080/08878730709555406. [DOI] [Google Scholar]
  3. Azer SA. Interactions between students and tutors in problem-based learning: The significance of deep learning. The Kaohsiung Journal of Medical Sciences. 2009;25(5):240–249. doi: 10.1016/S1607-551X(09)70068-3. [DOI] [PubMed] [Google Scholar]
  4. Bowman D, Hughes P. Emotional responses of tutors and students in problem-based learning: Lessons for staff development. Medical Education. 2005;39:141–153. doi: 10.1111/j.1365-2929.2004.02064.x. [DOI] [PubMed] [Google Scholar]
  5. Brennan RL, Kane MT. An index of dependability for mastery tests. Journal of Educational Measurement. 1977;14:277–289. doi: 10.1111/j.1745-3984.1977.tb00045.x. [DOI] [Google Scholar]
  6. Chai CS, Teo T, Lee CB. Modelling the relationships among beliefs about learning, knowledge, and teaching of pre-service teachers in Singapore. The Asia-Pacific Education Researcher. 2010;19(1):25–42. doi: 10.3860/taper.v19i1.1507. [DOI] [Google Scholar]
  7. Chung JCC, Chow SMK. Promoting student learning through a student-centred problem-based learning subject curriculum. Innovations in Education and Teaching International. 2004;41(2):157–167. doi: 10.1080/1470329042000208684. [DOI] [Google Scholar]
  8. Das M, Mpofu DJS, Hasan MY, Stewart TS. Student perceptions of tutor skills in problem-based learning tutorials. Medical Education. 2002;36(3):272–278. doi: 10.1046/j.1365-2923.2002.01148.x. [DOI] [PubMed] [Google Scholar]
  9. De Grave WS, Dolmans DHJM, van der Vleuten CPM. Profiles of effective tutors in problem-based learning: scaffolding student learning. Medical Education. 1999;33:901–906. doi: 10.1046/j.1365-2923.1999.00492.x. [DOI] [PubMed] [Google Scholar]
  10. Dolmans DHJM, Wolfhagen IHAP. Complex interactions between tutor performance, tutorial group productivity and the effectiveness of PBL units as perceived by students. Advances in Health Sciences Education. 2005;10:253–261. doi: 10.1007/s10459-005-0665-5. [DOI] [PubMed] [Google Scholar]
  11. Dolmans DHJM, Wolfhagen IHAP, Van Der Vleuten CPM. Long-term stability of tutor performance. Academic Medicine. 1996;71(12):1344–1347. doi: 10.1097/00001888-199612000-00017. [DOI] [PubMed] [Google Scholar]
  12. Finch PM. The effect of problem-based learning on the academic performance of students studying podiatric medicine in Ontario. Medical Education. 1999;33:411–417. doi: 10.1046/j.1365-2923.1999.00347.x. [DOI] [PubMed] [Google Scholar]
  13. Gijselaers WH. Effects of contextual factors on tutor behavior. Teaching and Learning in Medicine. 1997;9(2):116–124. doi: 10.1080/10401339709539825. [DOI] [Google Scholar]
  14. Groves, M., Rego, P., & O’Rourke, P. (2005). Tutoring in problem-based learning medical curricula: the influence of tutor background and style on effectiveness. BMC Medical Education, 5(20). Accessed online on 10 July, 2010 at http://www.biomedcentral.com/1472-6920/5/20. [DOI] [PMC free article] [PubMed]
  15. Hativa N. Becoming a better teacher: A case of changing pedagogical knowledge and beliefs of law professors. Instructional Science. 2000;28:491–523. doi: 10.1023/A:1026521725494. [DOI] [Google Scholar]
  16. Hmelo-Silver CE. Problem-based learning: What and how do students learn? Educational Psychology Review. 2004;16(3):235–266. doi: 10.1023/B:EDPR.0000034022.16470.f3. [DOI] [Google Scholar]
  17. Hmelo-Silver CE, Barrows HS. Goals and strategies of a problem-based learning facilitator. The Interdisciplinary Journal of Problem-based Learning. 2006;1(1):21–39. [Google Scholar]
  18. Lockspeiser TM, O’Sullivan P, Teherani A, Muller J. Understanding the experience of being taught by peers: The value of social and cognitive congruence. Advances in Health Sciences Education. 2008;13:361–372. doi: 10.1007/s10459-006-9049-8. [DOI] [PubMed] [Google Scholar]
  19. Neville AJ. The problem-based learning tutor: Teacher? Facilitator? Evaluator? Medical Teacher. 1999;21(4):393–401. doi: 10.1080/01421599979338. [DOI] [Google Scholar]
  20. Richardson V. Preservice teachers’ beliefs. In: Raths J, McAninch AC, editors. Teacher beliefs and classroom performance: The impact of teacher education. Charlotte: IAP; 2003. pp. 1–22. [Google Scholar]
  21. Rotgans, J. I., & Schmidt, H. G. (2010). The role of teachers in facilitating situational interest in an active-learning classroom. Teaching and Teacher Education. doi:10.1016/j.tate.2010.06.025 (in press).
  22. Schmidt HG, Moust JHC. What makes a tutor effective? A structural equations modelling approach to learning in problem-based learning. Academic Medicine. 1995;70(8):708–714. doi: 10.1097/00001888-199508000-00015. [DOI] [PubMed] [Google Scholar]
  23. Spronken-Smith R, Harland T. Learning to teach with problem-based learning. Active Learning in Higher Education. 2009;10(2):138–153. doi: 10.1177/1469787409104787. [DOI] [Google Scholar]

Articles from Advances in Health Sciences Education are provided here courtesy of Springer
