BMJ Simulation & Technology Enhanced Learning. 2020 Apr 20;6(3):140–147. doi: 10.1136/bmjstel-2018-000390

Interprofessional simulation training’s impact on process and outcome team efficacy beliefs over time

Matthew James Kerry 1, Douglas S Ander 2, Beth P Davis 2
PMCID: PMC8936757  PMID: 35518379

Abstract

Introduction

Recent findings suggest that process-based and outcome-based efficacy beliefs are factorially distinct, with differential effects on team performance. This study extends this work by examining team process efficacy (TPE) and team outcome efficacy (TOE) of interprofessional (IP) care teams over time.

Methods

A within-team, repeated measures design with survey methodology was implemented in a sample of prelicensure IP care teams performing over three consecutive clinical simulation scenarios. TPE and TOE were assessed before and after each performance episode.

Results

Initial baseline results replicated the discriminant validity of TPE and TOE as separate factors. Further findings from multilevel modelling indicated significant time effects for TPE convergence, but not TOE convergence. However, a cross-level interaction effect of ‘TOE(Start-Mean)×Time’ strengthened TOE convergence over time. A final follow-up analysis of team agreement’s substantive impact was conducted using independent faculty-observer ratings of teams’ final simulation.

Conclusion

Independent samples t-tests of high/low-agreement teams supported agreement’s substantive impact: high-agreement teams were rated as significantly better performers than low-agreement teams during the final simulation training. We discuss the substantive merit of within-team agreement as a methodological indicator of team functionality within IP and healthcare-simulation training at large.

Keywords: team efficacy, interprofessional, simulation, team dynamics, temporal

Introduction

Rooted in Bandura’s social-cognitive theory of efficacy beliefs, team efficacy (TE) continues to figure prominently in the burgeoning field of interprofessional (IP) care team research,1 2 and it has recently been defined as ‘a shared belief in a group’s collective capability to organize and execute courses of action required to produce given levels of goal attainment’3 (p 90). Although TE has often been assessed as a mean or average of team members’ perceptions,2 meta-analytic evidence has found agreement to have significant positive effects on team performance.4 Scarce research attention has been paid, however, to identifying predictors of TE agreement.5 The current study fills this knowledge gap by empirically testing two theorised predictors of team agreement. Specifically, we examine if team agreement is predicted by: (1) time and (2) baseline efficacy at team formation.

Following Bandura’s initial conceptualisation of TE as a task- and situation-specific belief, we distinguish efficacy from potency as a broader, more generalised belief.6 The distinction between narrower, specific TE and broader, general potency has been supported by consistent meta-analytic findings.4 Within TE, researchers have further distinguished between process and outcome forms.7 The conceptual rationale for TE’s further distinction follows from analogous process-outcome theorisations at the individual level.8 At the team level, efficacy’s process-outcome distinction also follows partly from team paradigms delineated by teamwork (ie, process) and taskwork (ie, outcome).9

According to Collins and Parker, team process efficacy (TPE) is defined as a team’s belief that they can successfully conduct communicative, collaborative and cooperative functions, including the effective resolution of conflict among team members.7 In contrast (and consistent with conventional research), team outcome efficacy (TOE) is defined as a team’s belief that it can achieve a specific level of performance.7 Collins and Parker found support for TPE and TOE’s factorial distinction, as well as for differential predictive validity, such that TPE predicted team-citizenship behaviours, whereas TOE predicted team performance.7 A supplementary goal of the current study is to extend empirical evidence for TPE-TOE factorial distinction to the healthcare domain.

Efficacy convergence

In addition to TE’s process-outcome distinction, a more general question pertains to within-team changes over time. Although empirical evidence has found team agreement to positively impact performance,10 researchers have long noted theoretical scarcity for examining agreement over time.11 Empirically, the typical composition approach to assessing team agreement is cross-sectional in nature. As Chan notes, ‘composition models are concerned with static core attributes of focal constructs’12 (p 241). Here, we consolidate the consensus terminology and Chan’s ‘static’ characterisation of composition models with the term ‘mono-consensus’. Because the mono-consensus approach concerns static attributes of healthcare teams, the temporal dynamics are largely neglected.

Here, we suggest that the mono-consensus approach is problematic for three primary reasons. First, theoretically, the mono-consensus approach assumes that TE is strictly a homogenous construct: team member beliefs must exceed an arbitrary statistical threshold in order to be construct valid for aggregation at the team level. Although statistical homogeneity is justified by calibrating levels of theory, measurement and analysis, the underlying assumption is inconsistent with Bandura’s theorisation of TE; for example, he has argued that ‘collective efficacy is not a monolithic group attribute’ (p 479).1 Before empirically studying TE’s convergence over time, heterogeneity in beliefs must be admitted as theoretically meaningful. Second, empirically, findings have supported the substantive predictive power of agreement for team performance.13 In contrast, the mono-consensus approach’s aggregation discards any additional within-team variability for analytic purposes. In fact, the minimisation of ‘aggregation bias’ depends directly on the assumption of time invariance. Third, methodologically, it is notable that measurement disputes among 1990s’ team scholars primarily contested two methods: (1) individual aggregation (ie, mono-consensus) and (2) team discussion. Relevant to the current study, advocates of the team discussion method argued that ‘the origin of the construct must reflect the processes of interaction that occur within the team’14 (p 648).

Summary

In summary, the mono-consensus approach precludes investigation of differential changes in variability between process and outcome forms of TE over time. Conceptually, the longitudinal study of team-means is analogous to the cross-sectional study of team-variances in terms of understanding within-unit phenomena. Both are veridical as to between-teams effects, but neither seems, ceteris paribus, more informative than the other as to within-team dynamics.15 Extending this logic, the longitudinal study of teams in any observational design requires a variance component for examining within-team effects. Indeed, the amount of dispersion discarded via the mono-consensus approach reappears as within-team attenuation bias when estimating any team-level construct over time.

As noted by DeRue et al, an important step in temporal teams research is the meaningful use of taxonomy for estimating dynamics of within-team efficacy; accordingly, we draw on DeRue et al’s team dispersion framework to develop our hypotheses below.16

Temporal convergence of TE

Several empirical findings have supported TE’s convergence over time.10 17 Using both individual aggregation and team discussion measures of TOE (n=31), Jung and Sosik reported TOE convergence over a 6-week, pre-post assessment interval.17 In a notable extension to previous studies of temporal convergence, Arthur et al examined agreement’s moderation of the TOE-performance effect and found that agreement positively moderated this relationship, such that the positive effect of TOE on performance was strengthened for high-agreement teams.10 Taken together, previous empirical findings suggest that efficacy beliefs tend to converge over time in teams.16 However, studies investigating these temporal changes have used classical, outcome-based measures (TOE), and it is unclear whether both TPE and TOE show similar patterns of convergence over time.

Current study and hypotheses

This study extends prior findings on emergent consensus in TE over time by investigating team-level process efficacy (TPE) and team-level outcome efficacy (TOE) beliefs in a repeated measures design.

Specifically, the goals of the study are threefold: (1) to investigate the discriminant validities of TPE and TOE as they relate to within-team convergence, (2) to evaluate cross-level, contextual influences of mean-TPE and mean-TOE at the start of team performance on within-team convergence over time, and (3) to provide general evidence on the feasibility of integrating elements of both dispersion and process models to investigate TE variability over time.

Consistent with the original empirical study, we begin by evaluating the discriminant validity of process and outcome facets of TE.7 To test our hypothesis, we employ two different analyses.

Consistent with the moderate-positive effect size between team outcome efficacy (TOE) and team process efficacy (TPE) reported by Collins and Parker (r=0.51), we hypothesise that there will be considerable overlap between process-oriented and outcome-oriented TE scales.7 In addition, we conduct a z-test to evaluate whether our effect size is statistically comparable to that obtained by Collins and Parker, thereby verifying the linear relation between two theoretically separable constructs.7 This ancillary analysis is necessary to address the confirmatory question within a null hypothesis testing framework. Thus, we first hypothesise:

Hypothesis 1a. TPE and TOE will demonstrate a significant, positive relationship.

While we expect that TPE and TOE will be substantially related, we also expect that they will be empirically distinguishable. Our study context of IP education simulation training, with a sample comprising mixed medical-nursing teams, is relevant for observing the hypothesised distinction between TPE and TOE constructs.18 19 Consistent with Collins and Parker’s finding that TPE and TOE represent distinct dimensions of TE, we expect that these two dimensions of TE will emerge within a comparatively shorter time frame.7 We hypothesise:

Hypothesis 1b. TPE and TOE will emerge as distinct, correlated factors of TE.

As reviewed above, the heterogeneity in measures of efficacy used previously and the methodology used for testing efficacy convergence do not permit evaluation of potential differences in convergence between process and outcome types of TE. Based on DeRue et al’s taxonomy,16 in teams where membership remains constant and interdependence is high, both TPE and TOE forms of TE should show convergence over time as team and task experience increases among members.5 Thus, we hypothesise:

Hypothesis 2a. Within-team agreement on TPE will increase over time.

Hypothesis 2b. Within-team agreement on TOE will increase over time.

Lastly, we also examine whether team member perceptions at the time of formation influence TE convergence. Group development scholars have long posited that the conditions of a team’s initial formation may have downstream effects on team processes.20 In the efficacy domain, recent theorising suggests that initial mean-TE serves as a context for subsequent judgements and action, by providing information input to team members.21 Higher mean levels of TE are posited to serve as a stronger context that affords members more accurate perceptions of team capabilities. Empirically, for example, Tasa et al found that mean-level TE beliefs at a team’s formation increased subsequent individual-level teamwork behaviour.22 Similar team formation effects have also recently been reported vis-à-vis team satisfaction.23 To further examine the potential interactive effects with time on convergence, we hypothesise:

Hypothesis 3a. Mean-TPE (magnitude) at the start of performance (Time=0) will positively interact with time in predicting within-team agreement on TPE.

Hypothesis 3b. Mean-TOE (magnitude) at the start of performance (Time=0) will positively interact with time in predicting within-team agreement on TOE.

Taken together, our hypotheses are illustratively summarised in the structural model in figure 1.

Figure 1. Conceptual hypotheses illustrated as structural model. TOE, team outcome efficacy; TPE, team process efficacy.

Methods

Research setting

Data for this study were collected during a collaborative training session conducted jointly by the Schools of Medicine and Nursing at a medium-sized south-eastern university. The goal of the training session was to help students from different disciplines work more effectively as part of an interdisciplinary healthcare team. Fourth-year medical students and first-year nursing students in the Master’s programme participated in the study. Attendance at the training session was mandatory, and student participation in the research study was voluntary.

The training session comprised two components: (1) a training lecture on teamwork competencies, and (2) a series of three case simulations. Medical and nursing students were randomly assigned within profession to attend either a morning session or afternoon session. The content and structure of the 4-hour training was identical across sessions. Students in each session were randomly assigned into 12 teams. Twenty-four total teams participated in the training simulations across the entire training programme.

Participants

A total of 183 prelicensure health students participated in the study. One hundred and thirty participants were enrolled in the medical school, and 53 participants were enrolled in the nursing school. Participants provided informed consent for collection and use of their data prior to beginning the IP simulation training.

Measures

To distinguish between preperformance and postperformance assessments, the stems for TPE and TOE measures were systematically changed. Specifically, preperformance stems were worded, ‘For the case simulation you are about to do,’ and postperformance stems were worded, ‘For the next case simulation.’

Response scales were identical across measures and comprised a 10-point Likert-type scale, ranging from 1 (not at all confident) to 10 (very confident). All data were collected at the individual level with a team-referent shift (direct judgement of the team as target).

Team process efficacy

TPE was assessed with a four-item scale adapted from Collins and Parker.7 The item stem ‘For the case simulation you are about to do, how confident is your team in its ability to…’ was followed by four items: (1) ‘Adapt rapidly to changing situations or demands’, (2) ‘Communicate effectively with one another’, (3) ‘Prevent or rapidly resolve team member conflicts’ and (4) ‘Generate a realistic and accepted team goal’. Items were selected for content relevance, based on the student sample and task content, as well as factor loadings in Collins and Parker.7

Team outcome efficacy

TOE was assessed with a four-item scale adapted from Collins and Parker.7

The item stem ‘For the case simulation you are about to do, how confident is your team that it will perform in the…’ was followed by four items: (1) ‘top-50%’, (2) ‘top-25%’, (3) ‘top-10%’ and (4) ‘top-5%’.

Faculty team performance rating

Faculty team performance ratings were recorded with three items adopted from Frankel et al’s communication and teamwork skills assessment for measuring care teams’ performance.24 The three items were averaged to form a global composite rating of team performance, which exhibited high internal reliability as estimated by Cronbach’s alpha (α=0.86).

Training delivery and participant flow

All participants attended a 1-hour introductory lecture on teamwork competencies. Following the lecture, participants were randomly assigned to teams, ranging in size from five to nine members each. Each team then engaged in three consecutive high-fidelity case simulations in fully counterbalanced order. In each simulation, paid actors played the role of patients, interacting with the teams using a semistructured script. Student teams communicated with the ‘patients’ in order to diagnose and treat them properly. One faculty member was assigned to facilitate each team. For each team, the facilitator provided the introduction and instructions for each case simulation. Facilitators also provided structured feedback on teamwork competencies at the end of each case simulation.

Participants completed paper and pencil assessments of TPE and TOE, both before and after each case simulation. Facilitators administered preassessments after introducing the case and providing team members with instructions. Postassessments of TPE and TOE were administered following facilitator feedback. Upon completion of the third simulation, participants were debriefed and thanked. In total, six measures of TPE and TOE were collected from each team across the three simulation training scenarios.

Data treatment and analyses

Missing data analysis and Hypothesis 1 testing were conducted in SPSS V.25. Pattern analysis indicated less than 2% missing data for any given variable. Given the small amount of missing data, hot deck imputation was used to estimate the missing values.

Inherent to examining any intraunit phenomenon over time is unit-level observation. In the current sample, we analysed team-aggregated individual responses as teams remained the same over the three performance cycles. Hypothesis 1a was tested by inspection of observed correlations, and Hypothesis 1b was tested by conducting exploratory factor analyses (EFA) with inspection of item-factor loading patterns (both in SPSS).

Hypotheses 2 and 3 testing was conducted in R statistical software V.3.5.1. Specifically, Hypotheses 2 and 3 were both tested by conducting multilevel modelling with inspection of pertinent parameter estimates using the ‘lme4’ multilevel-linear modelling package.25 Aggregated within-team agreement (rwg) was entered as the dependent variable in the following multilevel linear-mixed model equation: rwg ~ 1 + Time + (1|Team). In this equation, ‘Time’ is a fixed-effects estimator of teams’ agreement (rwg). The same equation was used to test Hypothesis 3, except that an additional interaction predictor was added, specifically: rwg ~ 1 + Time × StartMean + (1|Team). This equation included three fixed effects estimated for the interaction effects on within-team agreement over time: (1) ‘Time’, (2) ‘StartMean’ and (3) ‘Time × StartMean’.
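For concreteness, the following minimal R sketch illustrates how such random-intercept models can be fitted with the lme4 package; the data frame team_rwg and its column names (Team, Time, StartMean, rwg) are hypothetical placeholders rather than the study’s actual data files.

```r
# Minimal sketch (hypothetical data frame 'team_rwg' with columns
# Team, Time, StartMean and rwg); not the study's actual code or data.
library(lme4)

# Hypothesis 2: fixed effect of time on within-team agreement,
# with a random intercept for each team.
m_time <- lmer(rwg ~ 1 + Time + (1 | Team), data = team_rwg)

# Hypothesis 3: add the start-mean main effect and its interaction
# with time (three fixed effects in total).
m_interaction <- lmer(rwg ~ 1 + Time * StartMean + (1 | Team),
                      data = team_rwg)

summary(m_time)
summary(m_interaction)
```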

Results

Data aggregation

To evaluate the rationale of aggregating individual responses to the team level, we computed James et al’s rwg(j) statistic and intraclass correlation coefficients at each assessment (see table 1).11 Mean-TPE agreement values ranged from 0.73 to 0.83. Mean-TOE agreement values ranged from 0.51 to 0.57.

Table 1.

Internal reliability, intraclass correlation and agreement estimates over time

α ICC1 ICC2 rwg(j) rwg(j)
TOE
 Time 1 0.93 0.09 0.42 0.51 0.56
 Time 2 0.95 0.21 0.67 0.52 0.52
 Time 3 0.95 0.17 0.61 0.54 0.57
 Time 4 0.95 0.18 0.63 0.54 0.57
 Time 5 0.95 0.19 0.63 0.56 0.61
 Time 6 0.95 0.30 0.77 0.57 0.64
TPE
 Time 1 0.97 0.21 0.66 0.73 0.74
 Time 2 0.98 0.25 0.72 0.75 0.76
 Time 3 0.98 0.16 0.57 0.75 0.78
 Time 4 0.97 0.18 0.62 0.76 0.81
 Time 5 0.98 0.16 0.59 0.79 0.83
 Time 6 0.98 0.20 0.66 0.83 0.84

ICC, intraclass correlation coefficient; TOE, team outcome efficacy; TPE, team process efficacy.

Consistency was calculated using ICC(1) as a measure of homogeneity of scores within teams, while ICC(2) was calculated to estimate reliable discrimination between team-means. ICC(1) values for TPE ranged from 0.16 to 0.25, while ICC(2) values ranged from 0.57 to 0.72. ICC(1) values for TOE ranged from 0.09 to 0.30, while ICC(2) values ranged from 0.42 to 0.77. These estimates suggest aggregation of individual-level TE ratings to the team level is appropriate. Within-team agreement indices for both measures were retained as indicators of team-level efficacy convergence, in addition to TPE and TOE mean scores.
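To illustrate how the agreement index is obtained, the sketch below implements James et al’s rwg(j) formula directly for a single team on a multi-item scale; the ratings matrix and the uniform-null error variance for a 10-point response format are stated assumptions, not the authors’ code.

```r
# Minimal sketch of James et al's rwg(j) for one team on a J-item scale
# answered on a 1-10 response format (uniform null distribution).
# 'ratings' is a hypothetical members x items matrix.
rwg_j <- function(ratings, A = 10) {
  J         <- ncol(ratings)
  s2_bar    <- mean(apply(ratings, 2, var))  # mean observed item variance
  sigma2_eu <- (A^2 - 1) / 12                # expected variance under uniform null
  num       <- J * (1 - s2_bar / sigma2_eu)
  num / (num + s2_bar / sigma2_eu)
}

# Example: six members rating four TPE items on the 1-10 scale
set.seed(1)
ratings <- matrix(sample(6:9, 24, replace = TRUE), nrow = 6, ncol = 4)
rwg_j(ratings)
```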

Hypothesis testing

Hypothesis 1a was tested by evaluating the observed correlation between TPE and TOE at the first assessment. Results indicated a significant positive correlation (r=0.69), supporting Hypothesis 1a. To complement the null hypothesis significance test, we computed Fisher’s r-to-z transformations in order to compare the effect size obtained in our study (r=0.69) with that obtained by Collins and Parker (r=0.51).7

The results of this analysis were non-significant (z(1)=0.256, p>0.05), suggesting that (assuming a homogenous population) the observed effect size in the current study is comparable to that obtained by Collins and Parker.7 In addition, the TPE-TOE interscale correlation indicates that the scales shared approximately 48% common variance.
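A minimal sketch of the Fisher r-to-z comparison between two independent correlations is given below; the second sample size is a placeholder, since Collins and Parker’s n is not restated here.

```r
# Minimal sketch: z-test comparing two independent correlations via
# Fisher's r-to-z transformation. n2 is a hypothetical placeholder.
compare_r <- function(r1, n1, r2, n2) {
  z1 <- atanh(r1)                        # Fisher transform of r1
  z2 <- atanh(r2)
  se <- sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
  z  <- (z1 - z2) / se
  c(z = z, p = 2 * pnorm(-abs(z)))       # two-tailed p value
}

compare_r(r1 = 0.69, n1 = 24, r2 = 0.51, n2 = 100)
```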

Results from EFA testing Hypothesis 1b are displayed in table 2.

Table 2.

Exploratory factor analysis of team outcome and team process efficacy items

Component (Time-0) Component (Time-All)
Scale  items 1 2 1 2
TOE
 Top 50% 0.47 0.45
 Top 25% 0.85 0.83
 Top 10% 1.02 1.01
 Top 5% 0.95 1.00
TPE
 Adapt to demands 0.88 0.88
 Communicate 0.95 0.97
 Resolve conflict 0.95 0.96
 Generate goal 0.97 0.99
Eigenvalue 6.12 1.08 5.66 1.44
% Variance 62.36 22.91 77.36 13.54

Items abbreviated for display. Loadings <0.45 suppressed for display.

TOE, team outcome efficacy; TPE, team process efficacy.

As shown, using maximum likelihood extraction and oblimin rotation, two factors with eigenvalues >1 were extracted for the first assessment (n=24). Repeating the analysis across all assessments (n=143) yielded the same two-factor structure. Examination of scree plots for both analyses also supported the extraction of two factors. As depicted in table 2, the two-factor solution accounted for approximately 91% of the variance across all assessments.
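For readers working in R rather than SPSS, an equivalent maximum-likelihood EFA with oblimin rotation can be sketched with the psych package as below; the data frame te_items and its eight item columns are hypothetical, and the oblimin rotation additionally requires the GPArotation package.

```r
# Minimal sketch (not the study's SPSS syntax): maximum-likelihood EFA
# with oblimin rotation on the eight TE items. 'te_items' is a
# hypothetical data frame holding the four TOE and four TPE items.
library(psych)        # fa()
library(GPArotation)  # needed for oblimin rotation

efa_fit <- fa(te_items, nfactors = 2, fm = "ml", rotate = "oblimin")
print(efa_fit$loadings, cutoff = 0.45)  # suppress loadings < 0.45, as in table 2
```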

Hypotheses 2 and 3 addressed potential change in TPE and TOE within-team agreement over time. Because estimating fixed effects of change-over-time requires an accurate intercept estimate, multilevel models were implemented in order to account for teams’ agreement values at the first assessment (start). Specifically, Hypotheses 2 and 3 were tested with random intercept models.

To test the convergence of TPE over time (Hypothesis 2a), we estimated random intercepts for starting TPE rwg(j) (Time=0), and time was entered as a fixed-effects slope estimate of TPE rwg(j). Results indicated a significant effect of time on TPE rwg(j) (γ=0.02, t=2.82, p<0.01), such that within-team agreement on TPE increased over time. Hypothesis 2a was therefore supported.

To test the convergence of TOE over time (Hypothesis 2b), each team’s starting TOE rwg(j) (Time=0) was entered as a random intercept, and time was entered as a fixed-effects slope estimate of TOE rwg(j). Results indicated non-significance (γ=0.01, t=1.25, p>0.05), such that TOE failed to converge over time (see table 3). Hypothesis 2b was therefore not supported.

Table 3.

Multilevel modelling estimates for impact of time on TOE and TPE convergence

TOE model
Fixed effects Coefficient SE t
(Intercept) 0.51 0.04 12.95**
Time 0.01 0.01 1.25 0.11
TPE model
Fixed effects Coefficient SE t
(Intercept) 0.72 0.02 31.83**
Time 0.02 0.01 3.54**

**P<0.01.

TOE, team outcome efficacy; TPE, team process efficacy.

Following the procedure described by Raudenbush and Bryk,26 we calculated the variance accounted for in our significant model by comparing it to a null model with no fixed-effects predictor. Results indicated that 11% of the within-team variance in TPE agreement was explained by the effect of time.
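One way to obtain this proportion is to compare the level-1 residual variance of the fitted model with that of an intercept-only null model; a minimal sketch of this Raudenbush-Bryk-style calculation, reusing the hypothetical objects from the earlier lme4 sketch, is shown below.

```r
# Minimal sketch of a variance-explained (pseudo-R2) calculation:
# proportional reduction in residual variance relative to a null,
# intercept-only model. 'team_rwg' is the hypothetical data frame
# introduced earlier.
library(lme4)

m_null <- lmer(rwg ~ 1 + (1 | Team), data = team_rwg)
m_time <- lmer(rwg ~ 1 + Time + (1 | Team), data = team_rwg)

sigma2    <- function(m) sigma(m)^2  # level-1 residual variance
pseudo_R2 <- (sigma2(m_null) - sigma2(m_time)) / sigma2(m_null)
pseudo_R2
```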

The interactive effect of start-mean and time on TPE convergence (Hypothesis 3a) was tested by entering random intercept effects of start-mean and agreement at Time=0 on TPE rwg(j). In addition, a Start-Mean × Time interaction term was computed and entered as a fixed effect (table 4). Results indicated that the main effect of time remained significant (γ=0.18, t=3.54, p<0.01), while the main effect of start-mean was not (γ=0.00, t=−0.20, p>0.05). The Start-Mean × Time interaction effect was non-significant (γ=0.00, t=0.01, p>0.05), providing no support for Hypothesis 3a.

Table 4.

Multilevel modelling estimates for impact of time on TOE and TPE convergence

TOE model
Fixed effects Coefficient SE t
(Intercept) 0.51 0.04 12.95**
Time 0.01 0.01 1.28
Start-mean 0.01 0.01 −1.31
Start-Mean × Time 0.01 0.01 2.46* 0.02
TPE model
Fixed effects Coefficient SE t
(Intercept) 0.72 0.02 31.85**
Time 0.02 0.01 3.54** 0.02
Start-mean 0.00 0.01 −0.20 0.02
Start-Mean × Time 0.00 0.00 0.01 0.02

*P<0.05; **P<0.01.

TOE, team outcome efficacy; TPE, team process efficacy.

To test Hypothesis 3b, concerning the interactive effect of start-mean and time on TOE rwg(j), start-mean and agreement at Time=0 were entered as random intercept effects. In addition, a Start-Mean × Time interaction term was computed and entered as a fixed effect (see table 4). Results indicated that neither the time (γ=−0.10, t=−1.88, p>0.05) nor the start-mean (γ=−0.01, t=−1.31, p>0.05) main effect was significant. The interaction effect of Start-Mean × Time on TOE convergence reached significance (γ=0.01, t=2.46, p<0.05). Hypothesis 3b was therefore supported. Again, using the procedure described by Raudenbush and Bryk,26 calculations indicated that the cross-level moderation of start-mean with time accounted for approximately 2% of within-team convergence on TOE. A graph of the interaction effect is plotted in figure 2, following the procedures specified by Aiken et al.27

Figure 2. Effect of time on team outcome efficacy (TOE) agreement at 1 SD above/below mean.
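The interaction model and an Aiken-style simple-slopes plot of the kind shown in figure 2 can be sketched in R as follows; the data frame toe_rwg and its columns are hypothetical, and six assessment occasions (coded 0-5) are assumed.

```r
# Minimal sketch: Start-Mean x Time interaction on TOE agreement,
# followed by predicted trajectories at +/-1 SD of start-mean
# (Aiken-style simple slopes). 'toe_rwg' is a hypothetical data frame
# with columns Team, Time (0-5), StartMean and rwg.
library(lme4)

m_toe_int <- lmer(rwg ~ Time * StartMean + (1 | Team), data = toe_rwg)

sd_sm   <- sd(toe_rwg$StartMean)
newdata <- expand.grid(Time      = 0:5,
                       StartMean = mean(toe_rwg$StartMean) + c(-1, 1) * sd_sm)
newdata$pred <- predict(m_toe_int, newdata = newdata, re.form = NA)  # fixed effects only

# Plot predicted agreement for low- vs high-start-mean teams
matplot(0:5, matrix(newdata$pred, ncol = 2), type = "l", lty = 1:2,
        xlab = "Time", ylab = "Predicted TOE agreement (rwg)")
legend("topleft", legend = c("-1 SD start-mean", "+1 SD start-mean"), lty = 1:2)
```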

Exploratory analyses

Given the scarcity of empirical studies on process and dispersion models of TE, we conducted post hoc analyses to test viable alternatives for detected effects, as well as assess whether other effects may be present that were not included in formulating the original hypotheses.

Non-linearity

Motivation theories at the individual level have received empirical support for non-linear effects of self-efficacy,28 as well as of TE.29 While these studies implicate mean-TE effects on performance, the inconsistent findings for time effects on agreement in the present study prompted us to test for non-linear effects of time on TPE and TOE agreement. Specifically, a random intercept model was estimated for both TOE and TPE with agreement indices at Time=0 entered as a random effect, followed by fixed effects of Time and Time². Results indicated no support for a non-linear effect of time on either TPE (γ=0.00, t=0.97, p>0.05) or TOE (γ=0.00, t=−0.05, p>0.05) convergence.
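A quadratic time term can be added to the same random-intercept specification; a minimal sketch under the same hypothetical team_rwg data frame:

```r
# Minimal sketch: testing a non-linear (quadratic) effect of time on
# within-team agreement. I(Time^2) adds the squared term as a fixed effect.
library(lme4)

m_quad <- lmer(rwg ~ Time + I(Time^2) + (1 | Team), data = team_rwg)
summary(m_quad)
```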

Team composition

While DeRue et al’s taxonomy includes team composition as an antecedent of efficacy dispersion in teams,16 their composition factors are limited to members’ prior taskwork and teamwork experience. Work in the relational demography literature provides evidence for differential time effects of surface-level and deep-level diversity on work group cohesion.30 In addition, teamwork theorising has postulated team member self-efficacy (moderated by status) and team process (moderated by cooperation/competition) as antecedents to TE.21 Given the IP context of the current training programme, which was designed to foster effective working relationships between professions,31 we tested for potential professional composition effects on convergence. Specifically, a profession-proportion variable was computed for each team and entered into a random intercept model, with agreement indices at Time=0 entered as random effects and time and professional proportion entered as fixed effects. Results indicated non-significance for the composition effect on either TPE (γ=0.03, t=0.23, p>0.05) or TOE (γ=0.03, t=0.14, p>0.05) convergence. That is, the proportion of medical and nursing students comprising our team sample can be ruled out as a potential confound to our findings on convergence.

Before-after effects

Marks et al have previously noted that interpersonal processes persist over action and transition stages of repeated performance cycles.32 Similarly, Bandura has argued that outcome efficacy will have differential effects over preparatory and performance contexts.6 Thus, it is possible that TOE may not converge without stratifying preperformance and postperformance assessments across episodes.

On the other hand, TPE may be robust to this stratification, based on the generality of these processes over action and transition stages. To test this possibility, a random intercept model was estimated with agreement indices at Time=0 entered as a random intercept and time entered as a fixed effect, blocking on pre-post assessments. For TPE, results remained significant on both the preassessment (γ=0.04, t=2.12, p<0.05) and postassessment (γ=0.06, t=2.40, p<0.05) tests of time on TPE convergence. In addition, results remained non-significant for both the preassessment (γ=0.03, t=0.75, p>0.05) and postassessment (γ=0.03, t=0.66, p>0.05) tests of time on TOE convergence. The unchanged pattern of findings suggests that our results are not confounded or suppressed by whether the assessments came before or after the simulation performance scenarios.

Finally, in order to determine the substantive impact of team agreement for performance, we conducted a median split of high/low-agreement teams in our sample at the final performance episode. An independent samples t-test on faculty ratings of team performance indicated that high-agreement teams performed significantly better than low-agreement teams. The substantive impact of high/low agreement on performance ratings was consistent across both TPE and TOE measures. These findings are displayed in figure 3.

Figure 3. Independent sample t-tests of faculty-rated performance by high/low agreement on team process efficacy (TPE) and team outcome efficacy (TOE). 95% SE bars and t-test values are displayed. n=24 for TPE and TOE t-tests.
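A minimal sketch of this median-split comparison, assuming a hypothetical team-level data frame final_teams with one row per team and columns rwg_final (agreement at the final episode) and faculty_rating:

```r
# Minimal sketch: median split on final-episode agreement, then an
# independent-samples t-test on faculty performance ratings.
# 'final_teams' and its columns are hypothetical.
final_teams$agree_group <- ifelse(final_teams$rwg_final >= median(final_teams$rwg_final),
                                  "high", "low")
t.test(faculty_rating ~ agree_group, data = final_teams)
```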

Discussion

Despite the long-standing use of dispersion models for representing collective psychological phenomena, there have been few studies directed towards mapping the dynamic and multidimensional elements of agreement in team phenomena. Our study integrates three related elements of TE development over time: (1) the dimensionality of TE as comprising process and outcome facets; (2) the temporal relations of process and outcome efficacy, along with agreement as an important index; and (3) the interactive relationship between initial mean-TE and time on agreement.

First, our results show that the two dimensions of TPE and TOE are distinct, replicating findings from Collins and Parker.7 Extending the distinction between these efficacy dimensions from the individual to team level is important for demonstrating isomorphic composition—a necessary but insufficient condition for homology in multilevel research.33 In addition to replication, the process-outcome factorial distinction findings stand in contrast to recent results indicating unidimensionality of direct-reflected TE judgements.34 However, a more process-oriented measure of TE may better elicit the social dimension in reflected efficacy beliefs, and this possibility warrants future investigation.

Second, we move ‘beyond agreement and aggregation’ approaches by testing propositions from DeRue et al’s recent taxonomy of efficacy dispersion in teams.16 We found that, in newly formed IP care teams, TPE converged relatively sooner than TOE. This substantiates early theorising by Gibson and Earley for the developmental sources of TE—‘focusing only on certainty may obscure important issues, such as what information and experiences are drawn on by the team, [and] the process by which a team collectively develops its estimate’21 (p 439).

Third, we found evidence for downstream effects of mean level of outcome efficacy at the onset of training for team-level convergence over time. Teams who began the first simulation with higher mean levels of outcome TE converged over time, whereas teams who began the first simulation with lower mean levels of outcome TE failed to converge. The impact of outcome efficacy means (magnitude) on team-level convergence over time is intriguing, particularly since in this study team members had not yet attempted the task. One explanation for the finding may be that teams with higher levels of outcome efficacy prior to performance shared similar mental models for performance that supported rapid convergence in subsequent efficacy estimates. In contrast, if teams with lower initial levels of outcome efficacy held less similar mental models for performance, one would expect less rapid convergence over time. Research to examine the antecedents of high and low initial levels of team-level outcome efficacy judgements is needed to evaluate this possibility.

It is important to note, however, that we did not find a similar initial Mean × Time effect on convergence in team-level process efficacy judgements. In contrast to outcome efficacy, which focuses on effective mobilisation of member resources for taskwork, process efficacy judgements refer to success in working together, or mobilising member resources for teamwork. In the context of working with new persons, judgements of team-level process efficacy are more difficult to estimate prospectively and are more likely to be determined by teamwork processes that manifest during the simulation than by member resources, such as knowledge; this is consistent with the student composition of our team training sample.35 The differential impact of initial mean-TOE and mean-TPE on subsequent convergence is consistent with the notion of emergence of teamwork processes over time. Our findings suggest that future research investigating emergent processes should employ more narrowly defined measures of team-level efficacy that correspond to intrateam processes.

Theoretical and practical implications

Our findings highlight the different pattern of effects that are obtained when using team-level measures of agreement compared with means (magnitude), and the importance of distinguishing between process and outcome dimensions of TE when investigating team agreement over time. As noted previously, the bulk of research to date on the role of time in team-emergent phenomena has used integrated measures of TE, rather than distinguishing process and outcome-focused efficacy. Findings from integrated efficacy measures indicate cross-level effects of means-over-time on individual behaviour (ie, top-down) and team performance (ie, bottom-up).26 Further, Goncalo et al found differential effects for a general TE measure on team performance across performance phase,36 where TE was negatively related to the early phase of team performance, but positively related to later phases of performance. It is noteworthy that these time-related effects were fully mediated by process conflict. Therefore, one pressing question for future research will be whether dispersion, like means, can have positive or negative effects on team effectiveness, depending on the phase of a performance cycle, for example, early–late. In the current study, we found that agreement among team members in process efficacy tended to increase over time. However, agreement in outcome efficacy was differentially determined by initial mean levels. A natural extension is the future examination of dispersion-mean pairings of both process and outcome efficacy across phases of a performance cycle.5

In a different vein, our dimensions of process and outcome TE may be viewed as rough antipodes of relational and task-related conflict, respectively. The relative time sensitivity of process-efficacy agreement may inform recent findings in the group conflict literature.37 For example, empirical evidence suggests that unchecked intragroup process conflict tends to devolve into relationship conflict.

Practically, measures of process efficacy may be useful to managers seeking to sensitively assess affect-laden disagreement, or broadly monitor subtler forms of workplace discrimination or bias.38 More generally, one particularly interesting prospect for future research would be the mapping of mean and variance measurement indices to the team conflict literature. Put directly, how does measurement dispersion relate to substantive conflict (disagreement)? With measurement-theory levels aligned, this relation can be located with application of the appropriate analytic models.

Limitations

One limitation of the current study is the relatively small sample size. Although our total sample consisted of six measurement occasions nested within 183 participants, for a total of 1098 observations, only 24 teams were observed during the session. Since individuals were nested in teams, and TE agreement represented a team-level construct, only 144 cases (24 teams×6 occasions) were included in the model. In spite of our relatively small sample size, Snijders and Bosker have recommended that multilevel modelling may be appropriate when the number of groups is greater than 10.39 Multilevel researchers have further noted that model estimates assume an asymptotic distribution,40 and it has been shown that fixed-effects parameter estimates are unbiased for sample sizes approaching n=25, which suggests that our study’s estimates are reasonably stable and accurate.

Replication should buttress the findings from the current study. In addition, it should be noted that the adoption of a dispersion model need not require additional cases, but rather a different analytic approach. Our study was also limited by the design of the training session. Two related issues emerge here: (1) the lack of objective performance indices, and (2) the independence of simulation episodes vis-à-vis facilitator feedback. In each medical simulation, teams performed a different training scenario under the guidance of a different faculty facilitator. Across simulations, the facilitator feedback did not increase, which may have limited changes in TOE judgements. Similarly, without objective performance indices, we had no way to compare efficacy judgements against changes in actual team member behaviour.

Conclusion

IP TE beliefs are qualified by both informational and motivational judgements. In the current study, we successfully replicated the discriminant validity of process and outcome forms of TE. While initial validation focused on differential effects, our adoption of a process model uncovered different timescales for convergence. Specifically, our results indicated that process beliefs converge relatively more quickly than outcome beliefs. It is possible that process beliefs converged because feedback sources pervaded both action and transition phases over repeated performance cycles. On the other hand, cross-level effects were found for outcome efficacy beliefs, but not for process beliefs. Our results suggest the need for further investigation of the unique determinants and consequences of TPE and TOE, as well as furthering understanding of their within-unit dynamics.

Acknowledgments

STROBE reporting guidelines were adhered to in the conduct and write-up of this original research manuscript.

Footnotes

Contributors: MJK contributed IPE measurement selection, questionnaire programming, data curation, technical analyses and write-up of early manuscript drafts. DSA contributed initial project planning outreach to schools of allied health professions, as well as IPE literature review synthesis, and IPE facilitator training. BPD contributed IPE training delivery support and materials development for facilitator training, as well as support in assessing participating allied health professions.

Funding: The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests: None declared.

Ethics approval: The enclosed study was approved by the academic medical centre’s internal Institutional Review Board.

Provenance and peer review: Not commissioned; externally peer reviewed.

Data sharing statement: All data published or unpublished from the enclosed study are freely available by direct request from the secondary author.

References

  • 1. Bandura A. Self-efficacy: the exercise of control: Macmillan, 1997. [Google Scholar]
  • 2. Platt A, Prescott-Clements L, McMeekin P. O28 Debriefing with team deliberate practice: accelerating learning curves to maximise delivery and optimise participant learning in simulation-based education. BMJ Simul Technol Enhanc Learn 2017;3:A24. [Google Scholar]
  • 3. Kozlowski SW, Ilgen DR. Enhancing the Effectiveness of Work Groups and Teams. Psychol Sci Public Interest 2006;7:77–124. 10.1111/j.1529-1006.2006.00030.x [DOI] [PubMed] [Google Scholar]
  • 4. Stajkovic AD, Lee D, Nyberg AJ. Collective efficacy, group potency, and group performance: meta-analyses of their relationships, and test of a mediation model. J Appl Psychol 2009;94:814–28. 10.1037/a0015659 [DOI] [PubMed] [Google Scholar]
  • 5. Lofton L, Collins S, Raimalwalla T, et al. P58 Perceived improvement of non-technical skills and confidence after in-situ simulation emergency events and the effect of repeated training. BMJ Simul Technol Enhanc Learn 2017;3:A68. [Google Scholar]
  • 6. Bandura A. The explanatory and predictive scope of self-efficacy theory. J Soc Clin Psychol 1986;4:359–73. 10.1521/jscp.1986.4.3.359 [DOI] [Google Scholar]
  • 7. Collins CG, Parker SK. Team capability beliefs over time: Distinguishing between team potency, team outcome efficacy, and team process efficacy. J Occup Organ Psychol 2010;83:1003–23. 10.1348/096317909X484271 [DOI] [Google Scholar]
  • 8. Mone MA. Comparative validity of two measures of self-efficacy in predicting academic goals and performance. Educ Psychol Meas 1994;54:516–29. 10.1177/0013164494054002026 [DOI] [Google Scholar]
  • 9. McIntyre RM, Salas E. Measuring and managing for team performance: Emerging principles from complex environments. Team effectiveness and decision making in organizations. 16, 1995:9–45. [Google Scholar]
  • 10. Arthur W, Bell ST, Edwards BD. An empirical comparison of the criterion-related validities of additive and referent-shift operationalizations of team efficacy. Organizational Research Methods 2007;10:35–58. [Google Scholar]
  • 11. James LR, Demaree RG, Wolf G. Estimating within-group interrater reliability with and without response bias. J Appl Psychol 1984;69:85–98. 10.1037/0021-9010.69.1.85 [DOI] [Google Scholar]
  • 12. Chan D. Functional relations among constructs in the same content domain at different levels of analysis: A typology of composition models. J Appl Psychol 1998;83:234–46. 10.1037/0021-9010.83.2.234 [DOI] [Google Scholar]
  • 13. Stewart GL, Nandkeolyar AK. Exploring how constraints created by other people influence intraindividual variation in objective performance measures. J Appl Psychol 2007;92:1149–58. 10.1037/0021-9010.92.4.1149 [DOI] [PubMed] [Google Scholar]
  • 14. Kirkman BL, Tesluk PE, Rosen B. Assessing the incremental validity of team consensus ratings over aggregation of individual-level data in predicting team effectiveness. Pers Psychol 2001;54:645–67. 10.1111/j.1744-6570.2001.tb00226.x [DOI] [Google Scholar]
  • 15. Kozlowski SW, Chao GT, Grand JA, et al. Capturing the multilevel dynamics of emergence: Computational modeling, simulation, and virtual experimentation. Organizational Psychology Review 2016;6:3. [Google Scholar]
  • 16. DeRue DS, Hollenbeck J, Ilgen D, et al. Efficacy dispersion in teams: Moving beyond agreement and aggregation. Pers Psychol 2010;63:1–40. 10.1111/j.1744-6570.2009.01161.x [DOI] [Google Scholar]
  • 17. Jung DI, Sosik JJ. Group potency and collective efficacy: Examining their predictive validity, level of analysis, and effects of performance feedback on future group performance. Group Organ Manag 2003;28:366–91. [Google Scholar]
  • 18. Forshaw N, Lane M, Lofton L, et al. 0176 Perspectives of the value of interprofessional simulation training by participant background. BMJ Simul Technol Enhanc Learn 2015;1:A33.2–A33. 10.1136/bmjstel-2015-000075.81 [DOI] [Google Scholar]
  • 19. Paige JT, Garbee DD, Yu Q, et al. Team Training of Inter-Professional Students (TTIPS) for improving teamwork. BMJ Simul Technol Enhanc Learn 2017;3:127–34. 10.1136/bmjstel-2017-000194 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20. Tuckman BW. Developmental sequence in small groups. Psychol Bull 1965;63:384–99. 10.1037/h0022100 [DOI] [PubMed] [Google Scholar]
  • 21. Gibson CB, Earley PC. Collective cognition in action: accumulation, interaction, examination, and accommodation in the development and operation of group efficacy beliefs in the workplace. Acad Manage Rev 2007;32:438–58. 10.5465/amr.2007.24351397 [DOI] [Google Scholar]
  • 22. Tasa K, Taggar S, Seijts GH. The development of collective efficacy in teams: a multilevel and longitudinal perspective. J Appl Psychol 2007;92:17–27. 10.1037/0021-9010.92.1.17 [DOI] [PubMed] [Google Scholar]
  • 23. Haarhaus B. Uncovering cognitive and affective sources of satisfaction homogeneity in work teams. Group Processes & Intergroup Relations 2018;21:646–68. 10.1177/1368430216684542 [DOI] [Google Scholar]
  • 24. Frankel A, Gardner R, Maynard L, et al. Using the Communication and Teamwork Skills (CATS) Assessment to measure health care team performance. Jt Comm J Qual Patient Saf 2007;33:549–58. 10.1016/S1553-7250(07)33059-6 [DOI] [PubMed] [Google Scholar]
  • 25. Bates D, Mächler M, Bolker B, et al. Fitting linear mixed-effects models using lme4. arXiv preprint arXiv:1406.5823, 2014. [Google Scholar]
  • 26. Raudenbush SW, Bryk AS. Hierarchical linear models: applications and data analysis methods: Sage, 2002. [Google Scholar]
  • 27. Aiken LS, West SG, Reno RR. Multiple regression: testing and interpreting interactions: Sage, 1991. [Google Scholar]
  • 28. Vancouver JB, More KM, Yoder RJ. Self-efficacy and resource allocation: support for a nonmonotonic, discontinuous model. J Appl Psychol 2008;93:35–47. 10.1037/0021-9010.93.1.35 [DOI] [PubMed] [Google Scholar]
  • 29. Tasa K, Whyte G. Collective efficacy and vigilant problem solving in group decision making: a non-linear model. Organ Behav Hum Decis Process 2005;96:119–29. 10.1016/j.obhdp.2005.01.002 [DOI] [Google Scholar]
  • 30. Harrison DA, Price KH, Bell MP. Beyond relational demography: time and the effects of surface-and deep-level diversity on work group cohesion. Academy of management journal 1998;41:96–107. [Google Scholar]
  • 31. Kerry MJ, Ander DS. Mindfulness fostering of interprofessional simulation training for collaborative practice. BMJ Simul Technol Enhanc Learn 2018:bmjstel-2018-000320. 10.1136/bmjstel-2018-000320 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32. Marks MA, Mathieu JE, Zaccaro SJ. A temporally based framework and taxonomy of team processes. Acad Manage Rev 2001;26:356–76. 10.5465/amr.2001.4845785 [DOI] [Google Scholar]
  • 33. Kozlowski SW, Klein KJ. A multilevel approach to theory and research in organizations: Contextual, temporal, and emergent processes. In: Multilevel theory, research, and methods in organizations: Foundations, extensions, and new directions. San Francisco: Jossey-Bass, 2000. [Google Scholar]
  • 34. Reedy GB, Lavelle M, Simpson T, et al. Development of the Human Factors Skills for Healthcare Instrument: a valid and reliable tool for assessing interprofessional learning across healthcare practice settings. BMJ Simul Technol Enhanc Learn 2017;3:135–41. 10.1136/bmjstel-2016-000159 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35. van Dick R, van Knippenberg D, Hägele S, et al. Group diversity and group identification: The moderating role of diversity beliefs. Human Relations 2008;61:1463–92. 10.1177/0018726708095711 [DOI] [Google Scholar]
  • 36. Goncalo JA, Polman E, Maslach C. Can confidence come too soon? Collective efficacy, conflict and group performance over time. Organ Behav Hum Decis Process 2010;113:13–24. 10.1016/j.obhdp.2010.05.001 [DOI] [Google Scholar]
  • 37. Greer LL, Jehn KA. Chapter 2 the pivotal role of negative affect in understanding the effects of process conflict on group performance. Affect and Groups, 2007. [Google Scholar]
  • 38. Iyer A, Jetten J, Branscombe NR, et al. The difficulty of recognizing less obvious forms of group-based discrimination. Group Processes & Intergroup Relations 2014;17:577–89. 10.1177/1368430214522139 [DOI] [Google Scholar]
  • 39. Snijders T, Bosker R. Multilevel analysis: an introduction to basic and advanced multilevel modeling. Sage. [Google Scholar]
  • 40. Maas CJM, Hox JJ. Sufficient sample sizes for multilevel modeling. Methodology 2005;1:86–92. 10.1027/1614-2241.1.3.86 [DOI] [Google Scholar]
