Abstract
Objective
We review the current state-of-the-art in team cognition research and, more importantly, describe the limitations of existing theories, laboratory paradigms, and measures in light of the increasing complexities of modern teams and of the study of team cognition.
Background
Research on, and applications of, team cognition have produced theories, data, and measures over the last several decades.
Method
This article is based on research questions generated in a spring 2022 seminar on team cognition at Arizona State University led by the first author.
Results
Future research directions are proposed for extending the conceptualization of teams and team cognition by examining dimensions of teamness; extending laboratory paradigms to attain more realistic teaming, including nonhuman teammates; and advancing measures of team cognition in a direction such that data can be collected unobtrusively, in real time, and automatically.
Conclusion
The future of team cognition is one of new discoveries, new research paradigms, and new measures.
Application
Extending the concepts of teams and team cognition can also extend the potential applications of these concepts.
Keywords: teamwork, team cognition, human-machine teaming, unobtrusive measurement, team dynamics
TEAM COGNITION: STATE OF THE SCIENCE
Teams are undeniably ubiquitous in today’s world. Work and play alike increasingly involve complex goals that extend well beyond individual tasks, needs, and capabilities. Salas and colleagues (1992) defined teams as “distinguishable set[s] of two or more people who interact dynamically, interdependently and adaptively toward common and valued goal[s], […] assigned specific roles or functions to perform and who have a limited life span of membership” (p. 4), which remains among the most cited and comprehensive definitions in the field. However, recent studies have expanded upon or omitted some of its key elements. The prominence of artificial intelligence (AI) in today’s team contexts, for example, has resulted in the conceptualization of human-machine teams (HMTs, also “human-autonomy teams”; O’Neill et al., 2020).
The same is true for how cognition at the team-level has been framed in recent years. Dating from the mid- to late 1990s, shared mental models (SMMs; Cannon-Bowers et al., 1993) and transactive memory systems (TMSs; Moreland et al., 1996) have offered insight into how team cognition can be understood as composites or compilations of individual teammates’ cognition (Kozlowski & Chao, 2012; Mesmer-Magnus et al., 2017). However, empirical evidence has more recently driven the development of interactive team cognition (ITC) theory, which employs an ecological perspective that posits team interactions themselves as team cognition (Cooke et al., 2013). We note that ITC theory is not mutually exclusive with composite or compilational team cognition theories and that combinations of these theories have brought about more holistic approaches to the aforementioned shifts in how teams are studied (e.g., Demir et al., 2018; Fiore & Wiltshire, 2016; O’Neill et al., 2020).
Recent findings also indicate changes in the contexts in which team cognition is studied. Over the last two decades, field studies have shown that teams have become more distributed across spatiotemporal contexts and increasingly reliant on virtual communication technologies (Morrison-Smith & Ruiz, 2020), a trend accelerated by the COVID-19 pandemic. The increased role of synthetic task environments (STEs; Cooke & Shope, 2004) that feature remote communication tools in empirical research reflects these trends as well. Wizard of Oz (WoZ) studies, so called for the use of confederates posing as machine teammates to simulate theoretical AI capabilities, have relatedly seen significant use in conjunction with STEs and remote data collection methods, particularly in HMT research (Riek, 2012). Trends toward gamification of team tasks have also been observed in field applications (Kapp, 2012) and in the design of STEs for laboratory experiments (Cooke et al., 2020). More recently, commercially developed games such as Minecraft and Roblox have been adapted for laboratory team studies, partially in response to limitations of remote data collection (Lematta et al., 2022).
It is notable that measures of team cognition have somewhat lagged behind the pace of change in theory and practice. Classical measures of team cognition aggregate individual-level measurements, such as surveys and physiological sensor data, and are thus more aligned with compositional views of team cognition. Though numerous ways to measure team cognition at the team-level have been proposed and demonstrated (e.g., Gorman et al., 2017; Kozlowski & Chao, 2012), compositional measures of team cognition remain dominant (Mesmer-Magnus et al., 2017). This is concerning, as such measures may prove inadequate and impractical in studying team cognition for novel team structures; it is difficult to even conceptualize how to administer survey questionnaires to machine and animal teammates in hybrid teams, for example. As an alternative, interactive measures such as nonlinear coordination dynamics have been used to model hybrid and all-human teams (Demir et al., 2018; Gorman et al., 2010), in some cases in real time (Gorman et al., 2012). It is within this theoretical and methodological context that the authors explore future directions in team cognition.
METHODS
Over the course of a seminar in Spring 2022 at Arizona State University, we examined topics in team cognition studied primarily over the last decade, situated in the context of prior decades of work. Each graduate student in the bi-level class was teamed with an undergraduate student. The graduate student mentored the undergraduate, and together they identified a topic, identified 10 or more relevant articles from the last 10 years, provided an annotated bibliography, selected the most relevant reading for the class, collected pre-class statements on the reading from the class, and led the class discussion. During the discussion, the undergraduate took notes on open issues, gaps, and future research directions in a shared Google document. At the final class meeting, the class identified recurring themes and questions by systematically reviewing the notes in the shared document. A brainstorming discussion identified the high-level research questions raised in each class. These high-level questions were then aggregated, and the group reached consensus on the themes that emerged from them. We observed themes pertaining to the definition of “team,” to machines as teammates, and to methods and measures for studying team cognition. Results of our discussions are summarized in the sections below.
ON TEAMS AND TEAMNESS
In this section, we outline some pressing questions about team cognition in relation to how teams have evolved in both field and laboratory settings. We start by considering the essential elements of a team and proposing a re-examination of the properties that have been used to define various types of teams, including several emerging team constructs: ad hoc teams, multiteam systems, human-animal teams, and HMTs.
Consider three groups of dancers: (1) a folk-dance troupe choreographed such that dancers are paired with each other and perform steps relative to individual and paired positions; (2) a group of line dancers performing simple steps together that can also be executed alone; and (3) dancers scattered in a club, independently performing their own routines to the same music. If we compare only the first two groups, the dance troupe may be considered more of a team and the line dancers more of a group, though it can be argued that both are teams in their own right. In contrast, the third group may more clearly not be a team, that is, until they take on a line dance. Why is it that we can not only identify whether groups are teams but also argue that one team is more team-like than another, or even than itself at a different point in time?
Turning to classical definitions of teaming (e.g., Salas et al., 1992) gives us only a partial answer: all teams are groups, but not all groups are teams. Existing taxonomical frameworks that center on the attributes of teams as groups also offer limited clarity. Earlier classification schemes (e.g., Cohen & Bailey, 1997; Sundstrom et al., 1990) often focus on whether teams are of one type or another, such as whether a team is a project team or an action-oriented team. However, these are often limited to teams whose structures and activities are relatively static within stable organizational contexts. More recent work considers team attributes irrespective of team types, which makes it possible to factor in the dynamic nature of these attributes. An example is the multidimensional scaling paradigm of Hollenbeck et al. (2012), which describes teams in terms of skill differentiation, authority differentiation, and temporal stability. Though such frameworks are certainly useful for comparing different teams, applying them can be a tedious process, limiting their utility for tracking how team attributes change within teams over time. Additionally, because both older and newer taxonomic schemes derive their dimensions from teams that meet the traditional definition, they may be less useful in answering questions about how novel structures like HMTs function as teams.
We observed in our discussions the recurrence of questions about how team cognition takes place in various team contexts, which lent credence to the need for a structural supra-paradigm within which the dimensions and characteristics of team cognition could be studied as a continuum. We summarize these questions, along with the examples that inspired them, in Table 1.
TABLE 1:
Questions About Team Cognition as Observed in Various Team Structures
| Questions About Team Cognition | Notable Examples |
|---|---|
| 1. Human teammates: Do teammates have to be people? | Human-animal teams suggest that teams can also include nonhumans that perform team tasks beyond human capabilities and have been suggested as a blueprint for designing HMTs (Phillips et al., 2016). Examples include human-canine narcotics search teams, human-dolphin fishing teams, and human-canine sheep management teams. |
| | In human-machine teams, certain sophisticated machines may serve as teammates to humans, just as animals fill this role. Artificially intelligent forms of automation that exhibit an increased level of autonomy from human control and direction, with functions and capabilities beyond those of simple tools, may be considered teammates (O’Neill et al., 2020). |
| 2. Heterogeneity: Can there be degrees of homogeneity in teams due to shared tasks and roles? | In centaur teams, humans and AI agents form tightly coupled, often dyadic structures and perform tasks as a collective unit that is “half-human, half-AI” (Muller, 2022). The term was first popularized by chess grandmaster Garry Kasparov, who, after his historic loss to IBM’s Deep Blue, initiated the first “Centaur Chess” competition. Results showed that centaur chess teams outperformed both grandmasters and solo computer players, suggesting the presence of emergent team cognition (Case, 2018). |
| | Borrowing from the concept of “mosaic warfare,” mosaic teams are virtual, decentralized, ad hoc teams composed of a vast array of team members with heterogeneous areas of expertise. However, these teams also have non-MECE (mutually exclusive, collectively exhaustive) team states (McChrystal et al., 2015), with redundant roles within teams to leverage skill differences among individuals. |
| 3. Shared goal and identity: Do teams have to have a shared goal or a shared identity? | Human and animal members of human-animal teams such as human-canine search teams are likely to have different understandings of what the shared goal is or the collective identity of the group because of their different cognitive abilities. Nevertheless, interactions between them satisfy the definition of team processes and teamwork (Johnson & Bradshaw, 2021; Marks et al., 2001). |
| | Dispersed or virtual teams involve team members interacting over temporal or spatial distances, via technology. The impact of virtual settings on dispersed team dynamics, information exchange behaviors, communication, and team emergent properties is not yet fully understood (Espinosa et al., 2015). Dispersed team members may hold uniquely divergent views about their collective identity and shared goals because of cultural distances, though it is undeniable that teamwork occurs in them. |
| | In multiteam systems such as a military Joint All-Domain Command and Control (JADC2) system, heterogeneous teams operating across different domains (such as land, air, or water) coordinate between and within subunits in response to multiple rapidly changing environments (Gorman et al., 2006). Further research is needed to determine how local and global goals and identity factor into team cognition in such systems. |
| 4. Hierarchy and authority: Do teammates need to have the same perception of hierarchy or authority differentiation? | In surgical teams, putative hierarchies among nurses, surgeons, and assistants have been shown to dissolve and re-emerge over time, depending on the complexity of the surgical procedure and the criticality of the medical situation (Barth et al., 2015; van den Oever & Schraagen, 2021). When such hierarchical structures dissolve, initiation of and participation in exploratory communication and direction-setting occurs in a lateral fashion. |
| | Human-machine teams are different from traditional supervisory control in human-automation interaction paradigms (Sheridan, 2012). This is not just because machine team member actions can occur autonomously (O’Neill et al., 2020) but also due to the interactive capabilities of machine teammates that allow them to initiate and participate in coordinative activities such as negotiation (Chiou & Lee, 2021). We note that this is not mutually exclusive with hierarchical team contexts, though people’s capacity to consider machine teammates as equals in more lateral teaming contexts has been questioned (Groom & Nass, 2007). |
| 5. Interdependence: Can teammate interdependence be a matter of degree of interdependence? Can interdependencies among teammates change over time? | Action-oriented teams are teams that “conduct complex, time-limited engagements with audiences, adversaries, or challenging environments in ‘performance events’ for which teams maintain specialized, collective skill” (Sundstrom, 1999). Such events are “periods of time over which performance accumulates, and feedback is available.” (Mathieu & Button, 1992, p. 1761). Examples of action-oriented teams include search and rescue teams, infantry platoons, aviation crews, cooking teams, sports teams, and musical teams. It has been observed that degrees of interdependence change dynamically within such teams, in that individual taskwork and coordinative teamwork relative to each other change over time. The effects of dynamic team membership, particularly those in mosaic and ad hoc teams, may also change how each teammate’s taskwork is interdependent upon another. In addition, per the earlier dancing example, degrees of interdependence may differ across teams. Further studies are needed to accurately track changes in interdependence and its effects on teamwork in general. |
Discussions surrounding these questions led us to a focus on team cognition as an emergent state of activity instead of teams as specialized groups. Marks et al. (2001, p. 357) defined team processes as “members’ interdependent acts that convert inputs to outcomes through cognitive, verbal, and behavioral activities directed toward organizing taskwork to achieve collective goals.” They also indicated that team emergent states, including team cognitive constructs like SMMs, are the byproducts of interactive team processes. More recently, Johnson and Bradshaw (2021) described teamwork as activities in which participants intentionally work together through interdependent tasks that are situated within a relational structure characterized by group identity and shared commitment. Combining these with ITC theory allows us to refine the definition of interactive team cognition as follows: interactive team cognition is an emergent state of activity that intermittently arises as teams engage in acts of interdependence, that is, teamwork, throughout their limited lifespans. An implication of this is that team cognition arises not simply from any interaction between any two members of a team but from those that involve interdependence.
Focusing on how a team functions rather than on who its members are thus entails describing the extent to which a team’s interactions involve team cognition, through a multidimensional construct that we refer to as teamness. Teamness conceptually allows us to answer the question of how a team appears more team-like at one point in time than at another by qualifying the differences and similarities between and within instances of teaming. To illustrate, studies on team coordination dynamics have shown that a team’s performance is positively correlated with the variability of its coordination and decision-making strategies (Demir et al., 2018, 2021; Gorman et al., 2010). The diversity of a team’s coordination strategies over a set of interactions may be a dimension of teamness and might explain the observed relative effectiveness of HMTs and all-human teams at given points in time beyond their composition.
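To make the notion of coordination-strategy diversity concrete, consider a minimal sketch. Assuming each team interaction has already been coded into a discrete coordination-pattern label (a simplification for illustration; the cited studies use continuous dynamical measures rather than this approach), the Shannon entropy of the label distribution yields one crude index of how varied a team's repertoire is:

```python
from collections import Counter
from math import log2

def coordination_entropy(patterns):
    """Shannon entropy (in bits) of a sequence of coded coordination patterns.

    Higher values indicate a more diverse repertoire of coordination
    strategies; a team that repeats one pattern throughout scores 0.0.
    """
    counts = Counter(patterns)
    n = len(patterns)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A team that reuses a single strategy vs. one that varies its strategies
# (the labels "push", "pull", "negotiate" are hypothetical codes)
rigid = ["push"] * 8
flexible = ["push", "pull", "push", "negotiate",
            "pull", "push", "negotiate", "pull"]
print(coordination_entropy(rigid))     # 0.0
print(coordination_entropy(flexible))  # > 0: more diverse repertoire
```

Under this toy operationalization, tracking the entropy over successive task windows would give one time-varying signal along a single dimension of teamness.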
We note that the concept of teamness needs further development through the identification and measurement of interaction-based dimensions. In Table 1, we have identified potential dimensions in team composition, role heterogeneity, diversity of shared goals and identity, authority structure, and degrees of interdependence; and there are likely more. We believe that its application can advance team science away from needless debates on whether groups like human-machine systems fit traditional definitions of teams (cf. Shneiderman, 2022, ch. 14; Groom & Nass, 2007), toward how system interactions could be engineered to promote interdependent interactions from which beneficial emergent system characteristics can arise (National Academies of Sciences, 2021).
TEAM COGNITION IN HUMAN-MACHINE TEAMS
As indicated in Question 1 of Table 1, intelligent machines can be considered teammates just as animals can (Phillips et al., 2016). In many ways, we may do better to consider machine teammates as members of another species. Considering a machine as a teammate does not mean that machines are in control, that machines are human or human-like, or that the design is not human-centered. In fact, designing a machine to work well with humans as a teammate can increase human-centeredness. In addition, this design can draw on what we know from the team literature (e.g., team composition, team process, team development, and team measurement) to do so.
In the past, automation and AI were construed as functioning mostly as tools, without autonomy, and completely under human supervisory control (Sheridan, 2012). But with increasing capabilities in AI and the growth of sociotechnical systems, more research has acknowledged that AI can function as part of a team (Chiou & Lee, 2021; O’Neill et al., 2020; Seeber et al., 2020; Zieba et al., 2010). A robot that searches for improvised explosive devices ahead of soldiers, all connected through GPS sensors and communication systems, functions as an integral part of the team or system. It is critical to understand how human-machine interactions may be more complex than dyadic supervisory control structures. The existing literature on teamwork provides a starting place to do so, but not all of the team cognition literature can be expected to translate to this new teaming arrangement. Working with machines may change the way teams work together, as recent literature shows (e.g., McNeese et al., 2018). Studying the teamness of human-machine interactions may aid in the development of theoretical teamwork models that can predict how HMTs function.
We acknowledge some issues that have been raised in studying team cognition in HMTs. AI technology has not yet reached the level of general human intelligence; machine teammates may therefore have limited context awareness or social and emotional intelligence (O’Neill et al., 2020). Machines may excel at taskwork, but lack in teamwork (Chiou & Lee, 2016). In addition, there are many open research questions relevant to how interactions and collaborations will be affected by the presence of nonhuman teammates (Seeber et al., 2020). How are team processes such as conflict resolution, coordination, and backup behavior affected? How do leadership behaviors, motivation, and psychological safety work in a human-machine team? How does the degree to which the machine is thought of as a true teammate affect teaming behaviors?
Beyond limitations in the abilities of the machine teammates, the teamness of human-machine interactions may be impacted by the human’s perceptions, beliefs, and attitudes toward machine teammates, and by the composition of humans and machines on the team (Musick et al., 2021; Walliser et al., 2019). Anthropomorphism, the process through which people ascribe human-like characteristics to nonhuman entities (Epley et al., 2007), means that humans can be easily swayed by trivial characteristics of intelligent agents that give them the appearance of advanced capabilities that do not exist (National Academies of Sciences, 2021; Phillips et al., 2011). Other factors of perception include trustworthiness, machine behavior, and friendliness (Phillips et al., 2011; Shneiderman, 1989; Sims et al., 2005). Overall, it is essential to understand how humans perceive intelligent machines, the factors that guide those perceptions, and their resulting effects on team interactions and performance. These are open research questions, as raised in a recent report by the National Academies of Sciences (2021).
EXTENDING LABORATORY PARADIGMS TO STUDY TEAM COGNITION
Laboratory paradigms must be extended to study team cognition as dimensions of teamness vary within and between contexts (see Table 1). Paramount to this is the development of testbeds that have a high degree of ecological validity and thus mirror the demands, pressures, risks, and dynamics of real-world teaming. Yet, incorporating these factors into laboratory paradigms requires careful consideration of the tradeoffs between control, validity, and practical constraints. For instance, realism and the emergence of self-organized dynamics might be enhanced in a teaming experiment by allowing teams a high degree of flexibility in how they can interdependently pursue shared goals within a simulation environment. However, this flexibility may also present challenges to maintaining experimental control and tractability of variables for data analysis. Other aspects of teamness, like role heterogeneity or diversity of shared goals, are challenging to implement when using inexperienced teams that do not possess the domain-specific knowledge required to understand the nuance of the interactions within a given task.
Designing an effective teaming study not only requires a scenario with adequate task fidelity, but in some cases, it also requires that participants perceive risk in a manner similar to the real world. For example, perceptions of risk influence the development of trust, including in human-machine teams (Chiou & Lee, 2021; Stuck et al., 2022). However, it is often ethically difficult or impossible to create realistic perceptions of the high-impact risks experienced by many real-world teams (e.g., physical harm). Furthermore, the examination of risk is complicated in human-machine teams due to the ambiguity surrounding accountability and responsibility. Another challenge is that team cognition studies tend to examine ad hoc teams of inexperienced participants that are formed for only short durations. Although these experiments can provide some degree of insight into team behaviors and patterns of interaction, they offer little insight into teams composed of more experienced individuals or teams who have extensive experience working together, both of which impact teamwork to a great degree.
Advances in synthetic task environments (STEs) that recreate the cognitive realism of a specific team environment have begun to evolve to support some of these research needs. Notably, STEs have recently been developed using modified commercial games, making STE development more accessible and financially feasible (Cooke et al., 2020). Likewise, improvements in the availability and bandwidth of internet- and computer-mediated collaborative tools have made it possible to conduct distributed team research in which participants or experimenters are geographically dispersed (Lematta et al., 2022). In addition, distributed STEs may allow researchers access to larger pools of potential participants, which in turn makes larger team studies more practical. However, the potential of distributed STEs has yet to be fully realized for studying teams that remain together for extended periods of time, teams that include expert participants who are often difficult to access, and the examination of multiteam systems. STEs also offer a potentially low-overhead opportunity to study teams that include nonhumans (e.g., HMTs) through “Wizard of Oz” paradigms (Riek, 2012). Other opportunities exist in observing “teams in the wild” through internet ethnography or the analysis of teamwork in online video games. However, such research requires advances in data collection and analysis methods and creates new concerns for privacy and confidentiality.
TEAM COGNITION: MEASUREMENT NEEDS
The dimensions of teamness outlined in Table 1 suggest that measurement needs may vary depending on the kind of team. For instance, understanding effectiveness for distributed action-oriented teams may require different variables, instruments, and modeling techniques than for collocated knowledge-oriented teams. Further, measurement needs for one team may vary as its tasks, goals, and contexts evolve. Requirements for measurement may differ because how the team functions to achieve its goals may differ; hence, characterizing teams by dimensions of teamness is an important step toward measuring them. Research is needed to identify metrics appropriate to each team’s specific degree of teamness.
ITC theory posits that measures of teamwork are most appropriate at the team-level (Cooke et al., 2013). But what does “team-level” really mean? If teams are a different kind of system than individuals, individual-level measures may miss parts of teaming that cannot be ascribed to individuals. Overall, future research is needed to define team-level constructs based on what teams are and how teams work, rather than solely extrapolating individual factors to teams.
For example, the demand imposed on the team by tasks relative to the amount of work teams can do may depend on group dynamics and team interdependence. Hence, summing individuals’ workloads may produce large discrepancies in estimates of team workload. The need for team-level measurement poses a challenge to existing measurement techniques. In particular, physiological measures must be captured at the individual level, but what would a team-level measurement of heart-rate variability look like (cf. Kazi et al., 2021)? Researchers have considered heart-rate synchronization as a collective measure of workload (Demir et al., 2022; Dias et al., 2019), but research is needed to further scope team-level physiological measures.
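As a purely illustrative sketch of what one candidate team-level physiological index might look like (not the method of the cited studies), the individual heart-rate series could be combined into a single synchrony score by averaging the pairwise Pearson correlations across teammates:

```python
from itertools import combinations
from statistics import mean, pstdev

def pearson(x, y):
    """Pearson correlation between two equal-length numeric series."""
    mx, my = mean(x), mean(y)
    cov = mean((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (pstdev(x) * pstdev(y))

def team_synchrony(series_by_member):
    """Mean pairwise correlation of members' heart-rate series.

    Collapses per-individual physiological streams into one team-level
    number in [-1, 1], rather than reporting individual statistics.
    """
    return mean(pearson(a, b)
                for a, b in combinations(series_by_member, 2))

# Three teammates' heart rates sampled over the same task window (toy data)
hr = [
    [72, 75, 80, 86, 84, 78],
    [70, 74, 79, 85, 83, 77],
    [68, 69, 71, 70, 72, 69],
]
print(round(team_synchrony(hr), 2))
```

This naive index ignores time lags, nonstationarity, and the nonlinear coupling methods discussed above; it is offered only to illustrate the shift from aggregating individual-level scores to measuring a relation between teammates.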
The concept of team trust exemplifies challenges in defining team-level constructs. For example, consider trust in a team of three. Teammate 1 may trust Teammate 2 and Teammate 3, but Teammate 1 may not trust Teammate 2 and Teammate 3 working together. Teammate 1’s trust in Teammate 2 may also be affected by what Teammate 3 says about them. Teammate 1’s trust in Teammate 2 or Teammate 3 in the moment may also depend on context, such as what Teammate 1 needs of Teammate 2 or Teammate 3. Overall, there is much more potential complexity in team trust than individual trust, which is already a complex concept. Team science needs working definitions of team trust as well as measures that accommodate this complexity.
Whereas many aspects of teamwork are time-sensitive, some of the best available measures of team effectiveness constructs are not. For constructs such as trust and situation awareness, taking a team out of its workflow or requesting self-reports in a survey may disrupt researchers’ ability to capture constructs reliably and may reduce the extent to which team-based scenarios reflect real work. In team assessment applications such as training or real-world operations, there is often no time to interrupt the team to measure them. Team science needs measures of teamwork that can be collected passively or unobtrusively, with measures and assessment outputs generated in real time to provide useful feedback in a timely manner. Moreover, not all time is created equal. Events, situations, and tasks contextualize team interactions over time, which means the characteristics of effective team interaction are likely to shift as context shifts. In sum, time-sensitivity in measurement means minimizing the impact of measurement on a team’s time-sensitive task, generating outputs that are usable as real-time feedback, and understanding the temporal context of team interactions (Gorman et al., 2020).
CONCLUSION
Team cognition is not a monolithic construct, nor is the definition of team, which has many dimensions such as size and interdependence. Our conceptual definition of teamness means that teams and teamwork are more nuanced than previous definitions suggest. Empirical findings on team cognition may be specific to the point in the multidimensional space of teamness that reflects the specific team and teamwork studied. For instance, what kind of team training is most appropriate for human-machine teams of high interdependence with hierarchical control structures? Likewise, team composition, structure, and function are important considerations that may affect the generalizability of results. How does a massively distributed and heterogeneous team maintain team situation awareness as compared to a small, collocated, all-human team? Equally important is the type of teammate, with nonhuman teammates playing increasingly important roles. Laboratory paradigms and measurement practices need to adapt to the growing complexity associated with these new and varied types of teams. The future of team cognition is one of new discoveries, new research paradigms, and new measures.
ACKNOWLEDGMENTS
The authors would like to acknowledge the intellectual contributions from other seminar participants including Carlos Bustamante Orellana, Noella Campbell, Mustafa Demir, Jessica Dirks, Logan Fralich, Daelyn Haddad, Rhea Holgate, Kelsey Keberle, Jayci Landfair, Christian Larosa, Christina Lewis, Chris Lieber, Dylan Orth, Tamantha Pizarro, Felix Raimondo, Lucero Rodriguez Rodriguez, Sydney Wallace, Kambiz Weaver Salazar, and Giang Wilkinson.
Biography
Nancy J. Cooke is Professor of Human Systems Engineering and Director of the Center for Human, Artificial Intelligence, and Robot Teaming through the Global Security Initiative at Arizona State University. She received her PhD in 1987 from New Mexico State University and has been at Arizona State University since 2002.
Myke C. Cohen is a Graduate Research Assistant at the Center for Human, AI, and Robot Teaming, and a PhD student in Human Systems Engineering and Ira A. Fulton Schools of Engineering Dean’s Fellow at Arizona State University. He holds a Bachelor of Science in Industrial Engineering from the University of the Philippines Diliman and a Master of Science in Human Systems Engineering from Arizona State University.
Walter C. Fazio is pursuing a Master of Science degree in Human Systems Engineering at Arizona State University. In April 2012, he joined Sandia National Laboratories, where he is currently an engineer in the Human Factors department. He holds BS and MS degrees in Mechanical Engineering from Brigham Young University.
Laura H. Inderberg was born and raised in Italy. In the past, she also lived in the United Kingdom, the United Arab Emirates, and Germany. She moved to the United States 6 years ago and is currently in her junior year in the Human Systems Engineering honors program at Arizona State University.
Craig J. Johnson is a Human Systems Engineering PhD candidate, graduate researcher, and DoD SMART Scholar at Arizona State University. He holds a Bachelor of Science in Psychology from Clemson University and a Master of Science in Human Systems Engineering from Arizona State University.
Glenn Lematta is a PhD student in Human Systems Engineering at Arizona State University. Glenn holds a Master of Science degree in Human Systems Engineering and a Bachelor of Science degree in Industrial and Organizational Psychology from ASU. He is also a Senior Cognitive Engineer at The MITRE Corporation.
Matthew Peel is a Graduate Research Assistant for the Center for Human, AI, and Robot Teaming and a PhD student in Human Systems Engineering at the Ira A. Fulton Schools of Engineering, Arizona State University. He received his master’s degree in Human Factors and Ergonomics from San Jose State University and his bachelor’s degree in Psychology from Sacramento State University.
Aaron Teo is a PhD student in Human Systems Engineering at the Ira A. Fulton Schools of Engineering at Arizona State University and a Graduate Research Assistant for the Center for Human, AI, and Robot Teaming. He earned his bachelor’s and master’s degrees from Purdue University, majoring in Aviation Technology.
KEY POINTS
Not all teams are equal; teams differ across multiple dimensions that may define “teamness.”
Future research should examine how team cognition varies along these dimensions.
Research is needed on team cognition when teammates are not all human.
Laboratory paradigms and measures of team cognition require advances.
REFERENCES
- Barth S., Schraagen J. M., Schmettow M. (2015). Network measures for characterising team adaptation processes. Ergonomics, 58(8), 1287–1302. 10.1080/00140139.2015.1009951
- Cannon-Bowers J. A., Salas E., Converse S. (1993). Shared mental models in expert team decision making. In Castellan N. J., Jr. (Ed.), Individual and group decision making: Current issues (pp. 221–246). Lawrence Erlbaum Associates.
- Case N. (2018). How to become a centaur. Journal of Design and Science. 10.21428/61b2215c
- Chiou E. K., Lee J. D. (2016). Cooperation in human-agent systems to support resilience: A microworld experiment. Human Factors, 58(6), 846–863. 10.1177/0018720816649094
- Chiou E. K., Lee J. D. (2021). Trusting automation: Designing for responsivity and resilience. Human Factors, 65(1). 10.1177/00187208211009995
- Cohen S. G., Bailey D. E. (1997). What makes teams work: Group effectiveness research from the shop floor to the executive suite. Journal of Management, 23(3), 239–290. 10.1177/014920639702300303
- Cooke N. J., Demir M., Huang L. (2020). A framework for human-autonomy team research. In Harris D., Li W.-C. (Eds.), Engineering psychology and cognitive ergonomics: Cognition and design (Vol. 12187, pp. 134–146). Springer International Publishing. 10.1007/978-3-030-49183-3_11
- Cooke N. J., Gorman J. C., Myers C. W., Duran J. L. (2013). Interactive team cognition. Cognitive Science, 37(2), 255–285. 10.1111/cogs.12009
- Cooke N. J., Shope S. M. (2004). Designing a synthetic task environment. In Schiflett S. G., Elliott L. R., Salas E., Coovert M. D. (Eds.), Scaled worlds: Development, validation and applications (pp. 263–278).
- Demir M., Canan M., Cohen M. C. (2021). Modeling team interaction and interactive decision-making in agile human-machine teams. In 2021 IEEE 2nd International Conference on Human-Machine Systems (ICHMS) (pp. 1–6). IEEE. 10.1109/ICHMS53169.2021.9582449
- Demir M., Cooke N. J., Amazeen P. G. (2018). A conceptual model of team dynamical behaviors and performance in human-autonomy teaming. Cognitive Systems Research, 52(3), 497–507. 10.1016/j.cogsys.2018.07.029
- Demir M., Johnson C. J., Cohen M. C., Grimm D. A., Cooke N. J., Gorman J. C. (2022). Multifractal analysis of heart rate dynamics as a predictor of teammate trust in human-machine teams. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 66(1), 528–529. 10.1177/1071181322661101
- Dias R. D., Zenati M. A., Stevens R., Gabany J. M., Yule S. J. (2019). Physiological synchronization and entropy as measures of team cognitive load. Journal of Biomedical Informatics, 96, 103250. 10.1016/j.jbi.2019.103250
- Epley N., Waytz A., Cacioppo J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864–886. 10.1037/0033-295x.114.4.864
- Espinosa J. A., Nan N., Carmel E. (2015). Temporal distance, communication patterns, and task performance in teams. Journal of Management Information Systems, 32(1), 151–191. 10.1080/07421222.2015.1029390
- Fiore S. M., Wiltshire T. J. (2016). Technology as teammate: Examining the role of external cognition in support of team cognitive processes. Frontiers in Psychology, 7, 1531. 10.3389/fpsyg.2016.01531
- Gorman J. C., Amazeen P. G., Cooke N. J. (2010). Team coordination dynamics. Nonlinear Dynamics, Psychology, and Life Sciences, 14(3), 265–289.
- Gorman J. C., Cooke N. J., Winner J. L. (2006). Measuring team situation awareness in decentralized command and control environments. Ergonomics, 49(12–13), 1312–1325. 10.1080/00140130600612788
- Gorman J. C., Dunbar T. A., Grimm D., Gipson C. L. (2017). Understanding and modeling teams as dynamical systems. Frontiers in Psychology, 8, 1053. 10.3389/fpsyg.2017.01053
- Gorman J. C., Grimm D. A., Stevens R. H., Galloway T., Willemsen-Dunlap A. M., Halpin D. J. (2020). Measuring real-time team cognition during team training. Human Factors, 62(5), 825–860. 10.1177/0018720819852791
- Gorman J. C., Hessler E. E., Amazeen P. G., Cooke N. J., Shope S. M. (2012). Dynamical analysis in real time: Detecting perturbations to team communication. Ergonomics, 55(8), 825–839. 10.1080/00140139.2012.679317
- Groom V., Nass C. (2007). Can robots be teammates? Benchmarks in human–robot teams. Interaction Studies, 8(3), 483–500. 10.1075/is.8.3.10gro
- Hollenbeck J. R., Beersma B., Schouten M. E. (2012). Beyond team types and taxonomies: A dimensional scaling conceptualization for team description. Academy of Management Review, 37(1), 82–106. 10.5465/amr.2010.0181
- Johnson M., Bradshaw J. M. (2021). How interdependence explains the world of teamwork. In Lawless W. F., Llinas J., Sofge D. A., Mittu R. (Eds.), Engineering artificially intelligent systems: A systems engineering approach to realizing synergistic capabilities (pp. 122–146). Springer International Publishing. 10.1007/978-3-030-89385-9_8
- Kapp K. M. (2012). The gamification of learning and instruction: Game-based methods and strategies for training and education. Pfeiffer.
- Kazi S., Khaleghzadegan S., Dinh J. V., Shelhamer M. J., Sapirstein A., Goeddel L. A., Chime N. O., Salas E., Rosen M. A. (2021). Team physiological dynamics: A critical review. Human Factors, 63(1), 32–65. 10.1177/0018720819874160
- Kozlowski S. W. J., Chao G. T. (2012). The dynamics of emergence: Cognition and cohesion in work teams. Managerial and Decision Economics, 33(5–6), 335–354. 10.1002/mde.2552
- Lematta G. J., Corral C. C., Buchanan V., Johnson C. J., Mudigonda A., Scholcover F., Wong M. E., Ezenyilimba A., Baeriswyl M., Kim J., Holder E., Chiou E. K., Cooke N. J. (2022). Remote research methods for human–AI–robot teaming. Human Factors and Ergonomics in Manufacturing & Service Industries, 32(1), 133–150. 10.1002/hfm.20929
- Marks M. A., Mathieu J. E., Zaccaro S. J. (2001). A temporally based framework and taxonomy of team processes. Academy of Management Review, 26(3), 356–376. 10.5465/amr.2001.4845785
- Mathieu J. E., Button S. B. (1992). An examination of the relative impact of normative information and self-efficacy on personal goals and performance over time. Journal of Applied Social Psychology, 22(22), 1758–1775. 10.1111/j.1559-1816.1992.tb00975.x
- McChrystal G. S., Collins T., Silverman D., Fussell C. (2015). Team of teams: New rules of engagement for a complex world. Penguin.
- McNeese N. J., Demir M., Cooke N. J., Myers C. (2018). Teaming with a synthetic teammate: Insights into human-autonomy teaming. Human Factors, 60(2), 262–273. 10.1177/0018720817743223
- Mesmer-Magnus J., Niler A. A., Plummer G., Larson L. E., DeChurch L. A. (2017). The cognitive underpinnings of effective teamwork: A continuation. Career Development International, 22(5), 507–519. 10.1108/CDI-08-2017-0140
- Moreland R. L., Argote L., Krishnan R. (1996). Socially shared cognition at work: Transactive memory and group performance. In What’s social about social cognition? Research on socially shared cognition in small groups (pp. 57–84). Sage Publications, Inc. 10.4135/9781483327648.n3
- Morrison-Smith S., Ruiz J. (2020). Challenges and barriers in virtual teams: A literature review. SN Applied Sciences, 2(6), 1096. 10.1007/s42452-020-2801-5
- Muller E. (2022). How AI-human symbiotes may reinvent innovation and what the new centaurs will mean for cities. Technology and Investment, 13(1), 1–19. 10.4236/ti.2022.131001
- Musick G., O’Neill T. A., Schelble B. G., McNeese N. J., Henke J. B. (2021). What happens when humans believe their teammate is an AI? An investigation into humans teaming with autonomy. Computers in Human Behavior, 122(4), 106852. 10.1016/j.chb.2021.106852
- National Academies of Sciences, Engineering, and Medicine (2021). Human-AI teaming: State-of-the-art and research needs. The National Academies Press. 10.17226/26355
- O’Neill T., McNeese N., Barron A., Schelble B. (2020). Human–autonomy teaming: A review and analysis of the empirical literature. Human Factors, 64(5), 1–35. 10.1177/0018720820960865
- Phillips E., Ososky S., Grove J., Jentsch F. (2011). From tools to teammates: Toward the development of appropriate mental models for intelligent robots. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 55(1), 1491–1495. 10.1177/1071181311551310
- Phillips E. K., Schaefer K. E., Billings D. R., Jentsch F., Hancock P. A. (2016). Human-animal teams as an analog for future human-robot teams: Influencing design and fostering trust. Journal of Human-Robot Interaction, 5(1), 100–125. 10.5898/jhri.5.1.phillips
- Riek L. D. (2012). Wizard of Oz studies in HRI: A systematic review and new reporting guidelines. Journal of Human-Robot Interaction, 1(1), 119–136. 10.5898/JHRI.1.1.Riek
- Salas E., Dickinson T. L., Converse S. A., Tannenbaum S. I. (1992). Toward an understanding of team performance and training. In Teams: Their training and performance (pp. 3–29). Ablex Publishing.
- Seeber I., Bittner E., Briggs R. O., de Vreede T., de Vreede G.-J., Elkins A., Maier R., Merz A. B., Oeste-Reiß S., Randrup N., Schwabe G., Söllner M. (2020). Machines as teammates: A research agenda on AI in team collaboration. Information & Management, 57(2), 103174. 10.1016/j.im.2019.103174
- Sheridan T. B. (2012). Human supervisory control. In Salvendy G. (Ed.), Handbook of human factors and ergonomics (4th ed., pp. 990–1015). John Wiley & Sons. 10.1002/9781118131350.ch34
- Shneiderman B. (1989). A nonanthropomorphic style guide: Overcoming the humpty-dumpty syndrome. The Computing Teacher, 16(7), 5.
- Shneiderman B. (2022). Human-centered AI. Oxford University Press.
- Sims V. K., Chin M. G., Yordon R. E., Sushil D. J., Barber D. J., Owens C. W., Smith H. S., Dolezal M. J., Shumaker R., Finkelstein N. (2005). When function follows form: Anthropomorphism of artifact “faces”. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 49(3), 595–597. 10.1177/154193120504900381
- Stuck R. E., Tomlinson B. J., Walker B. N. (2022). The importance of incorporating risk into human-automation trust. Theoretical Issues in Ergonomics Science, 23(4), 500–516. 10.1080/1463922X.2021.1975170
- Sundstrom E. (1999). The challenges of supporting work team effectiveness. In Supporting work team effectiveness (Vol. 3, p. 23).
- Sundstrom E., De Meuse K. P., Futrell D. (1990). Work teams: Applications and effectiveness. American Psychologist, 45(2), 120–133. 10.1037/0003-066X.45.2.120
- van den Oever F., Schraagen J. M. (2021). Team communication patterns in critical situations. Journal of Cognitive Engineering and Decision Making, 15(1), 28–51. 10.1177/1555343420986657
- Walliser J. C., de Visser E. J., Wiese E., Shaw T. H. (2019). Team structure and team building improve human–machine teaming with autonomous agents. Journal of Cognitive Engineering and Decision Making, 13(4), 258–278. 10.1177/1555343419867563
- Zieba S., Polet P., Vanderhaegen F., Debernard S. (2010). Principles of adjustable autonomy: A framework for resilient human–machine cooperation. Cognition, Technology & Work, 12(3), 193–203. 10.1007/s10111-009-0134-7
