Frontiers in Psychology
2021 May 24;12:590290. doi: 10.3389/fpsyg.2021.590290

Improving Teamwork Competencies in Human-Machine Teams: Perspectives From Team Science

Kimberly Stowers 1,*, Lisa L Brady 1, Christopher MacLellan 2, Ryan Wohleber 3, Eduardo Salas 4
PMCID: PMC8181721  PMID: 34108903

Abstract

In response to calls for research to improve human-machine teaming (HMT), we present a “perspective” paper that explores techniques from computer science that can enhance machine agents for human-machine teams. As part of this paper, we (1) summarize the state of the science on critical team competencies identified for effective HMT, (2) discuss technological gaps preventing machines from fully realizing these competencies, and (3) identify ways that emerging artificial intelligence (AI) capabilities may address these gaps and enhance performance in HMT. We extend beyond extant literature by incorporating recent technologies and techniques and describing their potential for contributing to the advancement of HMT.

Keywords: human-machine team, artificial intelligence, third wave AI, explainable AI, teamwork

Introduction

Human-machine teaming (HMT)1 is increasingly relevant to a variety of modern industries, domains, and work environments. Nearly a decade ago, Amazon added robots to their warehouse facilities to participate in stocking (The Future of Work, 2019). More recently, Google initiated a research program to improve human-machine collaboration (Knight, 2017). These examples and others (e.g., IBM, Facebook; see Davenport, 2018) have demonstrated that machine agents, or machines capable of perceiving and acting upon the world autonomously (Russell and Norvig, 2009), can improve human and organizational performance by providing opportunities for increased safety and productivity.

Effective HMT is contingent upon the success of complex interactions between human and machine agents, and between these agents and their environment (Stowers et al., 2017). However, it is difficult to create machine agents that have the advanced competencies (i.e., knowledge, skills, and abilities) necessary to support these complex interactions (Sukthankar et al., 2012). Consequently, not all HMT results in heightened performance at the individual, team, or organizational level. In their respective literatures, teams researchers (e.g., Salas et al., 2009) and computer scientists (e.g., Klein et al., 2004; Ososky et al., 2012; Seeber et al., 2020) have identified competencies that are important for successful teaming, but efforts to identify promising new technologies in this area have been limited. Thus, there is a need to explore recent technological developments that may contribute to the advancement of HMT research and practice regarding effective human-machine collaboration.

In this perspective, we (1) briefly summarize the state of the science on critical team competencies identified for effective HMT, (2) highlight gaps preventing machines from fully realizing these competencies, and (3) identify emerging artificial intelligence (AI) capabilities that show promise for enhancing these competencies in machine agent teammates. Our goal is to show how HMT can integrate cutting edge advancements from computer science to improve capabilities of machines to function as teammates.

The Evolution of Human-Machine Teams

Psychologists and engineers have long explored the use of machines to augment and improve human task performance (Fitts, 1951; Dekker and Woods, 2002). In early work, machines operated as tools to facilitate taskwork by automating physical (e.g., product assembly) and cognitive (e.g., text generation) tasks. The goal of the machine was to improve the overall HMT performance (Dekker and Woods, 2002). Historically, the sociotechnical systems approach (Trist and Bamforth, 1951; Cherns, 1976) guided work design for HMT (Trist, 1981). According to this perspective, the human represents the social subsystem, and completes tasks using resources within the technical subsystem, which is represented by the machine (Eason, 2009).

As machines gained intelligence and the ability to adapt in their interactions with humans, researchers (e.g., Parasuraman and Riley, 1997) developed guidelines regarding the appropriate design and use of machines, including guidelines for their autonomy and adaptivity (e.g., Parasuraman et al., 2000). Broader frameworks describe the human, machine, and contextual inputs, and the resulting processes and states that define human-machine performance (Pina et al., 2008; Stowers et al., 2017). These frameworks also highlight the temporal nature of interaction between humans and machines.

In the last decade, the conversation has shifted from machines as tools to machines as teammates (Phillips et al., 2011; Seeber et al., 2020). The introduction of machines as a component of the social – rather than solely the technical – system has resulted in new design-related challenges (Sukthankar et al., 2012). For example, once machines attain a certain level of intelligence, humans tend to judge machines in much the same way they do their fellow humans, seeking human likeness where it may not exist (Nass et al., 1995; Groom and Nass, 2007). From the HMT perspective, this has been referred to as “teammate-likeness,” where the human perceives the machine as possessing agency, altruism, task-interdependence, relationship building, sophisticated communication, and shared mental models (SMM; Wynne and Lyons, 2018). Teammate-likeness is contingent on factors such as trust and ability (Schaefer et al., 2016) and may also be contingent on machine cues that imply emotional intelligence (e.g., empathy, perspective taking; Salovey and Mayer, 1990). Given that machines still lack the capacity for true emotional intelligence and other socio-emotional competencies that are on par with humans (e.g., Picard et al., 2001; Erol et al., 2020), creating technologies and techniques that allow machines to live up to this perceived teammate-likeness presents a unique challenge.

Gaps in Machine Competencies for HMT

It may not be necessary for machines to possess all human socio-emotional competencies to be effective teammates. However, the creation of certain capabilities allows machines to develop attitude-based competencies that have been identified as critical for the optimization of teams (cf. Salas et al., 2009), such as cohesion and mutual (rather than one-way) trust (Groom and Nass, 2007). Recent work in team science has emphasized three team competencies that are transportable across contexts (Salas et al., 2018), namely communication, coordination, and adaptability (hereafter referred to as adaptation). These competencies, referred to as transportable teamwork competencies, are applicable in any effective team, regardless of the team or task environment (Salas et al., 2018).

Although team researchers discuss communication, coordination, and adaptation strictly in the realm of human teams (Salas et al., 2018), others have highlighted their importance to HMT (Stowers et al., 2017; Seeber et al., 2020). These competencies are considered universally relevant collaborative processes (Salas et al., 2018; Seeber et al., 2020). Because of their wide applicability across team and task types, we feature them in this perspective piece as critical areas where additional technological advancements could improve HMT performance on a large scale. Here, we describe the state of the science on these competencies in HMT and identify gaps where new technologies can provide benefit.

Communication

Communication, which refers to the process of exchanging information between teammates (Salas et al., 2009), is important for team performance as it contributes to the development and maintenance of SMMs and the successful execution of many necessary team processes, including planning and mission analysis (Salas et al., 2009). In HMT research, the process of information exchange between humans and machines has been examined via the concept of transparency, defined as “the quality of an interface (e.g., visual, linguistic) pertaining to its abilities to afford an operator’s comprehension about an intelligent agent’s intent, performance, future plans, and reasoning process” (Chen et al., 2014, p. 2).

Although perfect transparency has yet to be realized in HMT (Nam and Lyons, 2020), machines have gained the capacity to share information with humans and coordinate more effectively in joint tasks (Lyons, 2013; Chen et al., 2014, 2018). This includes using turn-taking (Chao and Thomaz, 2016), which is integral to human-machine fluency (Hoffman, 2019). Features such as turn-taking and the ability to recognize human language (Tellex et al., 2020) can enhance the bidirectionality of communication between humans and machines, making HMT in general more teamlike and efficient (Chen et al., 2018).

Researchers studying effective HMT have identified that the trust a human member has in a machine is a critical factor in successful team communication (Nam and Lyons, 2020) as well as HMT fluency (Hoffman, 2019). To that end, researchers have looked at how communication can promote trust, and established guidelines for the effective design of information for promoting trust and overall performance (Chen et al., 2014; Sanneman and Shah, 2020). In addition to guidelines regarding the quantity and design of information, researchers have investigated the quality of information shared by HMT members and have applied frameworks of human situation awareness to understand and improve communication processes, such as the mitigation of confusion (Chen et al., 2014; Stowers et al., 2020).

Despite improvements in communication-related abilities, not all machines apply these skills with equal success, leaving gaps between high-performing machines and high-performing HMTs. For example, machines utilizing neural networks tend to outperform other machines but generally communicate more poorly because they rely on distributed statistical representations (Nam and Lyons, 2020). Teaming with machines that utilize neural networks would therefore benefit from improved explainability (Gunning and Aha, 2019). Technological advancements in the area of explainable AI (XAI) show promise for enhancing transparency between humans and machines that utilize neural networks. In short, XAI is AI that can be understood by humans (Gunning and Aha, 2019; Sanneman and Shah, 2020). A goal of XAI research (Gunning and Aha, 2019) is to identify approaches for communicating AI models and their inferences in a format that human operators can comprehend. This research has yielded XAI-afforded competencies (e.g., transparency) that contribute to understanding and trust between human(s) and machine(s) (e.g., Nam and Lyons, 2020), thus impacting HMT communication.
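
To make the idea concrete, the following is a minimal sketch of one common XAI pattern: decomposing a model’s output into per-feature contributions and rendering them as a plain-language message a human teammate can read. The model, feature names, weights, and baseline values are hypothetical toy choices, not taken from any system cited above.

```python
# Toy XAI sketch: a stand-in "model" (a linear score) is explained by
# attributing its output to each input feature relative to a baseline,
# then translating those attributions into plain language.
# All feature names, weights, and values are illustrative assumptions.

WEIGHTS = {"obstacle_density": 0.6, "sensor_noise": 0.3, "battery_level": -0.4}
BASELINE = {"obstacle_density": 0.2, "sensor_noise": 0.1, "battery_level": 0.9}

def predict(features):
    """Linear score standing in for a learned model's output."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

def explain(features):
    """Attribute the score to each feature and phrase it for a human."""
    contributions = {
        name: WEIGHTS[name] * (features[name] - BASELINE[name])
        for name in features
    }
    # Rank features by how strongly they pushed the score up or down.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"Estimated risk score: {predict(features):.2f}"]
    for name, contribution in ranked:
        direction = "raised" if contribution > 0 else "lowered"
        lines.append(f"- {name} {direction} the score by {abs(contribution):.2f}")
    return "\n".join(lines)

if __name__ == "__main__":
    observation = {"obstacle_density": 0.8, "sensor_noise": 0.2, "battery_level": 0.3}
    print(explain(observation))
```

In a full system, the attribution step would come from an XAI method appropriate to the underlying model (e.g., the approaches surveyed by Gunning and Aha, 2019), but the end goal is the same: inferences rendered in terms the human can understand and act on.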

Communication in HMT must be a two-way street. Machines must be able to effectively share information in a way that humans can understand, but doing so requires that machines accurately model human comprehension of information. This ties in closely with the second transportable teamwork competency: coordination.

Coordination

While communication refers to the information-sharing process, coordination in human teams refers to the organization of team members’ knowledge, skills, and behaviors to meet a specific goal (Salas et al., 2009). In HMT literature, coordination is defined as the process through which humans and machines manage “dependencies between activities” (Malone and Crowston, 1990). In effective team coordination, task-relevant information is communicated in a timely manner, while unnecessary communication is avoided. In this way, effective communication processes can be seen as necessary but not sufficient for effective HMT coordination.

Human-machine teaming scholars have identified three requirements for effective coordination (Klein et al., 2005): members must (1) each be reliable and able to predict each other’s behaviors, (2) possess common historical and present knowledge (Clark, 1996), and (3) be able to re-direct or help each other in tasks (Christoffersen and Woods, 2002). To this end, a machine is considered an effective coordinator if it is reliable, directable, able to communicate intentions, and able to recognize the status and intentions of other team members (Klein et al., 2005). These qualities allow the machine to engage in the communication and creation of SMMs needed for successful coordination (Matthews et al., 2021).

The primary gap in the development of coordination in HMT lies in the degree to which machines can engage in implicit coordination. Implicit coordination, which refers to the process of synchronizing team member actions based on assumptions of what each teammate is most likely to do (Wittenbaum and Stasser, 1996), is helpful in high workload situations as it reduces “communication overhead” (MacMillan et al., 2004) and allows teammates to focus on the task at hand with minimal distraction. While machines currently possess the ability to detect certain implicit cues (e.g., via the recognition of facial expressions; Picard et al., 2001), they are limited in their ability to detect contextual cues. For example, because coordination involves a complex and varying presentation of implicit communication cues (Lackey et al., 2011), it is difficult and expensive to support machine cue perception on a human level. Detection, interpretation, and reasoning about these cues from a human perspective (Baker et al., 2011) is imperative to ensure effective coordination.

To this end, machines would benefit from developing a theory of mind (Baker et al., 2011). This would translate observations of teammates’ behaviors into a computational model of what they know/do not know, what their goals and preferences are, what capabilities they have, and what behaviors they might take next. Such a model could then be employed to simulate what a teammate might do in different situations or what options they would prefer their machine teammate to take. This kind of capability would support implicit coordination with teammates by enabling the machine to anticipate its teammates’ behaviors and expectations and then to adapt its own behavior to align with those. We elaborate more on adaptation next, including the state of the science and related opportunities for improvement.
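
Before turning to adaptation, the sketch below illustrates this idea in miniature, loosely following the Bayesian theory-of-mind framing of Baker et al. (2011): the machine maintains a belief over a teammate’s possible goals, updates it from observed actions, and uses it to anticipate the teammate’s next action. The goals, actions, and likelihood values are hypothetical placeholders, not drawn from any cited system.

```python
# Minimal Bayesian theory-of-mind sketch: infer a teammate's likely goal
# from observed actions, then predict their next action. The goal set,
# action set, and likelihood table are illustrative placeholders.

GOALS = ["search_left_wing", "search_right_wing"]

# P(action | goal): how likely each observable action is under each goal.
LIKELIHOOD = {
    "search_left_wing":  {"move_left": 0.7, "move_right": 0.1, "scan": 0.2},
    "search_right_wing": {"move_left": 0.1, "move_right": 0.7, "scan": 0.2},
}

def update_belief(prior, observed_action):
    """Bayesian update of P(goal) after observing one teammate action."""
    unnormalized = {g: prior[g] * LIKELIHOOD[g][observed_action] for g in GOALS}
    total = sum(unnormalized.values())
    return {g: p / total for g, p in unnormalized.items()}

def predict_next_action(belief):
    """Marginalize over goals to obtain P(next action)."""
    actions = LIKELIHOOD[GOALS[0]].keys()
    return {a: sum(belief[g] * LIKELIHOOD[g][a] for g in GOALS) for a in actions}

if __name__ == "__main__":
    belief = {g: 1.0 / len(GOALS) for g in GOALS}   # uniform prior over goals
    for action in ["move_left", "scan", "move_left"]:
        belief = update_belief(belief, action)
    print("Inferred goal probabilities:", belief)
    print("Predicted next action distribution:", predict_next_action(belief))
```

In practice, the likelihoods would be learned or derived from a planner rather than hand-specified, but it is this running belief over goals that would let a machine coordinate implicitly instead of querying its teammate at every step.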

Adaptation

Adaptation in HMT has been examined in two ways: (1) adaptability (i.e., human-controlled adaptation; Miller et al., 2005) and (2) adaptiveness (i.e., machine-controlled adaptation). For example, adaptability can be achieved by supporting human choice regarding the machine’s role, behavioral parameters, and level of autonomy. The human might decide a machine teammate’s tasking order, such as choosing the next lesson given by an intelligent tutoring system (Chou et al., 2015). However, research has shown that humans do not always effectively allocate tasking to automated systems (Lin et al., 2019). To overcome this human limitation, machines can exercise adaptiveness by prompting a human to take control of a task (Kaber et al., 2005), or by assuming control in cases of suboptimal task management by human teammates. However, the latter solution poses a threat to human agency (Wohleber et al., 2020) and must be carefully executed.
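
As a rough illustration of how such a hand-off might be staged (a sketch only; the task name, thresholds, and escalation rule are invented, not drawn from the cited studies), a machine teammate could first prompt the human about a neglected task and assume control only if the prompt goes unanswered, preserving human agency where possible.

```python
# Illustrative adaptiveness policy: the machine first prompts the human to
# reallocate attention and only assumes control of a neglected task if the
# prompt goes unanswered. Thresholds and task names are invented.

from dataclasses import dataclass

@dataclass
class TaskStatus:
    name: str
    seconds_neglected: float   # time since the human last attended to the task
    prompted: bool = False     # whether the machine has already prompted

PROMPT_THRESHOLD = 20.0    # seconds of neglect before prompting the human
TAKEOVER_THRESHOLD = 60.0  # seconds of neglect before the machine assumes control

def adaptation_step(task: TaskStatus) -> str:
    """Decide whether to wait, prompt the human, or assume control."""
    if task.seconds_neglected >= TAKEOVER_THRESHOLD and task.prompted:
        return f"assume control of '{task.name}' and notify the human of the takeover"
    if task.seconds_neglected >= PROMPT_THRESHOLD and not task.prompted:
        task.prompted = True
        return f"prompt the human to attend to '{task.name}'"
    return "wait"

if __name__ == "__main__":
    task = TaskStatus(name="route_monitoring", seconds_neglected=25.0)
    print(adaptation_step(task))   # -> prompt the human ...
    task.seconds_neglected = 75.0
    print(adaptation_step(task))   # -> assume control ...
```

Prompting before taking over is one way to respect the agency concerns raised by Wohleber et al. (2020), though the appropriate escalation policy will be task- and context-dependent.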

In team science, adaptation or adaptability as a teamwork competency is examined more broadly and refers to the adjustment of strategies and behaviors in response to changes in the team’s circumstances (Driskell et al., 2018). In considering adaptation through this lens, machines are capable of detecting changes in the internal team and external environments (Lackey et al., 2011), allowing them to engage both adaptive and adaptable mechanisms as designed. They may also detect some underlying causes of changing environments through common sense reasoning (Morgenstern et al., 2016), though this capability remains limited by the datasets used to train common sense (Hao, 2020).

The limitation of machines having to rely on datasets to train common sense has led researchers to explore “third wave AI” capabilities for informing machine knowledge and adaptation. The Defense Advanced Research Projects Agency (DARPA) has argued that to move beyond knowledge-based AI methods (first wave) and statistical machine learning AI methods (second wave), we need approaches that can integrate both first and second wave methods to support contextual understanding and adaptation (third wave, Launchbury, 2017). Most machine learning systems operate by identifying correlational relationships between variables. In contrast, as part of third wave AI, causal and counterfactual models (Pearl, 2019) aim to understand the causal relationships between variables. By modeling causal relationships, machines can better support counterfactual inference; i.e., they can generalize from observed operating conditions to unobserved conditions.
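
To illustrate the distinction in miniature, the sketch below encodes a toy structural causal model in the spirit of Pearl (2019); the variables, equation, and numbers are invented. By recovering the unobserved noise from a single observed episode (abduction) and replaying the structural equation under a hypothetical condition, the machine can answer a counterfactual about an operating condition it never actually observed.

```python
# Toy structural causal model (SCM) sketch in the spirit of Pearl (2019).
# Structural equation (invented for illustration): drift = 2.0 * wind + noise,
# where noise is an unobserved exogenous factor specific to the episode.

def drift_equation(wind, noise):
    return 2.0 * wind + noise

def counterfactual_drift(observed_wind, observed_drift, counterfactual_wind):
    """Abduction-action-prediction: recover the exogenous noise consistent
    with the observed episode, then replay the structural equation under a
    hypothetical wind value."""
    noise = observed_drift - 2.0 * observed_wind        # abduction
    return drift_equation(counterfactual_wind, noise)   # action + prediction

if __name__ == "__main__":
    # Observed episode: wind speed 5, resulting drift 12 (so noise = 2).
    observed_wind, observed_drift = 5.0, 12.0
    # "What would the drift have been had the wind been 1 instead of 5?"
    print(counterfactual_drift(observed_wind, observed_drift, counterfactual_wind=1.0))
    # -> 4.0, a condition the machine never directly observed
```

A purely correlational model trained only on episodes with strong wind could not answer this query reliably, which is what makes causal and counterfactual modeling attractive for adaptation to unobserved conditions.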

In the last section, we suggested that developing theory of mind in machines (Baker et al., 2011) would be beneficial to HMT coordination and adaptation. For the adaptation piece to be fully realized, it is necessary for machines to be able to not only recognize their teammates’ knowledge and behaviors, but also anticipate and respond to new knowledge and behaviors should they arise. Machines that create and apply causal links to new scenarios should be able to do this and therefore be more effective at modifying their behaviors to engage in a truly adaptive team.

Conclusion and a Path Forward

Communication, coordination, and adaptation have been identified as critical to the success of both human teaming and HMT, but machines have yet to fully realize the uniquely human cognitive abilities that are necessary for effective teaming (Matthews et al., 2021). However, there are new AI capabilities that could allow machines to maximize these competencies. These capabilities offer a means for machines to begin meeting the requirements needed for effective collaboration with humans.

The capabilities afforded by recent technological advancements show promise for allowing machines to possess the transportable teamwork competencies identified as universally critical to teams (Salas et al., 2018; Seeber et al., 2020). For example, machines might leverage a theory of mind reasoning capability to build a computational model of their teammate based on observations of their behavior. This model might be used to infer what teammates know, what information they have available to make decisions, and what they are likely to do next – enabling better implicit coordination with humans. This theory of mind model, in conjunction with the ability to generate human-explainable outputs (via XAI), will also enable machines to determine when and how best to communicate with teammates, further enhancing the trust and ability that afford machine teammate-likeness. Finally, the creation of causal and counterfactual inference capabilities unique to third wave AI will allow machines to be truly adaptive teammates that possess the ability to recognize and reason about the underlying factors that produce changes in the HMT and environment.

While these new technological approaches show promise, more work is needed to refine them to the level required for effective teamwork. Current AI research focuses on developing specific learning and performance capabilities and often does not incorporate findings or insights from the teaming and HMT literature. For example, consider OpenAI Five, a team of five trained neural-network models that can coordinate to beat a team of five top human champions at Dota 2, a multiplayer online battle arena game (Berner et al., 2019). While the five machines were able to coordinate effectively with each other, a subsequent match with an HMT showed that the machines performed worse when partnered with humans. In examining this phenomenon, Carroll et al. (2019) found that the machines were limited by the knowledge they gained from initial training with fellow machines.

By contrast, if the OpenAI Five agents possessed the capabilities outlined in this perspective piece (XAI, theory of mind, and third wave counterfactual prediction), they should better support communication, coordination, and adaptation with humans. As this example shows, more work is needed to better understand how insights from human teaming and HMT might be integrated into the development of these emerging machines. Increasing collaboration between computer scientists and HMT researchers in examining these insights would be beneficial. With machines operating at new levels of sophistication, true HMT may become possible at a larger scale than seen before.

Author Contributions

KS, LB, CM, RW, and ES contributed to the writing and content of this paper. All authors contributed to the article and approved the submitted version.

Conflict of Interest

RW was employed by company Soar Technology, Inc.

The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The views, opinions, and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the United States Government.

Funding. This work was supported under the DARPA TAILOR program (award no. HR00111990055).

1We refer to human-machine teams (HMT) as humans and machines working together to accomplish a goal, with machines being autonomous enough to engage in decision making (Seeber et al., 2020). Limited forms of HMT have already begun, but additional sophistication still needs to be achieved before machines can be considered true teammates (Sukthankar et al., 2012).

References

1. Baker C., Saxe R., Tenenbaum J. (2011). “Bayesian theory of mind: modeling joint belief-desire attribution,” in Proceedings of the Annual Meeting of the Cognitive Science Society; July 20–23, 2011; No. 33.
2. Berner C., Brockman G., Chan B., Cheung V., Dębiak P., Dennison C., et al. (2019). Dota 2 with large scale deep reinforcement learning. arXiv [Preprint].
3. Carroll M., Shah R., Ho M. K., Griffiths T., Seshia S., Abbeel P., et al. (2019). “On the utility of learning about humans for human-AI coordination,” in Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019. eds. H. M. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. A. Fox and R. Garnett. December 8–14, 2019; Vancouver, BC, Canada, 5175–5186.
4. Chao C., Thomaz A. (2016). Timed petri nets for fluent turn-taking over multimodal interaction resources in human-robot collaboration. Int. J. Robot. Res. 35, 1330–1353. 10.1177/0278364915627291
5. Chen J. Y., Lakhmani S. G., Stowers K., Selkowitz A. R., Wright J. L., Barnes M. (2018). Situation awareness-based agent transparency and human-autonomy teaming effectiveness. Theor. Issues Ergon. Sci. 19, 259–282. 10.1080/1463922X.2017.1315750
6. Chen J. Y., Procci K., Boyce M., Wright J., Garcia A., Barnes M. (2014). Situation Awareness–Based Agent Transparency. Report No. ARL-TR-6905. Aberdeen Proving Ground, MD: U.S. Army Research Laboratory.
7. Cherns A. (1976). The principles of sociotechnical design. Hum. Relat. 29, 783–792. 10.1177/001872677602900806
8. Chou C. Y., Lai K. R., Chao P. Y., Lan C. H., Chen T. H. (2015). Negotiation based adaptive learning sequences: combining adaptivity and adaptability. Comput. Educ. 88, 215–226. 10.1016/j.compedu.2015.05.007
9. Christoffersen K., Woods D. D. (2002). “How to make automated systems team players,” in Advances in Human Performance and Cognitive Engineering Research. Vol. 2. ed. Salas E. (Bingley: Emerald Group Publishing Limited), 1–12.
10. Clark H. (1996). Using Language. Cambridge: Cambridge University Press.
11. Davenport T. H. (2018). The AI Advantage: How to Put the Artificial Intelligence Revolution to Work. Cambridge, MA: MIT Press.
12. Dekker S. W. A., Woods D. D. (2002). MABA-MABA or abracadabra? Progress on human-automation co-ordination. Cogn. Tech. Work 4, 240–244. 10.1007/s101110200022
13. Driskell J. E., Salas E., Driskell T. (2018). Foundations of teamwork and collaboration. Am. Psychol. 73, 334–348. 10.1037/amp0000241
14. Eason K. (2009). Before the internet: the relevance of socio-technical systems theory to emerging forms of virtual organisation. Int. J. Sociotechnol. Knowled. Dev. 1, 23–32. 10.4018/jskd.2009040103
15. Erol B. A., Majumdar A., Benavidez P., Rad P., Choo K.-K. R., Jamshidi M. (2020). Toward artificial emotional intelligence for cooperative social human–machine interaction. IEEE Transact. Comput. Soc. Syst. 7, 234–246. 10.1109/TCSS.2019.2922593
16. Fitts P. M. (1951). Human Engineering for an Effective Air-Navigation and Traffic-Control System. Washington, DC: National Research Council, Division of Anthropology and Psychology, Committee on Aviation Psychology.
17. Groom V., Nass C. (2007). Can robots be teammates? Benchmarks in human-robot teams. Interact. Stud. 8, 493–500. 10.1075/is.8.3.10gro
18. Gunning D., Aha D. (2019). DARPA’s explainable artificial intelligence (XAI) program. AI Mag. 40, 44–58. 10.1609/aimag.v40i2.2850
19. Hao K. (2020). AI Still Doesn’t Have the Common Sense to Understand Human Language. MIT Technology Review. Available at: https://www.technologyreview.com/2020/01/31/304844/ai-common-sense-reads-human-language-ai2/ (Accessed December 08, 2020).
20. Hoffman G. (2019). Evaluating fluency in human–robot collaboration. IEEE Transact. Hum. Mach. Syst. 49, 209–218. 10.1109/THMS.2019.2904558
21. Kaber D. B., Wright M. C., Prinzel L. J., Clamann M. P. (2005). Adaptive automation of human-machine system information processing functions. Hum. Factors 47, 730–741. 10.1518/001872005775570989
22. Klein G., Feltovich P. J., Bradshaw J. M., Woods D. D. (2005). “Common ground and coordination in joint activity,” in Organizational Simulation. Vol. 53. eds. Rouse W. R., Boff K. B. (New York, NY: Wiley), 139–184.
23. Klein G., Woods D. D., Bradshaw J. M., Hoffman R. R., Feltovich P. J. (2004). Ten challenges for making automation a “team player” in joint human-agent activity. IEEE Intell. Syst. 19, 91–95. 10.1109/MIS.2004.74
24. Knight W. (2017). Your Best Teammate Might Someday be an Algorithm. MIT Technology Review. Available at: https://www.technologyreview.com/2017/07/10/4384/your-best-teammate-might-someday-be-an-algorithm/ (Accessed December 08, 2020).
25. Lackey S., Barber D., Reinerman L., Badler N. I., Hudson I. (2011). Defining next-generation multi-modal communication in human robot interaction. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 55, 461–464. 10.1177/1071181311551095
26. Launchbury J. (2017). A DARPA Perspective on Artificial Intelligence [PowerPoint slides]. Defense Advanced Research Projects Agency. Available at: https://www.darpa.mil/about-us/darpa-perspective-on-ai (Accessed April 30, 2021).
27. Lin J., Matthews G., Wohleber R. W., Funke G. J., Calhoun G. L., Ruff H. A., et al. (2019). Overload and automation-dependence in a multi-UAS simulation: task demand and individual difference factors. J. Exp. Psychol. Appl. 26, 218–235. 10.1037/xap0000248
28. Lyons J. B. (2013). “Being transparent about transparency: a model for human-robot interactions,” in AAAI Spring Symposium: Trust and Autonomous Systems.
29. MacMillan J., Entin E., Serfaty D. (2004). “Communication overhead: the hidden cost of team cognition,” in Team Cognition: Understanding the Factors that Drive Process and Performance. eds. Salas E., Fiore S. M. (Washington, DC: American Psychological Association), 61–82.
30. Malone T. W., Crowston K. (1990). “What is coordination theory and how can it help design cooperative work systems?” in Proceedings of the Conference on Computer-Supported Cooperative Work; October 7, 1990; Los Angeles, CA, 357–370.
31. Matthews G., Panganiban A. R., Lin J., Long M., Schwing M. (2021). “Super-machines or sub-humans: mental models and trust in intelligent autonomous systems,” in Trust in Human-Robot Interaction. eds. Nam C. S., Lyons J. B. (Academic Press), 59–82.
32. Miller C. A., Funk H., Goldman R., Meisner J., Wu P. (2005). “Implications of adaptive vs. adaptable UIs on decision making: why ‘automated adaptiveness’ is not always the right answer,” in Proceedings of the 1st International Conference on Augmented Cognition; July 22–27, 2005.
33. Morgenstern L., Davis E., Ortiz C. L. (2016). Planning, executing, and evaluating the winograd schema challenge. AI Mag. 37, 50–54. 10.1609/aimag.v37i1.2639
34. Nam C. S., Lyons J. B. (eds.) (2020). Trust in Human-Robot Interaction. Academic Press.
35. Nass C., Moon Y., Fogg B. J., Reeves B., Dryer C. (1995). “Can computer personalities be human personalities?” in Conference Companion on Human Factors in Computing Systems; May 1995; 228–229.
36. Ososky S., Schuster D., Jentsch F., Fiore S., Shumaker R., Lebiere C., et al. (2012). “The importance of shared mental models and shared situation awareness for transforming robots from tools to teammates,” in Proceedings of SPIE 8387, Unmanned Systems Technology XIV; April 25–27, 2012; 838710–838711.
37. Parasuraman R., Riley V. (1997). Humans and automation: use, misuse, disuse, abuse. Hum. Factors 39, 230–253. 10.1518/001872097778543886
38. Parasuraman R., Sheridan T. B., Wickens C. D. (2000). A model for types and levels of human interaction with automation. IEEE Transact. Syst. Man Cybern. A Syst. Hum. 30, 286–297. 10.1109/3468.844354
39. Pearl J. (2019). The seven tools of causal inference, with reflections on machine learning. Commun. ACM 62, 54–60. 10.1145/3241036
40. Phillips E., Ososky S., Grove J., Jentsch F. (2011). From tools to teammates: toward the development of appropriate mental models for intelligent robots. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 55, 1491–1495. 10.1177/1071181311551310
41. Picard R. W., Vyzas E., Healey J. (2001). Toward machine emotional intelligence: analysis of affective physiological state. IEEE Transact. Patt. Anal. Mach. Intell. 23, 1175–1191. 10.1109/34.954607
42. Pina P. E., Cummings M. L., Crandall J. W., Penna M. D. (2008). “Identifying generalizable metric classes to evaluate human-robot teams,” in Proceedings of the 3rd Annual Conference on Human–Robot Interaction; March 12–15, 2008; New York, NY: ACM, 13–20.
43. Russell S., Norvig P. (2009). Artificial Intelligence: A Modern Approach. 3rd Edn. Upper Saddle River, NJ: Prentice Hall.
44. Salas E., Reyes D. L., McDaniel S. H. (2018). The science of teamwork: progress, reflections, and the road ahead. Am. Psychol. 73, 593–600. 10.1037/amp0000334
45. Salas E., Rosen M. A., Burke C. S., Goodwin G. F. (2009). “The wisdom of collectives in organizations: an update of the teamwork competencies,” in Team Effectiveness in Complex Organizations: Cross-Disciplinary Perspectives and Approaches. eds. Salas E., Goodwin G., Burke C. S. (New York, NY: Psychology Press), 39–79.
46. Salovey P., Mayer J. D. (1990). Emotional intelligence. Imagin. Cogn. Pers. 9, 185–211. 10.2190/DUGG-P24E-52WK-6CDG
47. Sanneman L., Shah J. A. (2020). “A situation awareness-based framework for design and evaluation of explainable AI,” in International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems; May 9–13, 2020; Cham: Springer, 94–110.
48. Schaefer K. E., Chen J. Y. C., Szalma J. L., Hancock P. A. (2016). A meta-analysis of factors influencing the development of trust in automation: implications for understanding autonomy in future systems. Hum. Factors 58, 377–400. 10.1177/0018720816634228
49. Seeber I., Bittner E., Briggs R. O., de Vreede T., de Vreede G. J., Elkins A., et al. (2020). Machines as teammates: a research agenda on AI in team collaboration. Inf. Manag. 57:103174. 10.1016/j.im.2019.103174
50. Stowers K., Kasdaglis N., Rupp M. A., Newton O. B., Chen J. Y., Barnes M. J. (2020). The IMPACT of agent transparency on human performance. IEEE Transact. Hum. Mach. Syst. 50, 245–253. 10.1109/THMS.2020.2978041
51. Stowers K., Oglesby J., Sonesh S., Leyva K., Iwig C., Salas E. (2017). A framework to guide the assessment of human–machine systems. Hum. Factors 59, 172–188. 10.1177/0018720817695077
52. Sukthankar G., Shumaker R., Lewis M. (2012). “Intelligent agents as teammates,” in Theories of Team Cognition: Cross-Disciplinary Perspectives. eds. Salas E., Fiore S. M., Letsky M. P. (New York, NY: Routledge), 313–343.
53. Tellex S., Gopalan N., Kress-Gazit H., Matuszek C. (2020). Robots that use language. Annu. Rev. Control Robot. Auton. Syst. 3, 25–55. 10.1146/annurev-control-101119-071628
54. The Future of Work (2019). A VICE News Special Report. Reported by Krishna Andavolu (Vice News Correspondent), Vice Media, HBO. Available at: https://www.youtube.com/watch?v=_iaKHeCKcq4
55. Trist E. (1981). “The evolution of socio-technical systems,” in Perspectives on Organizational Design and Behaviour. eds. Van de Ven A., Joyce W. (Wiley Interscience).
56. Trist E. L., Bamforth K. W. (1951). Some social and psychological consequences of the longwall method of coal-getting: an examination of the psychological situation and defences of a work group in relation to the social structure and technological content of the work system. Hum. Relat. 4, 3–38. 10.1177/001872675100400101
57. Wittenbaum G. M., Stasser G. (1996). “Management of information in small groups,” in What’s Social About Social Cognition? Research on Socially Shared Cognition in Small Groups. eds. Nye J. L., Brower A. M. (Thousand Oaks, CA: Sage), 3–28.
58. Wohleber R. W., Stowers K., Chen J. Y. C., Barnes M. (2020). “Conducting polyphonic human-robot communication: mastering crescendos and diminuendos in transparency,” in Advances in Simulation and Digital Human Modeling. Advances in Intelligent Systems and Computing. Vol. 1206. eds. Cassenti D., Scataglini S., Rajulu S., Wright J. (Cham: Springer), 10–17.
59. Wynne K. T., Lyons J. B. (2018). An integrative model of autonomous agent teammate-likeness. Theor. Issues Ergon. Sci. 19, 353–374. 10.1080/1463922X.2016.1260181
