
This study explored how AI is integrated into human teams (Human-AI Teams, HATs) and the hesitations people hold about that integration. The authors interviewed 30 professionals.
Not a summary, but some extracts:
· “As AI takes on more complex roles in the workplace, it is increasingly expected to act as a teammate rather than just a tool”
· HATs “must develop a shared understanding of the task, teammates, and environment and adjust as conditions change … These activities form the foundation of team cognition: the shared knowledge and reasoning necessary for effective team”
· Participants raised concerns about AI integration around three themes: 1) trust and reliability, 2) decline of human teammates' cognitive skills, and 3) communication and emotional processing
· For trust and reliability, people expressed uncertainty about trusting AI teammates and “worried that AI systems might introduce bad information or make errors that would be difficult for teammates to catch”
· This uncertainty could weaken the shared understanding and situational awareness that normally bind teams. Some participants "drew parallels with the Internet's widespread misinformation, fearing that such a background might lead AI teammates to offer unreliable advice, misguide the team, and pollute shared knowledge"
· People were also concerned about cognitive skill decline, since "overreliance on AI could restrict human knowledge and analytical capabilities"
· And “rather than supporting team members’ thinking, AI could encourage them to disengage or outsource their responsibilities”
· Moreover, “When people feel like executors instead of contributors, the distributed thinking and mutual monitoring that define team cognition may break down”
· For comms and emotional processing, participants noted that “AI teammates cannot currently understand social or context awareness needed to interpret team communication patterns”
· Importantly, “effective teaming relies on more than just natural language, and it involves recognizing tone, body language and group dynamics”
· This “highlights how AI’s inability to read nonverbal cues may hinder its ability to interpret urgency, uncertainty, or disagreement, all of which is critical to establishing team cognition”
· AI teammates may also push ahead with decisions despite lacking a nuanced understanding of "interpersonal dynamics, they could misinterpret tension, escalate conflicts, or assign roles in ways that damage teamwork"
· The skill-decline concern is echoed by other work "emphasizing the need for clear, interdependent roles within HATs … [extending work] by showing that poorly defined roles may limit team efficiency and could actively degrade team cognition by discouraging human teammates from contributing to shared goals"

Ref: Basappa, R., Lancaster, C., Mallick, R., Flathmann, C., & McNeese, N. (2025). In Proceedings of the Human Factors and Ergonomics Society Annual Meeting. Los Angeles, CA: SAGE Publications.

Study link: https://doi.org/10.1177/10711813251361002