As generative AI systems become increasingly embedded in collaborative work, they are evolving from visible tools into human-like communicative actors that participate socially rather than merely providing information. Yet little is known about how such agents shape team dynamics when their artificial nature goes unrecognised, a growing concern as human-like AI is deployed at scale in education, organisations, and civic contexts where collaboration underpins collective outcomes. In a large-scale mixed-design experiment (N = 905), we examined how AI teammates with distinct communicative personas (supportive or contrarian) affected collaboration across analytical, creative, and ethical tasks. Participants worked in triads that were either fully human or hybrid human-AI teams, without being informed of any AI involvement. Results show that participants had limited ability to detect AI teammates, yet AI personas exerted robust social effects: contrarian personas reduced psychological safety and discussion quality, whereas supportive personas improved discussion quality without affecting psychological safety. These effects persisted after accounting for individual differences in detectability, revealing a dissociation between influence and awareness that we term the social blindspot. Linguistic analyses confirmed that personas were enacted through systematic differences in affective and relational language, which partially mediated effects on discussion quality but left effects on psychological safety largely direct. Together, the findings demonstrate that AI systems can tacitly regulate collaborative norms through persona-level cues even when users remain unaware of their presence. We argue that persona design constitutes a form of social governance in hybrid teams, with implications for the responsible deployment of AI in collective settings.