Research on Collaborative Problem Solving (CPS) has traditionally examined how humans rely on one another cognitively and socially to accomplish tasks together. With the rapid advancement of AI and large language models, however, a new question emerges: what happens to team dynamics when one of the "teammates" is not human? In this study, we investigate how the integration of an AI teammate -- a fully autonomous GPT-4 agent with social, cognitive, and affective capabilities -- shapes the socio-cognitive dynamics of CPS. We analyze discourse data collected from human-AI teaming (HAT) experiments conducted on a novel platform specifically designed for HAT research. Using two natural language processing (NLP) methods, Linguistic Inquiry and Word Count (LIWC) and Group Communication Analysis (GCA), we found that AI teammates often assumed the role of dominant cognitive facilitators, guiding, planning, and driving group decision-making. However, they did so in a socially detached manner, frequently pushing their agenda in a verbose and repetitive way. By contrast, humans working with AI used more language reflecting social processes, suggesting that they assumed more socially oriented roles. Our study highlights how learning analytics can provide critical insights into the socio-cognitive dynamics of human-AI collaboration.
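As a rough illustration of the dictionary-based word-counting idea behind LIWC-style analysis, the sketch below computes per-speaker proportions of "social" and "cognitive" words in a chat transcript. The category lists, transcript, and function names are hypothetical toy examples for exposition only; the study itself used the LIWC tool and GCA, not this code.

```python
# Minimal sketch of dictionary-based category counting in the spirit of LIWC.
# All word lists and the transcript are made-up, illustrative examples.
import re
from collections import Counter

# Toy category dictionaries (real LIWC dictionaries are proprietary and far larger).
CATEGORIES = {
    "social": {"we", "us", "our", "team", "together", "thanks", "agree"},
    "cognitive": {"think", "because", "plan", "should", "know", "decide"},
}

def category_proportions(utterance: str) -> dict:
    """Return the share of tokens in each toy category for one utterance."""
    tokens = re.findall(r"[a-z']+", utterance.lower())
    total = len(tokens) or 1
    return {
        name: sum(tok in words for tok in tokens) / total
        for name, words in CATEGORIES.items()
    }

# Hypothetical two-turn exchange: one AI teammate turn, one human turn.
transcript = [
    ("AI", "I think we should plan the next step and decide on the route."),
    ("Human", "Thanks, I agree -- let's work on it together as a team."),
]

speaker_totals = {}
speaker_turns = Counter()
for speaker, text in transcript:
    speaker_totals.setdefault(speaker, Counter()).update(category_proportions(text))
    speaker_turns[speaker] += 1

# Average category proportion per turn for each speaker.
for speaker, totals in speaker_totals.items():
    averages = {name: round(totals[name] / speaker_turns[speaker], 3) for name in CATEGORIES}
    print(speaker, averages)
```

In practice, per-speaker category profiles of this kind would then be compared across AI and human teammates; GCA additionally models temporal and sequential features of the discourse, which this word-counting sketch does not attempt to capture.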