Modern conversational agents such as Alexa and Google Assistant represent significant progress in speech recognition, natural language processing, and speech synthesis. But as these agents have grown more realistic, concerns have been raised over how their social nature might unconsciously shape our interactions with them. Through a survey of 500 voice assistant users, we explore whether users' relationships with their voice assistants can be quantified using the same metrics as social, interpersonal relationships, and whether this correlates with how much they trust their devices and the extent to which they anthropomorphise them. Using Knapp's staircase model of human relationships, we find not only that human-device interactions can be modelled in this way, but also that relationship development with voice assistants correlates with increased trust and anthropomorphism.