Grice's Cooperative Principle (1975) describes the implicit maxims that guide conversation between humans. As humans interact with non-human dialogue systems more frequently and in a broader scope, an important question emerges: what principles govern those interactions? The present study addresses this question by evaluating human-AI interactions using Grice's four maxims; we demonstrate that humans do, indeed, apply these maxims to interactions with AI, even making explicit references to the AI's performance through a Gricean lens. Twenty-three participants interacted with an American English-speaking Alexa and rated and discussed their experience with an in-lab researcher. Researchers then reviewed each exchange, identifying those that might relate to Grice's maxims: Quantity, Quality, Manner, and Relevance. Many instances of explicit user frustration stemmed from violations of Grice's maxims. Quantity violations were noted when responses contained too little information, but not when they contained too much; Quality violations were rare, indicating trust in Alexa's responses; Manner violations focused on speed and humanness. Relevance violations were the most frequent, and they appeared to be the most frustrating. While the maxims help describe many of the issues participants encountered, other issues do not fit neatly into Grice's framework. Participants were particularly averse to Alexa initiating exchanges or making unsolicited suggestions. To address this gap, we propose the addition of a maxim of human Priority to describe human-AI interaction: humans and AIs are not conversational equals, and human initiative takes priority. We suggest that applying Grice's Cooperative Principle to human-AI interactions is beneficial both from an AI development perspective and as a tool for describing an emerging form of interaction.