The rapid advancement of Large Language Models (LLMs) is positioning language at the core of human-computer interaction (HCI). We argue that advancing HCI requires attention to the linguistic foundations of interaction, particularly implicature (meaning conveyed beyond explicit statements through shared context), which is essential for human-AI (HAI) alignment. This study examines LLMs' ability to infer user intent embedded in context-driven prompts and whether understanding implicature improves response generation. Results show that larger models approximate human interpretations more closely, while smaller models struggle with implicature inference. Furthermore, implicature-based prompts significantly enhance the perceived relevance and quality of responses across models, with notable gains in smaller models. Overall, 67.6% of participants preferred responses to implicature-embedded prompts over responses to literal ones, highlighting a clear preference for contextually nuanced communication. Our work contributes to understanding how linguistic theory can be used to address the alignment problem by making HAI interaction more natural and contextually grounded.