Pragmatics is an essential part of communication, but it remains unclear what mechanisms underlie human pragmatic communication and whether NLP systems capture pragmatic language understanding. To investigate both these questions, we perform a fine-grained comparison of language models and humans on seven pragmatic phenomena, using zero-shot prompting on an expert-curated set of English materials. We ask whether models (1) select pragmatic interpretations of speaker utterances, (2) exhibit error patterns similar to those of humans, and (3) use similar linguistic cues as humans to solve the tasks. We find that the largest models achieve high accuracy and match human error patterns: within incorrect responses, models favor the literal interpretation of an utterance over heuristic-based distractors. We also find evidence that models and humans are sensitive to similar linguistic cues. Our results suggest that even paradigmatic pragmatic phenomena may be solved without explicit representations of other agents' mental states, and that artificial models can be used to gain mechanistic insights into human pragmatic processing.