Reasoning is a distinctive, human-like capability frequently attributed to LLMs in HCI because of their ability to simulate a wide range of human-level tasks. This work argues, however, that HCI often treats the reasoning behavior of LLMs as decontextualized from the underlying mechanics and subjective decisions that condition both its emergence and its human interpretation. Through a systematic survey of 258 CHI papers on LLMs published between 2020 and 2025, we show that HCI rarely regards LLM reasoning as a product of sociotechnical orchestration and instead tends to treat it as a ready-made object of application. We argue that this abstraction oversimplifies reasoning methodologies drawn from NLP/ML and distorts LLMs' empirically studied capabilities and (un)known limitations. Finally, as a constructive step forward, we draw on literature from both NLP/ML and HCI to develop reflection prompts that support HCI practitioners in engaging with LLM reasoning in an informed and reflective way.