Human models play a crucial role in human-robot interaction (HRI), enabling robots to consider the impact of their actions on people and plan their behavior accordingly. However, crafting good human models is challenging; capturing context-dependent human behavior requires significant prior knowledge and/or large amounts of interaction data, both of which are difficult to obtain. In this work, we explore the potential of large language models (LLMs) -- which have consumed vast amounts of human-generated text data -- to act as zero-shot human models for HRI. Our experiments on three social datasets yield promising results; the LLMs are able to achieve performance comparable to purpose-built models. That said, we also discuss current limitations, such as sensitivity to prompts and spatial/numerical reasoning mishaps. Based on our findings, we demonstrate how LLM-based human models can be integrated into a social robot's planning process and applied in HRI scenarios. Specifically, we present one case study on a simulated trust-based table-clearing task and replicate past results that relied on custom models. Next, we conduct a new robot utensil-passing experiment (n = 65) where preliminary results show that planning with an LLM-based human model can achieve gains over a basic myopic plan. In summary, our results show that LLMs offer a promising (but incomplete) approach to human modeling for HRI.