Within the context of human-robot interaction (HRI), Theory of Mind (ToM) is intended to serve as a user-friendly backend to the interface of robotic systems, enabling robots to infer and respond to human mental states. When integrated into robots, ToM allows them to adapt their internal models to users' behaviors, enhancing the interpretability and predictability of their actions. Similarly, Explainable Artificial Intelligence (XAI) aims to make AI systems transparent and interpretable, allowing humans to understand and interact with them effectively. Since ToM serves related purposes in HRI, we propose to treat ToM as a form of XAI and to evaluate it through the eValuation XAI (VXAI) framework and its seven desiderata. This paper identifies a critical gap in the application of ToM within HRI: existing methods rarely assess the extent to which the explanations produced correspond to the robot's actual internal reasoning. To address this limitation, we propose integrating ToM within XAI frameworks. By embedding ToM principles in XAI, we argue for a shift in perspective, since current XAI research focuses predominantly on the AI system itself and often lacks user-centered explanations. Incorporating ToM would shift this focus, prioritizing the user's informational needs and perspective.