To realize autonomous collaborative robots, it is important to increase the trust that users place in them. Toward this goal, this paper proposes an algorithm that endows an autonomous agent with the ability to explain the transition from the current state to a target state in a Markov decision process (MDP). According to cognitive science, an explanation that is acceptable to humans should present the minimum information necessary to understand an event. To meet this requirement, this study proposes a framework that identifies the important elements in the decision-making process using a prediction model of the world and generates explanations based on those elements. To verify the ability of the proposed method to generate explanations, we conducted an experiment in a grid environment. The results of the simulation experiment indicate that the explanations generated by the proposed method consisted of the minimum elements important for understanding the transition from the current state to the target state. Furthermore, experiments with human participants showed that the generated explanations were good summaries of the state-transition process, and that the explanations of the reasons for actions were highly rated.
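To make the idea of "minimum elements important for understanding a transition" concrete, the following is a minimal sketch in a grid environment of the kind described above. It is not the authors' algorithm: here the agent's trajectory is found by breadth-first search, and the "important elements" are approximated as the states where the chosen action changes (turning points), a hypothetical stand-in for the paper's prediction-model-based element selection.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """BFS over free cells of a grid MDP; grid[r][c] == 1 marks an obstacle.
    Returns the list of states from start to goal."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            break
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in prev):
                prev[nxt] = cell
                queue.append(nxt)
    path, cell = [], goal
    while cell is not None:
        path.append(cell)
        cell = prev[cell]
    return path[::-1]

def turning_points(path):
    """Keep only the states where the action (move direction) changes --
    a simple stand-in for selecting the minimum important elements."""
    keep = [path[0]]
    for a, b, c in zip(path, path[1:], path[2:]):
        if (b[0] - a[0], b[1] - a[1]) != (c[0] - b[0], c[1] - b[1]):
            keep.append(b)  # direction changed at b, so b is "important"
    keep.append(path[-1])
    return keep

# A 3x3 grid with a central obstacle forces one turn on the way to the goal.
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = shortest_path(grid, (0, 0), (2, 2))
summary = turning_points(path)
# The full five-state path is summarized by three states: start, the turn, goal.
```

An explanation built from `summary` rather than `path` mirrors the cognitive-science requirement stated above: it conveys the transition with fewer, more salient states.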