We introduce and evaluate an eXplainable Goal Recognition (XGR) model that uses the Weight of Evidence (WoE) framework to explain goal recognition problems. Our model provides human-centered explanations that answer why? and why not? questions. We computationally evaluate the performance of our system over eight different domains. Using a human behavioral study to obtain ground truth from human annotators, we further show that the XGR model can successfully generate human-like explanations. We then report on a study with 60 participants who observe agents playing the Sokoban game and then receive explanations of the goal recognition output. We measure the understanding participants gain from the explanations through task prediction, explanation satisfaction, and trust.
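For readers unfamiliar with the framework, the sketch below illustrates a Weight-of-Evidence score for goal recognition under the standard formulation WoE(g : o) = log[P(o | g) / P(o | not g)] (Good, 1985). It is a minimal illustration, not the paper's implementation: the function names, the uniform priors, and the marginalization over competing goals are assumptions made here for concreteness.

```python
import math

def weight_of_evidence(goal, likelihoods, priors):
    """Return the WoE (in nats) that the observations favor `goal`.

    `likelihoods` maps each candidate goal to P(observations | goal);
    `priors` maps each candidate goal to its prior probability.
    """
    p_o_given_g = likelihoods[goal]
    # P(o | not g): marginalize the likelihood over the competing goals,
    # weighting each by its prior renormalized to exclude `goal`.
    rest = [g for g in likelihoods if g != goal]
    norm = sum(priors[g] for g in rest)
    p_o_given_not_g = sum(likelihoods[g] * priors[g] / norm for g in rest)
    return math.log(p_o_given_g / p_o_given_not_g)

# Toy example: three hypothetical Sokoban goals with uniform priors.
likelihoods = {"goal_A": 0.6, "goal_B": 0.3, "goal_C": 0.1}
priors = {g: 1 / 3 for g in likelihoods}
for g in likelihoods:
    print(g, round(weight_of_evidence(g, likelihoods, priors), 3))
```

A positive score means the observations favor the goal over its alternatives; contrasting the scores of two goals is one way a "why goal A and not goal B?" question can be answered.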