Learned knowledge graph representations supporting robots contain a wealth of domain knowledge that drives robot behavior. However, there does not exist an inference reconciliation framework that expresses how a knowledge graph representation affects a robot's sequential decision making. We use a pedagogical approach to explain the inferences of a learned, black-box knowledge graph representation, namely a knowledge graph embedding. Our interpretable model uses a decision tree classifier to locally approximate the predictions of the black-box model and provides natural language explanations interpretable by non-experts. Results from our algorithmic evaluation affirm our model design choices, and the results of our user studies with non-experts support the need for the proposed inference reconciliation framework. Critically, results from our simulated robot evaluation indicate that our explanations enable non-experts to correct erratic robot behaviors caused by nonsensical beliefs within the black-box model.
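To make the pedagogical approach concrete, the sketch below shows one way a decision tree could be fit to the predictions of a black-box knowledge graph embedding in the neighborhood of a query, under stated assumptions; the scoring function, feature encoder, and sampled triples (black_box_score, encode, neighborhood_triples) are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch of a pedagogical (surrogate-model) explanation step:
# a shallow decision tree is trained to locally approximate the link
# predictions of a black-box knowledge graph embedding.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text


def local_surrogate(black_box_score, neighborhood_triples, encode, threshold=0.5):
    """Fit an interpretable decision tree that locally approximates the
    black-box model's predictions on a set of nearby triples.

    black_box_score: callable mapping a triple to a plausibility score (assumed)
    neighborhood_triples: triples sampled around the query of interest (assumed)
    encode: callable mapping a triple to interpretable features (assumed)
    """
    X = np.array([encode(t) for t in neighborhood_triples])        # interpretable features
    y = np.array([black_box_score(t) > threshold                   # black-box labels
                  for t in neighborhood_triples], dtype=int)
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(X, y)                                                  # pedagogical step
    return tree


# Example usage (all names hypothetical):
# tree = local_surrogate(kge_model.score, sampled_triples, one_hot_features)
# print(export_text(tree, feature_names=feature_names))  # tree rules that could be
#                                                        # verbalized as NL explanations
```

The tree's decision rules provide a locally faithful, human-readable account of the embedding's behavior, which is the kind of structure that can then be rendered as natural language explanations for non-experts.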