The rise of AI methods for making predictions and decisions has created a pressing need for explainable artificial intelligence (XAI) methods. One common approach to XAI is to produce a post-hoc explanation of why a black-box ML model made a particular prediction. Formal approaches to post-hoc explanation provide succinct reasons both for why a prediction was made and for why another prediction was not. However, these approaches assume that features are independent and uniformly distributed. While this means that "why" explanations remain correct, they may be longer than necessary; it also means that "why not" explanations may be suspect, since the counterexamples they rely on may not be meaningful. In this paper, we show how background knowledge can be applied to give more succinct formal "why" explanations, which are presumably easier for humans to interpret, and more accurate "why not" explanations. Furthermore, we show how existing rule induction techniques can be used to efficiently extract background information from a dataset, and how to report which background information was used to make an explanation, allowing a human to examine it if they doubt the correctness of the explanation.
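To make the idea concrete, below is a minimal, purely illustrative sketch (not the paper's algorithm) of how a domain constraint can shorten a formal "why" explanation. It brute-forces, for a toy boolean classifier `model`, the smallest set of the instance's feature values that entails the prediction over all completions permitted by a `background` predicate. The classifier, the feature names, and the assumed rule "x0 = 1 implies x1 = 1" are all hypothetical, chosen only to show the explanation shrinking from two features to one.

```python
from itertools import combinations, product

# Toy black-box classifier over three boolean features (illustrative only):
# predicts 1 iff (x0 and x1) or x2.
def model(x):
    return int((x[0] and x[1]) or x[2])

FEATURES = range(3)

def entails(instance, subset, background, target):
    """True if fixing the features in `subset` to the instance's values forces
    the model to output `target` on every completion allowed by `background`."""
    for completion in product([0, 1], repeat=len(FEATURES)):
        x = [instance[i] if i in subset else completion[i] for i in FEATURES]
        if background(x) and model(x) != target:
            return False
    return True

def why_explanation(instance, background=lambda x: True):
    """Smallest set of features whose values alone entail the prediction."""
    target = model(instance)
    for size in range(len(FEATURES) + 1):
        for subset in combinations(FEATURES, size):
            if entails(instance, set(subset), background, target):
                return set(subset)

instance = [1, 1, 0]  # model predicts 1 because x0 = 1 and x1 = 1
print(why_explanation(instance))  # {0, 1}: without background, both are needed

# Assumed background rule: x0 = 1 implies x1 = 1. Under this constraint,
# reporting x0 = 1 alone already entails the prediction.
print(why_explanation(instance, background=lambda x: (not x[0]) or x[1]))  # {0}
```

The sketch enumerates feature subsets exhaustively for clarity; it is only meant to illustrate how restricting the space of completions with background knowledge yields shorter "why" explanations, and how the constraint used can be reported alongside the explanation.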