The generation of comprehensible explanations is an essential feature of modern artificial intelligence systems. In this work, we consider probabilistic logic programming, an extension of logic programming that is well suited to modeling domains with relational structure and uncertainty. Essentially, a program specifies a probability distribution over possible worlds (i.e., sets of facts). The notion of explanation is typically associated with that of a world, so that one often looks for the most probable world, or for the worlds where the query is true. Unfortunately, such explanations exhibit no causal structure. In particular, the chain of inferences required for a specific prediction (represented by a query) is not shown. In this paper, we propose a novel approach in which explanations are represented as programs that are generated from a given query by a number of unfolding-like transformations. Here, the chain of inferences that proves a given query is made explicit. Furthermore, the generated explanations are minimal (i.e., they contain no irrelevant information) and can be parameterized w.r.t. a specification of visible predicates, so that the user may hide uninteresting details from explanations.
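The possible-worlds semantics sketched above can be illustrated with a small, self-contained example. The following Python sketch (the program, predicate names, and probabilities are illustrative assumptions, not taken from the paper) encodes a toy probabilistic logic program with two probabilistic facts and one derived predicate, enumerates all possible worlds with their probabilities, and computes both the success probability of a query and the most probable world in which the query is true:

```python
from itertools import product

# Hypothetical toy program (illustrative only):
#   0.1 :: burglary.   0.2 :: earthquake.
#   alarm :- burglary.   alarm :- earthquake.
prob_facts = {"burglary": 0.1, "earthquake": 0.2}
rules = {"alarm": [["burglary"], ["earthquake"]]}  # head -> list of bodies

def holds(atom, facts):
    """Check whether `atom` is derivable in the world given by `facts`."""
    if atom in facts:
        return True
    return any(all(holds(b, facts) for b in body)
               for body in rules.get(atom, []))

def worlds():
    """Enumerate all possible worlds (subsets of the probabilistic facts)
    together with their probabilities."""
    names = list(prob_facts)
    for choice in product([True, False], repeat=len(names)):
        facts = {n for n, c in zip(names, choice) if c}
        p = 1.0
        for n, c in zip(names, choice):
            p *= prob_facts[n] if c else 1.0 - prob_facts[n]
        yield facts, p

def query_probability(atom):
    """Success probability of a query: the sum of the probabilities
    of the worlds where the query holds."""
    return sum(p for facts, p in worlds() if holds(atom, facts))

def most_probable_explaining_world(atom):
    """The most probable world in which the query is true."""
    return max(((facts, p) for facts, p in worlds() if holds(atom, facts)),
               key=lambda wp: wp[1])
```

For this toy program, the query `alarm` succeeds with probability 1 - 0.9 * 0.8 = 0.28, and the most probable explaining world is {earthquake}. Note that such a world, while a common notion of explanation, reveals nothing about *why* the query holds; the inference chain (here, `alarm :- earthquake`) is exactly what the program-based explanations proposed in the paper make explicit.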