Rule-based decision models are attractive due to their interpretability. However, existing rule induction methods often result in long and consequently less interpretable sets of rules. This problem can, in many cases, be attributed to the rule learner's lack of an appropriately expressive vocabulary, i.e., relevant predicates. Most existing rule induction algorithms presume the availability of the predicates used to represent the rules, naturally decoupling the predicate definition and rule learning phases. In contrast, we propose the Relational Rule Network (RRN), a neural architecture that learns relational predicates representing a linear relationship among attributes along with the rules that use them. This approach opens the door to increasing the expressiveness of induced decision models by coupling predicate learning directly with rule learning in an end-to-end differentiable fashion. On benchmark tasks, we show that these relational predicates are simple enough to retain interpretability, yet improve prediction accuracy and provide sets of rules that are more concise compared to state-of-the-art rule induction algorithms.
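To make the core idea concrete, here is a minimal sketch of how a relational predicate over a linear combination of attributes can be made differentiable, so that both the predicate parameters and a rule built from them can be learned by gradient descent. This is an illustrative toy, not the RRN architecture itself; the function names, the steep-sigmoid gain `k`, and the product t-norm conjunction are assumptions for the sketch.

```python
import numpy as np

def soft_predicate(x, w, b, k=10.0):
    """Soft relational predicate: smoothed truth value of w.x + b > 0.
    A linear relation among attributes (e.g. x0 - x1 > 0) becomes
    differentiable via a steep sigmoid, so w and b are learnable."""
    return 1.0 / (1.0 + np.exp(-k * (np.dot(w, x) + b)))

def soft_rule(x, predicates):
    """Soft conjunction (product t-norm) of predicate truth values:
    the rule fires only when all of its predicates are (softly) true."""
    vals = [soft_predicate(x, w, b) for (w, b) in predicates]
    return float(np.prod(vals))

# Hypothetical rule "x0 > x1 AND x2 > 0.5", written as two
# learnable linear predicates (weights shown here hand-set):
rule = [(np.array([1.0, -1.0, 0.0]), 0.0),   # x0 - x1 > 0
        (np.array([0.0, 0.0, 1.0]), -0.5)]   # x2 - 0.5 > 0

print(soft_rule(np.array([2.0, 1.0, 0.9]), rule))  # near 1: both predicates hold
print(soft_rule(np.array([1.0, 2.0, 0.9]), rule))  # near 0: first predicate fails
```

Because every step is differentiable, the weight vectors defining the predicates and the rule structure on top of them can, in principle, be optimized jointly, which is the coupling of predicate learning and rule learning the abstract describes.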