Rule-based decision models are attractive due to their interpretability. However, existing rule induction methods often result in long and consequently less interpretable rule models. This problem can often be attributed to the lack of an appropriately expressive vocabulary, i.e., relevant predicates used as literals in the decision model. Most existing rule induction algorithms presume pre-defined literals, naturally decoupling the definition of the literals from the rule learning phase. In contrast, we propose the Relational Rule Network (R2N), a neural architecture that learns literals representing a linear relationship among numerical input features, along with the rules that use them. This approach opens the door to increasing the expressiveness of induced decision models by coupling literal learning directly with rule learning in an end-to-end differentiable fashion. On benchmark tasks, we show that these learned literals are simple enough to retain interpretability, yet improve prediction accuracy and yield rule sets that are more concise than those produced by state-of-the-art rule induction algorithms.
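To make the core idea concrete, the sketch below illustrates one plausible way to realize it in PyTorch: a literal layer whose units are relaxed linear thresholds over the numerical features, composed with a soft-conjunction rule layer, so both literals and rules are trained jointly by gradient descent. This is a minimal illustration under our own assumptions (the sigmoid/temperature relaxation, the product-based soft AND, and all class and parameter names are ours), not the authors' implementation of R2N.

```python
# Minimal sketch (assumptions, not the R2N implementation): literals are
# learned linear thresholds over numerical features, rules are soft
# conjunctions of literals, and everything trains end-to-end.
import torch
import torch.nn as nn

class LinearLiteralLayer(nn.Module):
    """Each output unit is a relaxed literal [w . x + b > 0]."""
    def __init__(self, in_features: int, n_literals: int, temperature: float = 10.0):
        super().__init__()
        self.linear = nn.Linear(in_features, n_literals)
        self.temperature = temperature

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # A temperature-sharpened sigmoid approximates the hard threshold
        # while remaining differentiable.
        return torch.sigmoid(self.temperature * self.linear(x))

class SoftRuleLayer(nn.Module):
    """Each rule is a soft conjunction over a learned subset of literals."""
    def __init__(self, n_literals: int, n_rules: int):
        super().__init__()
        # Membership logits: which literals participate in which rule.
        self.membership = nn.Parameter(torch.randn(n_rules, n_literals))

    def forward(self, lits: torch.Tensor) -> torch.Tensor:
        m = torch.sigmoid(self.membership)             # (n_rules, n_literals)
        # Soft AND: prod over literals of (1 - m * (1 - literal)), so a
        # literal with membership m ~ 0 does not constrain the rule.
        body = 1.0 - m.unsqueeze(0) * (1.0 - lits.unsqueeze(1))
        return body.prod(dim=-1)                       # (batch, n_rules)

class TinyRuleNetSketch(nn.Module):
    def __init__(self, in_features: int, n_literals: int = 8, n_rules: int = 4):
        super().__init__()
        self.literals = LinearLiteralLayer(in_features, n_literals)
        self.rules = SoftRuleLayer(n_literals, n_rules)
        self.head = nn.Linear(n_rules, 1)              # aggregates rule votes

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.rules(self.literals(x)))
```

Under this reading, interpretability comes from inspecting the trained parameters after training: each row of `LinearLiteralLayer.linear` is a human-readable linear predicate (e.g., "2.1*x1 - 0.7*x3 > 1.4"), and thresholding the membership matrix recovers which literals each rule conjoins.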