Digital agents are increasingly used to support critical decision making in many industrial scenarios. However, trust in their suggestions, while essential for profiting from their application, is hard to establish, creating a need for explanations of both the decision-making process and the model itself. For many systems, such as common deep learning black-box models, achieving even limited explainability requires complex post-processing, whereas other systems benefit from being, to a reasonable extent, inherently interpretable. In this paper we propose SupRB2, an easily interpretable rule-based learning system specifically designed for, and thus especially suited to, these scenarios, and compare it on a set of regression problems against XCSF, a prominent rule-based learning system with a long research history. One key advantage of our system is that the rules' conditions and the composition of rules that forms a solution to the problem are evolved separately. We utilise independent rule fitnesses, which allows users to tailor their model structure to the given explainability requirements. We find that SupRB2's results are comparable to XCSF's while allowing easier control of model structure and showing substantially smaller sensitivity to random seeds and data splits. This increased control aids in subsequently providing explanations of both the training process and the final structure of the model.
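To make the separation described above concrete, the following is a minimal, self-contained sketch of the two-stage idea: interval-based rule conditions are generated and fitted independently with their own per-rule fitness, and a separate genetic algorithm then selects which rules compose the final solution. All names (`Rule`, `rule_fitness`, `compose_solution`, the fitness and mixing formulas) are illustrative assumptions for this sketch, not the actual SupRB2 implementation or API.

```python
import numpy as np

rng = np.random.default_rng(0)

class Rule:
    """Hypothetical interval-based condition with a constant local model."""
    def __init__(self, lower, upper):
        self.lower, self.upper = lower, upper
        self.prediction = 0.0
        self.error = np.inf

    def matches(self, X):
        # A sample matches if it lies inside the rule's hyper-rectangle.
        return np.all((X >= self.lower) & (X <= self.upper), axis=1)

    def fit(self, X, y):
        m = self.matches(X)
        if m.any():
            self.prediction = y[m].mean()
            self.error = np.mean((y[m] - self.prediction) ** 2)
        return self

def rule_fitness(rule):
    # Independent per-rule fitness: depends only on the rule itself
    # (illustrative choice: lower in-sample error is better).
    return 1.0 / (1.0 + rule.error)

def discover_rules(X, y, n_rules=50):
    # Stage 1: generate rule conditions independently of one another.
    d = X.shape[1]
    rules = []
    for _ in range(n_rules):
        centre = X[rng.integers(len(X))]
        spread = rng.uniform(0.05, 0.5, size=d)
        rules.append(Rule(centre - spread, centre + spread).fit(X, y))
    return sorted(rules, key=rule_fitness, reverse=True)

def solution_error(rules, mask, X, y):
    # Mixed prediction of the selected subset: mean output of matching rules.
    preds = np.full(len(X), y.mean())
    for i in range(len(X)):
        outs = [r.prediction for r, use in zip(rules, mask)
                if use and r.matches(X[i:i + 1])[0]]
        if outs:
            preds[i] = np.mean(outs)
    return np.mean((y - preds) ** 2)

def compose_solution(rules, X, y, pop=20, gens=30, alpha=0.01):
    # Stage 2: a GA over bitstrings decides which rules form the model,
    # trading prediction error against model size (alpha = complexity cost),
    # which is one way the model structure can be tailored.
    masks = rng.integers(0, 2, size=(pop, len(rules))).astype(bool)
    def fit(m):
        return solution_error(rules, m, X, y) + alpha * m.sum()
    for _ in range(gens):
        scores = np.array([fit(m) for m in masks])
        parents = masks[np.argsort(scores)[: pop // 2]]
        children = parents ^ (rng.random(parents.shape) < 0.05)  # mutation
        masks = np.vstack([parents, children])
    scores = np.array([fit(m) for m in masks])
    return masks[np.argmin(scores)]

# Toy regression problem to exercise the sketch.
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]
rules = discover_rules(X, y)
best = compose_solution(rules, X, y)
print(f"selected {best.sum()} of {len(rules)} rules")
```

Because rule quality is judged independently of the composition step, the size of the final rule set can be steered directly (here via the assumed `alpha` penalty) without affecting how individual rules are discovered.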