While the use of digital agents to support crucial decision making is increasing, trust in the suggestions these agents make is hard to achieve. Yet such trust is essential if we are to profit from their application, which creates a need for explanations of both the decision-making process and the model itself. For many systems, such as common black-box models, achieving even partial explainability requires complex post-processing, while other systems benefit from being, to a reasonable extent, inherently interpretable. We propose a rule-based learning system specifically conceptualised for, and thus especially suited to, these scenarios. Its models are inherently transparent and easily interpretable by design. One key innovation of our system is that the rules' conditions and the choice of which rules compose a problem's solution are evolved separately. We utilise independent rule fitnesses, which allows users to specifically tailor their model structure to fit the given requirements for explainability.
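The two-phase idea described above, rule discovery decoupled from solution composition with per-rule fitness, can be sketched as follows. This is a minimal illustration under heavy assumptions (a toy 1-D regression task, interval conditions, constant local models, a simple hill-climb for rule discovery and top-k selection for composition); all names and operators are illustrative, not the paper's actual algorithm.

```python
import random

random.seed(0)

# Toy 1-D regression task: approximate f(x) = x^2 on [0, 1].
X = [i / 50 for i in range(51)]
y = [x * x for x in X]

class Rule:
    """An interval-conditioned rule with a constant local model (assumption)."""
    def __init__(self, low, high):
        self.low, self.high = min(low, high), max(low, high)
        matched = [t for x, t in zip(X, y) if self.low <= x <= self.high]
        # Local model: mean of matched targets.
        self.pred = sum(matched) / len(matched) if matched else 0.0
        # Independent fitness: in-niche error plus a width penalty, so each
        # rule is judged on its own rather than via the whole ensemble.
        err = (sum((t - self.pred) ** 2 for t in matched) / len(matched)
               if matched else float("inf"))
        self.fitness = -(err + 0.01 * (self.high - self.low))

def discover_rules(n_rules=30):
    """Phase 1: evolve rule conditions in isolation ((1+1)-style search)."""
    rules = []
    for _ in range(n_rules):
        best = Rule(random.random(), random.random())
        for _ in range(20):
            cand = Rule(best.low + random.gauss(0, 0.05),
                        best.high + random.gauss(0, 0.05))
            if cand.fitness > best.fitness:
                best = cand
        rules.append(best)
    return rules

def compose_solution(pool, k=5):
    """Phase 2: separately select which rules form the final model
    (here simply the k fittest; a real system would search subsets)."""
    return sorted(pool, key=lambda r: r.fitness, reverse=True)[:k]

def predict(model, x):
    matching = [r for r in model if r.low <= x <= r.high]
    return sum(r.pred for r in matching) / len(matching) if matching else 0.0

pool = discover_rules()
model = compose_solution(pool)
mse = sum((predict(model, x) - t) ** 2 for x, t in zip(X, y)) / len(X)
print(f"{len(model)} rules, MSE = {mse:.4f}")
```

Because each rule carries its own fitness, the composed model stays a short, readable list of interval conditions, which is what makes the resulting structure easy to inspect.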