Achieving at least some level of explainability requires complex analyses for many machine learning systems, such as common black-box models. We recently proposed a new rule-based learning system, SupRB, that constructs compact, interpretable and transparent models by utilizing separate optimizers for the model selection tasks of rule discovery and rule set composition. This allows users to tailor their model structure to use-case-specific explainability requirements. From an optimization perspective, it also allows us to define clearer goals and, in contrast to many state-of-the-art systems, to keep rule fitnesses independent of one another. In this paper we thoroughly investigate this system's performance on a set of regression problems and compare it against XCSF, a prominent rule-based learning system. We find SupRB's overall evaluation results comparable to XCSF's, while SupRB allows easier control of the model structure and shows a substantially smaller sensitivity to random seeds and data splits. This increased control can aid in subsequently providing explanations for both the training process and the final structure of the model.
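To make the two-stage model selection concrete, the following minimal Python sketch separates rule discovery, where each candidate rule is scored on its own error without reference to other rules, from rule set composition, where a small subset of the discovered rules is assembled into the final model. This is not code from the paper; the interval-based matching, the fitness measure and the greedy composition heuristic are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (illustrative stand-in for the paper's benchmarks).
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + rng.normal(0, 0.1, size=200)


def make_rule(center, spread):
    """A rule matches inputs inside [center - spread, center + spread] and
    predicts the mean target of the matched training points."""
    lo, hi = center - spread, center + spread
    matched = (X[:, 0] >= lo) & (X[:, 0] <= hi)
    pred = y[matched].mean() if matched.any() else 0.0
    err = np.mean((y[matched] - pred) ** 2) if matched.any() else np.inf
    return {"lo": lo, "hi": hi, "pred": pred, "error": err}


# Phase 1: rule discovery -- each rule's quality depends only on its own
# local error, so rule fitnesses stay independent of one another.
candidates = [make_rule(rng.uniform(-1, 1), rng.uniform(0.05, 0.5))
              for _ in range(100)]
candidates = [r for r in candidates if np.isfinite(r["error"])]


def pool_error(rules):
    """Error of a rule set: later rules overwrite earlier ones on overlaps,
    uncovered points fall back to the global mean prediction."""
    preds = np.full_like(y, y.mean())
    for r in rules:
        m = (X[:, 0] >= r["lo"]) & (X[:, 0] <= r["hi"])
        preds[m] = r["pred"]
    return np.mean((y - preds) ** 2)


# Phase 2: rule set composition -- greedily pick a compact subset of the
# discovered rules that together achieve low error on the training data.
selected = []
for _ in range(8):
    remaining = [r for r in candidates if r not in selected]
    best = min(remaining, key=lambda r: pool_error(selected + [r]))
    selected.append(best)

print(f"{len(selected)} rules selected, pooled MSE = {pool_error(selected):.4f}")
```

Keeping the two phases separate, as sketched here, is what lets the model size (the number of selected rules) be constrained directly during composition rather than emerging implicitly from a single coupled fitness landscape.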