Boosted trees are a dominant class of ML models, exhibiting high accuracy. However, boosted trees are hardly intelligible, which is a problem whenever they are used in safety-critical applications. Indeed, in such a context, rigorous explanations of the predictions made are expected. Recent work has shown how subset-minimal abductive explanations can be derived for boosted trees using automated reasoning techniques. However, generating such well-founded explanations is intractable in the general case. To improve the scalability of their generation, we introduce the notion of a tree-specific explanation for a boosted tree. We show that tree-specific explanations are abductive explanations that can be computed in polynomial time. We also explain how to derive a subset-minimal abductive explanation from a tree-specific explanation. Experiments on various datasets show the computational benefits of leveraging tree-specific explanations for deriving subset-minimal abductive explanations.
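The core notions can be illustrated on a toy model. Below, a hypothetical ensemble of decision stumps over binary features stands in for a boosted tree; an abductive explanation is a subset of the instance's feature values that forces the prediction under every completion of the remaining features, and greedy deletion yields a subset-minimal one. This is a minimal sketch of the generic (worst-case exponential) greedy scheme, not the paper's polynomial-time tree-specific algorithm; all names (`STUMPS`, `is_sufficient`, etc.) are illustrative assumptions.

```python
from itertools import product

# Hypothetical "boosted" ensemble of decision stumps over binary features:
# each entry is (feature index, score if x[i] == 1, score if x[i] == 0).
STUMPS = [
    (0, 1.0, -1.0),
    (1, 0.5, -0.5),
    (2, 0.25, -0.25),
]

def predict(x):
    """Class 1 iff the summed stump scores are positive."""
    score = sum(hi if x[i] else lo for i, hi, lo in STUMPS)
    return int(score > 0)

def is_sufficient(x, kept, n):
    """Check that fixing the features in `kept` to their values in x
    forces the prediction, for every completion of the free features."""
    target = predict(x)
    free = [i for i in range(n) if i not in kept]
    for bits in product([0, 1], repeat=len(free)):
        y = list(x)
        for i, b in zip(free, bits):
            y[i] = b
        if predict(y) != target:
            return False
    return True

def subset_minimal_explanation(x):
    """Greedy deletion: start from all features (trivially sufficient)
    and drop each feature whose removal preserves sufficiency."""
    n = len(x)
    kept = set(range(n))
    for i in range(n):
        if is_sufficient(x, kept - {i}, n):
            kept.discard(i)
    return sorted(kept)

x = [1, 0, 1]
print(subset_minimal_explanation(x))  # feature 0 alone forces class 1
```

On this toy instance, fixing `x[0] = 1` guarantees a positive score (at worst 1.0 - 0.5 - 0.25 = 0.25), so the greedy pass shrinks the explanation to `[0]`; no single feature can be dropped from it, hence subset-minimality. Each sufficiency check here enumerates all completions, which is exactly the kind of cost the tree-specific explanations of the paper are designed to avoid.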