Several queries and scores have recently been proposed to explain individual predictions over ML models. Given the need for flexible, reliable, and easy-to-apply interpretability methods for ML models, we foresee the need for developing declarative languages to naturally specify different explainability queries. We do this in a principled way by rooting such a language in a logic, called FOIL, that allows for expressing many simple but important explainability queries, and might serve as a core for more expressive interpretability languages. We study the computational complexity of FOIL queries over two classes of ML models often deemed to be easily interpretable: decision trees and ordered binary decision diagrams (OBDDs). Since the number of possible inputs for an ML model is exponential in its dimension, the tractability of the FOIL evaluation problem is delicate but can be achieved by either restricting the structure of the models or the fragment of FOIL being evaluated. We also present a prototype implementation of FOIL wrapped in a high-level declarative language and perform experiments showing that such a language can be used in practice.
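As an illustration of the kind of query alluded to above, consider the classical notion of a sufficient reason for a prediction. The following is a hedged sketch, assuming FOIL's vocabulary consists of a unary predicate POS (a fully defined instance is classified positively) and the subsumption relation ⊆ on partial instances; the exact vocabulary and definitions are fixed in the body of the paper, so this is only indicative:

\[
\begin{aligned}
% x is fully defined: it is a maximal element of the subsumption order
\mathrm{FULL}(x) \;&\equiv\; \forall y\, \bigl( x \subseteq y \rightarrow y \subseteq x \bigr), \\
% y is a sufficient reason for x: y is part of x, and every full
% completion of y receives the same classification as x
\mathrm{SR}(x,y) \;&\equiv\; \mathrm{FULL}(x) \,\wedge\, y \subseteq x \,\wedge\,
  \forall z\, \bigl( (\mathrm{FULL}(z) \wedge y \subseteq z) \rightarrow (\mathrm{POS}(z) \leftrightarrow \mathrm{POS}(x)) \bigr).
\end{aligned}
\]

Under this reading, evaluating SR over a decision tree or an OBDD is an instance of the FOIL evaluation problem whose complexity the paper studies.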