We present a new multi-objective optimization approach for synthesizing interpretations that "explain" the behavior of black-box machine learning models. Constructing human-understandable interpretations for such models often requires balancing conflicting objectives: a simple interpretation may be easier for humans to understand, but less precise in its predictions than a complex one. Existing methods for synthesizing interpretations use a single objective function and are often optimized for a single class of interpretations. In contrast, we provide a more general, multi-objective synthesis framework that allows users to choose (1) the class of syntactic templates from which an interpretation should be synthesized, and (2) quantitative measures of both the correctness and the explainability of an interpretation. For a given black-box model, our approach yields a set of interpretations that are Pareto-optimal with respect to the correctness and explainability measures. We show that the underlying multi-objective optimization problem can be solved via a reduction to quantitative constraint solving, such as weighted maximum satisfiability (MaxSAT). To demonstrate the benefits of our approach, we have applied it to synthesize interpretations for black-box neural-network classifiers. Our experiments show that there often exists a rich and varied set of choices for interpretations that are missed by existing approaches.
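To make the notion of Pareto-optimality over the two measures concrete, the following is a minimal sketch (not the paper's implementation): given hypothetical candidate interpretations scored on correctness and explainability, it keeps only those not dominated on both axes. The candidate names and scores are illustrative assumptions, not data from the paper.

```python
def pareto_front(candidates):
    """Return candidates not dominated by any other candidate.

    Each candidate is (name, correctness, explainability);
    higher is better on both measures.
    """
    front = []
    for name, c, e in candidates:
        # A candidate is dominated if some other candidate is at least
        # as good on both measures and strictly better on one.
        dominated = any(
            (c2 >= c and e2 >= e) and (c2 > c or e2 > e)
            for _, c2, e2 in candidates
        )
        if not dominated:
            front.append((name, c, e))
    return front

# Hypothetical scores for three candidate interpretation templates.
candidates = [
    ("decision-stump", 0.70, 0.95),
    ("small-tree", 0.85, 0.80),
    ("conjunctive-rule", 0.65, 0.85),  # dominated by "decision-stump"
]
print(pareto_front(candidates))
# → [('decision-stump', 0.7, 0.95), ('small-tree', 0.85, 0.8)]
```

The full framework instead solves this trade-off symbolically, via a reduction to quantitative constraint solving such as weighted MaxSAT, rather than by enumerating and filtering candidates as above.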