Interpretability in machine learning (ML) is crucial for high-stakes decisions and troubleshooting. In this work, we provide fundamental principles for interpretable ML and dispel common misunderstandings that dilute the importance of this crucial topic. We also identify 10 technical challenge areas in interpretable machine learning and provide history and background on each problem. Some of these problems are classically important, and some have arisen only in the last few years. These problems are: (1) Optimizing sparse logical models such as decision trees; (2) Optimization of scoring systems; (3) Placing constraints into generalized additive models to encourage sparsity and better interpretability; (4) Modern case-based reasoning, including neural networks and matching for causal inference; (5) Complete supervised disentanglement of neural networks; (6) Complete or even partial unsupervised disentanglement of neural networks; (7) Dimensionality reduction for data visualization; (8) Machine learning models that can incorporate physics and other generative or causal constraints; (9) Characterization of the "Rashomon set" of good models; and (10) Interpretable reinforcement learning. This survey is suitable as a starting point for statisticians and computer scientists interested in working in interpretable machine learning.