In order to trust machine learning for high-stakes problems, we need models to be both reliable and interpretable. Recently, there has been a growing body of work on interpretable machine learning, which generates human-understandable insights into data, models, or predictions. At the same time, there has been increased interest in quantifying the reliability and uncertainty of machine learning predictions, often in the form of confidence intervals for predictions built using conformal inference. Yet, relatively little attention has been given to the reliability and uncertainty of machine learning interpretations, which is the focus of this paper. Our goal is to develop confidence intervals for a widely used form of machine learning interpretation: feature importance. We specifically seek universal, model-agnostic, and assumption-light confidence intervals for feature importance that are valid for any machine learning model and for any regression or classification task. We do so by leveraging a form of random observation and feature subsampling called minipatch ensembles, and we show that our approach provides assumption-light asymptotic coverage for the feature importance score of any model. Further, our approach is fast, as the computations needed for inference come nearly for free as part of the ensemble learning process. Finally, we show that the same procedure can be leveraged to provide valid confidence intervals for predictions, hence providing fast, simultaneous quantification of the uncertainty of both model predictions and interpretations. We validate our intervals on a series of synthetic and real data examples, showing that our approach detects the correct important features and exhibits many computational and statistical advantages over existing methods.