Research in human-centered AI has shown the benefits of systems that can explain their predictions. Methods that allow humans to tune a model in response to such explanations are similarly useful. While both capabilities are well developed for transparent learning models (e.g., linear models and GA2Ms), and recent techniques (e.g., LIME and SHAP) can generate explanations for opaque models, no method for tuning opaque models in response to explanations has been user-tested to date. This paper introduces LIMEADE, a general framework for tuning an arbitrary machine learning model based on an explanation of the model's prediction. We demonstrate the generality of our approach with two case studies. First, we successfully apply LIMEADE to the human tuning of opaque image classifiers. Second, we apply our framework to a neural recommender system for scientific papers on a public website and report on a user study showing that our framework leads to significantly higher perceived user control, trust, and satisfaction. Analyzing 300 user logs from our publicly deployed website, we uncover a tradeoff between canonical greedy explanations and diverse explanations that better facilitate human tuning.