Machine learning methods are increasingly applied in sensitive societal contexts, where decisions impact human lives. Hence, it has become necessary to build capabilities for providing easily interpretable explanations of models' predictions. A vast number of explanation methods have recently been proposed in the academic literature. Unfortunately, to our knowledge, little has been documented about the challenges machine learning practitioners most often face when applying these methods in real-world scenarios. For example, a typical procedure such as feature engineering can render some methods inapplicable. The present case study has two main objectives. First, to expose these challenges and how they affect the use of relevant and novel explanation methods. And second, to present a set of strategies that mitigate such challenges, as faced when implementing explanation methods in a relevant application domain -- poverty estimation and its use for prioritizing access to social policies.