Model explainability has become an important problem in machine learning (ML) due to the growing impact that algorithmic predictions have on people. Explanations can help users understand not only why ML models make certain predictions, but also how those predictions can be changed. In this thesis, we examine the explainability of ML models from three vantage points (algorithms, users, and pedagogy) and contribute several novel solutions to the explainability problem.