The interpretability of ML models is important, but it is not clear what it amounts to. So far, most philosophers have discussed the lack of interpretability of black-box models such as neural networks, and methods such as explainable AI that aim to make these models more transparent. The goal of this paper is to clarify the nature of interpretability by focussing on the other end of the 'interpretability spectrum'. I examine why some models, such as linear models and decision trees, are highly interpretable, and how more general models, such as MARS and GAMs, retain some degree of interpretability. I find that although interpretability is gained in heterogeneous ways across these models, what interpretability amounts to in particular cases can be explicated in a clear manner.