The increased predictive power of nonlinear models comes at the cost of the interpretability of their terms. This trade-off has led to the emergence of eXplainable AI (XAI). XAI attempts to shed light on how models use predictors to arrive at a prediction by providing local explanations: point estimates of the linear feature importance in the vicinity of one instance. These can be considered linear projections and can be explored further to better understand the interactions between features used to make predictions across the surface of the predictive model. Here we describe an interactive linear interpolation that can be used for exploration at any instance, and illustrate it with examples with categorical (penguin species, chocolate types) and quantitative (soccer/football salaries, house prices) output. The methods are implemented in the R package cheem, available on CRAN.
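The package can be installed from CRAN and its interactive application launched directly. A minimal sketch, assuming run_app() is the exported entry point of the released package:

    # Install cheem from CRAN, load it, and launch the interactive application
    install.packages("cheem")
    library(cheem)
    run_app()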