Predictive machine learning models often lack interpretability, resulting in low trust from end users despite their high predictive performance. While many model interpretation approaches return the top important features to help explain model predictions, these top features may not be well-organized or intuitive to end users, which limits model adoption rates. In this paper, we propose Intellige, a user-facing model explainer that creates user-digestible interpretations and insights reflecting the rationale behind model predictions. Intellige builds an end-to-end pipeline from machine learning platforms to end-user platforms, and provides users with an interface for implementing model interpretation approaches and for customizing narrative insights. Intellige is a platform consisting of four components: Model Importer, Model Interpreter, Narrative Generator, and Narrative Exporter. We describe these components and then demonstrate the effectiveness of Intellige through use cases at LinkedIn. Quantitative performance analyses indicate that Intellige's narrative insights lead to lifts in adoption rates of predictive model recommendations, as well as to increases in downstream key metrics such as revenue, when compared to previous approaches, while qualitative analyses indicate positive feedback from end users.