Machine learning (ML) model explainability has received growing attention, especially in areas related to model risk and regulation. In this paper, we review and compare several popular ML model explainability methodologies, with a focus on those applicable to Natural Language Processing (NLP) models. We then apply one of these methods, Layer-wise Relevance Propagation (LRP), to an NLP classification model. Using LRP, we derive a relevance score for each word in an instance, which constitutes a local explanation. These relevance scores are then aggregated across instances to obtain a global variable importance measure for the model. Through a case study, we also demonstrate how the local explainability method can be applied to false positive and false negative instances to uncover weaknesses of an NLP model. Such analyses help us better understand NLP models and reduce the risk arising from their black-box nature. We further identify some common issues stemming from the special nature of NLP models and discuss how explainability analysis can serve as a control to detect these issues after a model has been trained.
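To make the local-to-global aggregation described above concrete, the following is a minimal sketch of the epsilon-rule LRP backward pass through a single dense layer, with per-word relevance pooled over embedding dimensions and averaged across instances into a global importance score. All names here (lrp_dense, word_relevance), the one-layer linear classifier over concatenated word embeddings, and the random data are illustrative assumptions for this sketch, not the paper's actual model or implementation.

```python
import numpy as np

def lrp_dense(a, W, b, R_out, eps=1e-6):
    """Epsilon-rule LRP backward pass through one dense layer.

    a: (d_in,) input activations; W: (d_in, d_out) weights;
    b: (d_out,) biases; R_out: (d_out,) relevance of the layer outputs.
    Returns the (d_in,) relevance redistributed to the layer inputs.
    """
    z = a @ W + b                                    # forward pre-activations
    denom = z + eps * np.where(z >= 0, 1.0, -1.0)    # stabilized denominator
    s = R_out / denom                                # relevance per output unit
    return a * (W @ s)                               # redistribute to inputs

# Assumed toy setup: a linear classifier over concatenated word embeddings.
rng = np.random.default_rng(0)
n_words, d_emb, n_classes = 5, 8, 2
W = rng.normal(size=(n_words * d_emb, n_classes))
b = np.zeros(n_classes)

def word_relevance(embeddings):
    """Local explanation: one relevance score per word for one instance."""
    a = embeddings.ravel()
    logits = a @ W + b
    # Start propagation from the predicted class's score.
    R_out = np.eye(n_classes)[logits.argmax()] * logits.max()
    R_in = lrp_dense(a, W, b, R_out)
    return R_in.reshape(n_words, d_emb).sum(axis=1)  # pool relevance per word

# Global importance: aggregate absolute word relevance over many instances.
instances = [rng.normal(size=(n_words, d_emb)) for _ in range(100)]
global_importance = np.mean([np.abs(word_relevance(x)) for x in instances], axis=0)
print(global_importance)
```

The same aggregation step can be restricted to false positive or false negative instances only, which is how the case study uses local relevance scores to surface words the model relies on when it errs.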