The opaqueness of deep NLP models has motivated the development of methods for interpreting how deep models make predictions. Recent work has introduced hierarchical attribution, which produces a hierarchical clustering of the input words together with an attribution score for each cluster. However, existing work on hierarchical attribution all follows the connecting rule, which restricts each cluster to a contiguous span of the input text. We argue that the connecting rule, as an additional prior, may undermine the ability of the explanation to faithfully reflect the model's decision process. To this end, we propose to generate hierarchical explanations without the connecting rule and introduce a framework for generating such hierarchical clusters. Experimental results and further analysis demonstrate the effectiveness of the proposed method in providing high-quality explanations that reflect the model's prediction process.