Existing approaches to explaining deep learning models in NLP usually suffer from two major drawbacks: (1) the main model and the explaining model are decoupled: an additional probing or surrogate model is used to interpret an existing model, so existing explaining tools are not self-explainable; (2) the probing model can only explain a model's predictions in terms of low-level features, computing saliency scores for individual words, and is clumsy with higher-level text units such as phrases, sentences, or paragraphs. To address these two issues, in this paper we propose a simple yet general and effective self-explaining framework for deep learning models in NLP. The key idea of the proposed framework is to place an additional layer, called the interpretation layer, on top of any existing NLP model. This layer aggregates the information for each text span, associates each span with a specific weight, and feeds the weighted combination of span representations to the softmax function for the final prediction. The proposed model has the following merits: (1) the span weights make the model self-explainable, so no additional probing model is required for interpretation; (2) the proposed framework is general and can be adapted to any existing deep learning architecture in NLP; (3) the weight associated with each text span provides direct importance scores for higher-level text units such as phrases and sentences. We show for the first time that interpretability does not come at the cost of performance: a neural model with self-explaining features obtains better performance than its counterpart without them, achieving a new SOTA performance of 59.1 on SST-5 and a new SOTA performance of 92.3 on SNLI.
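The interpretation layer described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the span aggregation (mean pooling), the scoring vector, and the parameter matrices are all hypothetical stand-ins for learned components, chosen only to make the data flow concrete (enumerate spans, score them, normalize scores into span weights, and feed the weighted combination to a softmax classifier).

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(x - x.max())
    return e / e.sum()

def self_explaining_head(token_states, num_labels, rng):
    """Sketch of an interpretation layer on top of a backbone encoder.

    token_states: (seq_len, d) hidden states from any existing NLP model.
    Returns class probabilities and the span weights, which serve as
    importance scores for every contiguous text span.
    All weight matrices are random stand-ins for learned parameters.
    """
    seq_len, d = token_states.shape
    w_score = rng.standard_normal(d)               # hypothetical span-scoring vector
    W_out = rng.standard_normal((d, num_labels))   # hypothetical output projection

    spans, scores = [], []
    for i in range(seq_len):
        for j in range(i, seq_len):
            # Aggregate each span (i, j); mean pooling is one simple choice.
            h_ij = token_states[i:j + 1].mean(axis=0)
            spans.append(h_ij)
            scores.append(h_ij @ w_score)
    spans = np.stack(spans)                        # (num_spans, d)
    alpha = softmax(np.array(scores))              # span weights = explanations
    pooled = alpha @ spans                         # weighted combination of spans
    probs = softmax(pooled @ W_out)                # final prediction
    return probs, alpha

rng = np.random.default_rng(0)
probs, alpha = self_explaining_head(rng.standard_normal((6, 8)), 3, rng)
```

Because the span weights `alpha` are produced inside the model itself, ranking them directly yields the most influential phrases or sentences, with no separate probing model.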