Gradient-based analysis methods, such as saliency map visualizations and adversarial input perturbations, have found widespread use in interpreting neural NLP models due to their simplicity, flexibility, and most importantly, their faithfulness. In this paper, however, we demonstrate that the gradients of a model are easily manipulable, and thus call into question the reliability of gradient-based analyses. In particular, we merge the layers of a target model with a Facade that overwhelms the gradients without affecting the predictions. This Facade can be trained to have gradients that are misleading and irrelevant to the task, such as focusing only on the stop words in the input. On a variety of NLP tasks (text classification, NLI, and QA), we show that our method can manipulate numerous gradient-based analysis techniques: saliency maps, input reduction, and adversarial perturbations all identify unimportant or targeted tokens as being highly important. The code and a tutorial for this paper are available at http://ucinlp.github.io/facade.
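To illustrate the core idea, the following is a minimal sketch (not the authors' exact method; see the linked repository for that) of how a facade model merged with a frozen target model can dominate input gradients while leaving predictions essentially unchanged. The `MergedModel` wrapper, the toy linear models, the `scale` factor, and all shapes below are illustrative assumptions.

```python
# Minimal sketch: a facade model added to a target model so that predictions
# stay the same but input-gradient saliency can be steered by the facade.
import torch
import torch.nn as nn

class MergedModel(nn.Module):
    def __init__(self, target: nn.Module, facade: nn.Module, scale: float = 1e-3):
        super().__init__()
        self.target = target  # frozen; determines the actual predictions
        self.facade = facade  # could be trained to produce misleading gradients
        self.scale = scale    # keeps facade logits too small to flip predictions

    def forward(self, embeds):
        # Predictions are dominated by the target, but gradients w.r.t. the
        # input flow through both models, so a facade whose logits vary
        # sharply with the input can dominate saliency scores.
        return self.target(embeds) + self.scale * self.facade(embeds)

# Toy setup: 5 tokens, 8-dim embeddings, 2-way classification.
torch.manual_seed(0)
target = nn.Sequential(nn.Flatten(), nn.Linear(5 * 8, 2))
facade = nn.Sequential(nn.Flatten(), nn.Linear(5 * 8, 2))
model = MergedModel(target, facade)

embeds = torch.randn(1, 5, 8, requires_grad=True)
logits = model(embeds)
logits[0, logits.argmax()].backward()

# Token-level saliency: gradient norm per token position. A trained facade
# could concentrate this mass on uninformative tokens such as stop words.
saliency = embeds.grad.norm(dim=-1)
print(saliency)
```

In this toy version the facade is untrained; the point is only the architecture: because the facade's contribution to the logits is scaled down, argmax predictions track the target model, while the gradient seen by saliency methods is a sum over both components.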