Adversarial attacks have exposed the vulnerability of machine learning models; however, conducting textual adversarial attacks on natural language processing tasks is non-trivial due to the discreteness of text data. Most previous approaches conduct attacks with the atomic \textit{replacement} operation, which usually yields fixed-length adversarial examples and therefore limits exploration of the decision space. In this paper, we propose variable-length textual adversarial attacks~(VL-Attack) and integrate three atomic operations, namely \textit{insertion}, \textit{deletion} and \textit{replacement}, into a unified framework by introducing and manipulating a special \textit{blank} token while attacking. In this way, our approach can search for adversarial examples around the decision boundary more comprehensively and conduct attacks more effectively. Specifically, our method drops the accuracy of a pre-trained BERT model on IMDB classification by $96\%$ while editing only $1.3\%$ of the tokens. In addition, fine-tuning the victim model on the generated adversarial examples can improve its robustness without hurting performance, especially for length-sensitive models. On non-autoregressive machine translation, our method achieves a BLEU score of $33.18$ on IWSLT14 German-English translation, an improvement of $1.47$ over the baseline model.
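To make the three atomic operations concrete, the following is a minimal sketch of how insertion, deletion, and replacement can all be expressed as edits on a token sequence through a special blank token. The \texttt{[BLANK]} symbol and helper function names are illustrative assumptions, not the paper's implementation; in the actual attack, the blank positions would be filled by a pre-trained model rather than left as placeholders.

\begin{verbatim}
# Minimal sketch (not the authors' implementation) of the three atomic
# edit operations unified via a special blank token. The [BLANK] symbol
# and the helper names below are hypothetical.

BLANK = "[BLANK]"

def insert(tokens, pos):
    """Insertion: place a blank at `pos`; a model later fills it in,
    lengthening the sequence."""
    return tokens[:pos] + [BLANK] + tokens[pos:]

def delete(tokens, pos):
    """Deletion: drop the token at `pos`, shortening the sequence."""
    return tokens[:pos] + tokens[pos + 1:]

def replace(tokens, pos):
    """Replacement: blank out the token at `pos` so it can be
    re-predicted; the length is unchanged."""
    return tokens[:pos] + [BLANK] + tokens[pos + 1:]

if __name__ == "__main__":
    sent = "the movie was surprisingly good".split()
    print(insert(sent, 2))   # [..., 'movie', '[BLANK]', 'was', ...]
    print(delete(sent, 3))   # ['the', 'movie', 'was', 'good']
    print(replace(sent, 4))  # [..., 'surprisingly', '[BLANK]']
\end{verbatim}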