In natural language processing, deep learning models have recently been shown to be vulnerable to various adversarial perturbations, but relatively little work has been done on the defense side. In particular, there are few effective defenses against synonym substitution based attacks, which preserve the syntactic structure and semantic information of the original text while fooling deep learning models. We contribute in this direction and propose a novel adversarial defense method called the Synonym Encoding Method (SEM). Specifically, SEM inserts an encoder before the input layer of the target model to map each cluster of synonyms to a unique encoding, and trains the model to eliminate possible adversarial perturbations without modifying the network architecture or adding extra data. Extensive experiments demonstrate that SEM can effectively defend against current synonym substitution based attacks and block the transferability of adversarial examples. SEM also scales easily and efficiently to large models and large datasets.
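The core idea can be illustrated with a minimal sketch, under stated assumptions: here the synonym clusters are a toy hand-made table (the paper derives clusters from word similarity), and the "encoder" simply maps every word in a cluster to one canonical representative before the text reaches the model.

```python
# Toy synonym clusters; in practice these would be built automatically
# (e.g., from word-embedding neighborhoods), not written by hand.
SYNONYM_CLUSTERS = [
    {"film", "movie", "picture"},
    {"good", "fine", "nice"},
]

# Map every word in a cluster to a single canonical encoding.
ENCODING = {}
for cluster in SYNONYM_CLUSTERS:
    canonical = sorted(cluster)[0]  # deterministic representative
    for word in cluster:
        ENCODING[word] = canonical

def encode(tokens):
    """Replace each token by its cluster representative (identity for unknown words)."""
    return [ENCODING.get(t, t) for t in tokens]

# A synonym-substitution attack ("good movie" -> "fine film") no longer
# changes the model's input after encoding:
clean = encode("a good movie".split())
adversarial = encode("a fine film".split())
print(clean == adversarial)  # both map to the same encoded sequence
```

Because the encoder collapses each synonym cluster to one token before the input layer, a substitution attack that swaps words within a cluster produces exactly the same encoded input, which is why no architectural change or extra training data is needed.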