We attribute the vulnerability of natural language processing models to the fact that similar inputs are mapped to dissimilar representations in the embedding space, leading to inconsistent outputs, and we propose a novel robust training method, termed Fast Triplet Metric Learning (FTML). Specifically, we argue that, for better robustness, an original sample should have representations similar to those of its adversarial counterparts and distinguishable from those of other samples. To this end, we incorporate triplet metric learning into standard training to pull words closer to their positive samples (i.e., synonyms) and push them away from their negative samples (i.e., non-synonyms) in the embedding space. Extensive experiments demonstrate that FTML significantly improves model robustness against various advanced adversarial attacks while maintaining competitive classification accuracy on original samples. Moreover, our method is efficient, as it only adjusts the embedding and introduces very little overhead over standard training. Our work shows the great potential of improving textual robustness through robust word embeddings.
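To make the training objective concrete, the following is a minimal sketch of a triplet metric term over a word embedding table, assuming a standard triplet margin loss; the synonym lookup and non-synonym sampling shown here are hypothetical placeholders and may differ from the paper's actual procedure.

```python
import torch
import torch.nn as nn

class TripletEmbeddingLoss(nn.Module):
    """Sketch: pull each word toward a synonym (positive) and push it away
    from a non-synonym (negative) in the embedding space."""

    def __init__(self, embedding: nn.Embedding, margin: float = 0.5):
        super().__init__()
        self.embedding = embedding
        self.loss = nn.TripletMarginLoss(margin=margin)

    def forward(self, anchor_ids, positive_ids, negative_ids):
        a = self.embedding(anchor_ids)   # original words
        p = self.embedding(positive_ids) # synonyms (hypothetical lookup)
        n = self.embedding(negative_ids) # sampled non-synonyms
        return self.loss(a, p, n)

# Usage sketch: the triplet term is added to the task loss during standard training.
vocab_size, dim = 30000, 300
embedding = nn.Embedding(vocab_size, dim)
triplet = TripletEmbeddingLoss(embedding)

anchor = torch.randint(0, vocab_size, (64,))
positive = torch.randint(0, vocab_size, (64,))
negative = torch.randint(0, vocab_size, (64,))
triplet_loss = triplet(anchor, positive, negative)  # combined with the classification loss in practice
```

Because only the embedding layer receives this extra gradient signal, the added cost per step is small relative to the rest of the forward and backward pass, which is consistent with the efficiency claim above.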