Recently, a few certified defense methods have been developed to provably guarantee the robustness of a text classifier against adversarial synonym substitutions. However, all existing certified defense methods assume that the defender is informed of how the adversary generates synonyms, which is not a realistic scenario. In this paper, we propose a certifiably robust defense method that randomly masks a certain proportion of the words in an input text, under which the above unrealistic assumption is no longer necessary. The proposed method can defend against not only word substitution-based attacks, but also character-level perturbations. We can certify the classifications of over 50% of texts to be robust to any perturbation of 5 words on AGNEWS, and of 2 words on the SST2 dataset. The experimental results show that our randomized smoothing method significantly outperforms recently proposed defense methods across multiple datasets.
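The random-masking step described above can be sketched as follows. This is a minimal illustration only: the `[MASK]` token, the whitespace tokenization, and the mask rate are assumptions for the example, not the paper's exact settings.

```python
import random

MASK = "[MASK]"  # placeholder mask token; an assumption following BERT-style models


def random_mask(text, mask_rate=0.3, rng=None):
    """Replace a randomly chosen proportion of the words in `text` with a mask token.

    A sketch of the random-masking operation: sample floor(mask_rate * n)
    word positions without replacement and mask them. The smoothed classifier
    would then aggregate predictions over many such masked copies.
    """
    rng = rng or random.Random()
    words = text.split()
    n_mask = int(len(words) * mask_rate)
    for i in rng.sample(range(len(words)), n_mask):
        words[i] = MASK
    return " ".join(words)
```

In a randomized-smoothing setup, many independently masked copies of the input would be classified and their votes aggregated, so that a bounded number of perturbed words can only influence a bounded fraction of the copies.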