Textual backdoor attacks are a practical threat to NLP systems. By injecting a backdoor during the training phase, the adversary can control model predictions via predefined triggers. As various attack and defense models have been proposed, it is of great significance to perform rigorous evaluations. However, we highlight two issues in previous backdoor learning evaluations: (1) the differences between real-world scenarios (e.g., releasing poisoned datasets or models) are neglected, and we argue that each scenario has its own constraints and concerns and thus requires specific evaluation protocols; (2) the evaluation metrics only consider whether the attacks can flip the models' predictions on poisoned samples and retain performance on benign samples, but ignore that poisoned samples should also be stealthy and semantic-preserving. To address these issues, we categorize existing works into three practical scenarios in which attackers release datasets, pre-trained models, and fine-tuned models, respectively, and then discuss their unique evaluation methodologies. On metrics, to evaluate poisoned samples comprehensively, we use grammatical error increase and perplexity difference to measure stealthiness, along with text similarity to measure validity. After formalizing the frameworks, we develop an open-source toolkit, OpenBackdoor, to foster the implementation and evaluation of textual backdoor learning. With this toolkit, we perform extensive experiments to benchmark attack and defense models under the suggested paradigm. To facilitate the underexplored defenses against poisoned datasets, we further propose CUBE, a simple yet strong clustering-based defense baseline. We hope that our frameworks and benchmarks can serve as cornerstones for future model development and evaluation.
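The following is a minimal sketch of how the three poisoned-sample metrics mentioned above could be computed for a benign/poisoned sentence pair: grammatical error increase and perplexity difference for stealthiness, and text similarity for validity. The specific tools used here (language_tool_python, GPT-2 via transformers, Sentence-BERT) are illustrative assumptions and may differ from the exact implementations in OpenBackdoor.

```python
import math
import torch
import language_tool_python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
from sentence_transformers import SentenceTransformer, util

tool = language_tool_python.LanguageTool("en-US")
gpt2_tok = GPT2TokenizerFast.from_pretrained("gpt2")
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2").eval()
sbert = SentenceTransformer("all-MiniLM-L6-v2")

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2 (lower = more fluent)."""
    ids = gpt2_tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = gpt2(ids, labels=ids).loss
    return math.exp(loss.item())

def evaluate_poisoned_pair(benign: str, poisoned: str) -> dict:
    """Compare a poisoned sample against its benign original."""
    delta_grammar = len(tool.check(poisoned)) - len(tool.check(benign))
    delta_ppl = perplexity(poisoned) - perplexity(benign)
    emb = sbert.encode([benign, poisoned], convert_to_tensor=True)
    similarity = util.cos_sim(emb[0], emb[1]).item()
    return {
        "grammar_error_increase": delta_grammar,  # stealthiness
        "perplexity_difference": delta_ppl,       # stealthiness
        "text_similarity": similarity,            # validity / semantic preservation
    }
```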
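Below is a simplified sketch of a clustering-based dataset defense in the spirit of CUBE: embed every training sample with a model trained on the (possibly poisoned) data, cluster the embeddings within each label, and drop samples falling into a small minority cluster before retraining. The feature extractor, the choice of KMeans, and the "smaller cluster is suspicious" heuristic are stand-in assumptions for illustration, not the exact OpenBackdoor implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_filter(embeddings: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Return a boolean mask selecting training samples to keep."""
    keep = np.ones(len(labels), dtype=bool)
    for y in np.unique(labels):
        idx = np.where(labels == y)[0]
        if len(idx) < 2:
            continue
        # Split this label's samples into two clusters; poisoned samples tend
        # to form a compact minority, so the smaller cluster is treated as
        # suspicious and removed.
        assignment = KMeans(n_clusters=2, n_init=10).fit_predict(embeddings[idx])
        minority = np.argmin(np.bincount(assignment))
        keep[idx[assignment == minority]] = False
    return keep
```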