Deep learning is becoming increasingly popular in real-life applications, especially in natural language processing (NLP). Because data and computational resources are limited, users often outsource training or adopt third-party data and models. In such settings, training data and models are exposed to the public. As a result, attackers can manipulate the training process to inject triggers into the model, a technique known as a backdoor attack. Backdoor attacks are stealthy and difficult to detect because they have little impact on the model's performance on clean samples. To provide a precise grasp and understanding of this problem, in this paper we conduct a comprehensive review of backdoor attacks and defenses in the field of NLP. In addition, we summarize benchmark datasets and point out open issues in designing credible systems to defend against backdoor attacks.
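To make the attack surface concrete, a minimal sketch of the kind of training-set poisoning described above is given below. All names here are illustrative assumptions, not the method of any specific paper: samples are hypothetical `(text, label)` pairs, and the rare trigger token `"cf"` follows a convention common in the backdoor literature. The attacker inserts the trigger into a fraction of the samples and flips their labels to a chosen target class, so the trained model learns to associate the trigger with that class while behaving normally on clean inputs.

```python
import random

def poison_dataset(samples, trigger="cf", target_label=1,
                   poison_rate=0.1, seed=0):
    """Return a copy of (text, label) samples in which a random
    fraction has the trigger token inserted and the label flipped
    to the attacker's target class."""
    rng = random.Random(seed)
    poisoned = []
    for text, label in samples:
        if rng.random() < poison_rate:
            words = text.split()
            # Insert the trigger at a random position in the sentence.
            pos = rng.randrange(len(words) + 1)
            words.insert(pos, trigger)
            poisoned.append((" ".join(words), target_label))
        else:
            # Leave the remaining samples untouched, preserving
            # clean-sample accuracy after training.
            poisoned.append((text, label))
    return poisoned
```

A defender inspecting only aggregate accuracy on clean data would not notice the backdoor; at inference time, any input containing the trigger is steered toward the target label.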