The frustratingly fragile nature of neural network models makes current natural language generation (NLG) systems prone to backdoor attacks that cause them to generate malicious sequences, which can be sexist or offensive. Unfortunately, little effort has been devoted to understanding how backdoor attacks affect current NLG models and how such attacks can be defended against. In this work, we give formal definitions of backdoor attack and defense, and investigate this problem on two important NLG tasks: machine translation and dialog generation. Tailored to the inherent nature of NLG models (e.g., producing a sequence of coherent words given a context), we design defense strategies against these attacks. We find that testing the backward probability of generating the source given the target yields effective defense against all the attack types considered, and is able to handle the {\it one-to-many} issue present in many NLG tasks such as dialog generation. We hope that this work raises awareness of the backdoor risks concealed in deep NLG systems and inspires more future work (on both attack and defense) in this direction.
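As a rough illustration of the backward-probability idea mentioned above, the sketch below scores log p(source | target) with a target-to-source ("backward") seq2seq model and flags outputs whose score falls below a threshold. This is a minimal sketch under stated assumptions, not the paper's exact protocol: the model name `backward-mt-model`, the length-normalized scoring, and the threshold value are all illustrative placeholders.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM


def backward_log_prob(backward_model, tokenizer, source: str, target: str) -> float:
    """Score log p(source | target) under a target->source seq2seq model.

    A low score suggests the target is unlikely to be a faithful generation
    for the source, e.g., because a backdoor trigger made the forward model
    emit attacker-chosen content.
    """
    # The backward model reads the *target* and is asked to reproduce the *source*.
    enc = tokenizer(target, return_tensors="pt")
    labels = tokenizer(text_target=source, return_tensors="pt").input_ids
    with torch.no_grad():
        out = backward_model(**enc, labels=labels)
    # `out.loss` is the mean per-token cross-entropy; its negation serves as a
    # length-normalized log-probability suitable for thresholding.
    return -out.loss.item()


# Hypothetical usage: "backward-mt-model" stands in for any target->source
# translation model trained on trusted (clean) parallel data.
tokenizer = AutoTokenizer.from_pretrained("backward-mt-model")
model = AutoModelForSeq2SeqLM.from_pretrained("backward-mt-model")

score = backward_log_prob(model, tokenizer, source="...", target="...")
if score < -5.0:  # threshold chosen on clean validation data (illustrative value)
    print("Flagged: candidate backdoored output")
```

Because the backward score conditions on the target rather than the source, it remains meaningful even when a single source admits many valid targets, which is why it can cope with the one-to-many nature of tasks such as dialog generation.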