Although it has been demonstrated that Natural Language Processing (NLP) algorithms are vulnerable to deliberate attacks, the question of whether such weaknesses can lead to software security threats is under-explored. To bridge this gap, we conducted vulnerability tests on Text-to-SQL systems that are commonly used to create natural language interfaces to databases. We showed that the Text-to-SQL modules within six commercial applications can be manipulated to produce malicious code, potentially leading to data breaches and Denial-of-Service (DoS) attacks. This is the first demonstration that NLP models can be exploited as attack vectors in the wild. In addition, experiments using four open-source language models verified that straightforward backdoor attacks on Text-to-SQL systems achieve a 100% success rate without affecting their performance. The aim of this work is to draw the community's attention to potential software security issues associated with NLP algorithms and to encourage exploration of methods to mitigate them.