Training robust deep learning models for downstream tasks is a critical challenge. Research has shown that downstream models can be easily fooled by adversarial inputs: examples that closely resemble the training data but are slightly perturbed in a way imperceptible to humans. Understanding the behavior of natural language models under these attacks is crucial to defending them better. In the black-box attack setting, where no access to model parameters is available, the attacker can only query the targeted model's outputs to craft a successful attack. Current state-of-the-art black-box attacks are costly in both computational complexity and the number of queries needed to craft successful adversarial examples. The number of queries is critical in real-world scenarios, where fewer queries are desired to avoid raising suspicion toward the attacking agent. In this paper, we propose Explain2Attack, a black-box adversarial attack on the text classification task. Instead of searching for important words to perturb by querying the target model, Explain2Attack employs an interpretable substitute model from a similar domain to learn word importance scores. We show that our framework matches or outperforms the attack rates of state-of-the-art models, yet with lower query cost and higher efficiency.
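The core idea of scoring word importance with a substitute model, rather than by querying the target, can be sketched as follows. This is a minimal illustration, not the paper's exact method: `substitute_score` is a hypothetical stand-in for an interpretable model trained on similar-domain data, and importance is estimated by a simple leave-one-out score change.

```python
# Hedged sketch: rank candidate words for perturbation using only a
# *substitute* model, so no queries hit the target model at this stage.

def substitute_score(words):
    # Hypothetical interpretable substitute: a linear model whose
    # lexicon weights were learned on a similar-domain corpus.
    weights = {"terrible": -2.0, "boring": -1.5, "great": 2.0, "plot": 0.1}
    return sum(weights.get(w, 0.0) for w in words)

def word_importance(words):
    """Importance of each word = |score change| when it is removed."""
    base = substitute_score(words)
    scores = {}
    for i, w in enumerate(words):
        reduced = words[:i] + words[i + 1:]
        scores[w] = abs(base - substitute_score(reduced))
    # Highest-importance words are the first candidates to perturb.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranking = word_importance("the plot was terrible and boring".split())
```

Under this sketch, `ranking` places "terrible" and "boring" first, so an attacker would spend its limited target-model queries only on verifying perturbations of those words.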