Many works have investigated adversarial attacks and defenses under settings where a bounded and imperceptible perturbation can be added to the input. However, in the real world, an attacker does not need to comply with this restriction. In fact, more threats to deep models come from unrestricted adversarial examples, i.e., the attacker makes large and visible modifications to the image that cause the model to classify incorrectly, while not affecting normal observation from the human perspective. Unrestricted adversarial attack is a popular and practical direction, but it has not been studied thoroughly. We organize this competition to explore more effective unrestricted adversarial attack algorithms, and thereby to accelerate academic research on model robustness under stronger unbounded attacks. The competition is held on the TianChi platform (\url{https://tianchi.aliyun.com/competition/entrance/531853/introduction}) as one of the series of AI Security Challengers Program.
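For concreteness, the two threat models can be contrasted as follows; this is a standard formalization rather than the competition's official definition, and the symbols $f$, $\mathcal{L}$, and $\mathcal{S}(x)$ are our notation. The bounded setting constrains the perturbation $\delta$ by an $\ell_p$ budget, whereas the unrestricted setting only requires the adversarial image to remain semantically consistent for human observers:
\begin{equation*}
\text{bounded: } \max_{\|\delta\|_p \le \epsilon} \mathcal{L}\big(f(x+\delta),\, y\big), \qquad
\text{unrestricted: } \max_{x^{\mathrm{adv}} \in \mathcal{S}(x)} \mathcal{L}\big(f(x^{\mathrm{adv}}),\, y\big),
\end{equation*}
where $f$ is the target classifier, $\mathcal{L}$ is the classification loss, $y$ is the true label of the clean image $x$, and $\mathcal{S}(x)$ denotes the set of images that humans still perceive as depicting the same object as $x$.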