Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to trick the model into making a mistake. Such examples pose a serious threat to the applicability of machine-learning-based systems, especially in life- and safety-critical domains. To address this problem, the area of adversarial robustness investigates the mechanisms behind adversarial attacks and defenses against them. This survey reviews the literature on how the data used by a model affects that model's adversarial robustness. It systematically identifies and summarizes the state-of-the-art research in this area and further discusses knowledge gaps and promising future research directions.
Title: It Is All About Data: A Survey on the Effects of Data on Adversarial Robustness
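As a concrete illustration of the definition above (not drawn from the survey itself), the sketch below shows how an adversarial example can be constructed with the fast gradient sign method (FGSM), one of the best-known attack mechanisms studied in the adversarial-robustness literature. The logistic-regression model, its weights, and the toy input are all hypothetical, chosen only to keep the example self-contained in NumPy:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_example(x, y, w, b, eps):
    """Perturb input x by eps in the direction of the sign of the loss gradient.

    x: input vector, y: true label (0 or 1), (w, b): logistic-regression
    parameters, eps: perturbation budget. Returns an adversarial version
    of x. This is a hypothetical toy setup for illustration only.
    """
    p = sigmoid(w @ x + b)      # model's predicted probability of class 1
    grad_x = (p - y) * w        # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Toy demonstration: a confidently correct prediction is flipped by a
# small perturbation aligned with the sign of the loss gradient.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])        # the model classifies x as class 1
y = 1.0
x_adv = fgsm_example(x, y, w, b, eps=1.0)
print(sigmoid(w @ x + b) > 0.5)      # original prediction: class 1
print(sigmoid(w @ x_adv + b) > 0.5)  # adversarial prediction: class 0
```

The key point, which motivates the survey's topic, is that the perturbation is not random: it is deliberately aligned with the model's loss gradient, so even a small `eps` can change the prediction while the input remains superficially similar.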