Adversarial robustness studies the worst-case performance of machine learning models in order to ensure their safety and reliability. With the proliferation of deep-learning-based technologies, the potential risks associated with model development and deployment are amplified and can become severe vulnerabilities. This paper provides a comprehensive overview of the research topics and the foundational principles of research methods for the adversarial robustness of deep learning models, covering attacks, defenses, verification, and novel applications.