Many state-of-the-art ML models now outperform humans on tasks such as image classification, and this strong performance has led to their widespread deployment. However, the existence of adversarial attacks and data poisoning attacks calls the robustness of ML models into question. For instance, Engstrom et al. demonstrated that state-of-the-art image classifiers can be fooled by applying a small rotation to an arbitrary input image. As ML systems are increasingly integrated into safety- and security-sensitive applications, such attacks pose a considerable threat. This chapter focuses on two broad and important areas of ML security: adversarial attacks and data poisoning attacks.
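To make the rotation example concrete, the following is a minimal sketch, not Engstrom et al.'s exact evaluation protocol, of probing a pretrained ImageNet classifier with small rotations and checking whether its prediction flips. The image path `cat.jpg` is a hypothetical placeholder; the sketch assumes PyTorch and torchvision are installed.

```python
import torch
from torchvision import models, transforms
from torchvision.transforms import functional as TF
from PIL import Image

# Pretrained ImageNet classifier (weights enum from torchvision >= 0.13).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("cat.jpg")  # hypothetical input image

with torch.no_grad():
    clean_pred = model(preprocess(image).unsqueeze(0)).argmax(dim=1)
    # Sweep small rotations and report any angle that changes the prediction.
    for angle in range(-15, 16, 5):
        rotated = TF.rotate(image, angle)
        pred = model(preprocess(rotated).unsqueeze(0)).argmax(dim=1)
        if pred != clean_pred:
            print(f"Prediction flipped at {angle} degrees: "
                  f"{clean_pred.item()} -> {pred.item()}")
```

Even this simple sweep often finds angles within a few degrees that change the predicted class, which is what makes such spatial perturbations a practical concern.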