Machine learning (ML) algorithms are increasingly being integrated into the embedded and IoT systems that surround us, yet these algorithms are vulnerable to adversarial attacks. Deploying ML algorithms on resource-limited embedded platforms also requires the use of model compression techniques. The impact of such model compression techniques on the adversarial robustness of ML models is an important and emerging area of research. This article provides an overview of the landscape of adversarial attacks and ML model compression techniques relevant to embedded systems. We then describe efforts that seek to understand the relationship between adversarial attacks and ML model compression, before discussing open problems in this area.