The increased adoption of Artificial Intelligence (AI) presents an opportunity to solve many socio-economic and environmental challenges; however, this cannot happen without securing AI-enabled technologies. In recent years, most AI models have proven vulnerable to advanced and sophisticated hacking techniques. This challenge has motivated concerted research efforts into adversarial AI, with the aim of developing robust machine and deep learning models that are resilient to different types of adversarial scenarios. In this paper, we present a holistic cyber security review of adversarial attacks against AI applications, covering aspects such as adversarial knowledge and capabilities, as well as existing methods for generating adversarial examples and existing cyber defence models. We explain mathematical AI models, especially new variants of reinforcement and federated learning, to demonstrate how attack vectors exploit vulnerabilities of AI models. We also propose a systematic framework for demonstrating attack techniques against AI applications and review several cyber defences that would protect AI applications against those attacks. We further highlight the importance of understanding adversarial goals and capabilities, especially in light of recent attacks against industry applications, for developing adaptive defences that secure AI applications. Finally, we describe the main challenges and future research directions in the domain of security and privacy of AI technologies.