The use of Machine Learning (ML) algorithms, and in particular Deep Neural Network (DNN) models, has become a widely accepted standard across many domains, most notably IoT-based systems. DNN models achieve impressive performance in several sensitive fields such as medical diagnosis, smart transportation, and security threat detection, and they represent valuable pieces of Intellectual Property. Over the last few years, a major trend has been the large-scale deployment of models on a wide variety of devices. However, this migration to embedded systems is slowed by the broad spectrum of attacks threatening the integrity, confidentiality, and availability of embedded models. In this review, we survey the landscape of attacks targeting the confidentiality of embedded DNN models, attacks that may have a major impact on critical IoT systems, with a particular focus on model extraction and data leakage. We highlight that Side-Channel Analysis (SCA) is a relatively unexplored avenue through which a model's confidentiality can be compromised. A model's input data, architecture, or parameters can be extracted from power or electromagnetic observations, underscoring a real need for protection from a security standpoint.