For many IoT domains, Machine Learning, and more particularly Deep Learning, brings very efficient solutions to handle complex data and perform challenging and often critical tasks. However, the deployment of models across a large variety of devices faces several obstacles related to trust and security. The latter is particularly critical since the demonstration of severe flaws impacting the integrity, confidentiality and availability of neural network models. However, the attack surface of such embedded systems cannot be reduced to abstract flaws but must encompass the physical threats related to the implementation of these models within hardware platforms (e.g., 32-bit microcontrollers). Among physical attacks, Fault Injection Analysis (FIA) is known to be very powerful, with a large spectrum of attack vectors. Most importantly, highly focused FIA techniques such as laser beam injection enable very accurate evaluation of the vulnerabilities as well as the robustness of embedded systems. Here, we discuss how laser injection with state-of-the-art equipment, combined with theoretical evidence from Adversarial Machine Learning, highlights worrying threats against the integrity of deep learning inference, and we argue that joint efforts from the theoretical AI and Physical Security communities are an urgent need.