With the increased usage of AI accelerators on mobile and edge devices, on-device machine learning (ML) is gaining popularity. Consequently, thousands of proprietary ML models are being deployed on billions of untrusted devices, raising serious concerns about model privacy. However, protecting model privacy without losing access to the AI accelerators is a challenging problem. In this paper, we present a novel on-device model inference system, ShadowNet. ShadowNet protects model privacy with a Trusted Execution Environment (TEE) while securely outsourcing the heavy linear layers of the model to untrusted hardware accelerators. ShadowNet achieves this by transforming the weights of the linear layers before outsourcing them and restoring the results inside the TEE. The nonlinear layers are also kept secure inside the TEE. The weight transformation and result restoration are designed so that they can be implemented efficiently. We have built a ShadowNet prototype based on TensorFlow Lite and applied it to four popular CNNs, namely MobileNets, ResNet-44, AlexNet, and MiniVGG. Our evaluation shows that ShadowNet achieves strong security guarantees with reasonable performance, offering a practical solution for secure on-device model inference.
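To make the outsourcing idea concrete, the following is a minimal NumPy sketch of a linear weight transformation of the kind the abstract describes. The specific transform here (per-output-channel random scaling plus a random channel permutation on a toy linear layer) and all function names are illustrative assumptions, not the paper's exact scheme; plain NumPy stands in for the untrusted accelerator.

```python
# A minimal sketch of the weight-transformation idea, under assumptions noted above.
import numpy as np

rng = np.random.default_rng(0)

def transform_weights(W):
    """Scale and permute output channels before outsourcing (done inside the TEE)."""
    out_ch = W.shape[0]
    scale = rng.uniform(0.5, 2.0, size=out_ch)   # secret per-channel scales
    perm = rng.permutation(out_ch)               # secret channel permutation
    W_obf = (W * scale[:, None])[perm]           # only obfuscated weights leave the TEE
    return W_obf, scale, perm

def untrusted_linear(x, W_obf):
    """The heavy matrix multiply, run on the untrusted accelerator."""
    return W_obf @ x                             # accelerator sees only transformed weights

def restore(y_obf, scale, perm):
    """Undo the permutation and scaling inside the TEE."""
    y = np.empty_like(y_obf)
    y[perm] = y_obf                              # inverse permutation
    return y / scale[:, None]                    # inverse scaling

# Toy check: the restored output matches the direct computation.
W = rng.standard_normal((8, 16))                 # 8 output channels, 16 inputs
x = rng.standard_normal((16, 4))                 # batch of 4 input vectors
W_obf, scale, perm = transform_weights(W)
y = restore(untrusted_linear(x, W_obf), scale, perm)
assert np.allclose(y, W @ x)
```

Because the layer is linear in its weights, scaling and permuting output channels commutes with the computation, so the TEE can invert the transform cheaply on the accelerator's output.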