With the increasing use of AI accelerators on mobile and edge devices, on-device machine learning (ML) is gaining popularity. Thousands of proprietary ML models are deployed today on billions of untrusted devices, raising serious concerns about model privacy. However, protecting model privacy without losing access to the untrusted AI accelerators is a challenging problem. In this paper, we present ShadowNet, a novel on-device model inference system. ShadowNet protects model privacy with a Trusted Execution Environment (TEE) while securely outsourcing the heavy linear layers of the model to untrusted hardware accelerators. It achieves this by transforming the weights of the linear layers before outsourcing them and restoring the results inside the TEE; the non-linear layers are also kept inside the TEE. ShadowNet's design ensures that both the weight transformation and the subsequent restoration of the results are efficient. We build a ShadowNet prototype based on TensorFlow Lite and evaluate it on five popular CNNs, namely MobileNet, ResNet-44, MiniVGG, ResNet-404, and YOLOv4-tiny. Our evaluation shows that ShadowNet achieves strong security guarantees with reasonable performance, offering a practical solution for secure on-device model inference.
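To make the outsourcing step concrete, the sketch below illustrates the general principle for a single fully connected layer: because the layer is linear, the TEE can hide the weights behind an invertible transform, let the untrusted accelerator do the heavy matrix computation on the transformed weights, and undo the transform on the result. The transformation used here (per-output-channel scaling plus a channel permutation) and all variable names are illustrative assumptions for this sketch, not ShadowNet's exact scheme.

```python
# Minimal sketch (illustrative, not the paper's implementation) of outsourcing
# a linear layer with masked weights and restoring the result inside a TEE.
import numpy as np

rng = np.random.default_rng(0)

# Original (secret) layer weights: in_features x out_features.
W = rng.standard_normal((64, 32)).astype(np.float32)

# --- Inside the TEE: transform the weights before they leave ---
perm = rng.permutation(W.shape[1])                              # shuffle output channels
scale = rng.uniform(0.5, 2.0, W.shape[1]).astype(np.float32)    # per-channel scaling
W_masked = (W * scale)[:, perm]                                 # only W_masked is outsourced

# --- On the untrusted accelerator: heavy linear computation ---
x = rng.standard_normal((8, 64)).astype(np.float32)             # a batch of activations
y_masked = x @ W_masked                                         # accelerator never sees W

# --- Back inside the TEE: restore the true layer output ---
inv_perm = np.argsort(perm)
y = y_masked[:, inv_perm] / scale                               # undo shuffle and scaling

assert np.allclose(y, x @ W, atol=1e-4)
```

The same idea carries over to convolutions, which are likewise linear in their filters; ShadowNet's actual transformation and its security analysis are presented in the body of the paper.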