Performing deep learning inference on end-user devices provides fast offline results and helps protect user privacy. However, running models on untrusted client devices exposes model information that may be proprietary: the operating system or other applications on the device could be manipulated to copy and redistribute this information, infringing the model provider's intellectual property. We propose using ARM TrustZone, a hardware-based security feature present in most phones, to confidentially run a proprietary model on an untrusted end-user device. We explore the limitations and design challenges of using TrustZone and examine potential approaches for confidential deep learning within this environment. Of particular interest is providing robust protection of proprietary model information while minimizing the total performance overhead.