As machine learning (ML) technologies and applications rapidly transform many domains of computing, security issues associated with ML are also emerging. In the domain of systems security, many efforts have been made to ensure the confidentiality of ML models and data. ML computations are often inevitably performed in untrusted environments and entail complex multi-party security requirements. Hence, researchers have leveraged Trusted Execution Environments (TEEs) to build confidential ML computation systems. This paper conducts a systematic and comprehensive survey that classifies attack vectors and mitigations for TEE-protected confidential ML computation in untrusted environments, analyzes multi-party ML security requirements, and discusses related engineering challenges.