As machine learning (ML) technologies and applications rapidly reshape many computing domains, security issues associated with ML are also emerging. In the domain of systems security, many efforts have been made to ensure the confidentiality of ML models and data. ML computations are often inevitably performed in untrusted environments and entail complex multi-party security requirements. Hence, researchers have leveraged Trusted Execution Environments (TEEs) to build confidential ML computation systems. We conduct a systematic and comprehensive survey by classifying attack vectors and mitigations for confidential ML computation in untrusted environments, analyzing the complex security requirements of multi-party scenarios, and summarizing the engineering challenges of implementing confidential ML. Lastly, we suggest future research directions based on our study.