Machine learning (ML) is increasingly being adopted in a wide variety of application domains. A well-performing ML model usually relies on a large volume of training data and high-powered computational resources. The need for and use of such huge volumes of data raise serious privacy concerns because of the potential risk of leakage of highly privacy-sensitive information; further, evolving regulatory environments that increasingly restrict access to and use of privacy-sensitive data add significant challenges to fully benefiting from the power of ML for data-driven applications. A trained ML model may also be vulnerable to adversarial attacks such as membership, attribute, or property inference attacks and model inversion attacks. Hence, well-designed privacy-preserving ML (PPML) solutions are critically needed for many emerging applications. Increasingly, significant research efforts from both academia and industry can be seen in the PPML area, aiming to integrate privacy-preserving techniques into the ML pipeline or into specific algorithms, or to design various PPML architectures. In particular, existing PPML research cross-cuts ML, systems and applications design, and security and privacy; hence, there is a critical need to understand the state of the art, the related challenges, and a roadmap for future research in the PPML area. In this paper, we systematically review and summarize existing privacy-preserving approaches and propose a Phase, Guarantee, and Utility (PGU) triad-based model to understand and guide the evaluation of various PPML solutions by decomposing their privacy-preserving functionalities. We discuss the unique characteristics and challenges of PPML and outline possible research directions that leverage, as well as benefit, multiple research communities such as ML, distributed systems, and security and privacy.