Deploying machine learning models in production may allow adversaries to infer sensitive information about their training data. There is a vast literature analyzing different types of inference risks, ranging from membership inference to reconstruction attacks. Inspired by the success of games (i.e., probabilistic experiments) in studying security properties in cryptography, some authors describe privacy inference risks in machine learning in a similar game-based style. However, adversary capabilities and goals are often stated in subtly different ways from one presentation to another, which makes it hard to relate and compose results. In this paper, we present a game-based framework to systematize the body of knowledge on privacy inference risks in machine learning.
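To illustrate the game-based style the abstract refers to, the following is a minimal sketch of a membership inference game: a challenger flips a secret bit, presents the adversary with either a training-set member or a fresh point, and measures the adversary's advantage over random guessing. All names here (`membership_inference_game`, the memorizing adversary) are hypothetical illustrations, not the paper's actual formalization.

```python
import random

def membership_inference_game(train, pool, adversary, trials=1000, seed=0):
    """Run the membership inference experiment and return the adversary's
    advantage over random guessing (a hypothetical sketch)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        # Challenger samples a secret bit b: if b == 1, the challenge point
        # is a member of the training set; otherwise it is a fresh point.
        b = rng.randint(0, 1)
        z = rng.choice(train) if b == 1 else rng.choice(pool)
        # Adversary outputs a guess for b given only the challenge point.
        if adversary(z) == b:
            wins += 1
    # Advantage: success rate minus the 1/2 achievable by blind guessing.
    return wins / trials - 0.5

# Toy instantiation: disjoint member/non-member populations and an
# adversary that perfectly recognizes training members (worst case).
train_set = list(range(0, 50))
fresh_pool = list(range(50, 100))
members = set(train_set)
memorizing_adversary = lambda z: 1 if z in members else 0

advantage = membership_inference_game(train_set, fresh_pool, memorizing_adversary)
```

Against this perfectly memorizing adversary the advantage is maximal (0.5); a model leaking nothing about membership would drive it toward 0.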