There is a well-known tension between the need to analyze personal data to drive business and privacy concerns. Many data protection regulations, including the EU General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), impose strict restrictions and obligations on companies that collect or process personal data. Moreover, machine learning models themselves can be used to derive personal information, as demonstrated by recent membership and attribute inference attacks. Anonymized data, however, is exempt from data protection principles and obligations. Thus, models built on anonymized data are also exempt from any privacy obligations, in addition to providing better protection against such attacks on the training data. However, learning on anonymized data typically results in a significant degradation in accuracy. We address this challenge by guiding the anonymization process using the knowledge encoded within the model, targeting it to minimize the impact on the model's accuracy, a process we call accuracy-guided anonymization. We demonstrate that by focusing on the model's accuracy rather than on generic information-loss measures, our method outperforms state-of-the-art k-anonymity methods in terms of the utility achieved, particularly for high values of k and large numbers of quasi-identifiers. We also demonstrate that our approach is as effective at preventing membership inference attacks as alternative approaches based on differential privacy. This shows that model-guided anonymization can, in some cases, be a legitimate substitute for such methods, while averting some of their inherent drawbacks, such as complexity, performance overhead, and being fitted to specific model types. In contrast to methods that rely on adding noise during training, our approach requires no modifications to the training algorithm itself.
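To make the idea of accuracy-guided anonymization concrete, the following is a minimal illustrative sketch, not the paper's actual implementation: all function names and the specific partitioning heuristic are our own assumptions. It mimics the general approach of letting the target labels (a stand-in for the model's knowledge) drive a decision-tree-style partition of the records, with every region forced to contain at least k records; quasi-identifier values are then generalized to the range observed within each region, so records in the same region become indistinguishable while the partition still separates the classes.

```python
# Illustrative sketch of accuracy-guided k-anonymization (hypothetical code,
# not the paper's implementation). Records in the same partition cell get
# identical, generalized quasi-identifier values; each cell has >= k records.
from collections import Counter


def impurity(idx, labels):
    """Gini impurity of the label distribution over the records in idx."""
    counts = Counter(labels[i] for i in idx)
    n = len(idx)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())


def best_split(idx, records, labels, qis, k):
    """Pick the (feature, threshold) split that most reduces label impurity,
    subject to both sides keeping at least k records (the k-anonymity bound)."""
    base = impurity(idx, labels)
    best = None
    for f in qis:
        for t in sorted({records[i][f] for i in idx})[1:]:
            left = [i for i in idx if records[i][f] < t]
            right = [i for i in idx if records[i][f] >= t]
            if len(left) < k or len(right) < k:
                continue
            score = (len(left) * impurity(left, labels) +
                     len(right) * impurity(right, labels)) / len(idx)
            if best is None or score < best[0]:
                best = (score, f, t)
    if best is not None and best[0] < base:
        return best[1], best[2]
    return None  # no accuracy-improving split keeps both sides >= k


def partition(idx, records, labels, qis, k):
    """Recursively split until no label-separating split of size >= k remains."""
    split = best_split(idx, records, labels, qis, k)
    if split is None:
        return [idx]
    f, t = split
    left = [i for i in idx if records[i][f] < t]
    right = [i for i in idx if records[i][f] >= t]
    return (partition(left, records, labels, qis, k) +
            partition(right, records, labels, qis, k))


def anonymize(records, labels, qis, k):
    """Generalize each quasi-identifier to the (min, max) range of its group,
    making all records within a group identical on the quasi-identifiers."""
    groups = partition(list(range(len(records))), records, labels, qis, k)
    out = [list(r) for r in records]
    for g in groups:
        for f in qis:
            lo = min(records[i][f] for i in g)
            hi = max(records[i][f] for i in g)
            for i in g:
                out[i][f] = (lo, hi)
    return out
```

Because splits that would shrink a cell below k are rejected, the output is k-anonymous on the quasi-identifiers by construction; because splits are chosen to separate the labels, the generalization tends to preserve exactly the distinctions the model needs, which is the intuition behind optimizing for accuracy rather than for a generic information-loss metric.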