Machine-learned models trained on organizational communication data, such as emails in an enterprise, carry unique risks of breaching confidentiality, even if the model is intended only for internal use. This work shows how confidentiality is distinct from privacy in an enterprise context and formulates an approach to preserving confidentiality while leveraging principles from differential privacy. The goal is to perform machine learning tasks, such as learning a language model or performing topic analysis, on interpersonal communications in the organization without learning confidential information shared within it. Works that apply differential privacy techniques to natural language processing tasks usually assume independently distributed data and overlook potential correlation among the records. Ignoring this correlation results in a false promise of privacy. Naively extending differential privacy techniques to enforce group privacy instead of record-level privacy is a straightforward way to mitigate this issue. This approach, although providing a more realistic privacy guarantee, is over-cautious and severely impacts model utility. We show the gap between these two extreme measures of privacy on two language tasks and introduce a middle-ground solution. We propose a model that captures the correlation in the social network graph and incorporates this correlation into the privacy calculations through Pufferfish privacy principles.
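For orientation, the sketch below recalls the standard definitions underlying this comparison; the notation is generic and assumed here, not taken from the paper: record-level $\epsilon$-differential privacy, its degradation to $k\epsilon$ under group privacy for groups of $k$ correlated records, and the Pufferfish guarantee, which conditions on a class of data distributions $\Theta$ that can encode the correlation structure.

\begin{align*}
  &\text{Record-level } \epsilon\text{-DP: for datasets } D, D' \text{ differing in one record and any output set } S,\\
  &\qquad \Pr[M(D) \in S] \;\le\; e^{\epsilon} \, \Pr[M(D') \in S].\\[4pt]
  &\text{Group privacy: for } D, D'' \text{ differing in } k \text{ (possibly correlated) records, the bound degrades to}\\
  &\qquad \Pr[M(D) \in S] \;\le\; e^{k\epsilon} \, \Pr[M(D'') \in S].\\[4pt]
  &\text{Pufferfish: for secret pairs } (s_i, s_j) \in \mathbb{S}_{\mathrm{pairs}} \text{ and data distributions } \theta \in \Theta,\\
  &\qquad \Pr[M(X) \in S \mid s_i, \theta] \;\le\; e^{\epsilon} \, \Pr[M(X) \in S \mid s_j, \theta].
\end{align*}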