Privacy has become a major concern in machine learning. Indeed, federated learning is motivated by privacy concerns, as it transmits only intermediate updates rather than the private data itself. However, federated learning does not always guarantee privacy preservation, since these intermediate updates may still reveal sensitive information. In this paper, we give an explicit information-theoretical analysis of a federated expectation-maximization (EM) algorithm for Gaussian mixture models and prove that the intermediate updates can cause severe privacy leakage. To address this privacy issue, we propose a fully decentralized privacy-preserving solution that securely computes the updates in each maximization step. Additionally, we consider two types of security attacks: the honest-but-curious and the eavesdropping adversary models. Numerical validation shows that the proposed approach outperforms the existing approach in terms of both accuracy and privacy level.