Federated learning is a collaborative method that aims to preserve data privacy while training AI models. Current approaches to federated learning tend to rely heavily on secure aggregation protocols to preserve data privacy. However, to some degree, such protocols assume that the entity orchestrating the federated learning process (i.e., the server) is not fully malicious or dishonest. We investigate vulnerabilities in secure aggregation that could arise if the server is fully malicious and attempts to gain access to private, potentially sensitive data. Furthermore, we provide a method to further defend against such a malicious server, and demonstrate its effectiveness against known attacks that reconstruct data in a federated learning setting.