Federated learning (FL) is a form of distributed machine learning at the wireless edge that preserves the privacy of clients' data from adversaries and even from the central server. Existing federated learning approaches either use (i) secure multiparty computation (SMC), which is vulnerable to inference attacks, or (ii) differential privacy, which may degrade test accuracy when there are many parties, each holding a relatively small amount of data. To address these shortcomings of existing methods, in this paper we introduce PHY-Fed, a new framework that secures federated algorithms from an information-theoretic point of view.