Federated learning (FL) is an emerging distributed learning paradigm whose primary pillars are privacy, utility, and efficiency. Existing research indicates that FL is unlikely to simultaneously achieve infinitesimal privacy leakage, utility loss, and efficiency reduction. Therefore, finding an optimal trade-off is a key consideration when designing an FL algorithm. One common approach is to cast the trade-off as a multi-objective optimization problem: minimize the utility loss and efficiency reduction while constraining the privacy leakage to stay below a predefined value. However, existing multi-objective optimization frameworks are time-consuming and do not guarantee the existence of a Pareto frontier. This motivates us to transform the multi-objective problem into a single-objective problem, which is more efficient and easier to solve. To this end, we propose FedPAC, a unified framework that leverages PAC learning to quantify multiple objectives in terms of sample complexity; this quantification constrains the solution space of the multiple objectives to a shared dimension, so that the problem can be solved with a single-objective optimization algorithm. Specifically, we provide results and detailed analyses of how to quantify the utility loss, the privacy leakage, the privacy-utility-efficiency trade-off, and the attacker's cost from the PAC learning perspective.
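To illustrate the transformation the abstract describes, the toy sketch below scalarizes utility loss and efficiency cost into a single objective while treating privacy leakage as a hard budget constraint. All function shapes and names here (`utility_loss`, `efficiency_cost`, `privacy_leakage`, `eps_max`) are illustrative placeholders, not FedPAC's actual PAC-based quantification.

```python
# Toy single-objective reformulation of the privacy-utility-efficiency
# trade-off. The protection level t in [0, 1] stands in for whatever
# shared dimension the objectives are mapped to; the functions below
# are hypothetical stand-ins chosen only for the sketch.

def utility_loss(t: float) -> float:
    # Toy model: stronger protection (larger t) hurts utility more.
    return (1.0 - t) ** 2

def efficiency_cost(t: float) -> float:
    # Toy model: stronger protection also costs more computation.
    return t ** 2

def privacy_leakage(t: float) -> float:
    # Toy model: leakage shrinks as the protection level grows.
    return 1.0 - t

def solve_single_objective(alpha: float = 0.5,
                           eps_max: float = 0.7,
                           grid: int = 100) -> float:
    """Minimize the scalarized objective over a grid of protection
    levels, keeping only points whose privacy leakage stays within
    the predefined budget eps_max."""
    best_t, best_val = None, float("inf")
    for i in range(grid + 1):
        t = i / grid
        if privacy_leakage(t) > eps_max:
            continue  # infeasible: violates the privacy budget
        val = alpha * utility_loss(t) + (1 - alpha) * efficiency_cost(t)
        if val < best_val:
            best_t, best_val = t, val
    return best_t
```

With a loose budget the unconstrained optimum `t = 0.5` is feasible; tightening the budget forces the solution toward higher protection, showing how the constraint shapes the single-objective solution.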