Federated learning (FL) is an emerging distributed learning paradigm, with privacy, utility, and efficiency as its primary pillars. Existing research indicates that it is unlikely to simultaneously attain infinitesimal privacy leakage, utility loss, and efficiency reduction. Therefore, finding an optimal trade-off is a key consideration when designing an FL algorithm. A common approach is to cast the trade-off as a multi-objective optimization problem: minimize the utility loss and efficiency reduction while constraining the privacy leakage to not exceed a predefined value. However, existing multi-objective optimization frameworks are time-consuming and do not guarantee the existence of a Pareto frontier. This motivates us to transform the multi-objective problem into a single-objective problem, which is more efficient and easier to solve. To this end, in this paper we propose FedPAC, a unified framework that leverages PAC learning to quantify multiple objectives in terms of sample complexity; this quantification allows us to constrain the solution space of the multiple objectives to a shared dimension, so that the problem can be solved with a single-objective optimization algorithm. Specifically, we provide results and detailed analyses on quantifying the utility loss, the privacy leakage, the privacy-utility-efficiency trade-off, and the cost of an attacker from the PAC learning perspective.
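For concreteness, the constrained formulation described above can be sketched as follows (the symbols $\epsilon_u$, $\epsilon_e$, $\epsilon_p$, the budget $\delta_p$, the sample complexities $m_u$, $m_e$, $m_p$, and the weights $\lambda_i$ are illustrative notation, not taken from the paper):

\begin{align}
  \min_{\theta} \;& \bigl(\epsilon_u(\theta),\, \epsilon_e(\theta)\bigr)
  && \text{(multi-objective: utility loss, efficiency reduction)} \\
  \text{s.t.} \;& \epsilon_p(\theta) \le \delta_p
  && \text{(privacy-leakage budget)}
\end{align}

Quantifying each objective by a PAC-style sample complexity places all three on a shared axis, so the problem can then be scalarized into a single objective, e.g.,

\begin{equation}
  \min_{\theta} \; \lambda_1\, m_u(\theta) + \lambda_2\, m_e(\theta)
  \quad \text{s.t.} \quad m_p(\theta) \le M_p .
\end{equation}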