Federated learning (FL) is an emerging distributed learning paradigm whose primary pillars are privacy, utility, and efficiency. Existing research indicates that it is unlikely to simultaneously attain infinitesimal privacy leakage, utility loss, and efficiency reduction. Therefore, finding an optimal trade-off is a key consideration when designing an FL algorithm. One common approach is to cast the trade-off as a multi-objective optimization problem: minimize the utility loss and efficiency reduction while constraining the privacy leakage to not exceed a predefined value. However, existing multi-objective optimization frameworks are time-consuming and do not guarantee the existence of a Pareto frontier. This motivates us to transform the multi-objective problem into a single-objective problem, which is more efficient and easier to solve. To this end, we propose FedPAC, a unified framework that leverages PAC learning to quantify multiple objectives in terms of sample complexity; this quantification constrains the solution space of the multiple objectives to a shared dimension, so that the problem can be solved with a single-objective optimization algorithm. Specifically, we provide results and detailed analyses of how to quantify the utility loss, privacy leakage, the privacy-utility-efficiency trade-off, and the cost of an attacker from the PAC learning perspective.
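As a minimal illustration of the transformation described above (a sketch with made-up objective functions, not FedPAC's actual quantifications), the constrained multi-objective formulation can be scalarized into a single objective by summing the losses over a shared decision variable and turning the privacy constraint into a penalty term:

```python
# Toy scalarization sketch: the objective functions below are hypothetical
# stand-ins, parameterized by a single shared variable x (e.g., a noise level).
# Goal: minimize utility_loss(x) + efficiency_reduction(x)
#       subject to privacy_leakage(x) <= EPSILON.

def utility_loss(x):          # assumed: more noise -> more utility loss
    return x ** 2

def efficiency_reduction(x):  # assumed: more noise -> slower convergence
    return 0.5 * x

def privacy_leakage(x):       # assumed: more noise -> less leakage
    return 1.0 / (1.0 + x)

EPSILON = 0.5    # predefined privacy budget
PENALTY = 100.0  # weight converting the hard constraint into a soft penalty

def single_objective(x):
    # Constraint violation is penalized, yielding one scalar to minimize.
    violation = max(0.0, privacy_leakage(x) - EPSILON)
    return utility_loss(x) + efficiency_reduction(x) + PENALTY * violation

# Grid search over the one-dimensional shared solution space.
best_x = min((i / 1000 for i in range(5001)), key=single_objective)
```

Because all objectives are expressed over the same variable, any off-the-shelf single-objective solver (here, a plain grid search) suffices; this mirrors, in spirit, how quantifying objectives in a shared dimension sidesteps Pareto-frontier computation.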