The canonical formulation of federated learning treats it as a distributed optimization problem where the model parameters are optimized against a global loss function that decomposes across client loss functions. A recent alternative formulation instead treats federated learning as a distributed inference problem, where the goal is to infer a global posterior from partitioned client data (Al-Shedivat et al., 2021). This paper extends the inference view and describes a variational inference formulation of federated learning where the goal is to find a global variational posterior that well-approximates the true posterior. This naturally motivates an expectation propagation approach to federated learning (FedEP), where approximations to the global posterior are iteratively refined through probabilistic message-passing between the central server and the clients. We conduct an extensive empirical study across various algorithmic considerations and describe practical strategies for scaling up expectation propagation to the modern federated setting. We apply FedEP on standard federated learning benchmarks and find that it outperforms strong baselines in terms of both convergence speed and accuracy.
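To make the inference view concrete, the following is a minimal sketch of the expectation-propagation decomposition the abstract describes; the notation ($\theta$ for the model parameters, $D_k$ for client $k$'s local data, $q_k$ for the client site factors, and $\operatorname{proj}[\cdot]$ for moment matching onto the approximating family) is assumed here for illustration rather than taken from the paper. The global posterior factorizes across clients, and EP approximates each client's likelihood contribution with a site factor:

$$p(\theta \mid D) \;\propto\; p(\theta) \prod_{k=1}^{K} p(D_k \mid \theta), \qquad q(\theta) \;\propto\; p(\theta) \prod_{k=1}^{K} q_k(\theta).$$

A single round of probabilistic message passing for client $k$ then refines its site factor via the cavity and tilted distributions,

$$q_{-k}(\theta) \propto \frac{q(\theta)}{q_k(\theta)}, \qquad \tilde{p}_k(\theta) \propto q_{-k}(\theta)\, p(D_k \mid \theta), \qquad q_k^{\text{new}}(\theta) \propto \frac{\operatorname{proj}\!\left[\tilde{p}_k(\theta)\right]}{q_{-k}(\theta)},$$

with the server aggregating the updated sites into the new global approximation $q(\theta)$.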