The objective of Federated Learning (FL) is to perform statistical inference on data which are decentralised and stored locally on networked clients. FL raises many challenges, including privacy and data ownership, communication overhead, statistical heterogeneity, and partial client participation. In this paper, we address these problems within the Bayesian paradigm. To this end, we propose a novel federated Markov Chain Monte Carlo algorithm, referred to as Quantised Langevin Stochastic Dynamics (\texttt{QLSD}), which may be seen as an extension of Stochastic Gradient Langevin Dynamics to the FL setting and which handles the communication bottleneck using gradient compression. To improve performance, we then introduce variance reduction techniques, leading to two improved versions coined \texttt{QLSD}$^{\star}$ and \texttt{QLSD}$^{++}$. We provide both non-asymptotic and asymptotic convergence guarantees for the proposed algorithms, and we illustrate their performance on various Bayesian Federated Learning benchmarks.
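To make the mechanism concrete, the sketch below illustrates one server iteration of a quantised Langevin update under simplifying assumptions: full client participation, a standard unbiased stochastic quantiser (QSGD-style), and a user-supplied local gradient oracle \texttt{grad\_U\_i}. It is a minimal illustration of the idea, not the paper's exact \texttt{QLSD} specification.

\begin{verbatim}
import numpy as np

def stochastic_quantise(v, s=16):
    # Unbiased stochastic quantiser (QSGD-style): each coordinate of v
    # is randomly rounded to one of s levels of |v| / ||v||, so that
    # E[Q(v)] = v while far fewer bits need to be communicated.
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return v
    levels = np.abs(v) / norm * s
    lower = np.floor(levels)
    rounded = lower + (np.random.rand(v.size) < levels - lower)
    return norm * np.sign(v) * rounded / s

def qlsd_step(theta, client_data, grad_U_i, gamma, batch_frac=0.1):
    # One server iteration: every client sends a quantised stochastic
    # gradient of its local potential U_i; the server aggregates them
    # and performs a Langevin step with injected Gaussian noise.
    g = np.zeros_like(theta)
    for data in client_data:
        idx = np.random.rand(len(data)) < batch_frac  # minibatch mask
        # Rescaling by 1 / batch_frac keeps the minibatch gradient an
        # unbiased estimate of the full local gradient (assuming
        # grad_U_i sums over the data points it is given).
        g += stochastic_quantise(grad_U_i(theta, data[idx]) / batch_frac)
    noise = np.sqrt(2.0 * gamma) * np.random.randn(theta.size)
    return theta - gamma * g + noise
\end{verbatim}

Because the quantiser is unbiased, the aggregated gradient remains an unbiased estimate of the gradient of the global potential; the extra variance it introduces is precisely what the variance reduction techniques behind \texttt{QLSD}$^{\star}$ and \texttt{QLSD}$^{++}$ are designed to mitigate.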