Personalised federated learning (FL) aims at collaboratively learning a machine learning model tailored to each client. Although promising advances have been made in this direction, most existing approaches do not allow for uncertainty quantification, which is crucial in many applications. In addition, personalisation in the cross-device setting still raises important issues, especially for new clients or those with a small number of observations. This paper aims at filling these gaps. To this end, we propose a novel methodology, coined FedPop, that recasts personalised FL into the population modeling paradigm, where clients' models involve fixed common population parameters and random effects that aim at explaining data heterogeneity. To derive convergence guarantees for our scheme, we introduce a new class of federated stochastic optimisation algorithms that relies on Markov chain Monte Carlo methods. Compared to existing personalised FL methods, the proposed methodology has important benefits: it is robust to client drift, practical for inference on new clients, and, above all, enables uncertainty quantification under mild computational and memory overheads. We provide non-asymptotic convergence guarantees for the proposed algorithms and illustrate their performance on various personalised federated learning tasks.
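To make the population-model decomposition above concrete, here is a minimal NumPy sketch of the idea that each client's parameters combine fixed common population parameters with a client-specific random effect. The names `phi`, `z`, `sigma`, and the dimensions are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed common population parameters, shared by every client
# (illustrative dimension d = 3).
phi = np.array([1.0, -0.5, 2.0])

# Each client i personalises phi with a random effect z_i ~ N(0, sigma^2 I),
# so its model parameters are theta_i = phi + z_i.
n_clients, sigma = 5, 0.1
z = rng.normal(scale=sigma, size=(n_clients, phi.size))
theta = phi + z  # per-client personalised parameters, shape (5, 3)

# Data heterogeneity across clients is captured by the random effects:
# the spread of the rows of theta around phi is governed by sigma.
```

In this picture, FL training estimates the shared `phi` (and the scale of the random effects) from all clients, while each client's `z_i` is inferred locally; a new client with few observations can fall back on the population distribution of `theta`.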