Personalization in federated learning can improve the accuracy of a model for a user by trading off the model's bias (introduced by using data from other users who are potentially different) against its variance (due to the limited amount of data on any single user). In order to develop training algorithms that optimally balance this trade-off, it is necessary to extend our theoretical foundations. In this work, we formalize the personalized collaborative learning problem as stochastic optimization of a user's objective $f_0(x)$ while given access to $N$ related but different objectives of other users $\{f_1(x), \dots, f_N(x)\}$. We give convergence guarantees for two algorithms in this setting -- a popular personalization method known as \emph{weighted gradient averaging}, and a novel \emph{bias correction} method -- and explore conditions under which we can optimally trade off their bias for a reduction in variance and achieve linear speedup w.r.t.\ the number of users $N$. Further, we empirically study their performance, confirming our theoretical insights.
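For intuition, the following is a minimal sketch of the weighted gradient averaging idea in the setting above: the user descends a weighted combination of its own stochastic gradient and the collaborators' stochastic gradients. The quadratic objectives, Gaussian noise model, weight choice, and step size are illustrative assumptions, not the paper's exact algorithm or analysis.

\begin{verbatim}
import numpy as np

# Illustrative setup (assumed, not from the paper):
# f_i(x) = 0.5 * ||x - b_i||^2 with noisy gradient access.
rng = np.random.default_rng(0)
d, N = 5, 10                       # dimension, number of other users
b = rng.normal(size=(N + 1, d))    # optima of f_0, ..., f_N
sigma = 1.0                        # stochastic-gradient noise level

def stoch_grad(i, x):
    """Noisy gradient of f_i at x."""
    return (x - b[i]) + sigma * rng.normal(size=d)

# Weights: half the mass on the user's own gradient,
# the rest spread uniformly over the N collaborators (assumed choice).
alpha = np.full(N + 1, 0.5 / N)
alpha[0] = 0.5

x = np.zeros(d)
lr = 0.1
for _ in range(500):
    # Weighted gradient averaging step: descend the alpha-weighted
    # average of all users' stochastic gradients.
    g = sum(alpha[i] * stoch_grad(i, x) for i in range(N + 1))
    x -= lr * g

print("distance to user 0's optimum:", np.linalg.norm(x - b[0]))
\end{verbatim}

Collaborators' gradients reduce the variance of each update but bias it toward their own optima; the weights $\alpha$ control exactly the bias-variance trade-off discussed above.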