Collaborative training can improve the accuracy of a model for a user by trading off the model's bias (introduced by using data from other users who are potentially different) against its variance (due to the limited amount of data on any single user). In this work, we formalize the personalized collaborative learning problem as the stochastic optimization of a target task $0$ while given access to $N$ related but different tasks $1, \dots, N$. We give convergence guarantees for two algorithms in this setting -- a popular collaboration method known as \emph{weighted gradient averaging}, and a novel \emph{bias correction} method -- and explore conditions under which we can achieve linear speedup w.r.t.\ the number of auxiliary tasks $N$. Further, we empirically study their performance, confirming our theoretical insights.
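To make the setting concrete, here is a minimal sketch of the formulation; the symbols $f_i$, $F_i$, $g_i$, $\alpha$, and $\eta$ are introduced here for exposition and are not necessarily the paper's own notation. The target task is a stochastic optimization problem
\[
\min_{x \in \mathbb{R}^d} f_0(x), \qquad f_i(x) := \mathbb{E}_{\xi_i}\!\left[ F_i(x; \xi_i) \right], \quad i = 0, 1, \dots, N,
\]
where only stochastic gradients $g_i(x)$ with $\mathbb{E}[g_i(x)] = \nabla f_i(x)$ are available for each task. Under these assumptions, a weighted-gradient-averaging step would take the form
\[
x_{t+1} = x_t - \eta \left( (1-\alpha)\, g_0(x_t) + \frac{\alpha}{N} \sum_{i=1}^{N} g_i(x_t) \right),
\]
where the collaboration weight $\alpha \in [0, 1]$ trades the bias introduced by the auxiliary tasks against the variance reduction their gradients provide.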