We study collaborative learning among distributed clients facilitated by a central server. Each client is interested in maximizing a personalized objective function that is a weighted sum of its local objective and a global objective. Each client has direct access to random bandit feedback on its local objective, but only has a partial view of the global objective and relies on information exchange with other clients for collaborative learning. We adopt the kernel-based bandit framework where the objective functions belong to a reproducing kernel Hilbert space. We propose an algorithm based on surrogate Gaussian process (GP) models and establish its order-optimal regret performance (up to polylogarithmic factors). We also show that the sparse approximations of the GP models can be employed to reduce the communication overhead across clients.
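To make the kernel-bandit setting concrete, here is a minimal single-client sketch of bandit optimization with a surrogate Gaussian process model and an upper-confidence-bound rule. It is an illustration only, not the paper's multi-client algorithm: the reward function, kernel lengthscale, noise level, and exploration weight are all assumed for the toy example, and the collaborative exchange and sparse GP approximations are not shown.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.2):
    # Squared-exponential kernel k(x, x') = exp(-||x - x'||^2 / (2 l^2)).
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-2, lengthscale=0.2):
    # Standard GP regression posterior mean and variance at the query points.
    K = rbf_kernel(x_train, x_train, lengthscale) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_query, x_train, lengthscale)
    Kss = rbf_kernel(x_query, x_query, lengthscale)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - np.sum(v ** 2, axis=0)
    return mean, np.maximum(var, 0.0)  # clip tiny negative values from round-off

rng = np.random.default_rng(0)
f = lambda x: np.sin(3.0 * x)      # hypothetical unknown objective (assumption)
grid = np.linspace(0.0, 1.0, 101)  # discretized set of candidate arms
beta = 2.0                         # exploration weight in the UCB rule (assumption)

# Start from an arbitrary arm and observe noisy bandit feedback.
x_hist = [0.5]
y_hist = [f(0.5) + 0.05 * rng.standard_normal()]

for t in range(30):
    # Fit the GP surrogate to past observations, then pull the arm
    # maximizing the upper confidence bound mean + beta * std.
    mean, var = gp_posterior(np.array(x_hist), np.array(y_hist), grid)
    x_next = grid[np.argmax(mean + beta * np.sqrt(var))]
    x_hist.append(float(x_next))
    y_hist.append(f(x_next) + 0.05 * rng.standard_normal())
```

In the paper's setting each client would run a rule of this flavor on its personalized objective (the weighted sum of local and global parts), using messages from other clients to build the surrogate for the global component; communicating a sparse approximation of the GP instead of all raw observations is what reduces the overhead.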