We consider learning in decentralized heterogeneous networks: agents seek to minimize a convex functional that aggregates data across the network, while having access only to their local data streams. We focus on the case where agents seek to estimate a regression \emph{function} that belongs to a reproducing kernel Hilbert space (RKHS). To incentivize coordination while respecting network heterogeneity, we impose nonlinear proximity constraints. To solve the resulting constrained stochastic program, we propose a functional variant of the stochastic primal-dual (Arrow-Hurwicz) method, which yields a decentralized algorithm. To handle the fact that each agent's function has complexity proportional to time (owing to the RKHS parameterization), we project the primal iterates onto subspaces greedily constructed from kernel evaluations of agents' local observations. The resulting scheme, dubbed Heterogeneous Adaptive Learning with Kernels (HALK), when run with constant step-sizes, yields $\ccalO(\sqrt{T})$ attenuation in sub-optimality and exactly satisfies the constraints in the long run, improving upon state-of-the-art rates for vector-valued problems.
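To make the primal-dual mechanics concrete, the following is a minimal sketch, not the paper's HALK algorithm: two agents run functional stochastic gradient steps on a kernel expansion, couple through a dual variable enforcing a proximity constraint between their function values, and keep memory bounded with a crude fixed-budget pruning rule (a stand-in for the greedy subspace projection described above). All names, step-sizes, and the pruning heuristic are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def k(x, y, bw=0.5):
    """Gaussian kernel (bandwidth bw is an illustrative choice)."""
    return np.exp(-((x - y) ** 2) / (2 * bw ** 2))

class Agent:
    """Kernel expansion f(x) = sum_i w_i k(d_i, x) over a budgeted dictionary."""
    def __init__(self, budget=40):
        self.d = []          # dictionary points (past samples kept)
        self.w = []          # expansion coefficients
        self.budget = budget

    def f(self, x):
        return sum(wi * k(di, x) for di, wi in zip(self.d, self.w))

    def add(self, x, coeff):
        self.d.append(x)
        self.w.append(coeff)
        if len(self.d) > self.budget:  # crude pruning: drop the smallest atom
            j = int(np.argmin(np.abs(self.w)))
            self.d.pop(j)
            self.w.pop(j)

def primal_dual_step(a, b, x, ya, yb, lam, eta=0.3, eps=0.01):
    """One Arrow-Hurwicz step for two agents, with the proximity constraint
    (f_a(x) - f_b(x))^2 <= eps evaluated stochastically at the sample x."""
    diff = a.f(x) - b.f(x)
    # primal descent: local squared-loss gradient plus lam-weighted
    # constraint gradient, both expressed as new kernel atoms at x
    ga = (a.f(x) - ya) + lam * 2 * diff
    gb = (b.f(x) - yb) - lam * 2 * diff
    a.add(x, -eta * ga)
    b.add(x, -eta * gb)
    # dual ascent on the constraint slack, projected onto lam >= 0
    lam = max(0.0, lam + eta * (diff ** 2 - eps))
    return lam

# toy run: both agents stream samples of the same sine curve
a, b, lam = Agent(), Agent(), 0.0
for _ in range(400):
    x = rng.uniform(-1.0, 1.0)
    y = np.sin(3.0 * x)
    lam = primal_dual_step(a, b, x, y, y, lam)
```

With identical data streams the constraint is slack and the dual variable stays near zero; under heterogeneous streams the dual variable grows until the agents' functions are pulled within the prescribed proximity.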