We establish a framework of random inverse problems with real-time observations over graphs, and present a decentralized online learning algorithm based on online data streams, which unifies distributed parameter estimation in Hilbert spaces and least-mean-square problems in reproducing kernel Hilbert spaces (RKHS-LMS). We transform the convergence analysis of the algorithm into the asymptotic stability of a class of randomly time-varying difference equations in Hilbert spaces with L2-bounded martingale difference terms, and develop the corresponding L2-asymptotic stability theory. It is shown that if the network graph is connected and the sequence of forward operators satisfies the infinite-dimensional spatio-temporal persistence of excitation condition, then the estimates of all nodes are mean-square and almost-surely strongly consistent. By equivalently transforming the distributed learning problem in RKHS into a random inverse problem over graphs, we propose a decentralized online learning algorithm in RKHS based on non-stationary and non-independent online data streams, and prove that the algorithm is mean-square and almost-surely strongly consistent provided the operators induced by the random input data satisfy the infinite-dimensional spatio-temporal persistence of excitation condition.
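The flavor of the algorithm can be illustrated with a minimal finite-dimensional sketch of a consensus + innovation update: each node mixes its neighbors' estimates through doubly stochastic graph weights and corrects with its own noisy observation under a decaying gain. All names, dimensions, and the ring-graph weights below are hypothetical choices for illustration, not the paper's construction; the infinite-dimensional RKHS setting is reduced here to a Euclidean parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N nodes on a ring graph jointly estimate theta_star
# from local scalar observations y_i(t) = <h_i(t), theta_star> + noise.
N, d, T = 4, 3, 20000
theta_star = np.array([1.0, -2.0, 0.5])

# Doubly stochastic weights for a ring graph (each node averages with its
# two neighbors) -- the connectivity assumption in the abstract.
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i + 1) % N] = 0.25
    W[i, (i - 1) % N] = 0.25

theta = np.zeros((N, d))  # local estimates, one row per node
for t in range(1, T + 1):
    a = 1.0 / (t + 5)                 # decaying stochastic-approximation gain
    H = rng.normal(size=(N, d))       # time-varying regressors: excitation
    y = H @ theta_star + 0.1 * rng.normal(size=N)  # noisy local observations
    consensus = W @ theta             # mix neighbors' estimates over the graph
    residual = y - np.einsum('ij,ij->i', H, theta)  # local prediction errors
    theta = consensus + a * H * residual[:, None]   # innovation step

# Worst-case estimation error across nodes after T steps.
err = np.linalg.norm(theta - theta_star, axis=1).max()
```

With i.i.d. Gaussian regressors the (finite-dimensional analogue of the) persistence of excitation condition holds, so every node's estimate approaches `theta_star`; dropping either the consensus term or the excitation breaks this.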