We study a decentralized online regularized linear regression algorithm over random time-varying graphs. At each time step, every node runs an online estimation algorithm consisting of an innovation term that processes its own new measurement, a consensus term that takes a weighted sum of its own estimate and its neighbors' estimates, corrupted by additive and multiplicative communication noises, and a regularization term that prevents overfitting. The regression matrices and graphs are not required to satisfy special statistical assumptions such as mutual independence, spatio-temporal independence, or stationarity. We develop a nonnegative supermartingale inequality for the estimation error and prove that the estimates of all nodes converge to the unknown true parameter vector almost surely if the algorithm gains, graphs, and regression matrices jointly satisfy the sample-path spatio-temporal persistence-of-excitation condition. In particular, this condition holds for appropriately chosen algorithm gains if the graphs are uniformly conditionally jointly connected and conditionally balanced, and the regression models of all nodes are uniformly conditionally spatio-temporally jointly observable, in which case the algorithm converges both in mean square and almost surely. In addition, we prove the regret upper bound $\mathcal O(T^{1-\tau}\ln T)$, where $\tau\in (0.5,1)$ is a constant depending on the algorithm gains.
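As a rough illustration of the three-term structure described above (a minimal sketch, not the paper's exact recursion; the symbols $a_t$, $b_t$, $\gamma_t$, $w_{ij}(t)$, $H_{i,t}$, and the noise terms are assumed notation), the per-node update can be written as
\[
x_{i,t+1} = x_{i,t} + b_t \sum_{j \in \mathcal N_i(t)} w_{ij}(t)\bigl((x_{j,t} + \xi_{ij,t}) - x_{i,t}\bigr) + a_t H_{i,t}^{\top}\bigl(y_{i,t} - H_{i,t}\, x_{i,t}\bigr) - \gamma_t\, x_{i,t},
\]
where the first sum is the consensus term with additive communication noise $\xi_{ij,t}$ (multiplicative noise would enter through the random weights $w_{ij}(t)$), the middle term is the innovation driven by node $i$'s new measurement $y_{i,t} = H_{i,t}\, x^{*} + v_{i,t}$ of the unknown parameter $x^{*}$, and $-\gamma_t\, x_{i,t}$ is the regularization term.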