This paper provides a finite-time analysis of linear stochastic approximation (LSA) algorithms with fixed step size, a core method in statistics and machine learning. LSA is used to compute approximate solutions of a $d$-dimensional linear system $\bar{\mathbf{A}} \theta = \bar{\mathbf{b}}$ for which $(\bar{\mathbf{A}}, \bar{\mathbf{b}})$ can only be estimated by (asymptotically) unbiased observations $\{(\mathbf{A}(Z_n),\mathbf{b}(Z_n))\}_{n \in \mathbb{N}}$. We consider here the case where $\{Z_n\}_{n \in \mathbb{N}}$ is an i.i.d. sequence or a uniformly geometrically ergodic Markov chain. We derive $p$-th moment and high-probability deviation bounds for the iterates defined by LSA and its Polyak-Ruppert-averaged version. Our finite-time instance-dependent bounds for the averaged LSA iterates are sharp in the sense that the leading term we obtain coincides with the local asymptotic minimax limit. Moreover, the remainder terms of our bounds admit a tight dependence on the mixing time $t_{\operatorname{mix}}$ of the underlying chain and the norm of the noise variables. We emphasize that our result requires the SA step size to scale only with the logarithm of the problem dimension $d$.
Title: Finite-Time High-Probability Bounds for Polyak-Ruppert Averaged Iterates of Linear Stochastic Approximation
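To make the recursion concrete, the following is a minimal NumPy sketch of fixed-step-size LSA with Polyak-Ruppert averaging on a synthetic i.i.d. instance of $\bar{\mathbf{A}} \theta = \bar{\mathbf{b}}$. The dimension, noise level, step size, and burn-in length are illustrative assumptions, not parameters or experiments taken from the paper.

```python
# Minimal sketch (assumptions only): fixed-step-size LSA with Polyak-Ruppert
# averaging, where (A(Z_n), b(Z_n)) are i.i.d. unbiased noisy observations
# of (A_bar, b_bar). All constants below are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
d = 5
A_bar = 0.1 * rng.standard_normal((d, d))
A_bar = A_bar @ A_bar.T + np.eye(d)           # positive definite, so LSA is stable
theta_star = rng.standard_normal(d)
b_bar = A_bar @ theta_star                    # target solution of A_bar theta = b_bar

alpha = 0.01                                  # fixed step size
n_iter, burn_in = 20_000, 1_000               # averaging starts after a burn-in
theta = np.zeros(d)
theta_bar = np.zeros(d)

for n in range(1, n_iter + 1):
    # i.i.d. unbiased observations of (A_bar, b_bar)
    A_n = A_bar + 0.1 * rng.standard_normal((d, d))
    b_n = b_bar + 0.1 * rng.standard_normal(d)
    # LSA update: theta_n = theta_{n-1} - alpha * (A_n theta_{n-1} - b_n)
    theta = theta - alpha * (A_n @ theta - b_n)
    if n > burn_in:
        # running Polyak-Ruppert average of the post-burn-in iterates
        theta_bar += (theta - theta_bar) / (n - burn_in)

print("last-iterate error    :", np.linalg.norm(theta - theta_star))
print("averaged-iterate error:", np.linalg.norm(theta_bar - theta_star))
```

With a fixed step size the last iterate keeps fluctuating around $\theta^\star$ at a scale set by $\alpha$ and the noise, while the averaged iterate concentrates much more tightly; this is the regime the paper's instance-dependent bounds quantify.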