In machine learning and statistical data analysis, we often encounter objective functions that take the form of a sum: the number of terms in the sum may equal the sample size, which can be enormous. In such a setting, the stochastic mirror descent (SMD) algorithm is a computationally efficient method, since each iteration involves only a small subset of the data. The variance-reduced version of SMD (VRSMD) can further improve on SMD by achieving faster convergence. On the other hand, algorithms such as gradient descent and stochastic gradient descent possess an implicit regularization property that leads to better performance in terms of generalization error. Little is known about whether such a property holds for VRSMD. We prove here that the discrete VRSMD estimator sequence converges to the minimum mirror interpolant in linear regression. This establishes the implicit regularization property for VRSMD. As an application of this result, we derive a model estimation accuracy result in the setting where the true model is sparse. We use numerical examples to illustrate the empirical power of VRSMD.
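To make the algorithm referenced above concrete, the following is a minimal Python sketch of a variance-reduced stochastic mirror descent loop for least squares. It assumes an SVRG-style epoch structure and the power potential ψ(x) = (1/q) Σ_j |x_j|^q; the function names, the choice of potential, and the step size are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def grad_psi(x, q=3.0):
    # Gradient of the (assumed) mirror potential psi(x) = (1/q) * sum_j |x_j|^q.
    return np.sign(x) * np.abs(x) ** (q - 1.0)

def grad_psi_inv(z, q=3.0):
    # Inverse mirror map (grad psi)^{-1}; a bijection on R^d for q > 1.
    return np.sign(z) * np.abs(z) ** (1.0 / (q - 1.0))

def vrsmd(A, b, eta=0.01, epochs=50, q=3.0, seed=None):
    """Illustrative SVRG-style variance-reduced stochastic mirror descent
    for least squares: min_x (1/2n) ||A x - b||^2."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        # Snapshot point and its full gradient, recomputed once per epoch.
        x_snap = x.copy()
        full_grad = A.T @ (A @ x_snap - b) / n
        for _ in range(n):
            i = rng.integers(n)
            # Variance-reduced gradient estimate:
            # grad f_i(x) - grad f_i(x_snap) + full_grad.
            g = (A[i] * (A[i] @ x - b[i])
                 - A[i] * (A[i] @ x_snap - b[i])
                 + full_grad)
            # Mirror descent: step in the dual space, then map back.
            x = grad_psi_inv(grad_psi(x, q) - eta * g, q)
    return x
```

With q = 2 the potential is ψ(x) = ½‖x‖² and the update reduces to plain SVRG; other potentials change the geometry of the update and hence the mirror interpolant that the iterates approach in the interpolation regime.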