We develop an efficient stochastic variance reduced gradient descent algorithm to solve the affine rank minimization problem, which consists of finding a matrix of minimum rank consistent with a set of linear measurements. As a stochastic gradient descent strategy, the proposed algorithm enjoys a lower per-iteration complexity than full-gradient methods. It also reduces the variance of the stochastic gradient at each iteration, which accelerates convergence. We prove that the proposed algorithm converges linearly in expectation to the solution under a restricted isometry condition. Numerical experiments show that the proposed algorithm achieves a clearly advantageous balance of efficiency, adaptivity, and accuracy compared with other state-of-the-art greedy algorithms.
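The variance-reduction idea referred to above can be illustrated on a toy problem. The sketch below applies the standard SVRG update (full-gradient snapshot per epoch, variance-corrected stochastic steps in between) to unconstrained least squares; it is only an illustration of the variance-reduction mechanism under simplifying assumptions, not the paper's rank-constrained algorithm, and all function and parameter names are hypothetical.

```python
# Illustrative SVRG sketch on min_x (1/2n)||Ax - b||^2.
# This omits the matrix variable and rank constraint of the actual
# affine rank minimization algorithm; it only shows variance reduction.
import numpy as np

def svrg_least_squares(A, b, step=0.1, epochs=30, inner=None, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    inner = inner or n
    x_snap = np.zeros(d)
    for _ in range(epochs):
        # Full gradient at the snapshot, computed once per epoch.
        g_full = A.T @ (A @ x_snap - b) / n
        x = x_snap.copy()
        for _ in range(inner):
            i = rng.integers(n)
            a_i = A[i]
            # Variance-reduced stochastic gradient:
            #   grad_i(x) - grad_i(x_snap) + g_full
            # Its variance vanishes as x approaches the solution,
            # enabling a constant step size and linear convergence.
            g = a_i * (a_i @ x - b[i]) - a_i * (a_i @ x_snap - b[i]) + g_full
            x -= step * g
        x_snap = x
    return x_snap
```

Each epoch costs one full-gradient pass plus `inner` cheap stochastic steps, which is the complexity advantage over full gradient descent mentioned in the abstract.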