We consider the problem of estimating the factors of a rank-$1$ matrix with i.i.d. Gaussian, rank-$1$ measurements that are nonlinearly transformed and corrupted by noise. Considering two prototypical choices for the nonlinearity, we study the convergence properties of a natural alternating update rule for this nonconvex optimization problem starting from a random initialization. We show sharp convergence guarantees for a sample-split version of the algorithm by deriving a deterministic recursion that is accurate even in high-dimensional problems. Notably, while the infinite-sample population update is uninformative, in that it suggests exact recovery in a single step, both the algorithm and our deterministic prediction converge geometrically fast from a random initialization. Our sharp, non-asymptotic analysis also exposes several other fine-grained properties of this problem, including how the nonlinearity and noise level affect convergence behavior. On a technical level, our results are enabled by showing that the empirical error recursion can be predicted by our deterministic sequence within fluctuations of order $n^{-1/2}$ when each iteration is run with $n$ observations. Our technique leverages leave-one-out tools originating in the literature on high-dimensional $M$-estimation and provides an avenue for sharply analyzing higher-order iterative algorithms from a random initialization in other high-dimensional optimization problems with random data.
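For concreteness, the following is a minimal sketch of the sample-split alternating update in the simplest setting. The abstract does not specify the measurement model or the two nonlinearities, so everything below is an assumption made for illustration: we posit measurements $y_i = f(\langle a_i, \mu^\star \rangle)\,\langle b_i, \nu^\star \rangle + \sigma \varepsilon_i$ with $f$ taken to be the identity, standard Gaussian sensing vectors $a_i, b_i$, and a fresh batch of $n$ observations for each half-step; the names `f`, `A`, `B`, `mu_star`, and `nu_star` are hypothetical placeholders, not notation from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance: d1 x d2 rank-1 signal, n observations per half-step,
# T rounds of alternating updates, noise level sigma. The nonlinearity f is
# assumed to be the identity here purely for illustration.
d1, d2, T, n, sigma = 50, 50, 10, 1000, 0.1
n_total = 2 * T * n
f = lambda x: x

mu_star = rng.standard_normal(d1); mu_star /= np.linalg.norm(mu_star)
nu_star = rng.standard_normal(d2); nu_star /= np.linalg.norm(nu_star)

# i.i.d. Gaussian rank-1 measurements, nonlinearly transformed and noisy.
A = rng.standard_normal((n_total, d1))
B = rng.standard_normal((n_total, d2))
y = f(A @ mu_star) * (B @ nu_star) + sigma * rng.standard_normal(n_total)

# Random initialization on the unit sphere.
mu = rng.standard_normal(d1); mu /= np.linalg.norm(mu)
nu = rng.standard_normal(d2); nu /= np.linalg.norm(nu)

for t in range(T):
    # Update nu with mu fixed: for identity f the model is linear in nu,
    # with effective design rows f(<a_i, mu>) * b_i. Sample splitting means
    # this half-step sees a fresh, independent batch of n observations.
    s = slice(2 * t * n, (2 * t + 1) * n)
    X = f(A[s] @ mu)[:, None] * B[s]
    nu = np.linalg.lstsq(X, y[s], rcond=None)[0]
    nu /= np.linalg.norm(nu)

    # Update mu with nu fixed, on the next fresh batch.
    s = slice((2 * t + 1) * n, (2 * t + 2) * n)
    Z = (B[s] @ nu)[:, None] * A[s]
    mu = np.linalg.lstsq(Z, y[s], rcond=None)[0]
    mu /= np.linalg.norm(mu)

# We normalize each iterate and track only the angular alignment with the
# truth; the shared scale of the rank-1 factors is not identifiable anyway.
print("|<mu, mu*>| =", abs(mu @ mu_star), " |<nu, nu*>| =", abs(nu @ nu_star))
```

The sample splitting in this sketch mirrors the sample-split version analyzed in the abstract: because each half-step solves an ordinary least-squares problem on data independent of the current iterate, the error after one step can be characterized conditionally on the previous one, which is what makes a deterministic one-step prediction of the empirical error recursion plausible.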