We consider the problem of recovering a real-valued $n$-dimensional signal from $m$ phaseless linear measurements and analyze the amplitude-based non-smooth least squares objective. We establish local convergence of subgradient descent with optimal sample complexity based on the uniform concentration of a random, discontinuous matrix-valued operator arising from the objective's subgradient dynamics. While common techniques for establishing uniform concentration of random functions exploit Lipschitz continuity, we prove that, with high probability, this discontinuous matrix-valued operator satisfies a uniform matrix concentration inequality when the measurement vectors are Gaussian, as soon as $m = \Omega(n)$. We then show that this inequality is sufficient for subgradient descent with proper initialization to converge linearly to the true solution, up to the global sign ambiguity. As a consequence, we obtain local convergence for Gaussian measurements at optimal sample complexity. The concentration methods in the present work have previously been used to establish recovery guarantees for a variety of inverse problems under generative neural network priors. This paper demonstrates the applicability of these techniques to more traditional inverse problems and serves as a pedagogical introduction to those results.
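For concreteness, here is a minimal NumPy sketch of the setting the abstract describes: subgradient descent on the amplitude-based objective $f(x) = \frac{1}{2m}\sum_{i=1}^{m}\left(|a_i^\top x| - |a_i^\top x_\star|\right)^2$ with Gaussian measurement vectors. The step size, iteration count, and the perturbation-based initialization (standing in for the paper's "proper initialization") are illustrative assumptions, not the paper's prescriptions.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's exact algorithm):
# subgradient descent on the amplitude-based non-smooth least squares objective
#   f(x) = (1/2m) * sum_i ( |a_i^T x| - b_i )^2,   b_i = |a_i^T x_star|,
# with Gaussian measurement vectors a_i and m proportional to n.
rng = np.random.default_rng(0)
n, m = 100, 8 * 100                     # m on the order of n (here m = 8n)

x_star = rng.standard_normal(n)         # ground-truth signal
x_star /= np.linalg.norm(x_star)
A = rng.standard_normal((m, n))         # rows a_i ~ N(0, I_n)
b = np.abs(A @ x_star)                  # phaseless measurements |a_i^T x_star|

# "Proper initialization": any point close enough to +/- x_star; here a small
# perturbation of x_star, purely for illustration of the local regime.
x = x_star + 0.1 * rng.standard_normal(n)

mu = 0.8                                # constant step size (assumption)
for _ in range(200):
    Ax = A @ x
    # A subgradient of f at x; np.sign returns 0 at 0, a valid subgradient choice.
    g = A.T @ ((np.abs(Ax) - b) * np.sign(Ax)) / m
    x -= mu * g

# Error up to the global sign ambiguity x ~ -x.
err = min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star))
print(f"distance to +/- x_star after 200 iterations: {err:.2e}")
```

With settings like these, the error decays geometrically across iterations, which is the linear local convergence behavior the abstract asserts for Gaussian measurements with $m = \Omega(n)$.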