Ill-posed linear inverse problems arise in many scientific settings and are typically addressed by solving an optimization problem composed of a data fidelity term and a prior term. Recently, several works have considered a back-projection (BP) based fidelity term as an alternative to the common least squares (LS) term, and have demonstrated excellent results on popular inverse problems. These works have also shown empirically that using the BP term, rather than the LS term, requires fewer iterations of optimization algorithms. In this paper, we examine the convergence rate of the projected gradient descent (PGD) algorithm for the BP objective. Our analysis identifies an inherent source of its faster convergence compared to using the LS objective, while making only mild assumptions. We also analyze the more general proximal gradient method under a relaxed contraction condition on the proximal mapping of the prior. This analysis further highlights the advantage of BP when the linear measurement operator is badly conditioned. Numerical experiments with both $\ell_1$-norm and GAN-based priors corroborate our theoretical results.
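To fix ideas, here is a minimal NumPy sketch (not the paper's code) of proximal gradient iterations with an $\ell_1$ prior under the two fidelity terms. It assumes the BP fidelity takes the pseudoinverse-weighted form $f(x)=\tfrac{1}{2}\|A^{\dagger}(Ax-y)\|_2^2$ used in the back-projection literature the abstract refers to; the function names (`soft_threshold`, `prox_grad`) and the toy setup are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal mapping of t * ||.||_1 (element-wise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad(A, y, lam, fidelity="bp", n_iter=200):
    """Proximal gradient iterations for f(x) + lam * ||x||_1.

    fidelity="ls": f(x) = 0.5 * ||A x - y||^2,       grad = A^T (A x - y)
    fidelity="bp": f(x) = 0.5 * ||A^+ (A x - y)||^2, grad = A^+ (A x - y)
    """
    A_pinv = np.linalg.pinv(A)  # Moore-Penrose pseudoinverse A^+
    # Step size 1/L, with L the spectral norm of the Hessian of f:
    # A^T A for LS, so L scales with the squared largest singular value of A;
    # A^+ A for BP, an orthogonal projection whose nonzero eigenvalues are
    # all 1, so L = 1 regardless of how badly conditioned A is.
    H = A_pinv @ A if fidelity == "bp" else A.T @ A
    step = 1.0 / np.linalg.norm(H, 2)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        r = A @ x - y
        grad = A_pinv @ r if fidelity == "bp" else A.T @ r
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Toy comparison on a badly conditioned operator (illustrative only; the
# regularization weight lam is not calibrated across the two objectives).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
A[0] *= 50.0  # inflate one singular value to worsen the conditioning
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = 1.0
y = A @ x_true
for fid in ("ls", "bp"):
    x_hat = prox_grad(A, y, lam=1e-3, fidelity=fid)
    print(fid, np.linalg.norm(x_hat - x_true))
```

The comments hint at the intuition consistent with the abstract's claim: the quadratic part of the BP objective has Hessian $A^{\dagger}A$, an orthogonal projection that is perfectly conditioned on its range, whereas the LS Hessian $A^{\top}A$ inherits the squared condition number of $A$, which slows gradient-based iterations when $A$ is badly conditioned.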