We introduce a novel optimization algorithm for image recovery under learned sparse and low-rank constraints, which we parameterize as weighted extensions of the $\ell_p^p$ vector and $\mathcal S_p^p$ Schatten-matrix quasi-norms for $0\!<p\!\le1$, respectively. Our proposed algorithm generalizes the Iteratively Reweighted Least Squares (IRLS) method, which is used for signal recovery under $\ell_1$- and nuclear-norm-constrained minimization. Further, we interpret our overall minimization approach as a recurrent network, which we then employ to tackle low-level computer vision inverse problems. Thanks to the convergence guarantees that our IRLS strategy offers, we are able to train the derived reconstruction networks using a memory-efficient implicit back-propagation scheme, which places no restrictions on their effective depth. To assess our networks' performance, we compare them against other existing reconstruction methods on several inverse problems, namely image deblurring, super-resolution, demosaicking, and sparse recovery. Our reconstruction results are very competitive and in many cases outperform those of existing unrolled networks, whose number of parameters is orders of magnitude higher than that of our learned models.
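To make the IRLS idea concrete, the following is a minimal sketch (not the paper's actual algorithm) of IRLS for sparse recovery under an $\ell_p^p$ penalty: each iteration approximates $|x_i|^p$ by the quadratic $w_i x_i^2$ with weights $w_i = (x_i^2 + \epsilon)^{p/2-1}$, and then solves the resulting regularized least-squares problem in closed form. The smoothing parameter $\epsilon$, the annealing schedule, and all numerical values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def irls_lp(A, y, p=1.0, lam=1e-3, n_iter=100, eps0=1.0):
    """Illustrative IRLS sketch for min_x ||y - A x||^2 + lam * ||x||_p^p.

    At each iteration the quasi-norm is majorized by a weighted
    quadratic, so the update reduces to a linear system.
    """
    AtA = A.T @ A
    Aty = A.T @ y
    x = Aty.copy()          # simple initialization (an assumption)
    eps = eps0              # smoothing, annealed toward the true quasi-norm
    for _ in range(n_iter):
        # Reweighting: |x_i|^p ≈ w_i * x_i^2 with w_i = (x_i^2 + eps)^(p/2 - 1)
        w = (x**2 + eps) ** (p / 2 - 1)
        # Closed-form solve of the weighted least-squares subproblem
        x = np.linalg.solve(AtA + lam * np.diag(w), Aty)
        eps = max(eps * 0.9, 1e-8)
    return x
```

For $p<1$ the penalty is nonconvex, and annealing $\epsilon$ from a large value acts as a graduated-nonconvexity heuristic; the convergence guarantees claimed in the abstract concern the authors' algorithm, not this simplified sketch.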