This paper presents a new convergent Plug-and-Play (PnP) algorithm. PnP methods are efficient iterative algorithms for solving image inverse problems formulated as the minimization of the sum of a data-fidelity term and a regularization term. PnP methods perform regularization by plugging a pre-trained denoiser into a proximal algorithm, such as Proximal Gradient Descent (PGD). To ensure convergence of PnP schemes, many works study specific parametrizations of deep denoisers. However, existing results require either unverifiable or suboptimal hypotheses on the denoiser, or assume restrictive conditions on the parameters of the inverse problem. Observing that these limitations can stem from the proximal algorithm itself, we study a relaxed version of the PGD algorithm for minimizing the sum of a convex function and a weakly convex one. When plugged with a relaxed proximal denoiser, the resulting PnP-$\alpha$PGD algorithm is shown to converge for a wider range of regularization parameters, thus allowing more accurate image restoration.
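For context, the following is a minimal sketch of the generic PnP-PGD iteration the abstract refers to, together with a simple relaxed variant; the notation ($f$ for the data-fidelity term, $g$ for the regularizer, $\mathrm{D}_\sigma$ for the plugged denoiser, $\tau$ for the step size, $\alpha$ for the relaxation parameter) and the relaxed update shown here are illustrative assumptions, not the paper's exact PnP-$\alpha$PGD scheme.
\begin{align}
&\min_{x} \; f(x) + \lambda g(x) && \text{(inverse problem)}\\
&x_{k+1} = \mathrm{D}_\sigma\!\big(x_k - \tau \nabla f(x_k)\big) && \text{(standard PnP-PGD: denoiser in place of } \mathrm{prox}_{\tau\lambda g}\text{)}\\
&x_{k+1} = (1-\alpha)\, x_k + \alpha\, \mathrm{D}_\sigma\!\big(x_k - \tau \nabla f(x_k)\big) && \text{(a relaxed update with } \alpha \in (0,1]\text{, illustrative only)}
\end{align}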