This paper presents a new convergent Plug-and-Play (PnP) algorithm. PnP methods are efficient iterative algorithms for solving image inverse problems formulated as the minimization of the sum of a data-fidelity term and a regularization term. PnP methods perform regularization by plugging a pre-trained denoiser in a proximal algorithm, such as Proximal Gradient Descent (PGD). To ensure convergence of PnP schemes, many works study specific parametrizations of deep denoisers. However, existing results require either unverifiable or suboptimal hypotheses on the denoiser, or assume restrictive conditions on the parameters of the inverse problem. Observing that these limitations can be due to the proximal algorithm in use, we study a relaxed version of the PGD algorithm for minimizing the sum of a convex function and a weakly convex one. When plugged with a relaxed proximal denoiser, we show that the proposed PnP-$\alpha$PGD algorithm converges for a wider range of regularization parameters, thus allowing more accurate image restoration.
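As a point of reference for the scheme described above, the following is a hedged sketch in standard PnP notation (the symbols $f$, $g$, $\tau$, $D_\sigma$ are assumed here and not taken from the paper body): with a data-fidelity term $f$ and a regularizer $g$, PGD and its PnP counterpart iterate
\begin{align*}
  \text{PGD:} \qquad & x_{k+1} = \operatorname{prox}_{\tau g}\big(x_k - \tau \nabla f(x_k)\big), \\
  \text{PnP-PGD:} \qquad & x_{k+1} = D_\sigma\big(x_k - \tau \nabla f(x_k)\big),
\end{align*}
where the pre-trained denoiser $D_\sigma$ replaces the proximal step. The proposed PnP-$\alpha$PGD is a relaxed variant of this scheme; its precise update rule is stated in the body of the paper.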