The Plug-and-Play (PnP) framework makes it possible to integrate advanced image denoising priors into optimization algorithms in order to efficiently solve a variety of image restoration tasks, generally formulated as Maximum A Posteriori (MAP) estimation problems. The Plug-and-Play alternating direction method of multipliers (ADMM) and the Regularization by Denoising (RED) algorithms are two examples of such methods that made a breakthrough in image restoration. However, while the former is restricted to proximal algorithms, it has recently been shown that there exists no regularization explaining the RED algorithm when the denoiser lacks Jacobian symmetry, which happens to be the case for most practical denoisers. To the best of our knowledge, there exists no method for training a network that directly represents the gradient of a regularizer and can be plugged into gradient-based Plug-and-Play algorithms. We show that it is possible to train a network directly modeling the gradient of a MAP regularizer while jointly training the corresponding MAP denoiser. We use this network in gradient-based optimization methods and obtain better results compared to other generic Plug-and-Play approaches. We also show that the regularizer can be used as a pre-trained network for unrolled gradient descent. Lastly, we show that the resulting denoiser allows for better convergence of the Plug-and-Play ADMM.