It is well known that inverse problems are ill-posed and that solving them meaningfully requires regularization methods. Traditionally, the most popular regularization methods have been penalized variational approaches. In recent years, these classical approaches have been outperformed by the so-called plug-and-play (PnP) algorithms, which mimic proximal splitting schemes such as ADMM or FISTA but replace the proximal operator with a general denoiser. Unlike the traditional proximal gradient methods, however, PnP algorithms still lack adequate theoretical underpinnings: convergence and stability results for them remain insufficient. Hence, the results obtained from these algorithms, though empirically outstanding, cannot always be fully trusted, as they may contain instabilities or (hallucinated) features arising from the denoiser, especially when a pre-trained learned denoiser is used. In fact, in this paper, we show that a PnP algorithm can induce hallucinated features when using a pre-trained deep-learning-based (DnCNN) denoiser, and that such instabilities are quite different from the instabilities inherent to an ill-posed problem. We also present methods to subdue these instabilities and significantly improve the recoveries. We compare the advantages and disadvantages of a learned denoiser over a classical denoiser (here, BM3D), as well as the effectiveness of the FISTA-PnP algorithm versus the ADMM-PnP algorithm. In addition, we provide an algorithm that combines these two denoisers, the learned and the classical, in a weighted fashion to produce even better results. We conclude with numerical results which validate the developed theories.
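To make the PnP mechanism concrete, the following is a minimal sketch of a FISTA-style PnP iteration, assuming a linear forward operator with a known adjoint and treating the denoiser as an arbitrary callable. All names here (`pnp_fista`, `combined_denoiser`, the weight `w`, etc.) are illustrative assumptions rather than the paper's implementation, and the convex combination shown is only one hypothetical way to realize the weighted fusion of a learned and a classical denoiser mentioned above.

```python
import numpy as np

def pnp_fista(A, At, y, denoise, step, n_iters=100):
    """Sketch of PnP-FISTA: the proximal step of FISTA is replaced by a
    generic denoiser `denoise` (e.g., a learned DnCNN or classical BM3D).

    A, At   : forward operator and its adjoint (callables)
    y       : observed data
    step    : gradient step size (<= 1 / Lipschitz constant of the data term)
    """
    x = At(y)       # initial estimate
    z = x.copy()    # extrapolated point
    t = 1.0         # FISTA momentum parameter
    for _ in range(n_iters):
        grad = At(A(z) - y)               # gradient of 0.5 * ||A x - y||^2
        x_new = denoise(z - step * grad)  # denoiser replaces the prox operator
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # FISTA extrapolation
        x, t = x_new, t_new
    return x

def combined_denoiser(d_learned, d_classical, w):
    """Hypothetical convex combination of a learned and a classical denoiser;
    w in [0, 1] weights the learned denoiser against the classical one."""
    return lambda v: w * d_learned(v) + (1.0 - w) * d_classical(v)
```

In this sketch, swapping `denoise` between a learned and a classical denoiser, or passing `combined_denoiser(dncnn, bm3d, w)`, is all that distinguishes the variants compared in the paper; the data-fidelity gradient step is unchanged.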