Deep learning-based methods hold state-of-the-art results in low-level image processing tasks, but remain difficult to interpret due to their black-box construction. Unrolled optimization networks present an interpretable alternative to constructing deep neural networks by deriving their architecture from classical iterative optimization methods, without the use of tricks from the standard deep learning toolbox. So far, such methods have demonstrated performance close to that of state-of-the-art models while using their interpretable construction to achieve a comparably low learned parameter count. In this work, we propose an unrolled convolutional dictionary learning network (CDLNet) and demonstrate its competitive denoising and joint denoising and demosaicing (JDD) performance in both low and high parameter count regimes. Specifically, we show that the proposed model outperforms state-of-the-art fully convolutional denoising and JDD models when scaled to a similar parameter count. In addition, we leverage the model's interpretable construction to propose a noise-adaptive parameterization of thresholds in the network, which enables state-of-the-art blind denoising performance and near-perfect generalization to noise levels unseen during training. Furthermore, we show that such performance extends to the JDD task and unsupervised learning.