Recently, deep learning (DL) methods such as convolutional neural networks (CNNs) have gained prominence in the area of image denoising, owing to their proven ability to surpass state-of-the-art classical denoising algorithms such as BM3D. The deep denoising CNN (DnCNN) uses many feedforward convolution layers together with batch normalization and residual learning to improve denoising performance significantly. However, this comes at the expense of a very large number of trainable parameters. In this paper, we address this issue by reducing the number of parameters while achieving a comparable level of performance. We draw motivation from the improved performance obtained by training networks with the dense-sparse-dense (DSD) training approach. We extend this training approach to a reduced DnCNN (RDnCNN) network, resulting in a faster denoising network with significantly fewer parameters and performance comparable to that of the DnCNN.
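For concreteness, below is a minimal sketch of a DnCNN-style residual denoiser in PyTorch, reflecting the architecture described above: stacked convolution layers with batch normalization and ReLU, and a residual head that predicts the noise rather than the clean image. The depth of 17 layers, 64 feature channels, and single-channel input are illustrative assumptions, not necessarily the exact configuration used in this paper.

```python
# Hedged sketch of a DnCNN-style residual denoiser (assumes PyTorch).
import torch
import torch.nn as nn

class DnCNNSketch(nn.Module):
    def __init__(self, depth=17, channels=64, image_channels=1):
        super().__init__()
        # First layer: Conv + ReLU (no batch norm).
        layers = [nn.Conv2d(image_channels, channels, 3, padding=1),
                  nn.ReLU(inplace=True)]
        # Middle layers: Conv + BatchNorm + ReLU, as described in the text.
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                       nn.BatchNorm2d(channels),
                       nn.ReLU(inplace=True)]
        # Last layer maps back to image channels and predicts the residual (noise).
        layers.append(nn.Conv2d(channels, image_channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        # Residual learning: clean estimate = noisy input - predicted noise.
        return noisy - self.body(noisy)
```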
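The dense-sparse-dense (DSD) training approach referenced above proceeds in three phases: train a dense network, prune the smallest-magnitude weights and retrain the resulting sparse network, then restore the pruned connections and retrain densely. The sketch below illustrates this flow under stated assumptions; the 30% sparsity level and the `train_one_epoch` helper are hypothetical placeholders, not the settings used for the RDnCNN.

```python
# Hedged sketch of the dense-sparse-dense (DSD) training schedule (assumes PyTorch).
import torch

def magnitude_masks(model, sparsity=0.3):
    """Build masks that zero out the smallest-magnitude fraction of each conv weight."""
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() == 4:  # convolution weight tensors only
            k = int(sparsity * p.numel())
            if k == 0:
                continue
            # k-th smallest absolute value serves as the pruning threshold.
            threshold = p.detach().abs().flatten().kthvalue(k).values
            masks[name] = (p.detach().abs() > threshold).float()
    return masks

def apply_masks(model, masks):
    """Re-zero pruned weights (called after each optimizer step in the sparse phase)."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])

# Phase 1 (dense): ordinary training of the full network.
#   for epoch in range(n_dense): train_one_epoch(model)
# Phase 2 (sparse): prune by magnitude, then retrain while enforcing the masks.
#   masks = magnitude_masks(model, sparsity=0.3)
#   for epoch in range(n_sparse):
#       train_one_epoch(model); apply_masks(model, masks)
# Phase 3 (re-dense): drop the masks (pruned weights restart from zero) and
# retrain all weights, typically at a lower learning rate.
#   for epoch in range(n_redense): train_one_epoch(model)
```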