Image deblurring is an ill-posed task: for any blurry image there exist infinitely many feasible solutions. Modern deep learning approaches usually discard the learning of blur kernels and directly employ end-to-end supervised learning. Popular deblurring datasets define the label as one of the feasible solutions. However, we argue that it is not reasonable to specify a single label directly, especially when that label is sampled from a random distribution. Therefore, we propose to make the network learn the distribution of feasible solutions, and based on this consideration we design a novel multi-head output architecture and a corresponding loss function for distribution learning. Our approach enables the model to output multiple feasible solutions that approximate the target distribution. We further propose a novel parameter multiplexing method that reduces the number of parameters and the computational cost while improving performance. We evaluate our approach on multiple image deblurring models, including the current state-of-the-art NAFNet. In best-overall PSNR (picking the highest score among the multiple heads for each validation image), our method outperforms the compared baselines by up to 0.11~0.18 dB; in best-single-head PSNR (picking the head that performs best over the whole validation set), it outperforms the compared baselines by up to 0.04~0.08 dB. The code is available at https://github.com/Liu-SD/multi-output-deblur.
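The two reported metrics only differ in the order of the max and mean over a matrix of per-image, per-head PSNR scores. The following is a minimal sketch (not taken from the released code) of how they could be computed; the function name, argument, and array shape are illustrative assumptions.

```python
import numpy as np

def multi_head_psnr_metrics(psnr_matrix):
    """Aggregate per-head PSNR scores over a validation set.

    psnr_matrix: array of shape (num_images, num_heads), where entry
    [i, h] is the PSNR of head h's output on validation image i.
    (Name and shape are assumptions for illustration only.)
    """
    psnr_matrix = np.asarray(psnr_matrix, dtype=np.float64)

    # Best overall: for each validation image, keep the highest PSNR
    # among the heads, then average over the validation images.
    best_overall = psnr_matrix.max(axis=1).mean()

    # Best single head: average each head over the whole validation
    # set, then keep the head with the highest mean PSNR.
    best_single_head = psnr_matrix.mean(axis=0).max()

    return best_overall, best_single_head

if __name__ == "__main__":
    # Toy example: 3 validation images, 4 output heads.
    scores = [[30.1, 30.4, 29.8, 30.0],
              [28.5, 28.9, 29.2, 28.7],
              [31.0, 30.6, 30.9, 31.2]]
    overall, single = multi_head_psnr_metrics(scores)
    print(f"best overall: {overall:.2f} dB, best single head: {single:.2f} dB")
```

By construction, best-overall PSNR is always at least as high as best-single-head PSNR, which matches the larger gains reported for the former.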