Self-similarity is an image prior widely used in image restoration: small but similar patterns tend to recur at different locations and scales. However, recent deep convolutional neural network based methods for image restoration do not take full advantage of self-similarity, because they rely on self-attention modules that only process information at a single scale. To address this problem, we present a novel Pyramid Attention module for image restoration, which captures long-range feature correspondences from a multi-scale feature pyramid. Inspired by the observation that corruptions such as noise or compression artifacts drop drastically at coarser image scales, our attention module is designed to borrow signals from their "clean" correspondences at coarser levels. The proposed pyramid attention module is a generic building block that can be flexibly integrated into various neural architectures. Its effectiveness is validated through extensive experiments on multiple image restoration tasks: image denoising, demosaicing, compression artifact reduction, and super-resolution. Without bells and whistles, our PANet (pyramid attention module with simple network backbones) produces state-of-the-art results with superior accuracy and visual quality.
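The core idea — attending over keys and values drawn from a multi-scale feature pyramid rather than a single scale — can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification for intuition only: the names (`avg_pool`, `pyramid_attention`), the average-pool downscaling, and the per-pixel (rather than patch-based, learned-embedding) matching are assumptions of this sketch, not the paper's actual implementation.

```python
import numpy as np

def avg_pool(x, s):
    """Downsample an (H, W, C) feature map by factor s via average pooling."""
    H, W, C = x.shape
    return x[:H - H % s, :W - W % s].reshape(H // s, s, W // s, s, C).mean(axis=(1, 3))

def pyramid_attention(feat, scales=(1, 2, 4), temperature=1.0):
    """Toy non-local attention whose keys/values come from a feature pyramid.

    Each full-resolution position queries every position at every pyramid
    level, so coarser (less corrupted) levels can contribute to the output.
    """
    H, W, C = feat.shape
    q = feat.reshape(-1, C)  # one query per full-resolution position
    # Keys/values: all positions from all pyramid levels, concatenated.
    kv = np.concatenate([avg_pool(feat, s).reshape(-1, C) for s in scales], axis=0)
    logits = q @ kv.T / (temperature * np.sqrt(C))
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)            # softmax over all pyramid positions
    out = w @ kv                                  # weighted sum of pyramid features
    return out.reshape(H, W, C)
```

In the paper's setting the matching is done over patches with learned feature embeddings inside a trainable module; the sketch above only conveys why attention across scales gives the restorer access to "cleaner" coarse-scale correspondences.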