Under-display camera (UDC) provides an elegant solution for full-screen smartphones. However, UDC-captured images suffer from severe degradation since the sensor lies beneath the display panel. Although this issue can be tackled by image restoration networks, such networks require large-scale image pairs for training. To this end, we propose a modular network dubbed MPGNet, trained under the generative adversarial network (GAN) framework, for simulating UDC imaging. Specifically, we note that the UDC imaging degradation process comprises brightness attenuation, blurring, and noise corruption. Thus, we model each degradation with a characteristic-related modular network, and all modular networks are cascaded to form the generator. Together with a pixel-wise discriminator and a supervised loss, we train the generator to simulate the UDC imaging degradation process. Furthermore, we present a Transformer-style network named DWFormer for UDC image restoration. For practical purposes, we use depth-wise convolution instead of multi-head self-attention to aggregate local spatial information. Moreover, we propose a novel channel attention module to aggregate global information, which is critical for brightness recovery. We conduct evaluations on the UDC benchmark, and our method surpasses the previous state-of-the-art models by 1.23 dB on the P-OLED track and 0.71 dB on the T-OLED track, respectively.
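The three degradations MPGNet models can be sketched analytically. The toy pipeline below applies brightness attenuation, a box-filter blur, and additive Gaussian noise in sequence; all parameters and the box filter itself are illustrative stand-ins, not the learned modules from the paper.

```python
import numpy as np

def degrade_udc(img, atten=0.6, blur_size=3, noise_sigma=0.02, seed=0):
    """Toy sketch of the UDC degradation cascade: brightness
    attenuation -> blur -> noise. Parameters are illustrative."""
    rng = np.random.default_rng(seed)
    # 1) Brightness attenuation: the display panel blocks part of the light.
    out = img * atten
    # 2) Blurring: a simple box filter stands in for the panel's
    #    point spread function (which MPGNet learns).
    k = blur_size
    pad = k // 2
    padded = np.pad(out, pad, mode="edge")
    blurred = np.zeros_like(out)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + out.shape[0], dx:dx + out.shape[1]]
    out = blurred / (k * k)
    # 3) Noise corruption: additive Gaussian noise as a stand-in
    #    for the learned sensor-noise model.
    out = out + rng.normal(0.0, noise_sigma, out.shape)
    return np.clip(out, 0.0, 1.0)

clean = np.full((8, 8), 0.8)   # uniform gray test image in [0, 1]
degraded = degrade_udc(clean)  # darker, blurred, noisy version
```

In MPGNet each of these stages is replaced by a characteristic-related learned module, and the cascade is trained adversarially so the simulated degradations match real UDC captures.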
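Why channel attention matters for brightness recovery can be seen in a squeeze-and-excitation-style sketch: global average pooling exposes the image-wide intensity level that purely local depth-wise convolutions cannot see. The bottleneck weights `w1`/`w2` below are random placeholders, and the paper's actual module differs in its details.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Sketch of channel attention on a (C, H, W) feature map:
    pool globally per channel, pass through a small bottleneck,
    and gate each channel with a sigmoid weight in (0, 1)."""
    # Squeeze: one global statistic per channel, shape (C,)
    squeezed = feat.mean(axis=(1, 2))
    # Excite: bottleneck MLP, ReLU then sigmoid gating
    hidden = np.maximum(squeezed @ w1, 0.0)
    gates = 1.0 / (1.0 + np.exp(-(hidden @ w2)))  # shape (C,)
    # Rescale each channel by its gate
    return feat * gates[:, None, None]

rng = np.random.default_rng(0)
feat = rng.normal(size=(4, 8, 8))       # toy feature map: 4 channels
w1 = rng.normal(size=(4, 2)) * 0.1      # placeholder bottleneck weights
w2 = rng.normal(size=(2, 4)) * 0.1
out = channel_attention(feat, w1, w2)   # globally re-weighted channels
```

Because the gates depend on a global pooled statistic, the module can uniformly rescale channels to compensate for the brightness attenuation introduced by the display, which is exactly the global cue local convolutions lack.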