As the computing power of modern hardware increases rapidly, pre-trained deep learning models (e.g., BERT, GPT-3) trained on large-scale datasets have shown their effectiveness over conventional methods. This significant progress is mainly attributed to the representation ability of the transformer and its variant architectures. In this paper, we study low-level computer vision tasks (e.g., denoising, super-resolution, and deraining) and develop a new pre-trained model, namely, the image processing transformer (IPT). To maximally excavate the capability of the transformer, we propose to utilize the well-known ImageNet benchmark to generate a large number of corrupted image pairs. The IPT model is trained on these images with multiple task-specific heads and tails. In addition, contrastive learning is introduced so that the model adapts well to different image processing tasks. The pre-trained model can therefore be efficiently employed on a desired task after fine-tuning. With only one pre-trained model, IPT outperforms the current state-of-the-art methods on various low-level benchmarks.
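To make the idea of synthesizing corrupted image pairs from ImageNet concrete, here is a minimal sketch of on-the-fly pair generation. The degradation settings (noise level, downsampling factor) and the `make_pair` helper are illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch: synthesize (corrupted, clean) training pairs from clean images,
# as the abstract describes for ImageNet. Degradation parameters are assumed.
import torch
import torch.nn.functional as F

def make_pair(clean, task):
    """Return a (corrupted, target) pair for one training task (hypothetical helper)."""
    if task == "denoise":
        # Additive Gaussian noise; sigma = 30/255 is an assumed level.
        corrupted = (clean + 30.0 / 255.0 * torch.randn_like(clean)).clamp(0, 1)
    elif task == "sr_x2":
        # Bicubic 2x downsampling; the model learns to restore the original.
        corrupted = F.interpolate(clean, scale_factor=0.5, mode="bicubic",
                                  align_corners=False).clamp(0, 1)
    else:
        raise ValueError(f"unknown task: {task}")
    return corrupted, clean

batch = torch.rand(8, 3, 48, 48)            # stand-in for ImageNet crops in [0, 1]
noisy, target = make_pair(batch, "denoise")
```

Because the targets are the original clean images, no manual annotation is needed; any large image corpus can serve as supervision for restoration tasks.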
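The multi-head/multi-tail design can be sketched as follows: one lightweight head and tail per task, with a single transformer body shared across all tasks. This is a minimal PyTorch illustration; the module sizes, patch embedding, and task set are assumptions, not the paper's configuration (for instance, a real super-resolution tail would also upsample).

```python
# Minimal sketch of a multi-head / multi-tail transformer for restoration.
# All dimensions and task names are illustrative assumptions.
import torch
import torch.nn as nn

TASKS = ["denoise", "sr_x2", "derain"]  # assumed task set

class IPTSketch(nn.Module):
    def __init__(self, dim=64, patch=4, depth=4, nhead=8):
        super().__init__()
        # One head (feature extractor) and one tail (reconstructor) per task;
        # the transformer body in the middle is shared by every task.
        self.heads = nn.ModuleDict(
            {t: nn.Conv2d(3, dim, kernel_size=3, padding=1) for t in TASKS})
        layer = nn.TransformerEncoderLayer(d_model=dim * patch * patch,
                                           nhead=nhead, batch_first=True)
        self.body = nn.TransformerEncoder(layer, num_layers=depth)
        self.tails = nn.ModuleDict(
            {t: nn.Conv2d(dim, 3, kernel_size=3, padding=1) for t in TASKS})
        self.patch, self.dim = patch, dim

    def forward(self, x, task):
        # Assumes H and W are divisible by the patch size.
        b, _, h, w = x.shape
        p, d = self.patch, self.dim
        f = self.heads[task](x)                     # task-specific head
        # Flatten non-overlapping p x p patches into a token sequence.
        t = f.unfold(2, p, p).unfold(3, p, p)       # B, C, H/p, W/p, p, p
        t = t.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, d * p * p)
        t = self.body(t)                            # shared transformer body
        # Fold the tokens back into a feature map.
        f = t.reshape(b, h // p, w // p, d, p, p)
        f = f.permute(0, 3, 1, 4, 2, 5).reshape(b, d, h, w)
        return self.tails[task](f)                  # task-specific tail

model = IPTSketch()
noisy = torch.rand(1, 3, 32, 32)
restored = model(noisy, task="denoise")
print(restored.shape)  # torch.Size([1, 3, 32, 32])
```

During pre-training, batches from different tasks route through their own head/tail pair while updating the shared body; fine-tuning for a new task then only needs to adapt (or add) one head/tail pair.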