Recent years have witnessed increasing interest in image dehazing. Many deep learning methods have been proposed to tackle this challenge and have achieved significant results on homogeneous haze. However, these solutions cannot maintain comparable performance when applied to images with non-homogeneous haze, e.g., the NH-HAZE23 dataset introduced by the NTIRE challenge. One reason for such failures is that non-homogeneous haze violates one of the assumptions required for modeling homogeneous haze. Moreover, traditional end-to-end training requires a large number of pairs of non-homogeneous hazy images and their clean counterparts, while the NH-HAZE23 dataset contains only a limited number of such pairs. Although it is possible to augment the NH-HAZE23 dataset by leveraging other non-homogeneous dehazing datasets, we observe that a proper data-preprocessing approach is necessary to reduce the distribution gap between the target dataset and the augmented ones. This finding aligns with the essence of data-centric AI. We therefore present a dehazing method that combines a novel network architecture with a principled data-preprocessing approach that systematically enhances data quality. Specifically, we apply RGB-channel-wise transformations to the augmented datasets and adopt a state-of-the-art Transformer as the backbone of a two-branch framework. Extensive experiments and an ablation study demonstrate the effectiveness of the proposed method.
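The abstract does not specify the exact form of the RGB-channel-wise transformations. As a minimal sketch, assuming the transformation is a per-channel affine mapping that aligns the mean and standard deviation of each RGB channel in the augmented data with those of the NH-HAZE23 training set, one could implement it as follows (the function names and the statistics-matching choice are illustrative assumptions, not the paper's stated procedure):

```python
import numpy as np

def per_channel_stats(images):
    """Per-channel mean and std over a collection of HxWx3 uint8 RGB images."""
    pixels = np.concatenate([im.reshape(-1, 3).astype(np.float32) for im in images])
    return pixels.mean(axis=0), pixels.std(axis=0)

def channelwise_match(img, src_mean, src_std, tgt_mean, tgt_std):
    """Hypothetical RGB-channel-wise transform: affine-map each channel of an
    augmented image so its statistics match those estimated on the target
    (NH-HAZE23) training set, reducing the distribution gap between datasets."""
    out = (img.astype(np.float32) - src_mean) / (src_std + 1e-6)
    return np.clip(out * tgt_std + tgt_mean, 0.0, 255.0).astype(np.uint8)

# Assumed usage: estimate statistics on each dataset, then align the
# augmented images to the target distribution before training.
src_mean, src_std = per_channel_stats(augmented_images)   # hypothetical list of images
tgt_mean, tgt_std = per_channel_stats(nh_haze23_images)   # hypothetical list of images
aligned = [channelwise_match(im, src_mean, src_std, tgt_mean, tgt_std)
           for im in augmented_images]
```

Such first-order statistics matching is one plausible way to realize the distribution-gap reduction described above; the paper's actual transformation may differ.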