Recent years have witnessed increasing interest in image dehazing. Many deep learning methods have been proposed to tackle this challenge and have achieved remarkable results on homogeneous haze. However, these solutions cannot maintain comparable performance when applied to images with non-homogeneous haze, e.g., the NH-HAZE23 dataset introduced by the NTIRE challenge. One reason for such failures is that non-homogeneous haze violates one of the assumptions required for modeling homogeneous haze. In addition, traditional end-to-end training approaches require a large number of pairs of non-homogeneous hazy images and their clean counterparts, whereas the NH-HAZE23 dataset is limited in size. Although it is possible to augment the NH-HAZE23 dataset by leveraging other non-homogeneous dehazing datasets, we observe that a proper data-preprocessing approach is necessary to reduce the distribution gap between the target dataset and the augmented one. This finding aligns with the essence of data-centric AI. Combining a novel network architecture with a principled data-preprocessing approach that systematically enhances data quality, we present an innovative dehazing method. Specifically, we apply RGB-channel-wise transformations to the augmented datasets and incorporate state-of-the-art Transformers as the backbone of our two-branch framework. We conduct extensive experiments and an ablation study to demonstrate the effectiveness of the proposed method.
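The abstract does not specify the exact form of the RGB-channel-wise transformation, so the following is only a minimal sketch of one plausible instantiation: a per-channel mean/std alignment that shifts an augmented image's channel statistics toward those measured on the target dataset. The function name `align_rgb_channels` and the parameters `target_mean` and `target_std` are hypothetical and not taken from the paper.

```python
import numpy as np

def align_rgb_channels(img, target_mean, target_std, eps=1e-6):
    """Hypothetical per-channel affine alignment (not the paper's exact method).

    Shifts and scales each RGB channel of `img` so its mean and std match
    per-channel statistics computed over the target dataset (e.g., NH-HAZE23),
    reducing the distribution gap between augmented and target data.

    img: float32 array in [0, 1], shape (H, W, 3).
    target_mean, target_std: length-3 arrays of per-channel statistics.
    """
    out = np.empty_like(img)
    for c in range(3):
        ch = img[..., c]
        # Standardize the channel, then rescale to the target statistics.
        out[..., c] = (ch - ch.mean()) / (ch.std() + eps) * target_std[c] + target_mean[c]
    return np.clip(out, 0.0, 1.0)
```

Under this assumed scheme, `target_mean` and `target_std` would be computed once over the target dataset's images, and the transformation applied to every image drawn from the auxiliary dehazing datasets before training.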