Deep learning-based dehazing methods trained on synthetic datasets have achieved remarkable performance, but they suffer from dramatic performance degradation on real hazy images due to domain shift. Although several Domain Adaptation (DA) dehazing methods have been presented, they inevitably require access to the source dataset to reduce the gap between the synthetic source domain and the real target domain. To address this issue, we present a novel Source-Free Unsupervised Domain Adaptation (SFUDA) image dehazing paradigm, in which only a well-trained source model and an unlabeled target dataset of real hazy images are available. Specifically, we devise a Domain Representation Normalization (DRN) module that aligns the feature representation of the real hazy domain with that of the synthetic domain to bridge the gap. With our plug-and-play DRN module, existing well-trained source networks can be adapted to unlabeled real hazy images. In addition, unsupervised losses, consisting of frequency losses and physical prior losses, are applied to guide the learning of the DRN module. The frequency losses provide structure and style constraints, while the prior losses exploit the inherent statistical properties of haze-free images. Equipped with our DRN module and unsupervised losses, existing source dehazing models are able to dehaze unlabeled real hazy images. Extensive experiments on multiple baselines demonstrate the validity and superiority of our method, both visually and quantitatively.
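To make the idea of a plug-and-play feature-alignment module concrete, the following is a minimal PyTorch sketch of one possible DRN-style layer. It whitens the channel-wise statistics of real hazy features and re-colors them with statistics assumed to be pre-computed from the synthetic source domain, plus a small learnable affine correction trained with the unsupervised losses. The class name, the use of stored source statistics, and the affine parameters are illustrative assumptions; the abstract does not specify the actual DRN design.

```python
import torch
import torch.nn as nn


class DomainRepresentationNorm(nn.Module):
    """Hypothetical sketch of a plug-and-play domain-normalization layer.

    Re-normalizes target (real hazy) feature maps so that their channel-wise
    statistics match reference statistics from the synthetic source domain.
    This is an assumed design for illustration, not the paper's exact module.
    """

    def __init__(self, num_channels: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        # Source-domain statistics, assumed to be collected offline by passing
        # synthetic images through the frozen source dehazing network.
        self.register_buffer("source_mean", torch.zeros(1, num_channels, 1, 1))
        self.register_buffer("source_std", torch.ones(1, num_channels, 1, 1))
        # Lightweight learnable correction, updated by the unsupervised losses.
        self.gamma = nn.Parameter(torch.ones(1, num_channels, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, num_channels, 1, 1))

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # Instance-wise statistics of the real hazy features.
        mean = feat.mean(dim=(2, 3), keepdim=True)
        std = feat.std(dim=(2, 3), keepdim=True) + self.eps
        # Whiten with target statistics, re-color with source statistics,
        # then apply the learnable affine correction.
        normalized = (feat - mean) / std
        aligned = normalized * self.source_std + self.source_mean
        return self.gamma * aligned + self.beta
```

Under these assumptions, such a layer would be inserted after selected encoder blocks of the frozen source network, so only the normalization parameters are trained on the unlabeled real hazy images.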