GAN-based infrared and visible image fusion methods have gained ever-increasing attention due to their effectiveness and superiority. However, existing methods adopt the global pixel distribution of the source images as the basis for discrimination, which fails to focus on key modality information. Moreover, dual-discriminator methods suffer from confrontation between the two discriminators. To this end, we propose a dual-domain adversarial fusion method for infrared and visible images (D2AFGAN). In this method, two distinct discrimination strategies are designed to improve fusion performance. Specifically, we introduce spatial attention modules (SAM) into the generator to obtain spatial attention maps, which are then used to force the discrimination of infrared images to focus on the target regions. In addition, we extend the discrimination of visible information to the wavelet subspace, which forces the generator to restore the high-frequency details of visible images. Ablation experiments demonstrate the effectiveness of our method in eliminating the confrontation between the discriminators, and comparison experiments on public datasets demonstrate the effectiveness and superiority of the proposed method.
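To make the first discrimination strategy concrete, the following is a minimal sketch of a CBAM-style spatial attention module and of how its map could gate the infrared discriminator's input. The class name, the pooling scheme, and the masking step are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Sketch of a spatial attention module (assumed CBAM-style):
    channel-wise average and max pooling are concatenated and mapped
    to a single-channel attention map in [0, 1]."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        avg_pool = feat.mean(dim=1, keepdim=True)        # (B, 1, H, W)
        max_pool = feat.max(dim=1, keepdim=True).values  # (B, 1, H, W)
        return torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))

# Hypothetical use: the attention map from the generator masks what the
# infrared discriminator sees, so discrimination focuses on target regions.
# att = sam(generator_features)       # (B, 1, H, W) attention map
# d_real = D_ir(att * infrared_image) # real branch, target regions only
# d_fake = D_ir(att * fused_image)    # fake branch, target regions only
```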
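The second strategy, discriminating visible information in the wavelet subspace, can be sketched as follows, assuming a single-level Haar DWT via PyWavelets; the helper name wavelet_highfreq and the subband stacking are hypothetical, chosen only to illustrate feeding high-frequency detail coefficients to the visible discriminator.

```python
import numpy as np
import pywt

def wavelet_highfreq(image: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    """Single-level 2-D DWT; returns the three high-frequency subbands
    (horizontal, vertical, and diagonal detail coefficients) stacked
    as a 3-channel array."""
    _, (cH, cV, cD) = pywt.dwt2(image, wavelet)
    return np.stack([cH, cV, cD], axis=0)

# Hypothetical use: the visible discriminator compares detail subbands of
# the fused image against those of the visible image, pressuring the
# generator to restore high-frequency visible details.
# real_hf = wavelet_highfreq(visible_image)
# fake_hf = wavelet_highfreq(fused_image)
```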