Unsupervised multimodal change detection is a practical and challenging topic that can play an important role in time-sensitive emergency applications. To address the challenge that multimodal remote sensing images cannot be directly compared owing to their modal heterogeneity, we exploit two types of modality-independent structural relationships in multimodal images. In particular, we present a structural relationship graph representation learning framework for measuring the similarity of these two structural relationships. First, structural graphs are generated from the preprocessed multimodal image pair by means of an object-based image analysis approach. Then, a structural relationship graph convolutional autoencoder (SR-GCAE) is proposed to learn robust and representative features from the graphs. Two loss functions, aimed at reconstructing vertex information and edge information respectively, are presented to make the learned representations applicable to structural relationship similarity measurement. Subsequently, the similarity levels of the two structural relationships are calculated from the learned graph representations, and two difference images are generated from these similarity levels. An adaptive fusion strategy is then presented to fuse the two difference images. Finally, a morphological filtering-based postprocessing approach is employed to refine the detection results. Experimental results on five datasets with different modal combinations demonstrate the effectiveness of the proposed method.
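To make the autoencoding step concrete, the following is a minimal, illustrative sketch of a graph convolutional autoencoder trained with both a vertex-reconstruction loss and an edge-reconstruction loss. It is not the authors' exact SR-GCAE: it assumes a vanilla two-layer GCN encoder, an inner-product edge decoder, and plain PyTorch, and all names (`GCNLayer`, `GraphConvAutoencoder`, `num_feats`, etc.) are hypothetical.

```python
# Sketch only: a generic graph convolutional autoencoder with vertex- and
# edge-reconstruction losses, assuming a vanilla GCN and an inner-product
# edge decoder (architectural details of SR-GCAE may differ).
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2} used by vanilla GCNs."""
    a_hat = adj + torch.eye(adj.size(0))
    deg_inv_sqrt = a_hat.sum(dim=1).clamp(min=1e-8).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)


class GCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, a_norm: torch.Tensor) -> torch.Tensor:
        # Aggregate neighbor features with the normalized adjacency, then transform.
        return a_norm @ self.linear(x)


class GraphConvAutoencoder(nn.Module):
    def __init__(self, num_feats: int, hidden: int = 64, latent: int = 16):
        super().__init__()
        self.enc1 = GCNLayer(num_feats, hidden)
        self.enc2 = GCNLayer(hidden, latent)
        self.dec = GCNLayer(latent, num_feats)  # decodes vertex attributes

    def forward(self, x, a_norm):
        z = self.enc2(F.relu(self.enc1(x, a_norm)), a_norm)  # node embeddings
        x_rec = self.dec(z, a_norm)                           # vertex reconstruction
        a_rec = torch.sigmoid(z @ z.T)                        # inner-product edge decoder
        return z, x_rec, a_rec


# Toy usage: a random graph with 100 vertices and 8-dimensional attributes.
x = torch.rand(100, 8)
adj = (torch.rand(100, 100) > 0.9).float()
adj = ((adj + adj.T) > 0).float()        # symmetrize
a_norm = normalize_adj(adj)

model = GraphConvAutoencoder(num_feats=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(50):
    z, x_rec, a_rec = model(x, a_norm)
    # Vertex-reconstruction loss + edge-reconstruction loss.
    loss = F.mse_loss(x_rec, x) + F.binary_cross_entropy(a_rec, adj)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In such a setup, the learned embeddings `z` (and the reconstructed vertex/edge information) could then be compared across the two modalities to score structural-relationship similarity and form the difference images described above; the fusion and morphological postprocessing steps are independent of this sketch.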