Training deep learning-based change detection (CD) models relies heavily on labeled data. Contemporary transfer learning-based methods that alleviate CD label insufficiency mainly build upon ImageNet pre-training. A recent trend is to use remote sensing (RS) data to obtain in-domain representations via supervised or self-supervised learning (SSL). Here, unlike traditional supervised pre-training that learns the mapping from image to label, we leverage semantic supervision in a contrastive manner. RS images typically contain multiple objects of interest (e.g., buildings) distributed at varying locations. We propose dense semantic-aware pre-training for RS image CD via sampling multiple class-balanced points. Instead of manipulating image-level representations, which lack spatial information, we constrain pixel-level cross-view consistency and cross-semantic discrimination to learn spatially sensitive features, thus benefiting downstream dense CD. Beyond learning illumination-invariant features, we obtain consistent foreground features that are insensitive to irrelevant background changes via a synthetic view generated by background swapping. We additionally achieve discriminative representations that distinguish foreground land covers from others. We collect large-scale image-mask pairs freely available in the RS community for pre-training. Extensive experiments on three CD datasets verify the effectiveness of our method. Ours significantly outperforms ImageNet pre-training, in-domain supervision, and several SSL methods. Empirical results indicate that our method well alleviates data insufficiency in CD. Notably, we achieve competitive results using only 20% of the training data compared to the baseline (random initialization) using 100%. Both quantitative and qualitative results demonstrate the generalization ability of our pre-trained model to downstream images, even when domain gaps with the pre-training data remain. Our code will be made public.
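The core ideas above (class-balanced point sampling from semantic masks, then a pixel-level contrastive objective enforcing cross-view consistency and cross-semantic discrimination) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the function names, the single-image batch, and the supervised-contrastive form of the loss are illustrative assumptions.

```python
import numpy as np

def sample_class_balanced_points(mask, classes, n_per_class, rng):
    """Sample an equal number of pixel coordinates per semantic class.

    mask: (H, W) integer semantic mask. Returns (N, 2) (row, col) points
    and their (N,) class labels, with N = len(classes) * n_per_class.
    """
    pts, labels = [], []
    for c in classes:
        ys, xs = np.nonzero(mask == c)
        # Resample with replacement if the class has too few pixels.
        idx = rng.choice(len(ys), size=n_per_class, replace=len(ys) < n_per_class)
        pts.append(np.stack([ys[idx], xs[idx]], axis=1))
        labels.append(np.full(n_per_class, c))
    return np.concatenate(pts), np.concatenate(labels)

def pixel_contrastive_loss(feat_a, feat_b, pts, labels, tau=0.1):
    """Pixel-level contrastive loss at sampled points (illustrative sketch).

    feat_a, feat_b: (C, H, W) dense features of two views of the same scene.
    Same-class points across views act as positives (cross-view consistency);
    different-class points act as negatives (cross-semantic discrimination).
    """
    za = feat_a[:, pts[:, 0], pts[:, 1]].T          # (N, C) point features, view A
    zb = feat_b[:, pts[:, 0], pts[:, 1]].T          # (N, C) point features, view B
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    logits = za @ zb.T / tau                        # (N, N) cross-view similarities
    pos = labels[:, None] == labels[None, :]        # positive mask (same semantics)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Average negative log-likelihood over each anchor's positives.
    return float(-(np.where(pos, log_prob, 0.0).sum(axis=1) / pos.sum(axis=1)).mean())
```

A pre-training step would compute this loss between an augmented view and the background-swapped synthetic view, so that foreground point features stay consistent while background changes are treated as irrelevant.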