Image translation with convolutional neural networks has recently been used as an approach to multimodal change detection. Existing approaches train the networks by exploiting supervised information about the change areas, which, however, is not always available. A main challenge in the unsupervised setting is to prevent change pixels from affecting the learning of the translation function. We propose two new network architectures trained with loss functions weighted by priors that reduce the impact of change pixels on the learning objective. The change prior is derived in an unsupervised fashion from relational pixel information captured by domain-specific affinity matrices. Specifically, we use the vertex degrees associated with an absolute affinity difference matrix and demonstrate their utility in combination with cycle consistency and adversarial training. The proposed neural networks are compared with state-of-the-art algorithms. Experiments conducted on three real datasets show the effectiveness of our methodology.
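The core idea — a change prior obtained from the vertex degrees of an absolute affinity difference matrix — can be sketched as follows. This is a minimal illustration under assumed choices (Gaussian-kernel affinities, min-max normalization, 1-D pixel features); the paper's actual construction and hyperparameters may differ.

```python
import numpy as np

def change_prior(x, y, sigma=0.1):
    """Illustrative sketch: unsupervised change prior from affinity matrices.

    x, y: flattened per-pixel feature vectors of shape (n,), one from each
    image modality, assumed normalized to [0, 1]. Returns a per-pixel prior
    in [0, 1]; high values indicate pixels whose relational structure differs
    across the two modalities, i.e. likely change.
    """
    # Domain-specific affinity matrices (Gaussian kernel on pairwise
    # feature differences is an assumed choice, not the paper's exact one).
    ax = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * sigma**2))
    ay = np.exp(-((y[:, None] - y[None, :]) ** 2) / (2 * sigma**2))
    # Vertex degrees of the absolute affinity difference matrix:
    # row sums of |A_x - A_y|.
    deg = np.abs(ax - ay).sum(axis=1)
    # Min-max normalization to [0, 1].
    return (deg - deg.min()) / (deg.max() - deg.min() + 1e-12)

# The third pixel relates very differently to its neighbours in the two
# modalities (bright in x, dark in y), so it receives the highest prior.
x = np.array([0.10, 0.20, 0.90, 0.15])
y = np.array([0.12, 0.22, 0.10, 0.17])
prior = change_prior(x, y)
```

In training, such a prior would weight the translation loss per pixel (e.g. multiplying the reconstruction error by `1 - prior`) so that likely-changed pixels contribute less to the learning objective.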