Street scene change detection continues to attract interest in the computer vision community. It aims to identify the changed regions between paired street-view images captured at different times. State-of-the-art networks based on the encoder-decoder architecture leverage the feature maps at corresponding levels of the two streams to gain sufficient change information. Still, the efficiency of feature extraction, feature correlation calculation, and even the whole network requires further improvement. This paper proposes temporal attention and explores the impact of the dependency-scope size of temporal attention on change detection performance. In addition, based on the Temporal Attention Module (TAM), we introduce a more efficient and lightweight variant, the Dynamic Receptive Temporal Attention Module (DRTAM), and propose Concurrent Horizontal and Vertical Attention (CHVA) to improve the network's accuracy on specific challenging entities. On the street scene datasets `GSV', `TSUNAMI' and `VL-CMU-CD', our approach achieves excellent performance, establishing new state-of-the-art scores without bells and whistles, while maintaining the high efficiency required for autonomous vehicles.