The popularity and promotion of depth maps have brought new vigor and vitality into salient object detection (SOD), and a mass of RGB-D SOD algorithms have been proposed, mainly concentrating on how to better integrate cross-modality features from the RGB image and the depth map. For the cross-modality interaction in the feature encoder, existing methods either treat the RGB and depth modalities indiscriminately, or habitually utilize depth cues only as auxiliary information for the RGB branch. Different from them, we reconsider the status of the two modalities and propose a novel Cross-modality Discrepant Interaction Network (CDINet) for RGB-D SOD, which differentially models the dependence between the two modalities according to the feature representations of different layers. To this end, two components are designed to implement effective cross-modality interaction: 1) the RGB-induced Detail Enhancement (RDE) module leverages the RGB modality to enhance the details of the depth features in the low-level encoder stages; 2) the Depth-induced Semantic Enhancement (DSE) module transfers the object positioning and internal consistency of the depth features to the RGB branch in the high-level encoder stages. Furthermore, we design a Dense Decoding Reconstruction (DDR) structure, which constructs a semantic block by combining multi-level encoder features to upgrade the skip connections in the feature decoding. Extensive experiments on five benchmark datasets demonstrate that our network outperforms $15$ state-of-the-art methods both quantitatively and qualitatively. Our code is publicly available at: https://rmcong.github.io/proj_CDINet.html.
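The core idea of the discrepant interaction above — RGB gating depth features at low levels (RDE) and depth gating RGB features at high levels (DSE) — can be sketched in a few lines. This is a minimal NumPy illustration of the asymmetric residual-gating pattern, not the authors' actual CDINet modules (the function names and sigmoid gating here are illustrative assumptions; the released code linked above defines the real architecture):

```python
import numpy as np

def sigmoid(x):
    """Element-wise logistic gate in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-x))

def rde_sketch(rgb_low, depth_low):
    """RGB-induced Detail Enhancement (illustrative sketch):
    an RGB-derived attention gate residually sharpens the
    detail-poor low-level depth features."""
    gate = sigmoid(rgb_low)                # attention computed from RGB
    return depth_low + gate * depth_low    # residual enhancement of depth

def dse_sketch(rgb_high, depth_high):
    """Depth-induced Semantic Enhancement (illustrative sketch):
    a depth-derived gate transfers object positioning cues
    to the high-level RGB features."""
    gate = sigmoid(depth_high)             # attention computed from depth
    return rgb_high + gate * rgb_high      # residual enhancement of RGB

# Toy feature maps (H x W); real features would be C x H x W tensors.
rgb = np.ones((4, 4))
depth = np.full((4, 4), 2.0)
enhanced_depth = rde_sketch(rgb, depth)    # low-level: RGB -> depth
enhanced_rgb = dse_sketch(rgb, depth)      # high-level: depth -> RGB
```

The point of the sketch is the direction of the gating, which flips between encoder stages; in the paper this asymmetry is what distinguishes CDINet from methods that fuse the two modalities symmetrically.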