Salient Object Detection (SOD) using RGB-D data has recently gained attention, with several current models producing reasonably accurate results. However, these models exhibit limited generalization ability and high computational complexity. In this paper, motivated by the strong background/foreground separation capability of deformable convolutions, we employ them in our Densely Deformable Network (DDNet) to achieve efficient SOD. The salient regions extracted by the densely deformable convolutions are further refined using transposed convolutions to generate the final saliency maps. Quantitative and qualitative evaluations on a recent SOD dataset against 22 competing techniques demonstrate our method's efficiency and effectiveness. We also provide a cross-dataset evaluation on our newly created Surveillance-SOD (S-SOD) dataset to assess the validity of trained models in diverse, unseen scenarios. The results indicate that current models have limited generalization potential, demanding further research in this direction. Our code and new dataset will be publicly available at https://github.com/tanveer-hussain/EfficientSOD
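To make the described pipeline concrete, below is a minimal PyTorch sketch of the architecture the abstract outlines: densely connected deformable convolutions to extract salient regions, followed by transposed convolutions that upsample them into a saliency map. The block structure, layer counts, channel widths, growth rate, and the names DeformBlock, DenselyDeformable, and DDNetSketch are illustrative assumptions for exposition, not the authors' released implementation.

```python
# A minimal sketch of the idea behind DDNet, assuming a PyTorch implementation.
# Layer counts, channel widths, and the dense connectivity pattern are
# assumptions; the abstract only states that densely connected deformable
# convolutions extract salient regions, which transposed convolutions then
# upsample into the final saliency map.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformBlock(nn.Module):
    """One deformable convolution whose sampling offsets are predicted
    from the input feature map itself (hypothetical building block)."""

    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # 2 offsets (x, y) per kernel location
        self.offset = nn.Conv2d(in_ch, 2 * k * k, k, padding=k // 2)
        self.deform = DeformConv2d(in_ch, out_ch, k, padding=k // 2)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.deform(x, self.offset(x)))


class DenselyDeformable(nn.Module):
    """Deformable blocks with dense (concatenative) connectivity,
    in the spirit of DenseNet."""

    def __init__(self, in_ch, growth=32, n_blocks=3):
        super().__init__()
        self.blocks = nn.ModuleList(
            DeformBlock(in_ch + i * growth, growth) for i in range(n_blocks)
        )

    def forward(self, x):
        feats = [x]
        for blk in self.blocks:
            # Each block sees the concatenation of all earlier feature maps.
            feats.append(blk(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)


class DDNetSketch(nn.Module):
    """Densely deformable encoder followed by a transposed-convolution
    decoder that produces a one-channel saliency map."""

    def __init__(self, in_ch=4, growth=32, n_blocks=3):  # RGB + depth = 4 channels
        super().__init__()
        self.stem = nn.Conv2d(in_ch, 32, 3, stride=2, padding=1)
        self.dense = DenselyDeformable(32, growth, n_blocks)
        dense_out = 32 + n_blocks * growth
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(dense_out, 32, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.decode(self.dense(self.stem(x))))


if __name__ == "__main__":
    # RGB-D input: a batch of 4-channel images (RGB + depth).
    x = torch.randn(2, 4, 224, 224)
    print(DDNetSketch()(x).shape)  # torch.Size([2, 1, 224, 224])
```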