We present a simple yet effective progressive self-guided loss function to facilitate deep-learning-based salient object detection (SOD) in images. The saliency maps produced by the most relevant existing works still suffer from incomplete predictions owing to the internal complexity of salient objects. Our proposed progressive self-guided loss simulates a morphological closing operation on the model's predictions to progressively create auxiliary training supervision that guides the training process stage by stage. We demonstrate that this new loss function guides the SOD model to highlight progressively more complete salient objects and, at the same time, helps to uncover the spatial dependencies among salient-object pixels in a region-growing manner. Moreover, we propose a new feature aggregation module that captures multi-scale features and aggregates them adaptively via a branch-wise attention mechanism. Benefiting from this module, our SOD framework exploits adaptively aggregated multi-scale features to locate and detect salient objects effectively. Experimental results on several benchmark datasets show that our loss function not only improves the performance of existing SOD models without any architectural modification but also helps our proposed framework achieve state-of-the-art performance.
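To make the core idea concrete, here is a minimal NumPy sketch of a loss built around morphological closing of the prediction. It is illustrative only: the gating of the closed prediction by the ground truth, the kernel size, and the function names are assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

def dilate(x, k=3):
    """Grey-scale dilation: sliding-window maximum over a k x k neighborhood."""
    p = k // 2
    xp = np.pad(x, p, mode="edge")
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = xp[i:i + k, j:j + k].max()
    return out

def erode(x, k=3):
    """Grey-scale erosion: sliding-window minimum over a k x k neighborhood."""
    p = k // 2
    xp = np.pad(x, p, mode="edge")
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = xp[i:i + k, j:j + k].min()
    return out

def closing(x, k=3):
    """Morphological closing = dilation followed by erosion; fills small holes."""
    return erode(dilate(x, k), k)

def progressive_self_guided_loss(pred, gt, k=3, eps=1e-7):
    """Hypothetical auxiliary supervision: close the current prediction, then
    gate it by the ground truth so the label only grows inside truly salient
    regions, and take binary cross-entropy of the prediction against it."""
    aux = closing(pred, k) * gt  # assumed gating rule; the paper's may differ
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-(aux * np.log(pred) + (1.0 - aux) * np.log(1.0 - pred)).mean())
```

Because the closing fills interior gaps in the predicted map before the gating step, the auxiliary label encourages the model to complete partially detected objects, mirroring the region-growing behavior described above.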