We present a simple yet effective progressive self-guided loss function that facilitates deep-learning-based salient object detection (SOD) in images. The saliency maps produced by even the most relevant existing works still suffer from incomplete predictions owing to the internal complexity of salient objects. Our progressive self-guided loss simulates a morphological closing operation on the model's predictions to create progressive, auxiliary training supervisions epoch by epoch, guiding the training process step by step. We demonstrate that this new loss function guides the SOD model to highlight more complete salient objects progressively, while also helping to uncover the spatial dependencies among salient-object pixels in a region-growing manner. Moreover, we propose a new feature aggregation module that captures multi-scale features and aggregates them adaptively via a branch-wise attention mechanism. Benefiting from this module, our SOD framework exploits adaptively aggregated multi-scale features to locate and detect salient objects effectively. Experimental results on several benchmark datasets show that our loss function not only improves the performance of existing SOD models without any architectural modification but also helps our proposed framework achieve state-of-the-art performance.
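The core idea of deriving an auxiliary supervision from the model's own prediction can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the structuring-element size, and the binarization threshold are all hypothetical choices for illustration. It applies a grey-scale morphological closing to a predicted saliency map so that small gaps inside a salient region are filled, yielding a pseudo label that encourages more complete predictions in a region-growing fashion.

```python
import numpy as np
from scipy.ndimage import grey_closing  # grey-scale morphological closing

def auxiliary_supervision(pred, size=3, threshold=0.5):
    """Hypothetical sketch: build an auxiliary training label by
    simulating a morphological closing on the predicted saliency map.
    Closing (dilation followed by erosion) fills small interior gaps,
    so the auxiliary label covers the salient object more completely."""
    closed = grey_closing(pred, size=(size, size))
    # Binarize the closed map to use it as a pseudo ground truth.
    return (closed >= threshold).astype(np.float32)

# Toy prediction: a salient square with one incomplete interior pixel.
pred = np.zeros((9, 9), dtype=np.float32)
pred[2:7, 2:7] = 0.9
pred[4, 4] = 0.1  # hole caused by an "incomplete" prediction
aux = auxiliary_supervision(pred)
```

In a training loop one would recompute `aux` each epoch and add a loss term (e.g. binary cross-entropy) between the current prediction and this progressively refined pseudo label, alongside the loss against the true ground truth.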