Most existing salient object detection (SOD) models are difficult to deploy because of their complex and bulky architectures. Although some lightweight models have been proposed, their accuracy is barely satisfactory. In this paper, we design a novel semantics-guided contextual fusion network (SCFNet) that focuses on the interactive fusion of multi-level features for accurate and efficient salient object detection. Furthermore, we apply knowledge distillation to the SOD task and provide a sizeable dataset, KD-SOD80K. In detail, we transfer the rich knowledge from a seasoned teacher to the untrained SCFNet through unlabeled images, enabling SCFNet to learn a strong generalization ability and detect salient objects more accurately. The knowledge-distillation-based SCFNet (KD-SCFNet) achieves accuracy comparable to that of state-of-the-art heavyweight methods with fewer than 1M parameters and a real-time detection speed of 174 FPS. Extensive experiments demonstrate the robustness and effectiveness of the proposed distillation method and SOD framework. Code and data: https://github.com/zhangjinCV/KD-SCFNet.
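To make the distillation idea concrete, below is a minimal PyTorch sketch of teacher-to-student transfer on unlabeled images: the teacher's saliency prediction serves as a soft pseudo-label for the student. `TinySOD`, the 224x224 input size, and the BCE objective are illustrative assumptions, not the authors' exact networks or loss.

```python
import torch
import torch.nn as nn

class TinySOD(nn.Module):
    """Placeholder saliency network that outputs a 1-channel saliency logit map."""
    def __init__(self, width=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

teacher = TinySOD(width=64).eval()   # stands in for a pretrained heavyweight SOD model
student = TinySOD(width=16)          # stands in for the lightweight student (e.g., SCFNet)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

# One distillation step on a batch of *unlabeled* images (e.g., from KD-SOD80K):
# the frozen teacher produces a soft saliency map that supervises the student.
images = torch.rand(4, 3, 224, 224)
with torch.no_grad():
    pseudo_labels = torch.sigmoid(teacher(images))
loss = bce(student(images), pseudo_labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Because supervision comes only from the teacher's outputs, this scheme needs no ground-truth masks for the distillation set, which is why a large unlabeled collection such as KD-SOD80K can be used directly.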