In this paper, we present a novel end-to-end group collaborative learning network, termed GCoNet+, which can effectively and efficiently (250 fps) identify co-salient objects in natural scenes. The proposed GCoNet+ achieves new state-of-the-art performance for co-salient object detection (CoSOD) by mining consensus representations based on the following two essential criteria: 1) intra-group compactness, which better formulates the consistency among co-salient objects by capturing their inherent shared attributes with our novel group affinity module (GAM); and 2) inter-group separability, which effectively suppresses the influence of noisy objects on the output by introducing our new group collaborating module (GCM) conditioned on the inconsistent consensus. To further improve the accuracy, we design a series of simple yet effective components: i) a recurrent auxiliary classification module (RACM) that promotes model learning at the semantic level; ii) a confidence enhancement module (CEM) that improves the quality of the final predictions; and iii) a group-based symmetric triplet (GST) loss that guides the model to learn more discriminative features. Extensive experiments on three challenging benchmarks, i.e., CoCA, CoSOD3k, and CoSal2015, demonstrate that our GCoNet+ outperforms 12 existing cutting-edge models. Code has been released at https://github.com/ZhengPeng7/GCoNet_plus.
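The abstract describes the GAM only at a high level. As a rough illustration of the intra-group compactness idea, below is a minimal sketch in PyTorch of group-wise affinity and consensus pooling: features from all images in a group attend to one another, and the aggregated consensus modulates each image's feature map. This is not the authors' implementation; the class name `GroupAffinitySketch`, its layers, and the sigmoid modulation are hypothetical choices for exposition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupAffinitySketch(nn.Module):
    """Illustrative group-wise affinity, not the official GAM."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (N, C, H, W), the N images of one group.
        n, c, h, w = feats.shape
        q = self.query(feats).flatten(2).permute(0, 2, 1).reshape(n * h * w, c)
        k = self.key(feats).flatten(2).permute(0, 2, 1).reshape(n * h * w, c)
        # Affinity between every spatial location across all image pairs
        # in the group captures the shared attributes of co-salient objects.
        affinity = q @ k.t() / c ** 0.5            # (N*HW, N*HW)
        weights = F.softmax(affinity, dim=-1)
        # Consensus: each location aggregates evidence from the whole group.
        v = feats.flatten(2).permute(0, 2, 1).reshape(n * h * w, c)
        consensus = (weights @ v).reshape(n, h * w, c).permute(0, 2, 1)
        consensus = consensus.reshape(n, c, h, w)
        # Broadcast the group consensus back to modulate each image's features.
        return feats * torch.sigmoid(consensus)

# Usage: gam = GroupAffinitySketch(256); out = gam(torch.randn(5, 256, 14, 14))
```

In this reading, intra-group compactness arises because every image's features are re-weighted by evidence pooled over the entire group, so attributes shared across the group are amplified while image-specific distractors are attenuated; see the released code for the actual GAM and the inter-group GCM.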