In recent years, deep network-based methods have continuously refreshed the state-of-the-art performance on the Salient Object Detection (SOD) task. However, performance discrepancies caused by different implementation details may conceal the real progress in this task. An impartial comparison is required for future research. To meet this need, we construct a general SALient Object Detection (SALOD) benchmark to conduct a comprehensive comparison among several representative SOD methods. Specifically, we re-implement 14 representative SOD methods using consistent training settings. Moreover, two additional protocols are set up in our benchmark to investigate the robustness of existing methods under limited conditions. In the first protocol, we enlarge the difference between the objectness distributions of the train and test sets to evaluate the robustness of these SOD methods. In the second protocol, we build multiple training subsets of different scales to validate whether these methods can extract discriminative features from only a few samples. In the above experiments, we find that existing loss functions usually specialize in some metrics but report inferior results on the others. Therefore, we propose a novel Edge-Aware (EA) loss that promotes deep networks to learn more discriminative features by integrating both pixel- and image-level supervision signals. Experiments show that our EA loss achieves more robust performance compared to existing losses.
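To make the idea of combining pixel- and image-level supervision concrete, the following is a minimal illustrative sketch, not the paper's actual EA loss formulation. It assumes the pixel-level term is an edge-weighted binary cross-entropy (with a crude edge map derived from the ground-truth gradient) and the image-level term is a soft IoU over the whole saliency map; the weighting scheme, `edge_weight`, and `alpha` are hypothetical choices for illustration only.

```python
import numpy as np

def combined_saliency_loss(pred, target, edge_weight=2.0, alpha=0.5):
    """Illustrative loss mixing pixel- and image-level supervision.

    pred, target: float arrays in [0, 1] of shape (H, W).
    NOTE: this is a hypothetical sketch, not the EA loss from the paper.
    """
    eps = 1e-7

    # Crude edge map: pixels where the ground-truth mask changes value.
    gy, gx = np.gradient(target.astype(float))
    edges = (np.abs(gx) + np.abs(gy)) > 0
    weights = np.where(edges, edge_weight, 1.0)

    # Pixel-level term: edge-weighted binary cross-entropy.
    bce = -(target * np.log(pred + eps)
            + (1.0 - target) * np.log(1.0 - pred + eps))
    pixel_term = (weights * bce).sum() / weights.sum()

    # Image-level term: 1 - soft IoU over the whole map.
    inter = (pred * target).sum()
    union = (pred + target - pred * target).sum()
    image_term = 1.0 - (inter + eps) / (union + eps)

    return alpha * pixel_term + (1.0 - alpha) * image_term
```

A perfect prediction drives both terms toward zero, while an inverted prediction is penalized by both the pixel-level and the image-level term, which is the intuition behind mixing the two supervision signals.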