Detecting and segmenting salient objects in given image scenes has received great attention in recent years. A fundamental challenge in training existing deep saliency detection models is the requirement for large amounts of annotated data. While gathering large quantities of training data has become cheap and easy, annotating the data is an expensive process in terms of time, labor, and human expertise. To address this problem, this paper proposes to learn an effective salient object detection model from manual annotations on only a few training images, thus dramatically reducing the human labor involved in training. To this end, we name this task few-cost salient object detection and propose an adversarial-paced learning (APL)-based framework to facilitate the few-cost learning scenario. Essentially, APL is derived from the self-paced learning (SPL) regime, but it infers a robust learning pace through a data-driven adversarial learning mechanism rather than through the heuristic design of a learning regularizer. Comprehensive experiments on four widely used benchmark datasets demonstrate that the proposed method can effectively approach the performance of existing supervised deep salient object detection models with only 1k human-annotated training images. The project page is available at https://github.com/hb-stone/FC-SOD.
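To make the SPL starting point concrete, below is a minimal sketch of the classic self-paced weighting step that APL replaces with an adversarial mechanism. Under the standard hard SPL regularizer, the optimal per-sample weight has a closed form: a sample is included (weight 1) only if its loss falls below the current pace parameter, which is increased over training so harder samples enter gradually. The function and variable names are illustrative and not taken from the paper's code.

```python
def spl_weights(losses, pace):
    """Closed-form sample weights for SPL with the hard regularizer
    -pace * sum(v_i): keep a sample (v_i = 1) iff its loss < pace."""
    return [1.0 if loss < pace else 0.0 for loss in losses]

# Easy samples (loss below the pace threshold) are selected first;
# raising the pace later admits the harder samples.
losses = [0.2, 0.9, 0.5, 1.4]
print(spl_weights(losses, pace=0.6))  # -> [1.0, 0.0, 1.0, 0.0]
print(spl_weights(losses, pace=1.0))  # -> [1.0, 1.0, 1.0, 0.0]
```

APL's contribution, as described above, is to infer this pace from data via adversarial learning instead of hand-designing the regularizer and its threshold schedule.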