Co-saliency detection within a single image is a common vision problem that has received little attention and has not yet been well addressed. Existing methods typically adopt a bottom-up strategy: salient regions are first detected using visual primitives such as color and shape, and are then grouped and merged into a co-saliency map. In human vision, however, co-saliency is perceived through a complex interplay of bottom-up and top-down processes. To address this problem, this study proposes a novel end-to-end trainable network comprising a backbone net and two branch nets. The backbone net uses ground-truth masks as top-down guidance for saliency prediction, whereas the two branch nets construct triplet proposals for regional feature mapping and clustering, which makes the network sensitive to co-salient regions in a bottom-up manner. To evaluate the proposed method, we construct a new dataset of 2,019 natural images, each containing co-salient regions. Experimental results show that the proposed method achieves state-of-the-art accuracy at a running speed of 28 fps.
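To make the described backbone-plus-two-branch layout concrete, below is a minimal sketch, assuming a PyTorch-style implementation; it is not the authors' code. The module names (SaliencyBackbone, ProposalBranch), the feature dimensions, the binary cross-entropy supervision on the backbone, and the triplet margin loss on the branch embeddings are illustrative assumptions about how the top-down mask guidance and the bottom-up triplet clustering could be combined.

```python
# Minimal sketch (illustrative assumptions, not the paper's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SaliencyBackbone(nn.Module):
    """Predicts a per-pixel saliency map; ground-truth masks provide
    top-down supervision (binary cross-entropy in this sketch)."""
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(feat_ch, 1, 1)

    def forward(self, x):
        feats = self.encoder(x)                # shared convolutional features
        sal = torch.sigmoid(self.head(feats))  # saliency map in [0, 1]
        return feats, sal

class ProposalBranch(nn.Module):
    """Maps pooled proposal features into an embedding space in which
    co-salient regions are encouraged to cluster."""
    def __init__(self, feat_ch=64, embed_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_ch, embed_dim), nn.ReLU(inplace=True),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, region_feats):           # (num_proposals, feat_ch)
        return F.normalize(self.mlp(region_feats), dim=1)

# Toy usage: one image and three pooled proposal features forming a triplet
# (anchor, positive, negative); the triplet margin loss drives co-salient
# regions together in the embedding space (the bottom-up cue).
backbone, branch = SaliencyBackbone(), ProposalBranch()
image = torch.randn(1, 3, 128, 128)
feats, sal_map = backbone(image)

# Global-average-pool three hypothetical proposal windows of the feature map.
crops = [feats[:, :, 0:32, 0:32],
         feats[:, :, 0:32, 64:96],
         feats[:, :, 96:128, 96:128]]
anchor, pos, neg = [branch(c.mean(dim=(2, 3))) for c in crops]

mask = torch.randint(0, 2, (1, 1, 128, 128)).float()  # ground-truth mask (dummy)
loss = F.binary_cross_entropy(sal_map, mask) \
     + F.triplet_margin_loss(anchor, pos, neg, margin=0.5)
loss.backward()
```

In a full pipeline the three crops would come from generated region proposals rather than fixed windows, and many triplets per image would be mined so that features of co-salient regions cluster while dissimilar regions are pushed apart.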