This paper tackles the problem of video object segmentation, given some user annotation indicating the object of interest. The problem is formulated as pixel-wise retrieval in a learned embedding space: we embed pixels of the same object instance into the vicinity of each other, using a fully convolutional network trained with a modified triplet loss as the embedding model. The annotated pixels then serve as references, and the remaining pixels are classified with a nearest-neighbor approach. The proposed method supports different kinds of user input, such as a segmentation mask in the first frame (semi-supervised scenario) or a sparse set of clicked points (interactive scenario). In the semi-supervised scenario, we achieve results competitive with the state of the art at a fraction of the computational cost (275 milliseconds per frame). In the interactive scenario, where the user can refine their input iteratively, the proposed method provides an instant response to each input and reaches quality comparable to competing methods with much less interaction.
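To make the retrieval formulation concrete, the sketch below (not part of the paper; the function name and array shapes are illustrative) shows the nearest-neighbor classification step, assuming pixel embeddings have already been produced by the trained fully convolutional network:

```python
import numpy as np

def nearest_neighbor_labels(query_emb, ref_emb, ref_labels):
    """Assign each query pixel the label of its nearest reference pixel.

    query_emb:  (Nq, D) embeddings of the current frame's pixels
    ref_emb:    (Nr, D) embeddings of the user-annotated reference pixels
    ref_labels: (Nr,)   object/background label of each reference pixel
    """
    # Squared Euclidean distances between every query and reference embedding.
    d2 = (
        np.sum(query_emb ** 2, axis=1, keepdims=True)
        - 2.0 * query_emb @ ref_emb.T
        + np.sum(ref_emb ** 2, axis=1)
    )
    nearest = np.argmin(d2, axis=1)   # index of the closest reference pixel
    return ref_labels[nearest]        # transfer its label to the query pixel
```

Under this view, reference pixels come from the first-frame mask in the semi-supervised scenario or from clicked points in the interactive one; additional user clicks simply extend the reference set, so no retraining of the embedding network is required when the input is refined.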