Many of the recent successful methods for video object segmentation (VOS) are overly complicated, rely heavily on fine-tuning on the first frame, and/or are slow, which limits their practical use. In this work, we propose FEELVOS as a simple and fast method that does not rely on fine-tuning. To segment a video, FEELVOS uses, for each frame, a semantic pixel-wise embedding together with a global and a local matching mechanism to transfer information from the first frame and from the previous frame of the video to the current frame. In contrast to previous work, our embedding is only used as internal guidance for a convolutional network. Our novel dynamic segmentation head allows us to train the network, including the embedding, end-to-end for the multi-object segmentation task with a cross-entropy loss. We achieve a new state of the art in video object segmentation without fine-tuning, reaching a J&F measure of 69.1% on the DAVIS 2017 validation set.