Many of the recent successful methods for video object segmentation (VOS) are overly complicated, rely heavily on fine-tuning on the first frame, and/or are slow, and are hence of limited practical use. In this work, we propose FEELVOS as a simple and fast method which does not rely on fine-tuning. In order to segment a video, for each frame FEELVOS uses a semantic pixel-wise embedding together with a global and a local matching mechanism to transfer information from the first frame and from the previous frame of the video to the current frame. In contrast to previous work, our embedding is only used as an internal guidance of a convolutional network. Our novel dynamic segmentation head allows us to train the network, including the embedding, end-to-end for the multiple object segmentation task with a cross entropy loss. We achieve a new state of the art in video object segmentation without fine-tuning with a J&F measure of 71.5% on the DAVIS 2017 validation set. We make our code and models available at https://github.com/tensorflow/models/tree/master/research/feelvos.
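To illustrate the matching idea, below is a minimal NumPy sketch of a global matching step: for each pixel of the current frame, it computes the distance in embedding space to the nearest first-frame pixel belonging to a given object. The function name, the flattened tensor shapes, and the specific bounded distance function are assumptions for illustration, not the released implementation.

```python
import numpy as np

def global_matching_distance(cur_emb, ref_emb, ref_mask):
    """Sketch of global matching: per-pixel distance from each current-frame
    embedding to the nearest reference-frame pixel of one object.

    cur_emb:  (P, C) embeddings of the current frame, flattened over pixels.
    ref_emb:  (N, C) embeddings of the reference (first) frame, flattened.
    ref_mask: (N,)   boolean mask selecting reference pixels of the object.
    Returns:  (P,)   minimum distance of each current pixel to that object.
    """
    obj_emb = ref_emb[ref_mask]                                # (M, C)
    # Pairwise squared Euclidean distances between current and object pixels.
    d2 = ((cur_emb[:, None, :] - obj_emb[None, :, :]) ** 2).sum(-1)  # (P, M)
    # Map to a bounded distance in [0, 1) (assumed form for this sketch).
    d = 1.0 - 2.0 / (1.0 + np.exp(d2))
    # Nearest-neighbor match per current-frame pixel.
    return d.min(axis=1)
```

In this sketch, one such distance map would be computed per object (and analogously for local matching restricted to a window around each pixel in the previous frame); the resulting maps, together with the previous-frame predictions and backbone features, would then serve as inputs to the dynamic segmentation head rather than being used as a final classifier themselves.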