Conventional video segmentation methods often rely on temporal continuity to propagate masks. This assumption suffers from issues such as drift and the inability to handle large displacements. To overcome these issues, we formulate an effective mechanism that prevents the target from being lost via adaptive object re-identification. Specifically, our Video Object Segmentation with Re-identification (VS-ReID) model includes a mask propagation module and a ReID module. The former produces an initial probability map by flow warping, while the latter retrieves missing instances by adaptive matching. With these two modules applied iteratively, our VS-ReID achieves a global mean (Region Jaccard and Boundary F measure) of 0.699, the best performance in the 2017 DAVIS Challenge.
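To make the propagation step concrete, the following is a minimal sketch (not the authors' implementation) of mask propagation by flow warping: the previous frame's probability map is backward-warped into the current frame using optical flow, and regions where the warped probability collapses are the ones a ReID step would later try to recover. The flow convention (each pixel in frame t points back to its source in frame t-1) is an assumption for illustration.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_probability_map(prob_prev, flow_t_to_prev):
    """Backward-warp the previous frame's probability map into frame t.

    prob_prev      : (H, W) float array, per-pixel foreground probability at t-1
    flow_t_to_prev : (H, W, 2) float array, (dx, dy) mapping frame-t pixels
                     back to their source locations in frame t-1 (assumed convention)
    """
    h, w = prob_prev.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    src_x = xs + flow_t_to_prev[..., 0]
    src_y = ys + flow_t_to_prev[..., 1]
    # Bilinear sampling; pixels whose source falls outside the frame get probability 0,
    # i.e. they look "lost" and would be candidates for re-identification.
    return map_coordinates(prob_prev, [src_y, src_x],
                           order=1, mode="constant", cval=0.0)
```

In the full pipeline described above, a ReID module would compare candidate detections in the current frame against the target's appearance template and re-insert matched instances into the probability map, after which propagation resumes; that matching step is specific to the paper and is not sketched here.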