Segmenting objects in videos is a fundamental computer vision task. The current deep-learning-based paradigm offers a powerful but data-hungry solution. However, existing datasets are limited by the cost and human effort of annotating object masks in videos, which in turn limits the performance and generalization of current video segmentation methods. To address this issue, we explore a weaker form of supervision: bounding box annotations. We introduce a method for generating segmentation masks from per-frame bounding box annotations in videos. To this end, we propose a spatio-temporal aggregation module that effectively mines consistencies in object and background appearance across multiple frames. We use the resulting accurate masks for weakly supervised training of video object segmentation (VOS) networks. In particular, we generate segmentation masks for large-scale tracking datasets, using only their bounding box annotations. The additional data provides substantially better generalization, leading to state-of-the-art results in both the VOS domain and the more challenging tracking domain.
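To make the core idea concrete, the following is a minimal sketch, not the paper's actual aggregation module: it pools foreground and background appearance prototypes across all frames using the box annotations, then scores each pixel by similarity to the two prototypes to obtain a soft mask. All names (e.g. `boxes_to_soft_masks`) and the cosine-prototype formulation are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def boxes_to_soft_masks(feats, box_masks, temp=0.1, eps=1e-6):
    """Illustrative sketch of spatio-temporal appearance aggregation
    (hypothetical simplification, not the paper's module).

    feats:     (T, C, H, W) per-frame feature maps
    box_masks: (T, 1, H, W) binary masks rasterized from the boxes
    returns:   (T, 1, H, W) soft foreground masks
    """
    # Foreground/background appearance prototypes, pooled over ALL frames,
    # mining appearance consistency across the whole sequence.
    fg = (feats * box_masks).sum(dim=(0, 2, 3)) / (box_masks.sum() + eps)
    bg_w = 1.0 - box_masks
    bg = (feats * bg_w).sum(dim=(0, 2, 3)) / (bg_w.sum() + eps)

    # Per-pixel cosine similarity to each prototype.
    f = F.normalize(feats, dim=1)
    fg_sim = (f * F.normalize(fg, dim=0).view(1, -1, 1, 1)).sum(dim=1, keepdim=True)
    bg_sim = (f * F.normalize(bg, dim=0).view(1, -1, 1, 1)).sum(dim=1, keepdim=True)

    # Softmax over {fg, bg}; pixels outside every box cannot be foreground,
    # so the box acts as a hard spatial prior on the soft mask.
    soft = torch.softmax(torch.stack([fg_sim, bg_sim]) / temp, dim=0)[0]
    return soft * box_masks
```

Masks produced this way could then serve as pseudo-ground-truth for training a VOS network, with the box annotations as the only human-provided supervision.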