Video Instance Segmentation (VIS) is a multi-task problem that performs detection, segmentation, and tracking simultaneously. Compared with still-image tasks, video data additionally carries temporal information, which, if handled appropriately, is very useful for identifying and predicting object motion. In this work, we design a unified model that learns these tasks jointly. Specifically, we propose two modules, named Temporally Correlated Instance Segmentation (TCIS) and Bidirectional Tracking (BiTrack), to exploit the temporal correlation between an object's instance masks across adjacent frames. On the other hand, video data is often redundant because adjacent frames overlap substantially. Our analysis shows that this problem is particularly severe for the YoutubeVOS-VIS2021 data. Therefore, we propose a Multi-Source Data (MSD) training mechanism to compensate for the data deficiency. Combining these techniques with a bag of tricks significantly boosts network performance over the baseline and outperforms other methods by a considerable margin on the YoutubeVOS-VIS 2019 and 2021 datasets.
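The abstract names BiTrack without giving implementation detail, so the following is only a minimal sketch of the bidirectional-tracking idea as one plausible reading: link per-frame instance masks into tracks in both temporal directions and keep the groupings on which the two passes agree. The greedy IoU matcher, the 0.5 threshold, and the consensus rule are all hypothetical stand-ins, not the paper's method.

```python
import numpy as np

def mask_iou(a, b):
    """IoU between two boolean instance masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter) / float(union) if union > 0 else 0.0

def link_tracks(frames, threshold=0.5, reverse=False):
    """Greedily link per-frame masks into tracks by IoU with the
    most recent mask of each existing track (single pass)."""
    seq = list(reversed(frames)) if reverse else list(frames)
    tracks = [[m] for m in seq[0]]  # one track per first-frame instance
    for masks in seq[1:]:
        for m in masks:
            scores = [mask_iou(t[-1], m) for t in tracks]
            best = int(np.argmax(scores)) if scores else -1
            if best >= 0 and scores[best] >= threshold:
                tracks[best].append(m)
            else:
                tracks.append([m])  # unmatched instance starts a new track
    return tracks

def bitrack(frames, threshold=0.5):
    """Naive bidirectional consensus: keep a forward track only if the
    backward pass groups exactly the same masks. A stand-in for the
    paper's (unspecified) forward/backward merging step."""
    fwd = link_tracks(frames, threshold)
    bwd = link_tracks(frames, threshold, reverse=True)
    bwd_sets = [set(id(m) for m in t) for t in bwd]
    return [t for t in fwd if set(id(m) for m in t) in bwd_sets]
```

Here `frames` is a list of per-frame lists of boolean masks; the backward pass catches identity switches that a purely forward greedy matcher commits to early, which is the intuition behind tracking in both directions.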