Video instance segmentation aims to detect, segment, and track objects in a video. Current approaches extend image-level segmentation algorithms to the temporal domain. However, this results in temporally inconsistent masks. In this work, we identify the temporal stability of mask quality as a performance bottleneck. Motivated by this, we propose a video instance segmentation method that alleviates the problem of missing detections. Since this cannot be solved using spatial information alone, we leverage temporal context via inter-frame attentions. This allows our network to refocus on missing objects using box predictions from the neighbouring frame, thereby overcoming missing detections. Our method significantly outperforms previous state-of-the-art algorithms using the Mask R-CNN backbone, achieving 35.1% mAP on the YouTube-VIS benchmark. Additionally, our method is completely online and requires no future frames. Our code is publicly available at https://github.com/anirudh-chakravarthy/ObjProp.
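To make the inter-frame attention idea concrete, the following is a minimal, hypothetical sketch of how features from the current frame could attend to object features pooled from a neighbouring frame so that objects missed by the per-frame detector can be re-localized. The module name, feature shapes, and projection layers are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class InterFrameAttention(nn.Module):
    """Illustrative sketch (not the paper's implementation): current-frame
    features attend to RoI-pooled object features from a neighbouring frame,
    refocusing the network on objects the detector may have missed."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.query_proj = nn.Linear(dim, dim)  # current-frame pixels -> queries
        self.key_proj = nn.Linear(dim, dim)    # neighbouring-frame objects -> keys
        self.value_proj = nn.Linear(dim, dim)  # neighbouring-frame objects -> values
        self.scale = dim ** -0.5

    def forward(self, curr_feats: torch.Tensor, prev_obj_feats: torch.Tensor) -> torch.Tensor:
        # curr_feats:     (HW, dim) flattened features of the current frame
        # prev_obj_feats: (N, dim)  features pooled from N boxes detected in the
        #                           neighbouring frame
        q = self.query_proj(curr_feats)                        # (HW, dim)
        k = self.key_proj(prev_obj_feats)                      # (N, dim)
        v = self.value_proj(prev_obj_feats)                    # (N, dim)
        attn = torch.softmax(q @ k.t() * self.scale, dim=-1)   # (HW, N)
        # Augment current-frame features with object cues propagated from the
        # neighbouring frame.
        return curr_feats + attn @ v                           # (HW, dim)


if __name__ == "__main__":
    attn_layer = InterFrameAttention(dim=256)
    curr = torch.randn(64 * 64, 256)   # e.g. a 64x64 feature map, flattened
    prev = torch.randn(5, 256)         # 5 objects detected in the previous frame
    out = attn_layer(curr, prev)
    print(out.shape)                   # torch.Size([4096, 256])
```

Because the attention only looks at the preceding frame's detections, a mechanism of this form stays compatible with the online, no-future-frames setting described above.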