Recently, transformer-based methods have achieved impressive results on Video Instance Segmentation (VIS). However, most of these top-performing methods run offline, processing the entire video clip at once to predict instance mask volumes. This makes them incapable of handling the long videos that appear in challenging new VIS datasets such as UVO and OVIS. We propose a fully online transformer-based VIS model that performs comparably to top offline methods on the YouTube-VIS 2019 benchmark and considerably outperforms them on UVO and OVIS. Our method, Robust Online Video Instance Segmentation (ROVIS), augments the Mask2Former image instance segmentation model with track queries, a lightweight mechanism for carrying track information from frame to frame, originally introduced by TrackFormer for multi-object tracking. We show that, when combined with a strong enough image segmentation architecture, track queries yield impressive accuracy without constraining the model to short videos.
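To make the track-query idea concrete, the following is a minimal, self-contained sketch, not the authors' implementation. It illustrates how decoder output embeddings from frame t can be fed back as extra queries at frame t+1 so that instance identities persist online, without processing the whole clip at once. All names here (TrackQueryDecoder, run_online, the layer sizes) are illustrative assumptions; the real method also filters which queries survive based on detection confidence.

```python
import torch
import torch.nn as nn


class TrackQueryDecoder(nn.Module):
    """One cross-attention decoder layer over per-frame image features (sketch)."""

    def __init__(self, dim: int = 256, num_object_queries: int = 10):
        super().__init__()
        # Learned per-frame object queries that detect newly appearing instances.
        self.object_queries = nn.Parameter(torch.randn(num_object_queries, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, dim * 4), nn.ReLU(), nn.Linear(dim * 4, dim))

    def forward(self, frame_features: torch.Tensor, track_queries: torch.Tensor) -> torch.Tensor:
        # Concatenate persistent track queries (carried over from the previous
        # frame) with the static object queries, then attend to image features.
        batch = frame_features.size(0)
        static = self.object_queries.unsqueeze(0).expand(batch, -1, -1)
        queries = torch.cat([track_queries, static], dim=1)
        attended, _ = self.cross_attn(queries, frame_features, frame_features)
        return attended + self.ffn(attended)


def run_online(frames: list[torch.Tensor], decoder: TrackQueryDecoder) -> list[torch.Tensor]:
    """Process a video frame by frame; per-frame cost is independent of clip length."""
    dim = decoder.object_queries.size(1)
    track_queries = frames[0].new_zeros(frames[0].size(0), 0, dim)  # empty at frame 0
    outputs = []
    for feats in frames:  # feats: (batch, num_pixels, dim) flattened image features
        embeddings = decoder(feats, track_queries)
        outputs.append(embeddings)  # mask/class heads would read these embeddings
        # In the real method only queries matched to confident detections survive;
        # here we naively carry all of them forward, so the query set grows.
        track_queries = embeddings.detach()
    return outputs


if __name__ == "__main__":
    dec = TrackQueryDecoder()
    video = [torch.randn(1, 64, 256) for _ in range(5)]  # 5 frames of dummy features
    outs = run_online(video, dec)
    print([o.shape for o in outs])
```

Because each frame is processed as soon as it arrives and only a small set of query embeddings is carried forward, this design sidesteps the fixed-length mask-volume prediction that limits offline methods on long videos.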