In recent years, video instance segmentation (VIS) has been largely advanced by offline models, while online models have gradually attracted less attention, possibly due to their inferior performance. However, online methods have an inherent advantage in handling long and ongoing video sequences, where offline models fail due to limited computational resources. Therefore, it would be highly desirable if online models could achieve comparable or even better performance than offline models. By dissecting current online and offline models, we demonstrate that the main cause of the performance gap is the error-prone association between frames, caused by the similar appearance of different instances in the feature space. Observing this, we propose an online framework based on contrastive learning that learns more discriminative instance embeddings for association and fully exploits history information for stability. Despite its simplicity, our method outperforms all online and offline methods on three benchmarks. Specifically, we achieve 49.5 AP on YouTube-VIS 2019, a significant improvement of 13.2 AP and 2.1 AP over the prior online and offline state of the art, respectively. Moreover, we achieve 30.2 AP on OVIS, a more challenging dataset with significant crowding and occlusions, surpassing the prior art by 14.8 AP. The proposed method won first place in the video instance segmentation track of the 4th Large-scale Video Object Segmentation Challenge (CVPR 2022). We hope the simplicity and effectiveness of our method, as well as our insight into current methods, can shed light on the exploration of VIS models.
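The contrastive objective mentioned above can be illustrated with a minimal InfoNCE-style sketch: embeddings of the same instance across frames (anchor and positive) are pulled together, while embeddings of other instances (negatives) are pushed apart, making cross-frame association less error-prone. This is only an illustrative sketch, not the paper's actual implementation; the function name, the temperature value, and the use of raw NumPy vectors are assumptions for demonstration.

```python
import numpy as np

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss (illustrative): pull the same instance across
    frames together, push different instances apart in embedding space."""
    def cos(a, b):
        # cosine similarity between two embedding vectors
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    pos = np.exp(cos(anchor, positive) / temperature)
    negs = sum(np.exp(cos(anchor, n) / temperature) for n in negatives)
    # loss is small when the anchor matches its positive and differs
    # from all negatives, i.e. embeddings are discriminative
    return -np.log(pos / (pos + negs))
```

When the anchor aligns with its positive and is orthogonal to the negatives, the loss is near zero; when the anchor instead resembles a negative, the loss grows large, which is exactly the signal that drives the embeddings apart during training.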