We propose MinVIS, a minimal video instance segmentation (VIS) framework that achieves state-of-the-art VIS performance with neither video-based architectures nor training procedures. By only training a query-based image instance segmentation model, MinVIS outperforms the previous best result on the challenging Occluded VIS dataset by over 10% AP. Since MinVIS treats frames in training videos as independent images, we can drastically sub-sample the annotated frames in training videos without any modifications. With only 1% of labeled frames, MinVIS outperforms or is comparable to fully-supervised state-of-the-art approaches on YouTube-VIS 2019/2021. Our key observation is that queries trained to be discriminative between intra-frame object instances are temporally consistent and can be used to track instances without any manually designed heuristics. MinVIS thus has the following inference pipeline: we first apply the trained query-based image instance segmentation model to video frames independently. The segmented instances are then tracked by bipartite matching of the corresponding queries. This inference is done in an online fashion and does not need to process the whole video at once. MinVIS thus has the practical advantages of reducing both the labeling costs and the memory requirements, while not sacrificing the VIS performance. Code is available at: https://github.com/NVlabs/MinVIS
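To make the tracking step concrete, below is a minimal sketch of how per-frame instance queries could be associated across frames by bipartite matching, as described in the abstract. The helper names (`match_queries`, `track_video`) and the use of cosine similarity with Hungarian matching via SciPy are illustrative assumptions, not the authors' implementation; see the repository linked above for the actual code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def match_queries(prev_queries: np.ndarray, curr_queries: np.ndarray) -> np.ndarray:
    """Associate instance queries of two consecutive frames by bipartite matching.

    prev_queries: (N, D) query embeddings from the previous frame.
    curr_queries: (N, D) query embeddings from the current frame.
    Returns assign, where assign[i] is the previous-frame query matched to
    current-frame query i.
    """
    # Cosine similarity between every (current, previous) query pair.
    prev_n = prev_queries / np.linalg.norm(prev_queries, axis=1, keepdims=True)
    curr_n = curr_queries / np.linalg.norm(curr_queries, axis=1, keepdims=True)
    sim = curr_n @ prev_n.T  # (N, N)

    # Hungarian matching: maximize total similarity by minimizing its negation.
    row_ind, col_ind = linear_sum_assignment(-sim)
    assign = np.empty(len(curr_queries), dtype=int)
    assign[row_ind] = col_ind
    return assign


def track_video(per_frame_queries):
    """Propagate instance IDs frame by frame (online; no whole-video processing).

    per_frame_queries: list of (N, D) arrays, one per frame, produced by the
    image instance segmentation model applied to each frame independently.
    """
    ids = list(range(len(per_frame_queries[0])))  # initial instance IDs
    tracks = [list(ids)]
    for prev_q, curr_q in zip(per_frame_queries[:-1], per_frame_queries[1:]):
        assign = match_queries(prev_q, curr_q)
        ids = [tracks[-1][j] for j in assign]
        tracks.append(ids)
    return tracks
```

Because each frame only needs the queries of the immediately preceding frame, this association runs online with constant memory in the video length, which is the practical advantage noted in the abstract.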