Can our video understanding systems perceive objects under heavy occlusion in a scene? To answer this question, we collect a large-scale dataset called OVIS for occluded video instance segmentation, that is, to simultaneously detect, segment, and track instances in occluded scenes. OVIS consists of 296k high-quality instance masks from 25 semantic categories, where object occlusion commonly occurs. While the human vision system can understand occluded instances through contextual reasoning and association, our experiments suggest that current video understanding systems cannot. On the OVIS dataset, the highest AP achieved by state-of-the-art algorithms is only 16.3, which reveals that we are still at a nascent stage for understanding objects, instances, and videos in real-world scenarios. We also present a simple plug-and-play module that performs temporal feature calibration to complement object cues missing due to occlusion. Built upon MaskTrack R-CNN and SipMask, it obtains a remarkable AP improvement on the OVIS dataset. The OVIS dataset and project code are available at http://songbai.site/ovis .
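The abstract does not detail how the temporal feature calibration module works. Below is a minimal, hedged sketch of the general idea only, not the paper's actual implementation: features from a reference frame are aggregated into the current frame via a softmax affinity map, so that cues occluded in the current frame can be recovered from another frame. All function names, shapes, and the residual-fusion choice here are illustrative assumptions.

```python
import numpy as np

def calibrate_features(cur, ref):
    """Illustrative sketch (not the paper's method): aggregate
    reference-frame features into the current frame by softmax
    affinity, then fuse them residually with the current features.

    cur, ref: (C, H, W) feature maps from two frames of a video.
    """
    C, H, W = cur.shape
    cur_flat = cur.reshape(C, -1)           # (C, HW)
    ref_flat = ref.reshape(C, -1)           # (C, HW)
    # Pairwise affinity between every current and reference location,
    # scaled by sqrt(C) for numerical stability (attention-style).
    affinity = (cur_flat.T @ ref_flat) / np.sqrt(C)   # (HW, HW)
    w = np.exp(affinity - affinity.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)       # softmax over reference locations
    # For each current location, a weighted sum of reference features.
    aligned = (ref_flat @ w.T).reshape(C, H, W)
    return cur + aligned                    # residual fusion of calibrated cues

rng = np.random.default_rng(0)
cur = rng.standard_normal((8, 4, 4))
ref = rng.standard_normal((8, 4, 4))
out = calibrate_features(cur, ref)
print(out.shape)
```

In practice such a module would operate on CNN feature maps inside a framework like MaskTrack R-CNN or SipMask, which is what makes it plug-and-play: it changes only the features, not the detection or tracking heads.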