Recently, transformer-based image segmentation methods have achieved notable success over previous solutions. In the video domain, however, how to effectively model temporal context while attending to object instances across frames remains an open problem. In this paper, we propose an online video instance segmentation framework with a novel instance-aware temporal fusion method. We first leverage a hybrid representation, i.e., a latent code in the global context (instance code) and CNN feature maps, to represent instance- and pixel-level features, respectively. Based on this representation, we introduce a cropping-free temporal fusion approach to model the temporal consistency between video frames. Specifically, we encode global instance-specific information in the instance code and build up inter-frame contextual fusion with hybrid attentions between the instance codes and CNN feature maps. Inter-frame consistency between the instance codes is further enforced with order constraints. By leveraging the learned hybrid temporal consistency, we can directly retrieve and maintain instance identities across frames, eliminating the complicated frame-wise instance matching of prior methods. Extensive experiments have been conducted on popular VIS datasets, i.e., Youtube-VIS-19/21. Our model achieves the best performance among all online VIS methods. Notably, our model also eclipses all offline methods when using the ResNet-50 backbone.
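As a rough illustration of the hybrid attention described above, the PyTorch sketch below lets instance codes attend first to the current frame's CNN feature map (instance-to-pixel attention) and then to the previous frame's instance codes (inter-frame attention). The module name, dimensions, and layer layout are assumptions made for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class HybridTemporalFusion(nn.Module):
    """Minimal sketch (hypothetical): instance codes attend to pixel features,
    then to the previous frame's instance codes, to fuse temporal context."""

    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        self.pixel_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.frame_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, inst_codes, pixel_feats, prev_inst_codes):
        # inst_codes:      (B, N, C)  instance codes of the current frame
        # pixel_feats:     (B, HW, C) flattened CNN feature map of the current frame
        # prev_inst_codes: (B, N, C)  instance codes of the previous frame
        x, _ = self.pixel_attn(inst_codes, pixel_feats, pixel_feats)
        inst_codes = self.norm1(inst_codes + x)
        x, _ = self.frame_attn(inst_codes, prev_inst_codes, prev_inst_codes)
        return self.norm2(inst_codes + x)
```

Under this reading, an updated instance code carries both pixel-level evidence from the current frame and identity cues from the previous frame, which is what would allow identities to be read off directly without frame-wise matching.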
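The order constraint on instance codes could, for example, take the form of a contrastive loss that keeps the i-th code of one frame closest to the i-th code of the next frame, so the same slot retains the same identity over time. The function below is a hypothetical sketch of such a loss; the contrastive formulation, symmetric averaging, and temperature value are assumptions, not the paper's definition.

```python
import torch
import torch.nn.functional as F

def order_consistency_loss(codes_t, codes_t1, temperature=0.1):
    """Hypothetical order constraint: the i-th instance code of frame t should
    match the i-th code of frame t+1 more closely than any other code.

    codes_t, codes_t1: (N, C) instance codes of two adjacent frames.
    """
    a = F.normalize(codes_t, dim=-1)
    b = F.normalize(codes_t1, dim=-1)
    logits = a @ b.t() / temperature                    # (N, N) pairwise similarities
    targets = torch.arange(a.size(0), device=a.device)  # positives on the diagonal
    # symmetric cross-entropy over both matching directions
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```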