This paper addresses the task of segmenting class-agnostic objects in a semi-supervised setting. Although previous detection-based methods achieve relatively good performance, these approaches extract the best proposal with a greedy strategy, which may lose local patch details outside the chosen candidate. In this paper, we propose a novel spatiotemporal graph neural network (STG-Net) that reconstructs more accurate masks for video object segmentation by capturing local contexts from all proposals. In the spatial graph, we treat the object proposals of a frame as nodes and represent their correlations with an edge-weight strategy for mask context aggregation. To capture temporal information from previous frames, we use a memory network that refines the mask of the current frame by retrieving historical masks in a temporal graph. The joint use of local patch details and temporal relationships allows us to better address challenges such as object occlusion and disappearance. Without online learning or fine-tuning, our STG-Net achieves state-of-the-art performance on four large benchmarks (DAVIS, YouTube-VOS, SegTrack-v2, and YouTube-Objects), demonstrating the effectiveness of the proposed approach.
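The mask context aggregation over the spatial graph can be illustrated with a minimal sketch. The abstract does not specify the exact edge-weight strategy, so the version below is a hypothetical NumPy implementation that assumes edges are scored by pairwise feature correlation and softmax-normalized per node before neighbour features are aggregated; it is an illustration of the general idea, not the paper's actual formulation.

```python
import numpy as np

def spatial_graph_aggregate(proposal_feats, sim):
    """Aggregate proposal features over a fully connected spatial graph.

    proposal_feats: (N, D) array of mask-context features, one row per
                    object proposal in a single frame (graph nodes).
    sim:            (N, N) array of pairwise correlation scores between
                    proposals (raw edge weights).
    Returns an (N, D) array of refined node features.
    """
    # Softmax-normalize each row of the correlation matrix so the
    # incoming edge weights of every node sum to 1 (hypothetical choice).
    w = np.exp(sim - sim.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    # Each refined feature is a weighted sum over all proposals,
    # so local details from non-selected proposals are retained.
    return w @ proposal_feats

# Toy usage: four proposals with 8-dimensional features,
# correlations taken as dot-product similarity.
feats = np.random.rand(4, 8)
sim = feats @ feats.T
refined = spatial_graph_aggregate(feats, sim)
```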