In this paper we introduce a Transformer-based approach to video object segmentation (VOS). To address the compounding-error and scalability issues of prior work, we propose a scalable, end-to-end method for VOS called Sparse Spatiotemporal Transformers (SST). SST extracts per-pixel representations for each object in a video using sparse attention over spatiotemporal features. Our attention-based formulation of VOS allows a model to learn to attend over a history of multiple frames and provides a suitable inductive bias for the correspondence-like computations required for motion segmentation. We demonstrate the effectiveness of attention-based networks over recurrent networks in the spatiotemporal domain. Our method achieves competitive results on YouTube-VOS and DAVIS 2017 with improved scalability and robustness to occlusions compared with the state of the art.
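As a rough illustration (not the authors' implementation), the sketch below shows one way sparse attention over spatiotemporal features could yield per-pixel representations: each query pixel of the current frame attends only to a small k x k spatial window in each frame of a short history, rather than to the full T x H x W feature volume. The function name, window size, and tensor layout are assumptions made for this example.

```python
# Minimal sketch of sparse spatiotemporal attention (assumed layout, not the
# released SST code). Each pixel of the current frame attends to a k x k
# neighbourhood in every frame of a T-frame history.
import torch
import torch.nn.functional as F


def sparse_spatiotemporal_attention(query, memory, window=7):
    """
    query:  (B, C, H, W)     per-pixel features of the current frame
    memory: (B, T, C, H, W)  features of the T-frame history (keys == values here)
    returns (B, C, H, W)     attended per-pixel representation
    """
    B, T, C, H, W = memory.shape
    pad = window // 2

    # Extract a k*k neighbourhood around every pixel of every memory frame.
    mem = memory.reshape(B * T, C, H, W)
    neigh = F.unfold(mem, kernel_size=window, padding=pad)        # (B*T, C*k*k, H*W)
    neigh = neigh.reshape(B, T, C, window * window, H * W)
    neigh = neigh.permute(0, 4, 1, 3, 2).reshape(B, H * W, T * window * window, C)

    q = query.reshape(B, C, H * W).permute(0, 2, 1).unsqueeze(2)  # (B, H*W, 1, C)

    # Scaled dot-product attention restricted to the sparse neighbourhood.
    attn = (q * neigh).sum(-1) / C ** 0.5                         # (B, H*W, T*k*k)
    attn = attn.softmax(dim=-1)
    out = (attn.unsqueeze(-1) * neigh).sum(2)                     # (B, H*W, C)
    return out.permute(0, 2, 1).reshape(B, C, H, W)


if __name__ == "__main__":
    # Toy usage: 3-frame history, 64-channel features on a 32x32 grid.
    q = torch.randn(1, 64, 32, 32)
    mem = torch.randn(1, 3, 64, 32, 32)
    print(sparse_spatiotemporal_attention(q, mem).shape)  # torch.Size([1, 64, 32, 32])
```

Restricting each query to a local spatiotemporal window keeps the attention cost linear in the number of history frames rather than quadratic in the full feature volume, which is the scalability property the abstract refers to.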