Video semantic segmentation has achieved great progress under the supervision of large amounts of labelled training data. However, domain adaptive video segmentation, which can mitigate data labelling constraints by adapting from a labelled source domain to an unlabelled target domain, has been largely neglected. We design temporal pseudo supervision (TPS), a simple and effective method that explores the idea of consistency training for learning effective representations from unlabelled target videos. Unlike traditional consistency training, which builds consistency in the spatial space, we explore consistency training in the spatiotemporal space by enforcing model consistency across augmented video frames, which helps the model learn from more diverse target data. Specifically, we design cross-frame pseudo labelling, which provides pseudo supervision from previous video frames while the model learns from augmented current video frames. Cross-frame pseudo labelling encourages the network to produce high-certainty predictions, which facilitates effective consistency training with cross-frame augmentation. Extensive experiments over multiple public datasets show that TPS is simpler to implement, much more stable to train, and achieves superior video segmentation accuracy compared with the state of the art.
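As a rough illustration of the cross-frame pseudo labelling described above, the sketch below shows one possible target-domain training step in PyTorch. It is a minimal sketch, not the authors' released implementation: all names (`warp`, `tps_target_loss`, `conf_thresh`), the flow-based warping, and the confidence masking are illustrative assumptions inferred from the abstract.

```python
# Hypothetical sketch of cross-frame pseudo labelling for TPS-style
# consistency training; details are assumptions, not the official code.
import torch
import torch.nn.functional as F

def warp(logits, flow):
    """Warp previous-frame logits to the current frame with a (B, 2, H, W)
    optical-flow field (assumption: flow given in pixel offsets)."""
    b, _, h, w = logits.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=flow.device),
        torch.arange(w, device=flow.device),
        indexing="ij",
    )
    # Pixel coordinates of the current frame, shifted by the flow.
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0) + flow
    # Normalise coordinates to [-1, 1] as required by grid_sample.
    grid[:, 0] = 2.0 * grid[:, 0] / (w - 1) - 1.0
    grid[:, 1] = 2.0 * grid[:, 1] / (h - 1) - 1.0
    return F.grid_sample(logits, grid.permute(0, 2, 3, 1), align_corners=True)

def tps_target_loss(model, frame_prev, frame_curr_aug, flow, conf_thresh=0.9):
    """One consistency-training step on an unlabelled target-domain frame pair:
    the previous frame's prediction supervises the augmented current frame."""
    with torch.no_grad():  # pseudo labels carry no gradient
        prev_logits = model(frame_prev)
        warped = warp(prev_logits, flow)  # align prediction to current frame
        prob, pseudo = F.softmax(warped, dim=1).max(dim=1)
    loss = F.cross_entropy(model(frame_curr_aug), pseudo, reduction="none")
    mask = (prob > conf_thresh).float()  # keep high-certainty pixels only
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)
```

The key design point the abstract highlights is that the pseudo label and the prediction come from different frames, so the augmentation is applied across time rather than only within a single image.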