Video Panoptic Segmentation (VPS) aims to achieve comprehensive pixel-level scene understanding by segmenting all pixels and associating objects in a video. Current solutions can be categorized into online and near-online approaches. Evolving over time, each category has developed its own specialized designs, making it nontrivial to adapt models across categories. To alleviate this discrepancy, in this work, we propose a unified approach for online and near-online VPS. The meta architecture of the proposed Video-kMaX consists of two components: a within-clip segmenter (for clip-level segmentation) and a cross-clip associater (for association beyond clips). We propose clip-kMaX (clip k-means mask transformer) and HiLA-MB (Hierarchical Location-Aware Memory Buffer) to instantiate the segmenter and associater, respectively. Our general formulation includes the online scenario as a special case by adopting a clip length of one. Without bells and whistles, Video-kMaX sets a new state-of-the-art on KITTI-STEP and VIPSeg for video panoptic segmentation, and on VSPW for video semantic segmentation. Code will be made publicly available.
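To make the two-component meta architecture concrete, the following is a minimal sketch under assumed interfaces; the names `video_panoptic_inference`, `clip_segmenter`, `associater`, and `clip_len` are hypothetical stand-ins for clip-kMaX and HiLA-MB, not the authors' implementation. It also illustrates how a clip length of one degenerates to the online scenario.

```python
# Minimal sketch (hypothetical names, not the authors' code): a within-clip
# segmenter produces clip-level panoptic predictions, and a cross-clip
# associater (a generic memory buffer standing in for HiLA-MB) links object
# identities across clips. Setting clip_len = 1 recovers online VPS.
from typing import Any, Callable, Dict, List


def video_panoptic_inference(
    frames: List[Any],
    clip_segmenter: Callable[[List[Any]], List[Dict]],  # stand-in for clip-kMaX
    associater: Any,                                     # stand-in for HiLA-MB
    clip_len: int = 2,
) -> List[Dict]:
    """Near-online VPS loop; clip_len=1 degenerates to the online setting."""
    results: List[Dict] = []
    for start in range(0, len(frames), clip_len):
        clip = frames[start:start + clip_len]
        # Within-clip segmentation: object IDs are consistent inside the clip.
        clip_preds = clip_segmenter(clip)
        # Cross-clip association: map clip-level IDs to video-level track IDs
        # via the memory buffer, then refresh the buffer with the new clip.
        clip_preds = associater.associate(clip_preds)
        associater.update(clip_preds)
        results.extend(clip_preds)
    return results
```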