We present OpenSeeD, a simple Open-vocabulary Segmentation and Detection framework that jointly learns from different segmentation and detection datasets. To bridge the gap in vocabulary and annotation granularity, we first introduce a pre-trained text encoder to encode all the visual concepts in the two tasks and learn a common semantic space for them. This alone gives us reasonably good results compared with counterparts trained on the segmentation task only. To further reconcile the two tasks, we identify two discrepancies: $i$) task discrepancy -- segmentation requires extracting masks for both foreground objects and background stuff, while detection merely cares about the former; $ii$) data discrepancy -- box and mask annotations come with different spatial granularity and thus are not directly interchangeable. To address these issues, we propose decoupled decoding to reduce the interference between foreground and background, and conditioned mask decoding to assist in generating masks for given boxes. Combining these three techniques, we develop a simple encoder-decoder model and train it jointly on COCO and Objects365. After pre-training, our model exhibits competitive or stronger zero-shot transferability for both segmentation and detection. Specifically, OpenSeeD beats the state-of-the-art methods for open-vocabulary instance and panoptic segmentation across 5 datasets, and outperforms previous work for open-vocabulary detection on LVIS and ODinW under similar settings. When transferred to specific tasks, our model achieves new state-of-the-art results for panoptic segmentation on COCO and ADE20K, and for instance segmentation on ADE20K and Cityscapes. Finally, we note that OpenSeeD is the first work to explore the potential of joint training on segmentation and detection, and we hope it can serve as a strong baseline for developing a single model for both tasks in the open world.
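To make the two decoding techniques above concrete, the following is a minimal, hypothetical PyTorch sketch -- not the authors' implementation -- of how decoupled decoding (separate learnable foreground and background query sets) and conditioned mask decoding (extra queries projected from given boxes) could coexist in a single mask decoder. All module names, dimensions, and the box-to-query projection are illustrative assumptions.

```python
# Hypothetical sketch of decoupled + box-conditioned mask decoding (not the OpenSeeD code).
import torch
import torch.nn as nn


class DecoupledConditionedDecoder(nn.Module):
    def __init__(self, d_model=256, n_fg=100, n_bg=50, n_heads=8):
        super().__init__()
        # Decoupled decoding: separate learnable queries for foreground (things) and background (stuff).
        self.fg_queries = nn.Embedding(n_fg, d_model)
        self.bg_queries = nn.Embedding(n_bg, d_model)
        # Conditioned mask decoding: project given box coordinates (cx, cy, w, h) into extra queries.
        self.box_proj = nn.Linear(4, d_model)
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=3)
        self.mask_embed = nn.Linear(d_model, d_model)  # maps decoded queries to mask kernels

    def forward(self, feat_tokens, mask_feat, boxes=None):
        # feat_tokens: (B, HW, C) flattened encoder features used as decoder memory
        # mask_feat:   (B, C, H, W) per-pixel embedding map for dot-product mask prediction
        # boxes:       (B, K, 4) normalized boxes from a detection dataset, optional
        B = feat_tokens.size(0)
        queries = [self.fg_queries.weight.expand(B, -1, -1),
                   self.bg_queries.weight.expand(B, -1, -1)]
        if boxes is not None:
            queries.append(self.box_proj(boxes))  # box-conditioned queries
        q = torch.cat(queries, dim=1)
        q = self.decoder(q, feat_tokens)          # cross-attend to image features
        kernels = self.mask_embed(q)              # (B, N, C)
        # Dot-product each query kernel with the pixel embedding map to obtain mask logits.
        masks = torch.einsum("bnc,bchw->bnhw", kernels, mask_feat)
        return masks


if __name__ == "__main__":
    dec = DecoupledConditionedDecoder()
    feats = torch.randn(2, 64 * 64, 256)
    pixel_map = torch.randn(2, 256, 64, 64)
    boxes = torch.rand(2, 5, 4)
    print(dec(feats, pixel_map, boxes).shape)  # torch.Size([2, 155, 64, 64])
```

In this sketch, foreground and background queries never share slots, which mimics the intent of decoupled decoding, while the box-projected queries let mask supervision be hallucinated for box-only annotations, which mimics the intent of conditioned mask decoding.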