Partially-supervised instance segmentation is a task that requires segmenting objects from novel, unseen categories by learning from a limited set of seen categories with annotated masks, thereby alleviating the heavy annotation burden. The key to addressing this task is to build an effective class-agnostic mask segmentation model. Unlike previous methods that learn such models only on seen categories, in this paper we propose a new method, named ContrastMask, which learns a mask segmentation model on both seen and unseen categories under a unified pixel-level contrastive learning framework. In this framework, annotated masks of seen categories and pseudo masks of unseen categories serve as a prior for contrastive learning, where features from the mask regions (foreground) are pulled together and contrasted against those from the background, and vice versa. Through this framework, feature discrimination between foreground and background is largely improved, facilitating the learning of the class-agnostic mask segmentation model. Exhaustive experiments on the COCO dataset demonstrate the superiority of our method, which outperforms previous state-of-the-art methods.
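To make the described pixel-level contrastive scheme concrete, the following is a minimal sketch (not the authors' exact implementation) of a supervised contrastive loss over foreground/background pixel embeddings in PyTorch. It assumes per-pixel features `feats` of shape (N, D) and a binary region label `labels` derived from an annotated or pseudo mask (1 = foreground, 0 = background); the temperature value and the random sampling in the usage example are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def pixel_contrastive_loss(feats: torch.Tensor,
                           labels: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    """Supervised contrastive loss over foreground/background pixel features.

    Pixels sharing a region label (both foreground or both background) are
    treated as positives; pixels from the opposite region act as negatives.
    """
    feats = F.normalize(feats, dim=1)            # (N, D), unit-length embeddings
    sim = feats @ feats.t() / temperature        # (N, N) pairwise similarities

    # Positive-pair mask: same region label, excluding self-pairs.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    self_mask = torch.eye(len(labels), device=feats.device)
    pos_mask = pos_mask * (1.0 - self_mask)

    # Log-softmax over all non-self pairs, then average over positive pairs.
    logits = sim - sim.max(dim=1, keepdim=True).values.detach()
    exp_logits = torch.exp(logits) * (1.0 - self_mask)
    log_prob = logits - torch.log(exp_logits.sum(dim=1, keepdim=True) + 1e-12)

    num_pos = pos_mask.sum(dim=1).clamp(min=1.0)
    loss = -(pos_mask * log_prob).sum(dim=1) / num_pos
    return loss.mean()


# Usage: sample pixel embeddings from a RoI feature map and label each pixel
# by its (annotated or pseudo) mask region before computing the loss.
feats = torch.randn(256, 128)
labels = (torch.rand(256) > 0.5).long()
print(pixel_contrastive_loss(feats, labels))
```

Pulling same-region pixels together while pushing apart cross-region pairs is what sharpens the foreground/background feature discrimination described above; how positives and negatives are sampled and weighted in ContrastMask itself is detailed in the paper.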