Recently, open-vocabulary learning has emerged to accomplish segmentation for arbitrary categories given text-based descriptions, extending segmentation systems to more general-purpose application scenarios. However, existing methods are devoted to designing specialized architectures or parameters for specific segmentation tasks. These customized design paradigms lead to fragmentation across segmentation tasks, hindering the uniformity of segmentation models. Hence, in this paper, we propose FreeSeg, a generic framework for Unified, Universal and Open-Vocabulary Image Segmentation. FreeSeg optimizes an all-in-one network via one-shot training and employs the same architecture and parameters to handle diverse segmentation tasks seamlessly at inference. Additionally, adaptive prompt learning enables the unified model to capture task-aware and category-sensitive concepts, improving robustness in multi-task and varied scenarios. Extensive experiments demonstrate that FreeSeg establishes new state-of-the-art results in performance and generalization on three segmentation tasks, outperforming the best task-specific architectures by a large margin: 5.5% mIoU on semantic segmentation, 17.6% mAP on instance segmentation, and 20.1% PQ on panoptic segmentation for unseen classes on COCO.
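To illustrate the idea behind adaptive prompt learning — conditioning the model's text queries on both the task and the category — here is a minimal sketch. In FreeSeg the prompts are learned continuous embeddings; this sketch approximates them with hypothetical text templates, and the function name and template wording are assumptions for illustration only.

```python
# Hypothetical sketch of task-aware, category-sensitive prompt construction.
# FreeSeg learns continuous prompt embeddings; plain-text placeholders are
# used here only to convey the conditioning structure.

TASKS = ("semantic", "instance", "panoptic")

def build_prompt(task: str, category: str) -> str:
    """Compose a text query conditioned on both the task and the category."""
    if task not in TASKS:
        raise ValueError(f"unknown task: {task}")
    return f"a photo of a {category}, for {task} segmentation"

# One unified set of prompts can then be generated for every task/category
# pair and fed to a text encoder, instead of a task-specific design.
prompts = [build_prompt(t, c) for t in TASKS for c in ("cat", "dog")]
```

The same prompt-construction code serves all three tasks, mirroring the all-in-one design: only the conditioning inputs change, not the architecture.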