In contrast to conventional image matting, which either requires user-defined scribbles/trimaps to extract a specific foreground object or indiscriminately extracts all foreground objects in an image, we introduce a new task named Referring Image Matting (RIM) in this paper. RIM aims to extract the meticulous alpha matte of the specific object that best matches a given natural language description, thus enabling a more natural and simpler way to instruct image matting. First, we establish a large-scale challenging dataset, RefMatte, by designing a comprehensive image composition and expression generation engine that automatically produces high-quality images along with diverse text attributes based on public datasets. RefMatte consists of 230 object categories, 47,500 images, 118,749 expression-region entities, and 474,996 expressions. Additionally, we construct a real-world test set of 100 high-resolution natural images with manually annotated complex phrases to evaluate the out-of-domain generalization ability of RIM methods. Furthermore, we present a novel baseline method for RIM, CLIPMat, which includes a context-embedded prompt, a text-driven semantic pop-up, and a multi-level details extractor. Extensive experiments on RefMatte in both keyword and expression settings validate the superiority of CLIPMat over representative methods. We hope this work provides novel insights into image matting and encourages more follow-up studies. The dataset, code, and models will be made public at https://github.com/JizhiziLi/RIM.