Referring expression segmentation aims to segment from an image the object described by a natural language expression. Despite recent progress on this task, existing models may not fully capture the semantics and visual representations of individual concepts, which limits their generalization capability, especially when handling novel compositions of learned concepts. In this work, through the lens of meta learning, we propose a Meta Compositional Referring Expression Segmentation (MCRES) framework to enhance the compositional generalization performance of models. Specifically, to handle various levels of novel compositions, our framework first uses the training data to construct a virtual training set and multiple virtual testing sets, where the samples in each virtual testing set contain a different level of novel compositions with respect to the virtual training set. Then, following a novel meta optimization scheme that trains the model on the virtual training set while requiring good testing performance on the virtual testing sets, our framework effectively drives the model to better capture the semantics and visual representations of individual concepts, and thus achieves robust generalization even when handling novel compositions. Extensive experiments on three benchmark datasets demonstrate the effectiveness of our framework.
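The meta optimization described above follows a MAML-style recipe: take a virtual gradient step on the virtual training set, then require the virtually updated model to also perform well on the virtual testing sets that hold novel compositions. Below is a minimal PyTorch sketch of one such step under that assumption; it is not the authors' implementation, and the names (meta_step, seg_loss), the batch structure, and the choice to sum the training and testing losses are all hypothetical placeholders.

```python
import torch
from torch.func import functional_call


def meta_step(model, virtual_train_batch, virtual_test_batches,
              seg_loss, inner_lr=1e-3):
    """One hypothetical meta-optimization step (MAML-style sketch).

    virtual_train_batch: (image, expression, mask) from the virtual training set.
    virtual_test_batches: iterable of (image, expression, mask) tuples, one per
        virtual testing set, each holding a different level of novel compositions.
    """
    # Inner loss: segmentation loss on the virtual training set.
    img, expr, mask = virtual_train_batch
    train_loss = seg_loss(model(img, expr), mask)

    # Virtual update: one gradient step on "fast weights".
    # create_graph=True keeps the graph so the meta loss below can
    # backpropagate through this inner update.
    names, params = zip(*[(n, p) for n, p in model.named_parameters()
                          if p.requires_grad])
    grads = torch.autograd.grad(train_loss, params, create_graph=True)
    fast = {n: p - inner_lr * g for n, p, g in zip(names, params, grads)}

    # Outer loss: evaluate the virtually updated model on each virtual
    # testing set, so the final update favors parameters that generalize
    # to novel compositions after training on the virtual training set.
    test_loss = sum(
        seg_loss(functional_call(model, fast, (t_img, t_expr)), t_mask)
        for t_img, t_expr, t_mask in virtual_test_batches
    )

    # Combining both losses is one common choice in MAML-style schemes;
    # the paper's exact objective may differ.
    return train_loss + test_loss
```

In an actual training loop, one would backpropagate this combined loss through both the inner update and the original parameters, then step the outer optimizer as usual (loss.backward(); optimizer.step()).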