Image-text models excel at image-level tasks but struggle with detailed visual understanding. While these models provide strong visual-language alignment, segmentation models such as SAM2 offer precise spatial boundaries for objects. To combine these complementary strengths, we propose TextRegion, a simple, effective, and training-free framework that pairs image-text models with SAM2 to generate powerful text-aligned region tokens. These tokens enable detailed visual understanding while preserving open-vocabulary capabilities, and they can be directly applied to various downstream tasks, including open-world semantic segmentation, referring expression comprehension, and grounding. We conduct extensive evaluations and consistently achieve superior or competitive performance compared with state-of-the-art training-free methods. Additionally, our framework is compatible with many image-text models, making it highly practical and easily extensible as stronger models emerge. Code is available at: https://github.com/avaxiao/TextRegion.
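To make the idea of text-aligned region tokens concrete, the sketch below shows one minimal way such tokens could be formed and matched against text: patch features from a CLIP-style image-text model are average-pooled inside each SAM2 mask and then compared with text embeddings by cosine similarity. The function names, the simple mask-average pooling, and the assumption that patch features and masks share the same spatial grid are illustrative choices for this sketch, not the exact TextRegion procedure.

```python
# Minimal sketch: build region tokens from patch features and SAM2 masks,
# then assign each region the best-matching text class.
# Assumes patch_feats come from a CLIP-style image-text model and masks from SAM2,
# both already resized to the same H x W grid (an assumption of this sketch).
import torch
import torch.nn.functional as F

def region_tokens(patch_feats: torch.Tensor, masks: torch.Tensor) -> torch.Tensor:
    """patch_feats: (H*W, D) patch embeddings; masks: (M, H, W) binary region masks."""
    M, H, W = masks.shape
    flat = masks.view(M, H * W).float()                          # (M, H*W)
    weights = flat / flat.sum(dim=1, keepdim=True).clamp(min=1e-6)
    tokens = weights @ patch_feats                               # (M, D) mask-averaged features
    return F.normalize(tokens, dim=-1)                           # unit-norm region tokens

def classify_regions(tokens: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
    """Pick, for each region token, the text embedding with highest cosine similarity."""
    sims = tokens @ F.normalize(text_embeds, dim=-1).T           # (M, C) similarity matrix
    return sims.argmax(dim=-1)                                   # (M,) predicted class per region
```

Because each region token lives in the same embedding space as the text encoder, the same tokens can serve open-world semantic segmentation (label every mask) or referring-expression tasks (pick the mask closest to one query embedding) without any retraining.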