Cross-view correspondence is a fundamental capability for spatial understanding and embodied AI. However, it remains far from realized in Vision-Language Models (VLMs), particularly at the level of precise point correspondence, which is crucial for fine-grained affordance interaction. We therefore propose the Cross-View Point Correspondence (CVPC) task and CrossPoint-Bench, a comprehensive, hierarchically designed benchmark inspired by the human cognitive process of "perceive", "reason", and "correspond". Our evaluation shows that state-of-the-art models (e.g., Gemini-2.5-Pro) still fall far behind humans, with a gap of over 54.65% in overall accuracy, exposing the challenge of transitioning from coarse-grained judgement to fine-grained coordinate prediction. To address this problem, we construct CrossPoint-378K, a dataset of 378K question-answer pairs across 900 scenes, focused on actionable affordance regions that better reflect real-world manipulation and interaction scenarios. Furthermore, we present CroPond, a model trained on the CrossPoint-378K dataset. CroPond achieves state-of-the-art performance on CrossPoint-Bench, surpassing Gemini-2.5-Pro by 39.7% in accuracy, and offers a foundation for advancing future work on cross-view correspondence. The benchmark, dataset, and model are publicly available at https://github.com/WangYipu2002/CrossPoint.