Inspired by how humans reason over discrete objects and their relationships, we explore whether compact object-centric and object-relation representations can serve as a foundation for multitask robotic manipulation. Most existing multitask robotic models rely on dense embeddings that entangle object and background cues, raising concerns about both efficiency and interpretability. In contrast, we study object-relation-centric representations as a pathway to more structured, efficient, and explainable visuomotor control. Our contributions are twofold. First, we introduce LIBERO+, a fine-grained benchmark dataset designed to enable and evaluate object-relation reasoning in robotic manipulation. Unlike prior datasets, LIBERO+ provides object-centric annotations that enrich demonstrations with box- and mask-level labels as well as instance-level temporal tracking, supporting compact and interpretable visuomotor representations. Second, we propose SlotVLA, a slot-attention-based framework that captures both objects and their relations for action decoding. It uses a slot-based visual tokenizer to maintain temporally consistent object representations, a relation-centric decoder to produce task-relevant embeddings, and an LLM-driven module that translates these embeddings into executable actions. Experiments on LIBERO+ demonstrate that object-centric slot and object-relation slot representations drastically reduce the number of required visual tokens while providing competitive generalization. Together, LIBERO+ and SlotVLA provide a compact, interpretable, and effective foundation for advancing object-relation-centric robotic manipulation.
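To make the slot-based tokenization idea concrete, the sketch below shows a minimal NumPy version of iterative slot attention, in which a small set of slot vectors competes for input features via a softmax over slots. This is a hypothetical illustration simplified from the standard slot-attention formulation (e.g., it replaces the usual GRU update with a simple interpolation), not the authors' SlotVLA implementation; all function and variable names are assumptions.

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def slot_attention(inputs, num_slots=4, dim=8, iters=3, seed=0):
    """Iteratively bind N input features to a small set of slots.

    inputs: (N, dim) array of visual features (e.g., patch embeddings).
    Returns a (num_slots, dim) array; the softmax over slots makes the
    slots compete for input features, yielding object-like groupings.
    """
    rng = np.random.default_rng(seed)
    slots = rng.normal(size=(num_slots, dim))
    for _ in range(iters):
        # Attention logits: slots act as queries over the input features (keys).
        logits = slots @ inputs.T / np.sqrt(dim)          # (num_slots, N)
        attn = softmax(logits, axis=0)                     # normalize over slots
        attn = attn / attn.sum(axis=1, keepdims=True)      # weighted mean per slot
        updates = attn @ inputs                            # (num_slots, dim)
        slots = 0.5 * slots + 0.5 * updates  # simplified update (GRU in the original)
    return slots

feats = np.random.default_rng(1).normal(size=(16, 8))  # 16 dummy "patch" features
slots = slot_attention(feats)
print(slots.shape)  # (4, 8): far fewer tokens than the 16 input features
```

The key point for the abstract's efficiency claim is the shape change: an arbitrary number of dense visual tokens is compressed into a fixed, small number of slot tokens, which is what reduces the visual token count passed to downstream action decoding.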