Recent progress in robot learning has been driven by large-scale datasets and powerful visuomotor policy architectures, yet policy robustness remains limited by the substantial cost of collecting diverse demonstrations, particularly for spatial generalization in manipulation tasks. To reduce repetitive data collection, we present Real2Edit2Real, a framework that generates new demonstrations by bridging 3D editability with 2D visual data through a 3D control interface. Our approach first reconstructs scene geometry from multi-view RGB observations with a metric-scale 3D reconstruction model. Based on the reconstructed geometry, we perform depth-reliable 3D editing on point clouds to generate new manipulation trajectories, geometrically correcting the robot poses so that the recovered depth remains physically consistent and can serve as a reliable condition for synthesizing new demonstrations. Finally, we propose a multi-conditional video generation model that uses depth as the primary control signal, together with action, edge, and ray maps, to synthesize spatially augmented multi-view manipulation videos. Experiments on four real-world manipulation tasks demonstrate that policies trained on data generated from only 1-5 source demonstrations match or outperform those trained on 50 real-world demonstrations, improving data efficiency by 10-50x. Moreover, results on height and texture editing demonstrate the framework's flexibility and extensibility, indicating its potential to serve as a unified data generation framework.
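To make the geometric side of the editing step concrete, the following minimal Python sketch (our own illustration, not the authors' released code; all names and the toy data are assumptions) shows the core idea behind consistent 3D editing: when a target object is moved by a rigid spatial edit, its point cloud and the end-effector poses that interact with it are transformed by the same SE(3) transform, so the edited trajectory and the depth rendered from the edited scene stay geometrically consistent.

```python
# Minimal, illustrative sketch of rigid 3D editing with consistent pose correction.
# All function names and the toy data are hypothetical, not the paper's actual API.

import numpy as np

def se3(rotation_z_deg: float, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform: yaw about z plus a translation."""
    theta = np.deg2rad(rotation_z_deg)
    T = np.eye(4)
    T[:3, :3] = np.array([
        [np.cos(theta), -np.sin(theta), 0.0],
        [np.sin(theta),  np.cos(theta), 0.0],
        [0.0,            0.0,           1.0],
    ])
    T[:3, 3] = translation
    return T

def edit_object_points(points_xyz: np.ndarray, T_edit: np.ndarray) -> np.ndarray:
    """Apply the rigid edit to an (N, 3) object point cloud."""
    homogeneous = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    return (homogeneous @ T_edit.T)[:, :3]

def correct_grasp_pose(T_grasp: np.ndarray, T_edit: np.ndarray) -> np.ndarray:
    """Move an end-effector pose (4x4, world frame) together with the object,
    keeping the same relative pose between gripper and edited object."""
    return T_edit @ T_grasp

if __name__ == "__main__":
    # Toy object: a small cube of points on the tabletop.
    cube = np.random.uniform(-0.02, 0.02, size=(200, 3)) + np.array([0.4, 0.0, 0.05])
    # Spatial edit: rotate 30 degrees about z and shift 10 cm along y.
    T_edit = se3(30.0, np.array([0.0, 0.10, 0.0]))
    # Original grasp pose directly above the cube.
    T_grasp = np.eye(4)
    T_grasp[:3, 3] = np.array([0.4, 0.0, 0.15])

    edited_cube = edit_object_points(cube, T_edit)
    corrected_grasp = correct_grasp_pose(T_grasp, T_edit)
    print("edited object centroid:", edited_cube.mean(axis=0).round(3))
    print("corrected grasp position:", corrected_grasp[:3, 3].round(3))
```

In the full pipeline, the depth maps rendered from the edited point cloud would then act as the primary condition for the multi-conditional video generation model, alongside the corrected actions, edge maps, and ray maps.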