We present iGibson, a novel simulation environment for developing robotic solutions to interactive tasks in large-scale realistic scenes. Our environment contains 15 fully interactive, home-sized scenes with 108 rooms populated with rigid and articulated objects. The scenes are replicas of real-world homes, with the distribution and layout of objects aligned to those of the real world. iGibson integrates several key features to facilitate the study of interactive tasks: i) generation of high-quality virtual sensor signals (RGB, depth, segmentation, LiDAR, optical flow, among others), ii) domain randomization of object materials (both visual and physical) and/or shapes, iii) integrated sampling-based motion planners that generate collision-free trajectories for robot bases and arms, and iv) an intuitive human-iGibson interface that enables efficient collection of human demonstrations. Through experiments, we show that the full interactivity of the scenes enables agents to learn useful visual representations that accelerate the training of downstream manipulation tasks. We also show that iGibson's features enable the generalization of navigation agents, and that the human-iGibson interface and integrated motion planners facilitate efficient imitation learning of human-demonstrated (mobile) manipulation behaviors. iGibson is open source and ships with comprehensive examples and documentation. For more information, visit our project website: http://svl.stanford.edu/igibson/
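To make the workflow concrete, the following is a minimal sketch of interacting with an iGibson environment through its Gym-style Python API, the route by which the virtual sensor signals above are exposed to an agent. The module path follows the open-source release, but the config filename and the exact set of enabled modalities are assumptions and may vary across versions.

```python
# Minimal sketch: create an iGibson environment and read virtual sensor
# observations. Config filename and modality keys are assumptions.
from igibson.envs.igibson_env import iGibsonEnv

# The YAML config (hypothetical filename) selects the scene, robot, task,
# and which sensor modalities (e.g., rgb, depth, segmentation, scan) to render.
env = iGibsonEnv(config_file="turtlebot_nav.yaml", mode="headless")

obs = env.reset()  # dict of requested modalities, e.g., obs["rgb"], obs["depth"]
for _ in range(100):
    action = env.action_space.sample()          # random exploration
    obs, reward, done, info = env.step(action)  # standard Gym-style step
    if done:
        obs = env.reset()
env.close()
```

The Gym-style interface is a deliberate design choice: it lets the same agent code drive navigation and manipulation tasks across all 15 scenes, with sensor modalities toggled in the config rather than in agent code.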