Household environments are important testbeds for embodied AI research. Many simulation environments have been proposed for developing learning models that solve everyday household tasks. However, while most environments emphasize interaction, the actions that operate on objects are poorly supported in terms of action types, object types, and interaction physics. To bridge this gap at the action level, we propose RFUniverse, a novel physics-based, action-centric environment for robot learning of everyday household tasks. RFUniverse supports interactions spanning 87 atomic actions and 8 basic object types in a visually and physically plausible way. To demonstrate the usability of the simulation environment, we run learning algorithms on various types of tasks: fruit-picking, cloth-folding, and sponge-wiping for manipulation; stair-chasing for locomotion; room-cleaning for multi-agent collaboration; milk-pouring for task and motion planning; and bimanual-lifting for behavior cloning from a VR interface. The client-side Python APIs, learning code, models, and database will be released. A demo video of the atomic actions can be found in the supplementary materials: \url{https://sites.google.com/view/rfuniverse}