We introduce the Universal Manipulation Policy Network (UMPNet) -- a single image-based policy network that infers closed-loop action sequences for manipulating arbitrary articulated objects. To infer a wide range of action trajectories, the policy supports 6DoF action representation and varying trajectory length. To handle a diverse set of objects, the policy learns from objects with different articulation structures and generalizes to unseen objects or categories. The policy is trained with self-guided exploration without any human demonstrations, scripted policy, or pre-defined goal conditions. To support effective multi-step interaction, we introduce a novel Arrow-of-Time action attribute that indicates whether an action will change the object state back to the past or forward into the future. With the Arrow-of-Time inference at each interaction step, the learned policy is able to select actions that consistently lead towards or away from a given state, thereby enabling both effective state exploration and goal-conditioned manipulation. Video is available at https://youtu.be/KqlvcL9RqKM