Human life is populated with articulated objects. A comprehensive understanding of articulated objects, namely their appearance, structure, physics properties, and semantics, will benefit many research communities. Current articulated object understanding solutions are usually based on synthetic object datasets with CAD models that lack physics properties, which prevents satisfactory generalization from simulation to real-world applications in visual and robotics tasks. To bridge this gap, we present AKB-48: a large-scale Articulated object Knowledge Base consisting of 2,037 real-world 3D articulated object models across 48 categories. Each object is described by a knowledge graph, ArtiKG. To build AKB-48, we present a Fast Articulation knowledge Modeling (FArM) pipeline, which can populate the ArtiKG for an articulated object within 10-15 minutes, largely reducing the cost of object modeling in the real world. Using our dataset, we propose AKBNet, a novel integral pipeline for the Category-level Visual Articulation Manipulation (C-VAM) task, on which we benchmark three sub-tasks, namely pose estimation, object reconstruction, and manipulation. Dataset, code, and models will be publicly available at https://liuliu66.github.io/articulationobjects/.
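To make the idea of a per-object knowledge graph concrete, the sketch below shows what a single ArtiKG-style record covering the four aspects named above (appearance, structure, physics, semantics) might look like. All field names, the example category, and the joint description are illustrative assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a per-object ArtiKG-style record.
# Field names and values are illustrative only; they do not
# reflect the real AKB-48 schema.
@dataclass
class ArtiKGEntry:
    category: str                                  # one of the 48 categories
    mesh_path: str                                 # appearance: scanned 3D model
    mass_kg: float                                 # physics property
    joints: list = field(default_factory=list)     # structure: articulation joints
    semantics: dict = field(default_factory=dict)  # semantic annotations

entry = ArtiKGEntry(
    category="scissors",
    mesh_path="models/scissors_001.obj",
    mass_kg=0.05,
    joints=[{"type": "revolute", "axis": [0, 0, 1], "limit_deg": (0, 60)}],
    semantics={"function": "cutting"},
)
print(entry.category, len(entry.joints))
```

A record like this bundles visual, kinematic, physical, and semantic information in one place, which is what allows a single knowledge base to serve pose estimation, reconstruction, and manipulation benchmarks.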