Digitizing physical objects into the virtual world has the potential to unlock new research and applications in embodied AI and mixed reality. This work focuses on recreating interactive digital twins of real-world articulated objects, which can be directly imported into virtual environments. We introduce Ditto to learn articulation model estimation and 3D geometry reconstruction of an articulated object through interactive perception. Given a pair of visual observations of an articulated object before and after interaction, Ditto reconstructs part-level geometry and estimates the articulation model of the object. We employ implicit neural representations for joint geometry and articulation modeling. Our experiments show that Ditto effectively builds digital twins of articulated objects in a category-agnostic way. We also apply Ditto to real-world objects and deploy the recreated digital twins in physical simulation. Code and additional results are available at https://ut-austin-rpl.github.io/Ditto