We present a neural network approach for transferring the motion shown in a single image of an articulated object to a rest-state (i.e., unarticulated) 3D model. Our network learns to predict the object's pose, part segmentation, and corresponding motion parameters so as to reproduce the articulation shown in the input image. The network comprises three distinct branches that operate on a shared joint image-shape embedding and is trained end-to-end. Unlike previous methods, our approach is independent of the object's topology and can handle objects from arbitrary categories. Trained on synthetic data alone, our method can automatically animate a mesh, infer motion from real images, and, at test time, transfer articulation to functionally similar but geometrically distinct 3D models.
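To make the three-branch design concrete, below is a minimal sketch (not the authors' code) of an architecture matching the abstract's description: an image encoder and a shape encoder are fused into a shared joint image-shape embedding, which feeds separate heads for pose, per-point part segmentation, and per-part motion parameters. All layer sizes, encoder choices, and head output formats are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ArticulationNet(nn.Module):
    """Hypothetical three-branch network over a joint image-shape embedding."""

    def __init__(self, num_parts=4, embed_dim=256):
        super().__init__()
        self.num_parts = num_parts
        # Image encoder: any CNN backbone would do; a tiny conv stack stands in here.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        # Shape encoder: PointNet-style per-point MLP, max-pooled over the
        # rest-state model's point cloud (N, 3).
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, embed_dim),
        )
        # Fuse the two modalities into the shared joint embedding.
        self.fuse = nn.Sequential(nn.Linear(2 * embed_dim, embed_dim), nn.ReLU())
        # Branch 1: object pose (here assumed as a 6D rotation + 3D translation).
        self.pose_head = nn.Linear(embed_dim, 9)
        # Branch 2: per-point part segmentation logits.
        self.seg_head = nn.Linear(2 * embed_dim, num_parts)
        # Branch 3: per-part motion parameters (assumed: axis 3 + origin 3 + amount 1).
        self.motion_head = nn.Linear(embed_dim, num_parts * 7)

    def forward(self, image, points):
        # image: (B, 3, H, W); points: (B, N, 3)
        img_feat = self.image_encoder(image)                   # (B, D)
        pt_feat = self.point_mlp(points)                       # (B, N, D)
        shape_feat = pt_feat.max(dim=1).values                 # (B, D)
        joint = self.fuse(torch.cat([img_feat, shape_feat], -1))  # (B, D)
        pose = self.pose_head(joint)                           # (B, 9)
        # Segmentation conditions each point on both its own feature and the joint code.
        seg = self.seg_head(
            torch.cat([pt_feat, joint.unsqueeze(1).expand_as(pt_feat)], -1)
        )                                                      # (B, N, num_parts)
        motion = self.motion_head(joint).view(-1, self.num_parts, 7)
        return pose, seg, motion
```

At test time, under this sketch, the predicted segmentation and per-part motion parameters could be applied directly to the rest-state mesh to reproduce the articulation shown in the image, which is what allows transfer to a geometrically different but functionally similar model.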