Recently, learning frameworks have shown the capability of inferring the accurate shape, pose, and texture of an object from a single RGB image. However, current methods are trained on image collections of a single category in order to exploit category-specific priors, and they often rely on category-specific 3D templates. In this paper, we present an alternative approach that infers the textured mesh of an object by combining a set of deformable 3D models with instance-specific predictions of deformation, pose, and texture. Unlike previous works, our method is trained on images of multiple object categories, using only foreground masks and rough camera poses as supervision. Without specific 3D templates, the framework learns category-level models that are deformed to recover the 3D shape of the depicted object. The instance-specific deformations are predicted independently for each vertex of the learned 3D mesh, enabling dynamic subdivision of the mesh during training. Experiments show that the proposed framework can distinguish between different object categories and learn category-specific shape priors in an unsupervised manner. Predicted shapes are smooth and benefit from multiple subdivision steps during training, achieving results comparable to or exceeding the state of the art on two public datasets. Models and code are publicly released.
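The abstract describes two mechanisms: adding instance-specific per-vertex offsets to a learned category-level mesh, and dynamically subdividing the mesh during training so that later steps predict offsets for a finer vertex set. The paper's actual network architecture is not given here, so the following is only a minimal illustrative sketch of those two geometric operations on a triangle mesh, using hypothetical `deform` and `subdivide` helpers with plain numpy arrays:

```python
import numpy as np

def deform(vertices, offsets):
    """Apply instance-specific per-vertex offsets to a category-level mesh.

    vertices, offsets: (V, 3) arrays; in the paper's setting the offsets
    would be predicted by a network, one 3D vector per vertex.
    """
    return vertices + offsets

def subdivide(vertices, faces):
    """One step of midpoint (4-to-1) triangle subdivision.

    Each triangle is split into four by inserting a vertex at the midpoint
    of every edge; the new vertices can then receive their own predicted
    offsets in subsequent training steps.
    """
    verts = [tuple(v) for v in vertices]
    edge_mid = {}  # sorted (i, j) edge -> index of its midpoint vertex

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in edge_mid:
            edge_mid[key] = len(verts)
            verts.append(tuple((np.asarray(verts[i]) + np.asarray(verts[j])) / 2.0))
        return edge_mid[key]

    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return np.array(verts), np.array(new_faces)
```

For example, subdividing a single triangle yields 6 vertices and 4 faces; repeating the step quadruples the face count each time, which is why per-vertex (rather than fixed-size) deformation prediction is what makes this dynamic refinement possible.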