Monocular 3D reconstruction of articulated object categories is challenging due to the lack of training data and the inherent ill-posedness of the problem. In this work we use video self-supervision, enforcing the consistency of consecutive 3D reconstructions through a motion-based cycle loss. This substantially improves both optimization-based and learning-based 3D mesh reconstruction. We further introduce an interpretable model of 3D template deformations that controls a 3D surface through the displacement of a small number of local, learnable handles. We formulate this operation as a structured layer relying on mesh-Laplacian regularization and show that it can be trained in an end-to-end manner. We finally introduce a per-sample numerical optimization approach that jointly optimizes over mesh displacements and cameras within a video, boosting accuracy both during training and as test-time post-processing. While relying exclusively on a small set of videos collected per category for supervision, we obtain state-of-the-art reconstructions with diverse shapes, viewpoints and textures for multiple articulated object categories.
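The handle-based deformation layer described above can be illustrated numerically: displacements prescribed at a few handle vertices are propagated to the remaining vertices by minimizing a mesh-Laplacian smoothness energy. The sketch below is a minimal, hypothetical example (a toy path-graph "mesh" with a uniform graph Laplacian and two handles), not the paper's actual implementation, which operates on a full 3D category template inside an end-to-end trainable layer.

```python
import numpy as np

# Toy "mesh": 6 vertices connected in a strip (hypothetical stand-in
# for a real 3D template mesh).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
n = 6

# Uniform graph Laplacian L = D - A.
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Handles: vertex 0 stays fixed, vertex 5 is displaced by (1, 0, 0).
handles = np.array([0, 5])
handle_disp = np.array([[0.0, 0.0, 0.0],
                        [1.0, 0.0, 0.0]])
free = np.setdiff1d(np.arange(n), handles)

# Propagate: solve for free-vertex displacements d_f minimizing the
# Laplacian energy ||L d||^2 with the handle rows of d held fixed,
# i.e. the least-squares problem  L[:, free] d_f ≈ -L[:, handles] d_h.
rhs = -L[:, handles] @ handle_disp
d = np.zeros((n, 3))
d[handles] = handle_disp
d[free], *_ = np.linalg.lstsq(L[:, free], rhs, rcond=None)

# The displacement interpolates smoothly and monotonically from 0 to 1
# along the strip.
print(np.round(d[:, 0], 3))
```

In the paper's formulation this solve acts as a structured layer, so gradients with respect to the handle displacements can flow through it during end-to-end training; the least-squares propagation here shows only the forward pass of such a layer.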