We study the problem of learning to estimate the 3D object pose from a few labelled examples and a collection of unlabelled data. Our main contribution is a learning framework, neural view synthesis and matching, that can reliably transfer 3D pose annotations from labelled to unlabelled images, despite unseen 3D views and nuisance variations such as object shape, texture, illumination, or scene context. In our approach, objects are represented as 3D cuboid meshes composed of feature vectors at each mesh vertex. The model is initialized from a few labelled images and is subsequently used to synthesize feature representations of unseen 3D views. The synthesized views are matched with the feature representations of unlabelled images to generate pseudo-labels of the 3D pose. The pseudo-labelled data is, in turn, used to train the feature extractor such that the features at each mesh vertex become more invariant across varying 3D views of the object. Our model is trained in an EM-type manner, alternating between increasing the 3D pose invariance of the feature extractor and annotating unlabelled data through neural view synthesis and matching. We demonstrate the effectiveness of the proposed semi-supervised learning framework for 3D pose estimation on the PASCAL3D+ and KITTI datasets. We find that our approach outperforms all baselines by a wide margin, particularly in an extreme few-shot setting where only 7 annotated images are given. Remarkably, we observe that our model also achieves exceptional robustness in out-of-distribution scenarios that involve partial occlusion.
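The synthesize-and-match pseudo-labelling step described above can be sketched as follows. This is a minimal, hypothetical illustration in numpy only: the mesh renderer and CNN feature extractor are stubbed out with seeded random features, and all names (`synthesize_view`, `extract_features`, `match_pose`, the azimuth-only pose discretization) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of neural view synthesis and matching: synthesize
# feature representations for a discretized set of candidate poses, then
# assign each unlabelled image the best-matching pose as its pseudo-label.
import numpy as np

rng = np.random.default_rng(0)
D = 64            # feature dimension (stand-in for vertex/image features)
N_POSES = 36      # discretized azimuth angles for candidate views
N_UNLABELLED = 5

def synthesize_view(pose_idx):
    """Stub renderer: deterministic features for a given candidate pose."""
    return np.random.default_rng(pose_idx).normal(size=D)

def extract_features(image_idx):
    """Stub feature extractor: each unlabelled image is generated as a
    noisy observation of one ground-truth pose (for the demo only)."""
    true_pose = image_idx * 7 % N_POSES
    return synthesize_view(true_pose) + 0.1 * rng.normal(size=D)

def match_pose(feat):
    """Cosine-similarity matching of image features against all synthesized
    views; returns the best pose index (pseudo-label) and its score."""
    scores = []
    for p in range(N_POSES):
        view = synthesize_view(p)
        scores.append(feat @ view /
                      (np.linalg.norm(feat) * np.linalg.norm(view) + 1e-8))
    best = int(np.argmax(scores))
    return best, scores[best]

pseudo_labels = [match_pose(extract_features(i)) for i in range(N_UNLABELLED)]
for i, (pose, score) in enumerate(pseudo_labels):
    print(f"image {i}: pseudo pose {pose}, similarity {score:.3f}")
```

In the full framework, the pseudo-labelled images would then be fed back to train the feature extractor, and this matching step would be repeated in an EM-type loop.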