The impressive performance of deep convolutional neural networks in single-view 3D reconstruction suggests that these models perform non-trivial reasoning about the 3D structure of the output space. Recent work has challenged this belief, showing that, on standard benchmarks, complex encoder-decoder architectures perform similarly to nearest-neighbor baselines or simple linear decoder models that exploit large amounts of per-category data. However, building large collections of 3D shapes for supervised training is a laborious process; a more realistic and less constraining task is inferring 3D shapes for categories with few available training examples, calling for a model that can successfully generalize to novel object classes. In this work we experimentally demonstrate that naive baselines fail in this few-shot learning setting, in which the network must learn informative shape priors for inference of new categories. We propose three ways to learn a class-specific global shape prior, directly from data. Using these techniques, we are able to capture multi-scale information about the 3D shape, and account for intra-class variability by virtue of an implicit compositional structure. Experiments on the popular ShapeNet dataset show that our method outperforms a zero-shot baseline by over 40%, and the current state-of-the-art by over 10%, in terms of relative performance, in the few-shot setting.