We present a novel neural implicit shape method for partial point cloud completion. To that end, we combine a conditional DeepSDF architecture with learned, adversarial shape priors. More specifically, our network converts partial inputs into a global latent code and then recovers the full geometry via an implicit, signed-distance generator. In addition, we train a PointNet++ discriminator that pushes the generator towards plausible, globally consistent reconstructions. In this way, we effectively decouple the two challenges of predicting shapes that are realistic, i.e., consistent with the training set's pose distribution, and accurate, i.e., faithful to the partial input observations. In our experiments, we demonstrate state-of-the-art performance in completing partial shapes, for both man-made objects (e.g., airplanes, chairs, ...) and deformable shape categories (human bodies). Finally, we show that our adversarial training approach leads to visually plausible reconstructions that are highly consistent in recovering the missing parts of a given object.
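To make the described pipeline concrete, the following is a minimal sketch, under our own assumptions rather than the authors' released code, of the three components mentioned above: a point-cloud encoder that maps the partial input to a global latent code, a conditional DeepSDF-style decoder that maps (latent code, query point) to a signed distance, and a discriminator that scores completed shapes. All class names, layer sizes, and the simplified PointNet-style encoder/discriminator (standing in for PointNet++) are hypothetical placeholders.

```python
# Minimal sketch (our assumption, not the authors' implementation) of a
# conditional implicit-SDF generator with an adversarial shape discriminator.
import torch
import torch.nn as nn

class PartialEncoder(nn.Module):
    """Maps a partial point cloud (B, N, 3) to a global latent code (B, z_dim)."""
    def __init__(self, z_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, z_dim),
        )

    def forward(self, pts):
        feat = self.mlp(pts)            # per-point features (B, N, z_dim)
        return feat.max(dim=1).values   # global max-pool -> latent code (B, z_dim)

class SDFDecoder(nn.Module):
    """Conditional DeepSDF-style decoder: (latent code, xyz query) -> signed distance."""
    def __init__(self, z_dim=256, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z, query):
        # z: (B, z_dim), query: (B, Q, 3); tile the code over all query points.
        z_tiled = z.unsqueeze(1).expand(-1, query.shape[1], -1)
        return self.net(torch.cat([z_tiled, query], dim=-1)).squeeze(-1)  # (B, Q)

class ShapeDiscriminator(nn.Module):
    """Placeholder for the PointNet++ discriminator: scores a completed point cloud."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
        )
        self.head = nn.Linear(256, 1)

    def forward(self, pts):
        feat = self.mlp(pts).max(dim=1).values
        return self.head(feat)          # real/fake logit per shape (B, 1)

# Usage sketch: encode a partial scan, query the implicit field, score a completion.
if __name__ == "__main__":
    enc, dec, disc = PartialEncoder(), SDFDecoder(), ShapeDiscriminator()
    partial = torch.randn(4, 1024, 3)   # batch of partial point clouds
    queries = torch.randn(4, 2048, 3)   # query points in object space
    z = enc(partial)
    sdf = dec(z, queries)               # predicted signed distances (4, 2048)
    logit = disc(queries)               # adversarial score of a candidate completion
    print(sdf.shape, logit.shape)
```

In this reading, a reconstruction loss on the predicted signed distances enforces accuracy with respect to the partial observation, while the adversarial loss from the discriminator enforces realism, which is the decoupling described in the abstract.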