Differentiable rendering is a highly successful technique for single-view 3D reconstruction. Current renderers optimise the parameters of a 3D shape using pixel-wise losses between images rendered from the reconstructed object and ground-truth images taken from matched viewpoints. These models therefore require a rendering step, together with visibility handling and evaluation of a shading model. The main goal of this paper is to demonstrate that these steps can be avoided while still obtaining reconstructions that equal, or even surpass, existing category-specific reconstruction methods. First, we use the same CNN architecture as Insafutdinov & Dosovitskiy to predict both a point-cloud shape and the camera pose from a single image. Second, we propose a novel, effective loss function that evaluates how well the projections of a reconstructed 3D point cloud cover the silhouette of the ground-truth object. We then apply Poisson Surface Reconstruction to convert the reconstructed point cloud into a 3D mesh. Finally, we perform GAN-based texture mapping on the resulting mesh, producing a textured 3D mesh from a single 2D image. We evaluate our method on several datasets (including ShapeNet, CUB-200-2011, and Pascal3D+) and achieve state-of-the-art results, outperforming other supervised and unsupervised methods across 3D representations in accuracy and training time.
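To make the silhouette-coverage idea concrete, below is a minimal PyTorch sketch, not the authors' implementation, of a loss that projects predicted 3D points into the image plane and scores them against a ground-truth mask without any rendering, visibility handling, or shading. The two-term structure (points should land inside the silhouette; the silhouette should be covered by point splats), the Gaussian splat width `sigma`, and the equal weighting of the terms are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def silhouette_coverage_loss(points, silhouette, sigma=0.02):
    """points: (B, N, 2) projected point coordinates in [-1, 1].
    silhouette: (B, 1, H, W) binary ground-truth masks."""
    B, N, _ = points.shape

    # Term 1: sample the mask at each projected point location;
    # points falling outside the object contribute to the loss.
    sampled = F.grid_sample(
        silhouette, points.view(B, N, 1, 2), align_corners=False
    ).view(B, N)
    outside = (1.0 - sampled).mean()

    # Term 2: splat each point as an isotropic Gaussian and ask the
    # union of splats to cover the whole silhouette.
    H, W = silhouette.shape[-2:]
    ys = torch.linspace(-1, 1, H, device=points.device)
    xs = torch.linspace(-1, 1, W, device=points.device)
    grid = torch.stack(torch.meshgrid(xs, ys, indexing="xy"), dim=-1)  # (H, W, 2)
    d2 = ((grid.view(1, 1, H, W, 2) - points.view(B, N, 1, 1, 2)) ** 2).sum(-1)
    uncovered = 1.0 - torch.exp(-d2 / (2 * sigma ** 2)).amax(dim=1)  # (B, H, W)
    coverage = (uncovered * silhouette.squeeze(1)).mean()

    return outside + coverage
```

Because both terms are smooth in the projected coordinates, gradients flow back through the camera-pose and shape predictions, which is what lets the loss replace a differentiable renderer in this sketch.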
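The mesh-extraction step is standard Poisson Surface Reconstruction. A minimal sketch, assuming the Open3D library (the paper does not name its implementation), is shown below; the normal-estimation parameters, octree depth, and density-trimming quantile are illustrative choices, not values from the paper.

```python
import numpy as np
import open3d as o3d

def point_cloud_to_mesh(points: np.ndarray, depth: int = 8) -> o3d.geometry.TriangleMesh:
    """points: (N, 3) array of reconstructed 3D points."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)

    # Poisson reconstruction needs oriented normals; estimate and orient them.
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30)
    )
    pcd.orient_normals_consistent_tangent_plane(k=15)

    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=depth
    )
    # Trim low-density vertices that Poisson extrapolates far from the input points.
    d = np.asarray(densities)
    mesh.remove_vertices_by_mask(d < np.quantile(d, 0.05))
    return mesh
```

The resulting triangle mesh is what the final GAN-based texture-mapping stage operates on to produce the textured model from a single 2D image.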