Reconstructing 3D shape from 2D sketches has long been an open problem because sketches provide only sparse and ambiguous information. In this paper, we use an encoder/decoder architecture for sketch-to-mesh translation. When integrated into a user interface that supplies camera parameters for the sketches, this enables us to leverage the architecture's latent parametrization to represent a 3D mesh and refine it so that its projections match the external contours outlined in the sketch. We show that this approach is easy to deploy, robust to style changes, and effective. Furthermore, it can be used for shape refinement given only single pen strokes. We compare our approach to state-of-the-art methods on both hand-drawn and synthesized sketches and demonstrate that we outperform them.
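The following is a minimal sketch, not the authors' code, of the latent-refinement idea stated above: keep the decoder fixed and optimize the latent code so that the decoded shape's 2D projection matches the sketch's external contour. The names `ToyDecoder`, `project`, `chamfer_2d`, and `sketch_contour` are hypothetical stand-ins introduced here for illustration; a real system would use the paper's mesh decoder, a differentiable contour/silhouette rendering step, and the camera parameters supplied by the user interface.

```python
# Hypothetical illustration of latent-code refinement against a 2D contour target.
import torch

class ToyDecoder(torch.nn.Module):
    """Stand-in decoder: maps a latent code to N 3D points (placeholder for a mesh decoder)."""
    def __init__(self, latent_dim=64, num_points=256):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(latent_dim, 128), torch.nn.ReLU(),
            torch.nn.Linear(128, num_points * 3),
        )
        self.num_points = num_points

    def forward(self, z):
        return self.mlp(z).view(self.num_points, 3)

def project(points_3d):
    # Assumed orthographic projection onto the image plane (drop z).
    # The real system would use the camera parameters provided by the interface.
    return points_3d[:, :2]

def chamfer_2d(a, b):
    # Symmetric chamfer distance between two 2D point sets, used here as a
    # simple stand-in for the contour-matching objective.
    d = torch.cdist(a, b)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

# Refinement loop: the decoder stays fixed, only the latent code z is optimized.
decoder = ToyDecoder()
sketch_contour = torch.rand(200, 2)          # placeholder for contour points extracted from the sketch
z = torch.zeros(1, 64, requires_grad=True)   # would be initialized by the encoder in practice
opt = torch.optim.Adam([z], lr=1e-2)

for step in range(200):
    opt.zero_grad()
    points = decoder(z)
    loss = chamfer_2d(project(points), sketch_contour)
    loss.backward()
    opt.step()
```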