We introduce UNIST, the first deep neural implicit model for general-purpose, unpaired shape-to-shape translation in both 2D and 3D domains. Our model is built on autoencoding implicit fields, rather than the point clouds that represent the current state of the art. Furthermore, our translation network is trained to operate over a latent grid representation that combines the merits of latent-space processing and position awareness, not only enabling drastic shape transforms but also preserving spatial features and fine local details for natural shape translation. With the same network architecture, and dictated only by the input domain pairs, our model can learn both style-preserving content alteration and content-preserving style transfer. We demonstrate the generality and quality of the translation results, and compare them to well-known baselines.
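To make the latent-grid design concrete, below is a minimal PyTorch sketch of the pipeline the abstract describes: a convolutional encoder maps a shape's occupancy volume to a spatial latent grid, a translator maps latent grids from one domain to the other, and an implicit decoder predicts occupancy at continuous query points from trilinearly interpolated latent codes. All module names, layer counts, and sizes here (GridEncoder, Translator, ImplicitDecoder, the 8^3 grid) are illustrative assumptions, not the paper's exact architecture, and the unpaired training losses are omitted.

```python
# Hypothetical sketch of autoencoding implicit fields over a latent grid.
# Names, sizes, and layer choices are assumptions, not the authors' design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GridEncoder(nn.Module):
    """Encode an occupancy volume into a spatial (position-aware) latent grid."""
    def __init__(self, c=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, c, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(c, c, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(c, c, 4, stride=2, padding=1),  # e.g. 64^3 input -> 8^3 grid
        )
    def forward(self, vox):            # vox: (B, 1, 64, 64, 64)
        return self.net(vox)           # latent grid: (B, c, 8, 8, 8)

class ImplicitDecoder(nn.Module):
    """Predict occupancy at continuous query points from interpolated latents."""
    def __init__(self, c=64, h=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(c + 3, h), nn.ReLU(),
            nn.Linear(h, h), nn.ReLU(),
            nn.Linear(h, 1),
        )
    def forward(self, grid, pts):      # pts: (B, N, 3) in [-1, 1]^3
        # Trilinearly sample one latent code per query point from the grid.
        g = pts.view(pts.shape[0], -1, 1, 1, 3)          # (B, N, 1, 1, 3)
        z = F.grid_sample(grid, g, align_corners=True)   # (B, c, N, 1, 1)
        z = z.view(grid.shape[0], grid.shape[1], -1).transpose(1, 2)  # (B, N, c)
        return self.mlp(torch.cat([z, pts], dim=-1))     # occupancy logits

class Translator(nn.Module):
    """Map latent grids of domain A to domain B (trained with unpaired losses)."""
    def __init__(self, c=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(c, c, 3, padding=1), nn.ReLU(),
            nn.Conv3d(c, c, 3, padding=1),
        )
    def forward(self, grid_a):
        return self.net(grid_a)

# Usage: translate a domain-A shape, then query its implicit field in domain B.
enc, dec, t_ab = GridEncoder(), ImplicitDecoder(), Translator()
vox_a = torch.rand(2, 1, 64, 64, 64)        # stand-in occupancy input
pts = torch.rand(2, 1024, 3) * 2 - 1        # random continuous query points
occ_b = dec(t_ab(enc(vox_a)), pts)          # (2, 1024, 1) occupancy logits
```

Because the translator acts on a grid of local latent codes rather than a single global code, each cell can be transformed while retaining its spatial position, which is one plausible reading of how such a representation supports drastic shape change alongside preserved local detail.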