Very recently, neural implicit rendering techniques have evolved rapidly and shown great advantages in novel view synthesis and 3D scene reconstruction. However, existing neural rendering methods for editing purposes offer limited functionality, e.g., rigid transformation, or are not applicable to fine-grained editing of general objects from daily life. In this paper, we present a novel mesh-based representation that encodes the neural implicit field with disentangled geometry and texture codes on mesh vertices, which facilitates a set of editing functionalities, including mesh-guided geometry editing and designated texture editing with texture swapping, filling, and painting operations. To this end, we develop several techniques, including learnable sign indicators to magnify the spatial distinguishability of the mesh-based representation, a distillation and fine-tuning mechanism for steady convergence, and a spatial-aware optimization strategy to realize precise texture editing. Extensive experiments and editing examples on both real and synthetic data demonstrate the superiority of our method in representation quality and editing ability. Code is available on the project webpage: https://zju3dv.github.io/neumesh/.