Neural shape representation generally refers to representing 3D geometry using neural networks, e.g., computing a signed distance or occupancy value at a specific spatial position. In this paper we present a neural-network architecture suitable for accurate encoding of 3D shapes in a single forward pass. Our architecture is based on a multi-scale hybrid system incorporating graph-based and voxel-based components, as well as a continuously differentiable decoder. The hybrid system includes a novel way of voxelizing point-based features in neural networks, which we show can be combined with oriented point clouds to obtain smoother and more detailed reconstructions. Furthermore, our network is trained to solve the eikonal equation and requires knowledge only of the zero-level set for training and inference. This means that, in contrast to most previous shape-encoder architectures, our network can output valid signed distance fields without explicit prior knowledge of non-zero distance values or shape occupancy. It also requires only a single forward pass, instead of the latent-code optimization used in auto-decoder methods. We further propose a modification to the loss function for the case where surface normals are not well defined, e.g., for non-watertight surfaces and non-manifold geometry, resulting in an unsigned distance field. Overall, our system can help reduce the computational overhead of training and evaluating neural distance fields, and it enables application to difficult geometry.
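To make the eikonal-based training concrete, the sketch below shows a generic loss of this kind in PyTorch: the network is penalized for non-zero predictions on sampled surface points and for gradient norms different from one at off-surface samples. This is an illustrative formulation only; the function names, the sampling strategy, and the weighting factor `lambda_eik` are assumptions and not taken from the paper.

```python
import torch

def eikonal_style_loss(sdf_net, surface_pts, space_pts, lambda_eik=0.1):
    """Generic eikonal-style loss sketch (not the paper's exact objective).

    surface_pts: points sampled on the zero-level set (the known surface).
    space_pts:   points sampled in the surrounding volume.
    """
    # Zero-level-set term: the predicted distance should vanish on the surface.
    surface_term = sdf_net(surface_pts).abs().mean()

    # Eikonal term: the gradient of the distance field should have unit norm.
    space_pts = space_pts.detach().requires_grad_(True)
    f = sdf_net(space_pts)
    grad = torch.autograd.grad(f.sum(), space_pts, create_graph=True)[0]
    eikonal_term = ((grad.norm(dim=-1) - 1.0) ** 2).mean()

    return surface_term + lambda_eik * eikonal_term
```

Formulations of this kind often add a term aligning the predicted gradient with supplied surface normals; the unsigned-distance variant mentioned in the abstract presumably modifies or drops such normal-dependent terms, though the exact change is not specified here.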