In visual computing, 3D geometry is represented in many different forms, including meshes, point clouds, voxel grids, level sets, and depth images. Each representation is suited to different tasks, making the transformation of one representation into another (forward map) an important and common problem. We propose Omnidirectional Distance Fields (ODFs), a new 3D shape representation that encodes geometry by storing the depth to the object's surface from any 3D position in any viewing direction. Since rays are the fundamental unit of an ODF, it can easily be transformed to and from common 3D representations like meshes or point clouds. Unlike level set methods, which are limited to representing closed surfaces, ODFs are unsigned and can therefore model open surfaces (e.g., garments). We demonstrate that ODFs can be effectively learned with a neural network (NeuralODF) despite the inherent discontinuities at occlusion boundaries. We also introduce efficient forward mapping algorithms for transforming ODFs to and from common 3D representations, including an efficient Jumping Cubes algorithm for generating meshes from ODFs. Experiments demonstrate that NeuralODF can capture high-quality shape by overfitting to a single object, and can also generalize across common shape categories.
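To make the representation concrete, the sketch below (not the authors' released code) shows how an ODF could be queried with a neural network: the field maps a 5D ray, given as a 3D origin plus a unit viewing direction, to the unsigned depth at which that ray first meets the surface. The network width, depth, and the extra ray-intersection head are assumptions for illustration only.

```python
# Minimal sketch of an ODF query network, assuming a simple MLP architecture.
# It maps (origin, direction) -> (depth, hit_logit); all sizes are illustrative.
import torch
import torch.nn as nn

class NeuralODFSketch(nn.Module):
    def __init__(self, hidden: int = 256, layers: int = 6):
        super().__init__()
        dims = [6] + [hidden] * layers          # input: 3D origin + 3D direction
        blocks = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            blocks += [nn.Linear(d_in, d_out), nn.ReLU()]
        self.trunk = nn.Sequential(*blocks)
        self.depth_head = nn.Linear(hidden, 1)  # unsigned depth along the ray
        self.hit_head = nn.Linear(hidden, 1)    # logit: does the ray hit the surface?

    def forward(self, origin: torch.Tensor, direction: torch.Tensor):
        direction = nn.functional.normalize(direction, dim=-1)
        feats = self.trunk(torch.cat([origin, direction], dim=-1))
        depth = torch.relu(self.depth_head(feats))   # depths are non-negative
        hit_logit = self.hit_head(feats)
        return depth, hit_logit

# Example query: depth from a point on the unit sphere looking toward the origin.
if __name__ == "__main__":
    model = NeuralODFSketch()
    p = torch.tensor([[0.0, 0.0, 1.0]])
    d = torch.tensor([[0.0, 0.0, -1.0]])
    depth, hit_logit = model(p, d)
    print(depth.shape, hit_logit.shape)  # torch.Size([1, 1]) torch.Size([1, 1])
```

Because every query is a single ray, forward maps follow naturally under this view: casting rays from sensor positions yields depth images or point clouds, while a mesh extraction scheme such as the paper's Jumping Cubes algorithm operates on batches of such queries.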