We propose a new representation for encoding 3D shapes as neural fields. The representation is designed to be compatible with the transformer architecture and to benefit both shape reconstruction and shape generation. Most existing work on neural fields uses grid-based representations, with latents defined on a regular grid. In contrast, we define latents on irregular grids, enabling our representation to be sparse and adaptive. In the context of shape reconstruction from point clouds, our shape representation built on irregular grids improves upon grid-based methods in terms of reconstruction accuracy. For shape generation, our representation promotes high-quality shape generation using auto-regressive probabilistic models. We demonstrate several applications that improve over the current state of the art. First, we show results for probabilistic shape reconstruction from a single higher-resolution image. Second, we train a probabilistic model conditioned on very low-resolution images. Third, we apply our model to category-conditioned generation. All probabilistic experiments confirm that we are able to generate detailed, high-quality shapes, yielding a new state of the art in generative 3D shape modeling.
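To make the contrast between regular-grid and irregular-grid latents concrete, the following is a minimal sketch, not the authors' implementation; the tensor shapes and names (C, R, M) are illustrative assumptions. It shows why a set of (position, latent) pairs is sparse, adaptive, and directly usable as transformer tokens.

```python
import torch

# Hedged sketch (assumed shapes, not the paper's code).
C = 32   # latent channels (assumed)
R = 16   # regular-grid resolution (assumed)
M = 512  # number of irregular latent points (assumed)

# Regular grid: one latent per cell of a dense R x R x R grid,
# regardless of where the surface actually lies.
regular_latents = torch.zeros(R, R, R, C)

# Irregular grid: a sparse, adaptive set of latents, each attached to a
# 3D position (e.g. sampled near the shape surface).
positions = torch.rand(M, 3)           # xyz coordinates in [0, 1]^3
irregular_latents = torch.zeros(M, C)  # one latent vector per position

# Because the irregular representation is just a set of position/latent
# tokens, it can be fed to a standard transformer and modeled
# auto-regressively for generation, as described in the abstract.
tokens = torch.cat([positions, irregular_latents], dim=-1)  # (M, 3 + C)
```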