We introduce NeuralVDB, which improves on an existing industry standard for efficient storage of sparse volumetric data, denoted VDB, by leveraging recent advancements in machine learning. Our novel hybrid data structure can reduce the memory footprints of VDB volumes by orders of magnitude, while maintaining its flexibility and only incurring small (user-controlled) compression errors. Specifically, NeuralVDB replaces the lower nodes of a shallow and wide VDB tree structure with multiple hierarchical neural networks that separately encode topology and value information by means of neural classifiers and regressors, respectively. This approach is shown to maximize the compression ratio while maintaining the spatial adaptivity offered by the higher-level VDB data structure. For sparse signed distance fields and density volumes, we have observed compression ratios on the order of $10\times$ to more than $100\times$ from already compressed VDB inputs, with little to no visual artifacts. We also demonstrate how its application to animated sparse volumes can both accelerate training and generate temporally coherent neural networks.
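To make the topology/value split concrete, the following is a minimal sketch, not the authors' implementation: it assumes PyTorch-style coordinate MLPs, where a binary classifier predicts voxel occupancy (topology) and a regressor predicts the field value (e.g., signed distance) only at voxels classified as active. All names (`CoordinateMLP`, `decode`, `topology_net`, `value_net`) are illustrative, not the paper's API.

```python
# Illustrative sketch of the classifier/regressor split described above.
# NOT the NeuralVDB implementation; hyperparameters and names are assumptions.
import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):
    """Small MLP over 3D voxel coordinates (normalized to a node's local frame)."""
    def __init__(self, out_dim: int, hidden: int = 64, depth: int = 3):
        super().__init__()
        layers, in_dim = [], 3
        for _ in range(depth):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        layers.append(nn.Linear(in_dim, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        return self.net(xyz)

# Topology is a binary classification problem (active vs. inactive voxel);
# values are a regression problem (e.g., signed distance at active voxels).
topology_net = CoordinateMLP(out_dim=1)  # logits -> sigmoid -> occupancy
value_net = CoordinateMLP(out_dim=1)     # scalar field value

def decode(xyz: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Reconstruct values only where the classifier predicts active topology."""
    occupancy = torch.sigmoid(topology_net(xyz)).squeeze(-1) > threshold
    values = torch.zeros(xyz.shape[0])
    if occupancy.any():
        values[occupancy] = value_net(xyz[occupancy]).squeeze(-1)
    return values

# Usage: query a batch of normalized voxel coordinates in [-1, 1]^3.
coords = torch.rand(1024, 3) * 2.0 - 1.0
field = decode(coords)
```

Separating the two tasks lets the classifier reproduce the sparse topology exactly (an error there changes which voxels exist), while the regressor only has to fit values inside the active set, which is what permits the user-controlled trade-off between compression ratio and value error.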