The widespread adoption of Deep Neural Networks has led to their increasing use in challenging scientific visualization tasks. Recent advances in building compressed data models with implicit neural representations have shown promising results for tasks such as spatiotemporal volume visualization and super-resolution. Inspired by these successes, we develop compressed neural representations for multivariate datasets containing tens to hundreds of variables. Our approach uses a single network to learn representations for all data variables simultaneously through parameter sharing, which enables state-of-the-art data compression. Through comprehensive evaluations, we demonstrate superior performance in reconstructed data quality, rendering and visualization quality, preservation of dependency information among variables, and storage efficiency.
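To illustrate the parameter-sharing idea described above, here is a minimal sketch (not the paper's actual architecture): a coordinate-based network whose hidden layers form a single shared trunk, with one output per variable, so that all variables are decoded from the same learned representation. The layer widths, variable count, and activation choice are illustrative assumptions.

```python
import numpy as np

# Hypothetical coordinate MLP: shared trunk + multi-variable output head.
# All sizes are illustrative; real INRs would be trained, not random.
rng = np.random.default_rng(0)
n_vars = 8  # stand-in for the tens-to-hundreds of variables mentioned
W1 = rng.normal(0, 0.1, (3, 64))          # shared: (x, y, z) -> hidden
W2 = rng.normal(0, 0.1, (64, 64))         # shared hidden layer
W_out = rng.normal(0, 0.1, (64, n_vars))  # one scalar output per variable

def forward(coords):
    """coords: (N, 3) spatial samples -> (N, n_vars) reconstructed values."""
    h = np.maximum(coords @ W1, 0)  # ReLU on shared trunk
    h = np.maximum(h @ W2, 0)
    return h @ W_out                # every variable decoded from one trunk

coords = rng.uniform(-1, 1, (5, 3))
out = forward(coords)
print(out.shape)  # (5, 8): one prediction per variable per sample
```

Because the trunk parameters are shared across all variables, the model size grows only in the final output layer as variables are added, which is the source of the compression benefit.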