We present a content-adaptive generation and parallel compositing algorithm for view-dependent explorable representations of large three-dimensional volume data. Large distributed volume data are routinely produced in both numerical simulations and experiments, yet visualizing them at smooth, interactive frame rates remains challenging. Volumetric Depth Images (VDIs), view-dependent piecewise-constant representations of volume data, offer a potential solution: they are more compact and less expensive to render than the original data. So far, however, no method exists to generate such representations on distributed data or to automatically adapt the representation to the contents of the data. We propose an approach that addresses both issues by enabling sort-last parallel generation of VDIs with content-adaptive parameters. The resulting VDIs can be streamed for display, providing responsive visualization of large, potentially distributed, volume data.