We propose VDL-Surrogate, a view-dependent neural-network-latent-based surrogate model for parameter space exploration of ensemble simulations that allows high-resolution visualizations and user-specified visual mappings. Surrogate-enabled parameter space exploration allows domain scientists to preview simulation results without having to run a large number of computationally costly simulations. Limited by computational resources, however, existing surrogate models may not produce previews with sufficient resolution for visualization and analysis. To improve the efficient use of computational resources and support high-resolution exploration, we perform ray casting from different viewpoints to collect samples and produce compact latent representations. This latent encoding process reduces the cost of surrogate model training while maintaining the output quality. In the model training stage, we select viewpoints to cover the whole viewing sphere and train corresponding VDL-Surrogate models for the selected viewpoints. In the model inference stage, we predict the latent representations at previously selected viewpoints and decode the latent representations to data space. For any given viewpoint, we make interpolations over decoded data at selected viewpoints and generate visualizations with user-specified visual mappings. We show the effectiveness and efficiency of VDL-Surrogate in cosmological and ocean simulations with quantitative and qualitative evaluations. Source code is publicly available at https://github.com/trainsn/VDL-Surrogate.
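The abstract describes selecting viewpoints that cover the viewing sphere and, for an arbitrary query viewpoint, interpolating over the data decoded at the selected viewpoints. As an illustrative sketch only (the specific sampling and interpolation schemes, function names, and the `k`-nearest angular blending below are assumptions, not the paper's stated method), one way to realize these two steps is:

```python
import numpy as np

def fibonacci_sphere(n):
    """Distribute n viewpoints roughly evenly on the unit viewing sphere
    (a common heuristic; the paper's actual selection scheme may differ)."""
    i = np.arange(n)
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))
    y = 1.0 - 2.0 * (i + 0.5) / n          # uniform in height
    r = np.sqrt(1.0 - y * y)               # radius at that height
    theta = golden_angle * i
    return np.stack([r * np.cos(theta), y, r * np.sin(theta)], axis=1)

def interpolate_view(query_dir, view_dirs, decoded, k=3, eps=1e-8):
    """Blend decoded outputs from the k angularly nearest selected viewpoints,
    weighting each by inverse angular distance to the query direction."""
    q = np.asarray(query_dir, dtype=float)
    q = q / np.linalg.norm(q)
    cos_ang = np.clip(view_dirs @ q, -1.0, 1.0)
    ang = np.arccos(cos_ang)                # angular distance to each viewpoint
    idx = np.argsort(ang)[:k]               # k nearest selected viewpoints
    w = 1.0 / (ang[idx] + eps)
    w = w / w.sum()
    # Weighted blend of the decoded data associated with those viewpoints.
    return np.tensordot(w, decoded[idx], axes=1)
```

A query direction that coincides with a selected viewpoint receives nearly all the weight, so the blend degenerates to that viewpoint's decoded data, which is the behavior one would want at the trained viewpoints.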