Deep learning based latent representations have been widely used in numerous scientific visualization applications, such as isosurface similarity analysis, volume rendering, flow field synthesis, and data reduction, to name a few. However, existing latent representations are mostly generated from raw data in an unsupervised manner, which makes it difficult to incorporate domain interests to control the size of the latent representations and the quality of the reconstructed data. In this paper, we present a novel importance-driven latent representation to facilitate domain-interest-guided scientific data visualization and analysis. We utilize spatial importance maps to represent various scientific interests and take them as the input to a feature transformation network to guide latent generation. We further reduce the latent size with a lossless entropy encoding algorithm trained together with the autoencoder, improving storage and memory efficiency. We qualitatively and quantitatively evaluate the effectiveness and efficiency of the latent representations generated by our method using data from multiple scientific visualization applications.
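To make the idea concrete, the sketch below shows one way an autoencoder's latent generation could be conditioned on a spatial importance map, with the reconstruction loss weighted by that map. This is a minimal illustrative example in PyTorch, not the paper's actual feature transformation network or its entropy encoding scheme; the class name, layer choices, and the importance-weighted MSE loss are all assumptions made for illustration.

```python
# Minimal sketch (assumption, not the paper's architecture): an autoencoder whose
# encoder is conditioned on a spatial importance map, trained with an
# importance-weighted reconstruction loss.
import torch
import torch.nn as nn

class ImportanceGuidedAE(nn.Module):
    def __init__(self, latent_channels=16):
        super().__init__()
        # Encoder takes the raw scalar field concatenated with an importance map
        # (2 input channels) and produces a compact latent volume.
        self.encoder = nn.Sequential(
            nn.Conv3d(2, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(64, latent_channels, 3, stride=2, padding=1),
        )
        # Decoder reconstructs the scalar field from the latent representation.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(latent_channels, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, volume, importance):
        # volume, importance: (batch, 1, D, H, W) tensors defined on the same grid.
        z = self.encoder(torch.cat([volume, importance], dim=1))
        recon = self.decoder(z)
        return recon, z

def weighted_loss(recon, volume, importance, eps=1e-6):
    # Regions marked as important contribute more to the training objective.
    w = importance + eps
    return ((recon - volume) ** 2 * w).sum() / w.sum()
```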