Visualizing a large-scale volumetric dataset at high resolution is challenging due to its substantial computational time and space complexity. Recent deep learning-based image inpainting methods significantly reduce rendering latency by reconstructing a high-resolution image for visualization in constant time on the GPU from a partially rendered image, in which only a fraction of the pixels pass through the expensive rendering pipeline. However, existing solutions must render every pixel of either a predefined regular sampling pattern or an irregular sampling pattern predicted from a low-resolution rendering, and both approaches still require a significant amount of expensive pixel-level rendering. In this work, we present the Importance Mask Learning (IML) and Synthesis (IMS) networks, the first attempt to synthesize the important regions of the regular sampling pattern directly from the user's view parameters, further minimizing the number of pixels to render by jointly considering the dataset, user behavior, and the downstream reconstruction neural network. Our solution is a unified framework that accommodates various types of inpainting methods through the proposed differentiable compaction/decompaction layers. Experiments show that our method further improves, for free, the overall rendering latency of state-of-the-art volume visualization methods that use a reconstruction neural network when rendering scientific volumetric datasets. Our method can also directly optimize off-the-shelf pre-trained reconstruction networks without lengthy retraining.
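
The abstract does not include code, but the compaction/decompaction idea can be illustrated with a minimal sketch, assuming a PyTorch implementation: compaction gathers the mask-selected pixels into a dense buffer so only those pixels pass through the expensive path, and decompaction scatters the results back onto the full image grid so gradients can reach the mask-producing network. The function names compact/decompact and all details below are hypothetical illustrations, not the authors' implementation; gradients here flow only through pixel values, and a binary mask itself would additionally need a relaxation such as a straight-through estimator.

import torch

def compact(image, mask):
    # Gather only the pixels selected by the binary mask into a dense
    # (C, N) buffer, so only N pixels go through the expensive
    # rendering/reconstruction path. Advanced indexing is differentiable
    # with respect to the pixel values. (Hypothetical sketch.)
    C, H, W = image.shape
    idx = mask.view(-1).nonzero(as_tuple=True)[0]  # indices of kept pixels
    return image.view(C, H * W)[:, idx], idx

def decompact(values, idx, shape, fill=0.0):
    # Scatter the dense (C, N) values back onto the full (C, H, W) grid,
    # filling unselected pixels with `fill`; the indexed assignment is
    # differentiable with respect to `values`.
    H, W = shape
    out = values.new_full((values.shape[0], H * W), fill)
    out[:, idx] = values
    return out.view(-1, H, W)

# Usage: keep only the masked pixels, then restore the sparse frame that a
# downstream inpainting/reconstruction network would complete.
image = torch.rand(3, 4, 4, requires_grad=True)  # stand-in for rendered pixels
mask = torch.rand(4, 4) > 0.5                    # stand-in for a learned mask
dense, idx = compact(image, mask)
sparse_frame = decompact(dense, idx, (4, 4))
sparse_frame.sum().backward()                    # gradients reach `image`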