Modern display systems demand high-quality rendering. However, rendering at higher resolutions requires a large number of data samples and is computationally expensive. Recent advances in deep-learning-based image and video super-resolution motivate us to investigate such networks for high-fidelity upscaling of frames rendered at a lower resolution. While our work focuses on super-resolution of medical volume visualization produced with direct volume rendering, it is also applicable to volume visualization with other rendering techniques. We propose a learning-based technique in which the network uses color information together with supplementary features gathered from our volume renderer to learn an efficient upscaling from a low-resolution rendering to a higher-resolution space. Furthermore, to improve temporal stability, we implement temporal reprojection to accumulate history samples during volumetric rendering.
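To illustrate the idea of feature-guided upscaling described above, the following is a minimal sketch, not the paper's actual architecture: a small PyTorch network that consumes low-resolution color concatenated with auxiliary renderer features (depth and normals are assumed here; the channel counts and layer sizes are illustrative).

```python
# Illustrative sketch only: an assumed feature-guided super-resolution network,
# not the authors' architecture. Auxiliary channel count (4) is a placeholder
# for renderer features such as depth and normals.
import torch
import torch.nn as nn

class FeatureGuidedUpscaler(nn.Module):
    def __init__(self, color_channels=3, aux_channels=4, scale=2):
        super().__init__()
        in_channels = color_channels + aux_channels  # color + supplementary features
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # Predict scale^2 * color_channels values per low-res pixel,
            # then rearrange them into the high-resolution image (pixel shuffle).
            nn.Conv2d(64, color_channels * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, color_lr, aux_lr):
        # color_lr: (B, 3, H, W) low-resolution rendered color
        # aux_lr:   (B, aux_channels, H, W) supplementary features from the renderer
        x = torch.cat([color_lr, aux_lr], dim=1)
        return self.body(x)

# Example usage with random tensors standing in for renderer output.
model = FeatureGuidedUpscaler()
color = torch.rand(1, 3, 128, 128)
aux = torch.rand(1, 4, 128, 128)
high_res = model(color, aux)  # (1, 3, 256, 256)
```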