Virtual and augmented reality (VR/AR) displays strive to provide a resolution, frame rate, and field of view that match the perceptual capabilities of the human visual system, all while constrained by the limited compute budgets and transmission bandwidths of wearable computing systems. Foveated graphics techniques have emerged that could achieve these goals by exploiting the falloff of spatial acuity in the periphery of the visual field. However, considerably less attention has been given to temporal aspects of human vision, which also vary across the retina. This is in part due to limitations of current eccentricity-dependent models of the visual system. We introduce a new model, experimentally measuring and computationally fitting eccentricity-dependent critical flicker fusion thresholds jointly for both space and time. In this way, our model is unique in enabling the prediction of temporal information that is imperceptible for a given spatial frequency, eccentricity, and range of luminance levels. We validate our model with an image quality user study, and use it to predict potential bandwidth savings 7× higher than those afforded by current spatial-only foveated models. As such, this work forms the enabling foundation for new temporally foveated graphics techniques.
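To make the idea concrete, the sketch below shows how an eccentricity- and luminance-dependent critical flicker fusion (CFF) predictor might be queried. It follows the classical Ferry-Porter form (CFF rises linearly with log-luminance, with a slope that grows toward the periphery); the function names and all coefficient values here are illustrative placeholders, not the fitted parameters of the model described in this paper.

```python
import math

def cff_threshold_hz(luminance_cd_m2, eccentricity_deg,
                     a0=12.0, a1=0.8, b=35.0):
    """Illustrative Ferry-Porter-style CFF estimate (placeholder coefficients).

    CFF increases linearly with log-luminance; the slope grows with retinal
    eccentricity, reflecting higher flicker sensitivity in the periphery.
    """
    slope = a0 + a1 * eccentricity_deg
    return slope * math.log10(max(luminance_cd_m2, 1e-6)) + b

def is_flicker_visible(freq_hz, luminance_cd_m2, eccentricity_deg):
    # Temporal frequencies at or above the CFF threshold are imperceptible,
    # so content there could be updated less often or compressed away.
    return freq_hz < cff_threshold_hz(luminance_cd_m2, eccentricity_deg)
```

A renderer could use such a predictor to decide, per retinal region, which temporal frequencies need to be transmitted at all; the paper's full model additionally conditions this threshold on spatial frequency.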