Virtual reality (VR) headsets provide an immersive, stereoscopic visual experience, but at the cost of blocking users from directly observing their physical environment. Passthrough techniques are intended to address this limitation by leveraging outward-facing cameras to reconstruct the images that would otherwise be seen by the user without the headset. This is inherently a real-time view synthesis challenge, since passthrough cameras cannot be physically co-located with the eyes. Existing passthrough techniques suffer from distracting reconstruction artifacts, largely due to the lack of accurate depth information (especially for near-field and disoccluded objects), and also exhibit limited image quality (e.g., being low resolution and monochromatic). In this paper, we propose the first learned passthrough method and assess its performance using a custom VR headset that contains a stereo pair of RGB cameras. Through both simulations and experiments, we demonstrate that our learned passthrough method delivers superior image quality compared to state-of-the-art methods, while meeting strict VR requirements for real-time, perspective-correct stereoscopic view synthesis over a wide field of view for desktop-connected headsets.
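To make the view-synthesis framing concrete, the sketch below (our own illustrative notation, not taken from the paper; the symbols $K_c$, $K_e$, $R$, $\mathbf{t}$, and $d$ are assumed) shows the standard depth-based reprojection that maps a passthrough-camera pixel to the virtual eye viewpoint:

$$\mathbf{u}_e \;\sim\; K_e \left( R \, d(\mathbf{u}_c) \, K_c^{-1} \, \tilde{\mathbf{u}}_c + \mathbf{t} \right),$$

where $\tilde{\mathbf{u}}_c$ is the homogeneous pixel coordinate in the camera image, $d(\mathbf{u}_c)$ is its scene depth, $K_c$ and $K_e$ are the camera and eye intrinsics, $(R, \mathbf{t})$ is the rigid camera-to-eye transform, and $\sim$ denotes equality up to perspective division. Errors or missing values in $d$, particularly near depth discontinuities and in regions visible to the eye but occluded from the camera, are exactly what produce the near-field and disocclusion artifacts described above.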