Classically, real-time rendering relies on rasterization techniques to meet the constraint of interactive frame rates. However, such techniques do not produce results as realistic as those of ray tracing approaches. Hence, hybrid rendering has emerged, augmenting rasterization with ray tracing to improve graphics fidelity in real time. We explore a distributed rendering approach that incorporates real-time hybrid rendering into metaverse experiences for immersive graphics. On standalone extended reality (XR) devices, such ray-traced graphics are otherwise feasible only through purely cloud-based remote rendering systems, which rely on low-latency networks to transmit ray-traced frames in response to interactive user input. Under high network latency, remote rendering may fail to maintain interactive frame rates on the client, adversely affecting the user experience. We adopt hybrid rendering via a distributed rendering approach, integrating ray tracing on powerful remote hardware with raster-based rendering on user access devices. With this hybrid approach, our technique can help standalone XR devices achieve ray tracing-incorporated graphics while maintaining interactive frame rates even under high-latency network conditions.
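The latency-tolerance argument can be sketched as a client-side compositing loop: the client rasterizes every frame locally while a remote server streams ray-traced effects asynchronously, and each frame composites whatever ray-traced data has most recently arrived, so a late or missing reply never stalls presentation. The sketch below is a minimal illustration of that general pattern, not the paper's actual system; the buffer contents, the additive composite, and the simulated round-trip times are all assumptions introduced for illustration.

```python
import time
from collections import deque

import numpy as np

FRAME_W, FRAME_H = 320, 180          # hypothetical frame size
TARGET_FRAME_TIME = 1.0 / 72.0       # e.g. a 72 Hz XR display

def rasterize_local(frame_idx):
    """Stand-in for the client's raster pass (e.g. direct lighting)."""
    return np.full((FRAME_H, FRAME_W, 3), 0.5, dtype=np.float32)

def remote_ray_traced_effects(frame_idx):
    """Stand-in for the server's ray-traced pass (e.g. reflections, GI)."""
    return np.full((FRAME_H, FRAME_W, 3), 0.1, dtype=np.float32)

# Simulated network: ray-traced buffers arrive after a variable delay.
in_flight = deque()   # entries: (arrival_time, frame_idx, buffer)

def client_frame(frame_idx, latest_rt):
    now = time.monotonic()
    # "Send" a render request; the reply arrives one simulated RTT later.
    rtt = 0.120 if frame_idx % 30 == 0 else 0.015   # occasional latency spike
    in_flight.append((now + rtt, frame_idx, remote_ray_traced_effects(frame_idx)))

    # Collect any server replies that have arrived by now.
    while in_flight and in_flight[0][0] <= now:
        _, rt_idx, rt_buf = in_flight.popleft()
        latest_rt = (rt_idx, rt_buf)

    frame = rasterize_local(frame_idx)
    if latest_rt is not None:
        # Composite the most recent ray-traced buffer, even if it is a few
        # frames stale; staleness degrades quality, not the frame rate.
        frame = np.clip(frame + latest_rt[1], 0.0, 1.0)
    return frame, latest_rt

def main():
    latest_rt = None
    for frame_idx in range(120):
        start = time.monotonic()
        frame, latest_rt = client_frame(frame_idx, latest_rt)
        # Sleep off the remainder of the frame budget (presentation stub).
        time.sleep(max(0.0, TARGET_FRAME_TIME - (time.monotonic() - start)))

if __name__ == "__main__":
    main()
```

The design point the sketch captures is that the local raster pass alone determines the frame time: network latency only controls how fresh the composited ray-traced contribution is, which is why the client can hold interactive frame rates even when round-trip times spike.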