Recent approaches to rendering photorealistic views from a limited set of photographs have pushed the boundaries of our interactions with pictures of static scenes. The ability to recreate moments, that is, time-varying sequences, is perhaps an even more interesting scenario, but it remains largely unsolved. We introduce DCT-NeRF, a coordinate-based neural representation for dynamic scenes. DCT-NeRF learns smooth and stable trajectories over the input sequence for each point in space. This allows us to enforce consistency between any two frames in the sequence, which results in high-quality reconstruction, particularly in dynamic regions.
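To illustrate the core idea of parameterizing per-point motion with a DCT basis, the following is a minimal sketch: a trajectory over T frames is represented by a small number of DCT coefficients per spatial axis, which makes it smooth by construction. The function name, coefficient layout, and basis normalization here are hypothetical and chosen for illustration; the paper's exact formulation may differ.

```python
import numpy as np

def dct_trajectory(coeffs, num_frames):
    """Reconstruct a smooth trajectory from DCT coefficients.

    coeffs: (K, 3) array of K DCT coefficients per spatial axis
            (hypothetical parameterization, for illustration only).
    num_frames: number of time steps T in the input sequence.
    Returns: (T, 3) array of positions over time.
    """
    K, dims = coeffs.shape
    t = np.arange(num_frames)
    # DCT basis functions: cos(pi/T * (t + 0.5) * k), k = 0 .. K-1.
    # Low-frequency cosines keep the trajectory smooth and stable.
    basis = np.cos(np.pi / num_frames
                   * (t[:, None] + 0.5) * np.arange(K)[None, :])  # (T, K)
    return basis @ coeffs  # (T, dims)

# Example: two coefficients per axis over an 8-frame sequence.
traj = dct_trajectory(np.array([[0.0, 0.0, 0.0],
                                [1.0, 0.5, -0.5]]), num_frames=8)
print(traj.shape)  # (8, 3)
```

Because every point's trajectory is defined over the whole sequence, positions at any two frames can be compared directly, which is what enables the cross-frame consistency constraint mentioned above.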