We present a portable multiscopic camera system with a dedicated model for novel view and time synthesis in dynamic scenes. Our goal is to render high-quality images of a dynamic scene from any viewpoint at any time using our portable multiscopic camera. To achieve this, we build a physical multiscopic rig equipped with five cameras and use its captures to train a neural radiance field (NeRF) over both the temporal and spatial domains of dynamic scenes. Our model maps a 6D coordinate (3D spatial position, 1D temporal coordinate, and 2D viewing direction) to a view-dependent, time-varying emitted radiance and a volume density. Volume rendering is then applied to produce a photo-realistic image at any specified camera pose and time. To improve the robustness of our physical camera system, we propose a camera parameter optimization module and a temporal frame interpolation module that promote information propagation across time. Experiments on both real-world and synthetic datasets show that our approach outperforms alternative solutions both qualitatively and quantitatively. Our code and dataset are available at https://yuenfuilau.github.io.
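The volume rendering step mentioned above can be sketched as standard NeRF-style alpha compositing along a ray: given per-sample densities and colors predicted by the model, accumulate transmittance-weighted colors into a pixel value. This is a minimal generic sketch, not the authors' implementation; the function name and sampling inputs are illustrative.

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Composite N ray samples into one RGB pixel.

    sigmas: (N,) volume densities predicted by the radiance field
    colors: (N, 3) emitted RGB radiance per sample
    deltas: (N,) distances between adjacent samples along the ray
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance up to sample i: T_i = prod_{j < i} (1 - alpha_j)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # Contribution weight of each sample, then weighted color sum
    weights = trans * alphas
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights
```

In a dynamic-scene NeRF, `sigmas` and `colors` would come from evaluating the network at the 6D inputs (position, time, view direction) sampled along each camera ray; the weights also sum to at most one, which is what makes the composite a valid expected color.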