We introduce the simple idea of adaptive view planning to multi-view synthesis, aiming to improve both occlusion revelation and 3D consistency for single-view 3D reconstruction. Instead of producing an unordered set of views independently or simultaneously, we generate a sequence of views, leveraging temporal consistency to enhance 3D coherence. More importantly, our view sequence is not dictated by a predetermined, fixed camera setup. Instead, we compute an adaptive camera trajectory (ACT), forming an orbit, which seeks to maximize the visibility of occluded regions of the 3D object to be reconstructed. Once the best orbit is found, we feed it to a video diffusion model to generate novel views around the orbit, which can then be passed to any multi-view 3D reconstruction model to obtain the final result. Our multi-view synthesis pipeline is highly efficient since it involves no run-time training or optimization, only forward inferences with pre-trained models for occlusion analysis and multi-view synthesis. Our method predicts camera trajectories that effectively reveal occlusions and yield consistent novel views, significantly improving 3D reconstruction over SOTA alternatives on the unseen GSO dataset. Project Page: https://mingrui-zhao.github.io/ACT-R/
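To make the described pipeline concrete, below is a minimal sketch of the inference flow, assuming circular orbits parameterized by elevation and a discrete search over candidate elevations. All function and model names (`orbit_poses`, `select_orbit`, `occlusion_score`, the diffusion and reconstruction calls in the usage comment) are hypothetical placeholders for the pre-trained, forward-only components the abstract refers to, not the authors' actual API.

```python
# Minimal sketch of the ACT-R inference pipeline described in the abstract.
# The occlusion scorer, video diffusion model, and multi-view reconstructor
# are hypothetical stand-ins; only the orbit search is spelled out.

from typing import Callable, List, Sequence
import numpy as np

def orbit_poses(elevation_deg: float, n_views: int = 16,
                radius: float = 2.0) -> List[np.ndarray]:
    """Sample camera positions on a circular orbit at a fixed elevation."""
    el = np.deg2rad(elevation_deg)
    azimuths = np.linspace(0.0, 2.0 * np.pi, n_views, endpoint=False)
    return [radius * np.array([np.cos(el) * np.cos(az),
                               np.cos(el) * np.sin(az),
                               np.sin(el)])
            for az in azimuths]

def select_orbit(occlusion_score: Callable[[np.ndarray], float],
                 candidate_elevations: Sequence[float]) -> List[np.ndarray]:
    """Pick the orbit whose views maximize total visibility of occluded
    regions, as estimated by a pre-trained occlusion-analysis model."""
    def orbit_score(elev: float) -> float:
        return sum(occlusion_score(p) for p in orbit_poses(elev))
    best_elev = max(candidate_elevations, key=orbit_score)
    return orbit_poses(best_elev)

# Usage sketch (all three models are assumed pre-trained, forward-only):
#   orbit = select_orbit(occlusion_model.score_view,
#                        candidate_elevations=range(-30, 61, 15))
#   views = video_diffusion.generate(input_image, camera_trajectory=orbit)
#   mesh  = multiview_reconstructor(views)
```

Since the search only ranks a small set of candidate orbits with forward passes of a pre-trained scorer, this matches the abstract's claim of no run-time training or optimization.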