We introduce Omni-LOS, a neural computational imaging method for holistic shape reconstruction (HSR) of complex objects using a Single-Photon Avalanche Diode (SPAD)-based time-of-flight sensor. As illustrated in Fig. 1, our method enables the new capability of reconstructing the near-$360^\circ$ surrounding geometry of an object from a single scan spot. In such a scenario, traditional line-of-sight (LOS) imaging methods see only the front of the object and typically fail to recover the occluded back regions. Inspired by recent advances in non-line-of-sight (NLOS) imaging techniques, which have proven capable of reconstructing occluded objects, Omni-LOS marries LOS and NLOS imaging, leveraging their complementary advantages to jointly recover the holistic shape of the object from a single scan position. The core of our method is to place the object near diffuse walls and augment the LOS scan of the front view with NLOS scans from the surrounding walls, which serve as virtual ``mirrors'' that redirect light toward the object. Instead of recovering the LOS and NLOS signals separately, we adopt an implicit neural network to represent the object, analogous to NeRF and NeTF. While LOS transients are measured along straight rays and NLOS transients over spherical wavefronts, we derive differentiable ray propagation models that simultaneously model both types of transient measurements, so that the NLOS reconstruction also takes into account the direct LOS measurements and vice versa. We further develop a proof-of-concept Omni-LOS hardware prototype for real-world validation. Comprehensive experiments on various wall settings demonstrate that Omni-LOS successfully resolves shape ambiguities caused by occlusions, achieves high-fidelity 3D scan quality, and recovers objects of various scales and complexity.
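To make the geometric distinction between the two measurement types concrete, the sketch below contrasts the round-trip path lengths behind LOS and NLOS transients: an LOS transient accumulates photons along a straight ray to the surface and back, whereas a confocal NLOS transient accumulates photons over a three-bounce path through a wall point, i.e., over a spherical wavefront centered at that wall point. This is a minimal illustration, not the paper's forward model; the function names, the confocal three-bounce assumption, and the 4 ps bin width are our own assumptions for illustration.

```python
import numpy as np

C = 3e8  # speed of light in m/s

def los_path_length(scan_spot, surface_pt):
    # LOS: light travels straight from the sensor to the surface
    # point and back along the same ray.
    return 2.0 * np.linalg.norm(surface_pt - scan_spot)

def nlos_path_length(scan_spot, wall_pt, surface_pt):
    # Confocal NLOS (assumed here): light is relayed off a diffuse
    # wall point before and after reaching the hidden surface point,
    # so a fixed path length defines a sphere of candidate points
    # centered at the wall point.
    return 2.0 * (np.linalg.norm(wall_pt - scan_spot)
                  + np.linalg.norm(surface_pt - wall_pt))

def transient_bin(path_length, bin_width=4e-12):
    # Map a round-trip path length to a SPAD histogram bin index
    # (bin width in seconds is an assumed value).
    return int(path_length / C / bin_width)
```

For a point 1 m in front of the sensor, the LOS round trip is 2 m; relaying the same geometry through a wall point adds the extra wall-to-object legs, which is why the two transient types must be modeled with different, but jointly differentiable, propagation models.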