3D Time-of-Flight (ToF) image sensors are widely used in applications such as self-driving cars, Augmented Reality (AR), and robotics. When implemented with Single-Photon Avalanche Diodes (SPADs), compact, array-format sensors can be built that provide accurate depth maps over long distances without the need for mechanical scanning. However, array sizes tend to be small, leading to low lateral resolution, which, combined with the low Signal-to-Noise Ratio (SNR) under high ambient illumination, may make scene interpretation difficult. In this paper, we use synthetic depth sequences to train a 3D Convolutional Neural Network (CNN) for denoising and upscaling (×4) depth data. Experimental results based on both synthetic and real ToF data demonstrate the effectiveness of the scheme. With GPU acceleration, frames are processed at more than 30 frames per second, making the approach suitable for the low-latency imaging required for obstacle avoidance.
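The abstract does not specify the network architecture, so the following is only a minimal PyTorch sketch of one plausible realisation: a 3D CNN operating on sequences of low-resolution depth frames, with a sub-pixel (pixel-shuffle) head producing the ×4 spatial upscaling. All layer counts, channel widths, and the bilinear residual connection are illustrative assumptions, not details from the paper.

    # Sketch (assumed architecture): a 3D CNN that jointly denoises and
    # 4x-upscales a sequence of SPAD depth frames shaped (batch, 1, T, H, W).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DepthDenoiseUpscale(nn.Module):
        def __init__(self, channels: int = 32):
            super().__init__()
            # 3D convolutions aggregate spatio-temporal context across
            # frames, which helps suppress photon noise under high
            # ambient illumination.
            self.features = nn.Sequential(
                nn.Conv3d(1, channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv3d(channels, channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv3d(channels, channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            )
            # Predict 4x4 = 16 sub-pixel values per coarse pixel; these are
            # rearranged spatially by a per-frame pixel shuffle.
            self.to_subpixels = nn.Conv3d(channels, 16, kernel_size=3, padding=1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, _, t, h, w = x.shape
            f = self.features(x)
            s = self.to_subpixels(f)                       # (b, 16, t, h, w)
            s = s.permute(0, 2, 1, 3, 4).reshape(b * t, 16, h, w)
            up = F.pixel_shuffle(s, upscale_factor=4)      # (b*t, 1, 4h, 4w)
            # Residual connection over a bilinearly upscaled copy of the
            # input (an assumed design choice, not stated in the paper).
            base = F.interpolate(
                x.permute(0, 2, 1, 3, 4).reshape(b * t, 1, h, w),
                scale_factor=4, mode="bilinear", align_corners=False)
            out = (base + up).reshape(b, t, 1, 4 * h, 4 * w)
            return out.permute(0, 2, 1, 3, 4)              # (b, 1, t, 4h, 4w)

    # Example: an 8-frame sequence of 64x64 depth maps -> 256x256 output.
    if __name__ == "__main__":
        net = DepthDenoiseUpscale()
        x = torch.rand(1, 1, 8, 64, 64)
        print(net(x).shape)  # torch.Size([1, 1, 8, 256, 256])

A pixel-shuffle head is chosen here over transposed convolutions because it tends to avoid checkerboard artefacts at fixed integer scale factors such as ×4; in a real-time setting it also keeps all convolutions at the low input resolution, consistent with the reported >30 frames per second on a GPU.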