Indirect Time-of-Flight (iToF) cameras are a promising depth sensing technology. However, they are prone to errors caused by multi-path interference (MPI) and low signal-to-noise ratio (SNR). Traditional methods, after denoising, mitigate MPI by estimating a transient image that encodes depths. Recently, data-driven methods that jointly denoise and mitigate MPI have become state-of-the-art without using the intermediate transient representation. In this paper, we propose to revisit the transient representation. Using data-driven priors, we interpolate/extrapolate iToF frequencies and use them to estimate the transient image. Given that direct ToF (dToF) sensors capture transient images, we name our method iToF2dToF. The transient representation is flexible. It can be integrated with different rule-based depth sensing algorithms that are robust to low SNR and can deal with ambiguous scenarios that arise in practice (e.g., specular MPI, optical cross-talk). We demonstrate the benefits of iToF2dToF over previous methods in real depth sensing scenarios.
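To make the transient idea concrete, below is a minimal, hypothetical sketch (not the paper's learned model): each iToF phasor is treated as a Fourier-domain sample of the scene transient, so densely sampled frequencies can be summed back into a time-domain transient whose first strong peak yields depth, dToF-style. Function names, the sampling grid, and the single-path toy scene are illustrative assumptions.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def transient_from_frequencies(freqs_hz, phasors, num_bins, bin_width_s):
    """Reconstruct a time-domain transient from iToF correlation measurements,
    treating each complex phasor as a Fourier coefficient of the transient
    at its modulation frequency (illustrative inverse-Fourier-style sum)."""
    t = np.arange(num_bins) * bin_width_s
    transient = np.zeros(num_bins)
    for f, p in zip(freqs_hz, phasors):
        # accumulate the real part of each frequency component
        transient += np.real(p * np.exp(2j * np.pi * f * t))
    return np.clip(transient, 0.0, None)

def depth_from_transient(transient, bin_width_s):
    """dToF-style depth: locate the strongest return (peak bin) and convert
    its round-trip time of flight to distance."""
    peak_bin = int(np.argmax(transient))
    return C * peak_bin * bin_width_s / 2.0

# Toy usage: a single return at 1.5 m (10 ns round-trip delay).
freqs = np.arange(20e6, 1e9, 20e6)           # hypothetical dense frequency grid
tau = 2 * 1.5 / C                            # round-trip time of flight
phasors = np.exp(-2j * np.pi * freqs * tau)  # ideal noise-free single-path phasors
transient = transient_from_frequencies(freqs, phasors,
                                        num_bins=512, bin_width_s=1e-10)
print(depth_from_transient(transient, bin_width_s=1e-10))  # ~1.5 m
```

In practice only a few low modulation frequencies are measured and are corrupted by MPI and noise; the role of the proposed data-driven priors is to predict the dense, clean frequency samples that such a peak-finding (or other rule-based) depth estimator needs.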