With the advent of deep learning, estimating depth from a single RGB image has received considerable attention, as it can empower many applications ranging from path planning in robotics to computational cinematography. Nevertheless, while the predicted depth maps are fairly reliable overall, the estimates around object discontinuities remain unsatisfactory. This can be attributed to the fact that the convolutional operator naturally aggregates features across object discontinuities, resulting in smooth transitions rather than clear boundaries. To circumvent this issue, we propose a novel convolutional operator that is explicitly tailored to avoid feature aggregation across different object parts. In particular, our method estimates per-part depth values by means of superpixels. The proposed operator, which we dub "Instance Convolution", then considers each object part individually, on the basis of the estimated superpixels. Our evaluation on the NYUv2 and iBims datasets clearly demonstrates the superiority of Instance Convolutions over classical convolutions for estimating depth around occlusion boundaries, while producing comparable results elsewhere. Code will be made publicly available upon acceptance.
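Although the abstract does not give the operator's exact formulation, the core masking idea can be illustrated with a short, hypothetical PyTorch sketch: at each spatial location, the kernel aggregates only those neighbouring features whose superpixel label matches that of the centre pixel. The function name `instance_conv2d`, the superpixel input `sp`, and the renormalisation scheme below are all assumptions for illustration, not the paper's reference implementation.

```python
# Hypothetical sketch of a superpixel-masked convolution (assumption,
# not the paper's reference implementation).
import torch
import torch.nn.functional as F

def instance_conv2d(x, weight, sp):
    """Convolution restricted to the centre pixel's superpixel.

    x:      (B, C, H, W) input features
    weight: (Cout, C, k, k) kernel
    sp:     (B, 1, H, W) integer superpixel labels (e.g. from SLIC)
    """
    B, C, H, W = x.shape
    Cout, _, k, _ = weight.shape
    pad = k // 2  # "same" padding

    # Unfold features and labels into k*k neighbourhoods per location.
    xu = F.unfold(x, k, padding=pad)            # (B, C*k*k, H*W)
    su = F.unfold(sp.float(), k, padding=pad)   # (B, k*k, H*W)
    centre = sp.float().view(B, 1, H * W)

    # Keep only neighbours belonging to the centre pixel's superpixel.
    # (Zero padding may spuriously match label 0; a full implementation
    #  would mask the padded border explicitly.)
    mask = (su == centre).float()               # (B, k*k, H*W)

    # Renormalise so locations with few valid neighbours keep a
    # comparable response magnitude (one possible design choice).
    mask = mask * (k * k) / mask.sum(dim=1, keepdim=True).clamp(min=1.0)

    xu = xu.view(B, C, k * k, H * W) * mask.unsqueeze(1)
    out = torch.einsum('bcph,ocp->boh', xu, weight.view(Cout, C, k * k))
    return out.view(B, Cout, H, W)
```

Renormalising by the number of in-part neighbours keeps the response magnitude comparable at locations where most of the kernel window falls outside the current superpixel, such as thin structures near occlusion boundaries; the actual operator may handle this differently.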