Current stereo matching techniques are challenged by restricted searching space, occluded regions, and sheer size. While single-image depth estimation is spared from these challenges and can achieve satisfactory results with extracted monocular cues, the lack of stereoscopic relationships renders the monocular prediction less reliable on its own, especially in highly dynamic or cluttered environments. To address these issues in both scenarios, we present an optic-chiasm-inspired self-supervised binocular depth estimation method, wherein a vision transformer (ViT) with gated positional cross-attention (GPCA) layers is designed to enable feature-sensitive pattern retrieval between views while retaining the extensive context information aggregated through self-attention. Monocular cues from a single view are thereafter conditionally rectified by a blending layer with the retrieved pattern pairs. This crossover design is biologically analogous to the optic chiasm structure in the human visual system, hence the name ChiTransformer. Our experiments show that this architecture yields substantial improvements over state-of-the-art self-supervised stereo approaches by 11%, and can be used on both rectilinear and non-rectilinear (e.g., fisheye) images. The project is available at https://github.com/ISL-CV/ChiTransformer.
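To make the cross-attention idea concrete, the following is a minimal, single-head NumPy sketch of gated positional cross-attention between two views. It is an illustrative assumption rather than the paper's implementation: queries come from the master view, keys/values from the reference view, and a sigmoid gate blends content-based attention with a learned positional attention map (the function name, weight matrices, and gating form are all hypothetical simplifications).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_positional_cross_attention(x_master, x_ref, Wq, Wk, Wv,
                                     pos_logits, gate):
    """Hypothetical single-head sketch of a GPCA-style layer.

    x_master : (N, d) tokens of the master (query) view
    x_ref    : (N, d) tokens of the reference (key/value) view
    pos_logits : (N, N) learned positional attention logits
    gate     : scalar logit; sigmoid(gate) weights positional vs. content attention
    """
    q = x_master @ Wq                       # queries from the master view
    k = x_ref @ Wk                          # keys from the reference view
    v = x_ref @ Wv                          # values from the reference view
    d = q.shape[-1]
    content = softmax(q @ k.T / np.sqrt(d)) # content-based (feature) attention
    pos = softmax(pos_logits)               # position-based attention
    g = 1.0 / (1.0 + np.exp(-gate))         # sigmoid gate in [0, 1]
    attn = (1.0 - g) * content + g * pos    # gated blend of the two maps
    return attn @ v                         # retrieved patterns from the reference view
```

In a full model, one such layer per direction would retrieve pattern pairs across views, which the blending layer then uses to rectify the monocular depth cues.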