The stereo-matching problem, i.e., matching corresponding features in two different views to reconstruct depth, is solved efficiently in biology, yet it remains the computational bottleneck for classical machine vision approaches. By exploiting the properties of event cameras, recently proposed Spiking Neural Network (SNN) architectures for stereo vision have the potential to simplify the stereo-matching problem. Several solutions that combine event cameras with spike-based neuromorphic processors already exist; however, they are either simulated on digital hardware or tested only on simplified stimuli. In this work, we use the Dynamic Vision Sensor 3D Human Pose Dataset (DHP19) to validate, with real-world data, a brain-inspired event-based stereo-matching architecture implemented on a mixed-signal neuromorphic processor. Our experiments show that this SNN architecture, composed of coincidence detectors and disparity-sensitive neurons, provides a coarse estimate of the input disparity instantaneously, thereby detecting the presence of a stimulus moving in depth in real time.
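To make the abstract's architectural idea concrete, the sketch below illustrates, in plain Python, the principle of coincidence detection followed by disparity-sensitive pooling. It is not the authors' implementation or the on-chip SNN; the event format, the temporal window, and the disparity range (`COINCIDENCE_WINDOW_US`, `MAX_DISPARITY`) are illustrative assumptions, and the spiking dynamics are replaced by simple counters.

```python
# A minimal sketch (assumptions, not the paper's method) of event-based stereo
# matching via coincidence detection and disparity-sensitive accumulation.
# Assumes rectified event streams where each event is (timestamp_us, row, column).

from collections import defaultdict

COINCIDENCE_WINDOW_US = 1000   # assumed temporal window for a "coincidence"
MAX_DISPARITY = 32             # assumed disparity range in pixels

def coarse_disparity(left_events, right_events):
    """Return the disparity value that collects the most left/right coincidences."""
    # Index right events by row so each left event is only compared against
    # candidates on the same epipolar line.
    right_by_row = defaultdict(list)
    for t, r, c in right_events:
        right_by_row[r].append((t, c))

    # Disparity-sensitive "neurons" modeled as simple vote counters.
    votes = defaultdict(int)
    for t_l, r, c_l in left_events:
        for t_r, c_r in right_by_row[r]:
            disparity = c_l - c_r
            if 0 <= disparity <= MAX_DISPARITY and abs(t_l - t_r) <= COINCIDENCE_WINDOW_US:
                votes[disparity] += 1   # a coincidence detector "spikes"

    return max(votes, key=votes.get) if votes else None
```

Consistent with the abstract, such a scheme yields only a coarse, global disparity estimate rather than a dense disparity map, which is sufficient for detecting the presence of a stimulus moving in depth.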