Distance estimation from vision is fundamental to a myriad of robotic applications such as navigation, manipulation, and planning. Inspired by the mammalian visual system, which gazes at specific objects, we develop two novel constraints relating time-to-contact, acceleration, and distance, which we call the $\tau$-constraint and $\Phi$-constraint. They allow an active (moving) camera to estimate depth efficiently and accurately while using only a small portion of the image. The constraints are applicable to range sensing, sensor fusion, and visual servoing. We validate the proposed constraints with two experiments. The first applies both constraints in a trajectory estimation task with a monocular camera and an Inertial Measurement Unit (IMU). Our methods achieve 30-70% lower average trajectory error while running 25$\times$ and 6.2$\times$ faster than the popular Visual-Inertial Odometry methods VINS-Mono and ROVIO, respectively. The second experiment demonstrates that when the constraints are used for feedback with efference copies, the eigenvalues of the resulting closed-loop system are invariant to scaling of the applied control signal. We believe these results indicate the potential of the $\tau$- and $\Phi$-constraints as the basis of robust and efficient algorithms for a multitude of robotic applications.