To navigate an environment safely and autonomously, robots must accurately estimate where obstacles are and how they move. Instead of using expensive traditional 3D sensors, we explore the use of a much cheaper, faster, and higher-resolution alternative: programmable light curtains. Light curtains are controllable depth sensors that sense only along a surface that the user selects. We adapt a probabilistic method based on particle filters and occupancy grids to explicitly estimate the position and velocity of 3D points in the scene using the partial measurements made by light curtains. The central challenge is to decide where to place the light curtain to perform this task accurately. We propose multiple curtain placement strategies guided by maximizing information gain and verifying predicted object locations. We then combine these strategies using an online learning framework. We propose a novel self-supervised reward function that evaluates the accuracy of current velocity estimates using future light curtain placements. We use a multi-armed bandit framework to intelligently switch between placement policies in real time, outperforming fixed policies. We develop a full-stack navigation system that uses position and velocity estimates from light curtains for downstream tasks such as localization, mapping, path planning, and obstacle avoidance. This work paves the way for controllable light curtains to accurately, efficiently, and purposefully perceive and navigate complex and dynamic environments. Project website: https://siddancha.github.io/projects/active-velocity-estimation/
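A minimal sketch of how a multi-armed bandit could switch between light-curtain placement policies online, as the abstract describes. This is not the authors' implementation: the Exp3 update, the policy names ("info_gain", "verify"), and the reward shape are assumptions for illustration. The actual self-supervised reward in the paper scores current velocity estimates against measurements from future light curtain placements; here that is stubbed out with a hypothetical depth-agreement score.

```python
import numpy as np

class Exp3PolicySwitcher:
    """Exp3 bandit over a fixed set of curtain-placement policies (illustrative)."""

    def __init__(self, num_policies, gamma=0.1):
        self.num_policies = num_policies
        self.gamma = gamma                      # exploration rate
        self.weights = np.ones(num_policies)    # one weight per policy

    def probabilities(self):
        w = self.weights / self.weights.sum()
        return (1.0 - self.gamma) * w + self.gamma / self.num_policies

    def select(self):
        """Sample which placement policy drives the next curtain."""
        p = self.probabilities()
        return int(np.random.choice(self.num_policies, p=p)), p

    def update(self, policy_idx, reward, probs):
        """Importance-weighted Exp3 update; reward assumed to lie in [0, 1]."""
        est = reward / probs[policy_idx]
        self.weights[policy_idx] *= np.exp(self.gamma * est / self.num_policies)


def self_supervised_reward(predicted_depths, measured_depths, tol=0.2):
    """Hypothetical stand-in for the paper's reward: fraction of curtain rays
    whose measured depth agrees with the depth predicted by the current
    position/velocity estimates."""
    agree = np.abs(predicted_depths - measured_depths) < tol
    return float(agree.mean())


# Usage sketch with two hypothetical placement policies: maximize information
# gain ("info_gain") vs. verify predicted object locations ("verify").
switcher = Exp3PolicySwitcher(num_policies=2)
for step in range(100):
    idx, probs = switcher.select()
    # In a real system, the chosen policy would place a curtain and return
    # depth measurements; here predicted/measured depths are simulated.
    predicted = np.random.uniform(1.0, 10.0, size=128)
    measured = predicted + np.random.normal(0.0, 0.3, size=128)
    r = self_supervised_reward(predicted, measured)
    switcher.update(idx, r, probs)
```

Because the reward is computed purely from the sensor's own future measurements, this kind of switcher needs no ground-truth labels, which is what allows the policy selection to adapt online.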