Visual servoing has been used effectively to move a robot to a specific target location or to track a recorded demonstration. It requires no manual programming, but it is typically limited to settings where a single demonstration corresponds to a single environment state. We propose a modular approach that extends visual servoing to scenarios with multiple demonstration sequences. We call this conditional servoing, as the next demonstration is chosen conditioned on the robot's current observation. This method offers an appealing strategy for multi-step problems, as individual demonstrations can be flexibly combined into a control policy. We propose several selection functions and compare them on a shape-sorting task in simulation. As the reprojection error yields the best overall results, we implement this selection function on a real robot and show the efficacy of the proposed conditional servoing. For videos of our experiments, please see our project page: https://lmb.informatik.uni-freiburg.de/projects/conditional_servoing/
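The core idea of the selection step can be sketched as follows. This is a minimal, hypothetical illustration of choosing the next demonstration by reprojection error, not the authors' actual implementation: it assumes each demonstration is represented by a set of image keypoints matched against keypoints in the current observation, and all names (`reprojection_error`, `select_demonstration`) are illustrative.

```python
# Hypothetical sketch of demonstration selection via reprojection error.
# Assumes keypoints have already been matched between the current
# observation and each candidate demonstration frame (N x 2 arrays).
import numpy as np

def reprojection_error(obs_points: np.ndarray, demo_points: np.ndarray) -> float:
    """Mean Euclidean distance between matched image keypoints."""
    return float(np.mean(np.linalg.norm(obs_points - demo_points, axis=1)))

def select_demonstration(obs_points: np.ndarray, demos: list) -> int:
    """Return the index of the demonstration whose keypoints best match
    the current observation, i.e. the one with the lowest error."""
    errors = [reprojection_error(obs_points, d) for d in demos]
    return int(np.argmin(errors))
```

Given such a selector, the servoing loop would track the chosen demonstration until completion, then re-run the selection on the new observation to pick the next sequence.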