This paper proposes a unified vision-based manipulation framework that uses image contours of deformable and rigid objects. Instead of relying on human-defined cues, the robot automatically learns features from processed vision data. Our method simultaneously generates, from the same data, both the visual features and the interaction matrix that relates them to the robot control inputs. The feature vector and control commands are computed online and adaptively, with little data required for initialization. The method allows the robot to manipulate an object without knowing whether it is rigid or deformable. To validate our approach, we conduct numerical simulations and experiments with both deformable and rigid objects.
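To make the adaptive feature-to-input relation concrete, the sketch below shows one standard way such an interaction matrix can be estimated online: a rank-1 Broyden update combined with a classical visual-servoing control law. This is an illustrative toy example with a linear feature map and hypothetical function names, not the paper's actual algorithm; the initial matrix is only a rough estimate, mirroring the "little data for initialization" property.

```python
import numpy as np

def broyden_update(J, dq, ds):
    """Rank-1 Broyden update: adjust the interaction-matrix estimate J
    so that J @ dq better matches the observed feature change ds."""
    dq = dq.reshape(-1, 1)
    ds = ds.reshape(-1, 1)
    denom = float(dq.T @ dq)
    if denom < 1e-12:          # skip degenerate (near-zero) motions
        return J
    return J + (ds - J @ dq) @ dq.T / denom

def servo_step(J, s, s_star, gain=0.5):
    """Classical visual-servoing law: pseudo-inverse of the interaction
    matrix maps the feature error to a control input."""
    return -gain * np.linalg.pinv(J) @ (s - s_star)

# Toy setting: features depend linearly on the inputs, s = A @ q,
# but A is unknown to the controller (it only sees s).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))          # unknown true map
s_star = A @ rng.standard_normal(3)      # reachable target features
J = A + 0.2 * rng.standard_normal((4, 3))  # rough initial estimate
q = np.zeros(3)
for _ in range(50):
    s = A @ q
    dq = servo_step(J, s, s_star)
    ds = A @ (q + dq) - s                # observed feature change
    J = broyden_update(J, dq, ds)        # refine estimate online
    q = q + dq
final_error = np.linalg.norm(A @ q - s_star)
```

The controller never identifies the full map; it only refines the estimate along the directions it actually explores, which is what makes this family of methods usable when the object model (rigid or deformable) is unknown.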