This paper presents a new dual quaternion-based formulation for pose-based visual servoing. Extending our previous work on local contact moment (LoCoMo) based grasp planning, we demonstrate grasping of arbitrarily moving objects in 3D space. Compared with the conventional axis-angle parameterization, dual quaternions allow the visual servoing task to be designed in a more compact manner and provide robustness to manipulator singularities. Given an object point cloud, LoCoMo generates a ranked list of grasp and pre-grasp poses, which are used as desired poses for visual servoing. Whenever the object moves (tracked via visual markers), the desired pose is updated automatically. For this, capitalising on the dual quaternion spatial distance error, we propose a dynamic grasp re-ranking metric to select the best feasible grasp for the moving object. This allows the robot to readily track and grasp arbitrarily moving objects. We also exploit the robot's null space with our controller to avoid joint limits, achieving smooth trajectories while following moving objects. We evaluate the performance of the proposed visual servoing through simulation experiments of grasping various objects using a 7-axis robot fitted with a 2-finger gripper. The obtained results demonstrate the effectiveness of the proposed visual servoing.
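As an illustration of the dual quaternion machinery the abstract refers to, the following is a minimal Python sketch of a dual quaternion pose error and a scalar spatial distance of the kind that could drive grasp re-ranking. It is not the paper's implementation; the function names and the weights `w_rot`/`w_trans` are assumptions made for this example.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions in [w, x, y, z] order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qconj(q):
    """Quaternion conjugate."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def pose_to_dq(q_rot, t):
    """Unit dual quaternion (real, dual) from a rotation quaternion and translation.

    The dual part encodes translation: q_d = 0.5 * (0, t) * q_r.
    """
    q_r = q_rot / np.linalg.norm(q_rot)
    q_d = 0.5 * qmul(np.array([0.0, *t]), q_r)
    return q_r, q_d

def dq_error(dq_cur, dq_des):
    """Error dual quaternion dq_des * conj(dq_cur); identity when poses coincide."""
    qr_c, qd_c = dq_cur
    qr_d, qd_d = dq_des
    # Dual quaternion product with conjugated current pose:
    # (a_r + eps a_d)(b_r + eps b_d) = a_r b_r + eps (a_r b_d + a_d b_r)
    er = qmul(qr_d, qconj(qr_c))
    ed = qmul(qr_d, qconj(qd_c)) + qmul(qd_d, qconj(qr_c))
    return er, ed

def spatial_distance(dq_cur, dq_des, w_rot=1.0, w_trans=1.0):
    """Scalar pose distance: weighted rotation angle plus translation norm.

    A metric of this form could be evaluated against each candidate grasp
    pose to re-rank grasps as the target object moves (hypothetical usage).
    """
    er, ed = dq_error(dq_cur, dq_des)
    angle = 2.0 * np.arccos(np.clip(abs(er[0]), 0.0, 1.0))
    # Recover the translation of the error pose: t = 2 * q_d * conj(q_r)
    t = 2.0 * qmul(ed, qconj(er))[1:]
    return w_rot * angle + w_trans * np.linalg.norm(t)

# Example: distance between the identity pose and a pose rotated 90 deg
# about z and shifted along x.
cur = pose_to_dq(np.array([1.0, 0.0, 0.0, 0.0]), np.zeros(3))
des = pose_to_dq(np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)]),
                 np.array([0.1, 0.0, 0.0]))
print(spatial_distance(cur, des))
```

Under this sketch, dynamic re-ranking would amount to recomputing `spatial_distance` from the current end-effector pose to every LoCoMo grasp candidate each time the tracked object pose updates, and servoing toward the candidate with the smallest distance.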