We present a robust, markerless, image-based visual servoing method that enables precise robot control in 1, 3, and 5 degrees of freedom without hand–eye or camera calibration. The system observes the workspace with two cameras and detects features in the camera images using a combination of classical image-processing algorithms and deep-learning-based methods. The only restriction on the placement of the two cameras is that the relevant image features must be visible in both views. The system enables precise robot-tool-to-workspace interactions even when the physical setup is disturbed, for example when a camera is moved or the workspace shifts during manipulation. We demonstrate and evaluate the usefulness of the visual servoing method in two applications: the calibration of a micro-robotic system that dissects mosquitoes for the automated production of a malaria vaccine, and a macro-scale manipulation system that fastens screws using a UR10 robot. Evaluation results indicate that our image-based visual servoing method achieves human-like manipulation accuracy in challenging setups even without camera calibration.