Vision-based control has become a key research direction for supplying state feedback when controlling continuum robots under physical sensing limitations. Traditional visual servoing requires extracting and tracking features in the images captured by the imaging device, which limits the controller's efficiency. We hypothesize that employing deep learning models in a direct visual servoing scheme can resolve this issue by eliminating the tracking requirement and controlling the continuum robot without an exact system model. In this paper, we control a single-section tendon-driven continuum robot using a modified VGG-16 deep learning network and an eye-in-hand direct visual servoing approach. The proposed algorithm is first developed in Blender, using only a single input image of the target, and then implemented on a real robot. The convergence and accuracy of the results in normal, shadowed, and occluded scenes, measured by the sum of absolute differences between the normalized target and captured images, demonstrate the effectiveness and robustness of the proposed controller.
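For concreteness, the following is a minimal sketch, assuming a PyTorch/torchvision setup, of the two quantities the abstract references: a truncated VGG-16 feature residual that could drive a direct visual servoing loop, and the sum-of-absolute-differences (SAD) error between the normalized target and captured images used to assess convergence. The layer cut, function names, and normalization are illustrative assumptions, not the paper's exact implementation.

```python
import torch
from torchvision import models

# Truncated VGG-16 feature extractor. The cut at layer 16 is an
# assumption; the paper modifies VGG-16 but the exact truncation
# point is not given in the abstract.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()

def normalize(img: torch.Tensor) -> torch.Tensor:
    """Rescale an image tensor to [0, 1] (assumed normalization)."""
    lo, hi = img.amin(), img.amax()
    return (img - lo) / (hi - lo + 1e-8)

def sad_error(target: torch.Tensor, current: torch.Tensor) -> float:
    """Sum of absolute differences between normalized images,
    the convergence measure reported in the abstract."""
    return (normalize(target) - normalize(current)).abs().sum().item()

@torch.no_grad()
def feature_error(target: torch.Tensor, current: torch.Tensor) -> torch.Tensor:
    """Deep-feature residual between target and current views.
    Mapping this residual to tendon actuation is robot-specific
    and omitted here."""
    return vgg(target.unsqueeze(0)) - vgg(current.unsqueeze(0))
```

In such a scheme, the controller would iterate until `sad_error` falls below a threshold, using `feature_error` in place of hand-crafted tracked features; this is a sketch of the general direct visual servoing pattern, not the authors' published code.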