For soft continuum arms, visual servoing is a popular control strategy that relies on visual feedback to close the control loop. However, robust visual servoing is challenging as it requires reliable feature extraction from the image as well as accurate control models and sensors to perceive the shape of the arm, both of which can be hard to implement in a soft robot. This letter circumvents these challenges by presenting a deep neural network-based method to perform smooth and robust 3D positioning tasks on a soft arm by visual servoing, using a camera mounted at the distal end of the arm. A convolutional neural network is trained to predict the actuations required to achieve the desired pose in a structured environment. Integrated and modular approaches for estimating the actuations from the image are proposed and experimentally compared. A proportional control law is implemented to reduce the error between the desired and current image as seen by the camera. The model, together with the proportional feedback control, makes the described approach robust to several variations such as new targets, lighting, loads, and diminution of the soft arm. Furthermore, the model lends itself to being transferred to a new environment with minimal effort.
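To make the control scheme concrete, the following is a minimal sketch (not the authors' released code) of the modular idea described above: a CNN takes the current and desired camera images and predicts the actuation adjustment, which a proportional law scales at each cycle. The network architecture, the class name `ActuationNet`, the gain `K_P`, and the camera/arm stubs `grab_frame` and `apply_actuation` are illustrative assumptions, not details from the letter.

```python
import torch
import torch.nn as nn

class ActuationNet(nn.Module):
    """Hypothetical CNN: (current, desired) image pair -> actuation delta."""
    def __init__(self, n_actuators: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, n_actuators)

    def forward(self, current: torch.Tensor, desired: torch.Tensor) -> torch.Tensor:
        # Stack the two RGB images along the channel axis (3 + 3 = 6 channels).
        return self.head(self.features(torch.cat([current, desired], dim=1)))

def grab_frame() -> torch.Tensor:
    # Stand-in for the eye-in-hand camera at the distal end of the arm.
    return torch.rand(1, 3, 128, 128)

def apply_actuation(u: torch.Tensor) -> None:
    # Stand-in for the soft arm's actuation interface.
    print("actuation command:", u.squeeze().tolist())

model = ActuationNet()
model.eval()
K_P = 0.2                                  # proportional gain (assumed value)
desired = grab_frame()                     # image observed at the target pose

with torch.no_grad():
    for step in range(5):                  # closed-loop servoing iterations
        current = grab_frame()
        delta_u = model(current, desired)  # CNN-predicted actuation error
        apply_actuation(K_P * delta_u)     # proportional step toward the target
```

In this reading, the proportional feedback compensates for the CNN's prediction error over repeated iterations, which is consistent with the abstract's claim of robustness to new targets, lighting, and loads.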