Navigation inside luminal organs is an arduous task that requires non-intuitive coordination between the movement of the operator's hand and the information obtained from the endoscopic video. Tools that automate certain tasks could alleviate the physical and mental load of physicians during interventions, allowing them to focus on diagnosis and decision-making. In this paper, we present a synergistic solution for intraluminal navigation: a 3D-printed endoscopic soft robot that can move safely inside luminal structures. Visual servoing based on Convolutional Neural Networks (CNNs) is used to achieve the autonomous navigation task. The CNN is trained on phantom and in-vivo data to segment the lumen, and a model-less approach is presented to control the movement in constrained environments. The proposed robot is validated in anatomical phantoms with different path configurations. We analyze the movement of the robot using metrics such as task completion time, smoothness, steady-state error, and mean and maximum error. We show that our method navigates safely in hollow environments and under conditions different from those the network was originally trained on.
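To make the pipeline concrete, the sketch below shows one plausible form of the model-less visual-servoing step: the lumen-segmentation mask produced by the CNN is reduced to a centroid, and the offset between that centroid and the image center drives a simple proportional command for the bending actuators. This is a minimal illustration under stated assumptions; the function names, gain value, and clipping range are illustrative, not the paper's implementation.

import numpy as np

def lumen_centroid(mask: np.ndarray):
    """Centroid (x, y) of a binary lumen mask; None if the lumen is not visible."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

def servo_command(mask: np.ndarray, k_p: float = 0.5):
    """Proportional, model-less steering command from a segmentation mask.

    Returns (u_x, u_y) in [-1, 1]: normalized bending commands that steer the
    camera axis toward the lumen centroid. k_p is an illustrative gain.
    """
    h, w = mask.shape
    c = lumen_centroid(mask)
    if c is None:
        return 0.0, 0.0                      # hold position when the lumen is lost
    err_x = (c[0] - w / 2) / (w / 2)         # normalized horizontal error
    err_y = (c[1] - h / 2) / (h / 2)         # normalized vertical error
    return (float(np.clip(k_p * err_x, -1, 1)),
            float(np.clip(k_p * err_y, -1, 1)))

In a closed loop, this command would be recomputed for every segmented frame, so the controller needs no kinematic model of the soft robot, only the image-space error, which is what makes the approach robust in constrained and deformable environments.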
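The evaluation metrics listed above can all be computed from a logged centering-error signal. The sketch below assumes the error is sampled at a fixed rate, that the last fraction of samples defines the steady state, and that smoothness is quantified as negative mean squared jerk, one common convention that may differ from the paper's exact definitions.

import numpy as np

def trajectory_metrics(err: np.ndarray, dt: float, steady_frac: float = 0.2):
    """Summary metrics for a logged centering-error signal err[t].

    dt is the sampling period in seconds; the final steady_frac of samples
    are treated as the steady state. All definitions are illustrative.
    """
    completion_time = len(err) * dt
    mean_err = float(np.mean(np.abs(err)))
    max_err = float(np.max(np.abs(err)))
    n_ss = max(1, int(len(err) * steady_frac))
    steady_state_err = float(np.mean(np.abs(err[-n_ss:])))
    # Smoothness as negative mean squared jerk (higher = smoother).
    jerk = np.diff(err, n=3) / dt**3
    smoothness = -float(np.mean(jerk**2))
    return {
        "completion_time_s": completion_time,
        "mean_error": mean_err,
        "max_error": max_err,
        "steady_state_error": steady_state_err,
        "smoothness_neg_msj": smoothness,
    }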