Automating the manipulation and delivery of sutures during robotic surgery is a prominent problem at the frontier of surgical robotics: automating this task can significantly reduce surgeons' fatigue during tele-operated surgery and allow them to spend more time on higher-level clinical decision making. Accomplishing autonomous suturing and suture manipulation in the real world requires accurate suture thread localization and reconstruction, the process of creating a 3D shape representation of suture thread from 2D stereo camera surgical image pairs. This is a challenging problem because very little pixel information is available for the thread, and its appearance is sensitive to lighting and specular reflection. We present a suture thread reconstruction method that uses reliable keypoints and a Minimum Variation Spline (MVS) smoothing optimization to construct a 3D centerline from a segmented surgical image pair. This method performs comparably to previous suture thread reconstruction works, with the possible benefit of more accurate grasping point estimation. Our code and datasets will be available at: https://github.com/ucsdarclab/thread-reconstruction.
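To make the pipeline concrete, the sketch below illustrates one plausible reading of the reconstruction step: corresponding keypoints from a rectified stereo pair are triangulated into 3D and a smoothing spline is fit through them to obtain a centerline. This is only an illustrative stand-in under stated assumptions, not the authors' implementation; the function name, inputs, and the use of OpenCV triangulation plus a SciPy smoothing spline (as a crude proxy for the actual MVS optimization) are all assumptions.

```python
# Illustrative sketch, NOT the authors' method: triangulate matched stereo
# keypoints and fit a smoothing spline as a stand-in for the Minimum
# Variation Spline (MVS) optimization described in the abstract.
import numpy as np
import cv2
from scipy.interpolate import splprep, splev

def reconstruct_centerline(kps_left, kps_right, P_left, P_right, n_samples=100):
    """Hypothetical helper.
    kps_left, kps_right: (N, 2) corresponding keypoints ordered along the thread.
    P_left, P_right: 3x4 camera projection matrices of the stereo pair.
    Returns an (n_samples, 3) array of points sampled along a 3D centerline."""
    # Triangulate each keypoint pair into a homogeneous 3D point.
    pts_h = cv2.triangulatePoints(P_left, P_right,
                                  kps_left.T.astype(np.float64),
                                  kps_right.T.astype(np.float64))
    pts_3d = (pts_h[:3] / pts_h[3]).T  # (N, 3) Euclidean points

    # Fit a smoothing cubic spline through the triangulated keypoints; the
    # smoothing factor s trades data fidelity against curve smoothness
    # (only a rough proxy for a variation-minimizing objective).
    tck, _ = splprep(pts_3d.T, s=1e-4, k=3)
    u = np.linspace(0.0, 1.0, n_samples)
    return np.stack(splev(u, tck), axis=1)
```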