Autonomy in robotic surgery remains challenging in unstructured environments, especially when interacting with deformable soft tissues. The main difficulty lies in designing model-based control methods that account for deformation dynamics during tissue manipulation. Prior work in vision-based perception can capture geometric changes within the scene; however, model-based controllers that integrate dynamic properties, a more accurate and safer approach, have not been studied before. Given the mechanical coupling between the robot and the environment, it is crucial to develop a registered, simulated dynamical model. In this work, we propose an online, continuous, real-to-sim registration method to bridge 3D visual perception with position-based dynamics (PBD) modeling of tissues. The PBD method simulates soft-tissue dynamics as well as rigid tool interactions for model-based control. Meanwhile, a vision-based strategy generates 3D reconstructed point-cloud surfaces from real-world manipulation, which are used to register and update the simulation. To verify this real-to-sim approach, tissue experiments were conducted on the da Vinci Research Kit. Our real-to-sim approach reduces registration error online, which is especially important for safety during autonomous control, and it achieves higher accuracy in occluded areas than fusion-based reconstruction.
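For readers unfamiliar with position-based dynamics, the core idea is to predict particle positions from velocities and then iteratively project geometric constraints (e.g., distance constraints between neighboring tissue particles) directly in position space. The sketch below is an illustrative minimal 2D PBD step in the style of Müller et al.; it is not the paper's implementation, and all function and variable names here are hypothetical.

```python
import math

def pbd_step(positions, velocities, inv_masses, constraints, dt,
             iterations=10, gravity=-9.8):
    """One illustrative PBD step: predict, project constraints, update velocities.

    positions/velocities: lists of (x, y) tuples; inv_masses: 0 pins a particle;
    constraints: list of (i, j, rest_length) distance constraints.
    """
    # 1) Predict positions with explicit Euler (pinned particles ignore gravity).
    pred = []
    for (x, y), (vx, vy), w in zip(positions, velocities, inv_masses):
        vy2 = vy + (gravity * dt if w > 0 else 0.0)
        pred.append([x + vx * dt, y + vy2 * dt])
    # 2) Gauss-Seidel projection of distance constraints in position space.
    for _ in range(iterations):
        for i, j, rest in constraints:
            wi, wj = inv_masses[i], inv_masses[j]
            if wi + wj == 0.0:
                continue
            dx = pred[i][0] - pred[j][0]
            dy = pred[i][1] - pred[j][1]
            d = math.hypot(dx, dy)
            if d == 0.0:
                continue
            # Move both endpoints along the constraint direction,
            # weighted by inverse mass, to restore the rest length.
            corr = (d - rest) / (d * (wi + wj))
            pred[i][0] -= wi * corr * dx
            pred[i][1] -= wi * corr * dy
            pred[j][0] += wj * corr * dx
            pred[j][1] += wj * corr * dy
    # 3) Derive velocities from the position change, then commit positions.
    new_vel = [((p[0] - x) / dt, (p[1] - y) / dt)
               for p, (x, y) in zip(pred, positions)]
    return [tuple(p) for p in pred], new_vel
```

In a real-to-sim setting, the registration step would additionally pull simulated surface particles toward the reconstructed point cloud between such solver iterations; the mechanism above is only the forward-simulation half.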