Conventional computer-assisted orthopaedic navigation systems rely on tracking dedicated optical markers to measure patient pose, which makes the surgical workflow more invasive, tedious, and expensive. Visual tracking has recently been proposed to measure the target anatomy in a markerless and effortless way, but existing methods fail under the real-world occlusion caused by intraoperative interventions. Furthermore, such methods are hardware-specific and not accurate enough for surgical applications. In this paper, we propose an RGB-D sensing-based markerless tracking method that is robust against occlusion. We design a new segmentation network that features dynamic region-of-interest prediction and robust 3D point cloud segmentation. As collecting large-scale training data with occlusion instances is expensive, we also propose a new method for creating synthetic RGB-D images for network training. Experimental results show that our markerless tracking method outperforms recent state-of-the-art approaches by a large margin, especially when occlusion is present. Furthermore, our method generalises well to new cameras and new target models, including a cadaver, without the need for network retraining. In practice, using a high-quality commercial RGB-D camera, our visual tracking method achieves an accuracy of 1-2 degrees and 2-4 mm on a model knee, which meets the standard for clinical applications.
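To make the described pipeline concrete, the following is a minimal sketch of how such an occlusion-robust, registration-based tracker could be organised, assuming a per-frame loop of ROI prediction, 3D point cloud segmentation, and rigid registration. The names predict_roi, segment_points, and icp are hypothetical placeholders for the learned ROI predictor, the segmentation network, and the registration step; none of them are the paper's actual components, and only the pinhole back-projection is standard geometry.

```python
import numpy as np

# Illustrative sketch only: predict_roi, segment_points, and icp are
# hypothetical stand-ins for the learned and registration components
# described in the abstract; they are not the paper's API.

def predict_roi(rgb: np.ndarray) -> tuple[int, int, int, int]:
    """Hypothetical: return a (row, col, height, width) crop around the anatomy."""
    raise NotImplementedError

def segment_points(points: np.ndarray) -> np.ndarray:
    """Hypothetical: return a boolean mask selecting target-surface points."""
    raise NotImplementedError

def icp(model: np.ndarray, observed: np.ndarray, init: np.ndarray) -> np.ndarray:
    """Hypothetical: rigid registration (e.g. point-to-plane ICP) -> 4x4 pose."""
    raise NotImplementedError

def backproject(depth, fx, fy, cx, cy, row0=0, col0=0):
    """Standard pinhole back-projection of a depth crop to an N x 3 cloud.

    row0/col0 shift the crop back into full-image pixel coordinates.
    """
    rows, cols = np.nonzero(depth > 0)          # valid depth pixels only
    z = depth[rows, cols]
    x = (cols + col0 - cx) * z / fx
    y = (rows + row0 - cy) * z / fy
    return np.stack([x, y, z], axis=1)

def track_frame(rgb, depth, intrinsics, model_points, prev_pose):
    """One tracking step: crop to the ROI, segment the cloud, register the pose."""
    r, c, h, w = predict_roi(rgb)
    cloud = backproject(depth[r:r + h, c:c + w], *intrinsics, row0=r, col0=c)
    target = cloud[segment_points(cloud)]       # drop occluders and background
    return icp(model_points, target, init=prev_pose)  # updated 4x4 rigid pose
```

In this reading, segmenting in 3D before registration is what provides the occlusion robustness the abstract claims: points belonging to intervening instruments or hands are discarded from the cloud rather than being fed into the pose estimate.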