Surgical instrument segmentation for robot-assisted surgery is needed for accurate instrument tracking and augmented reality overlays, and the topic has therefore been the subject of a number of recent papers in the CAI community. Deep learning-based methods have shown state-of-the-art performance for surgical instrument segmentation, but their results depend on labelled data. However, labelled surgical data is scarce, which is a bottleneck in translating these methods to surgical practice. In this paper, we demonstrate the limited generalizability of these methods across different datasets, including video from human robot-assisted surgeries. We then propose a novel joint generation and segmentation strategy to learn a segmentation model that generalizes better to domains with no labelled data, by leveraging labelled data available in a different domain. The generator performs domain translation from the labelled domain to the unlabelled domain; simultaneously, the segmentation model learns from the generated data while also regularizing the generative model. We compare our method with state-of-the-art approaches and show its generalizability on publicly available datasets and on our own recorded video frames from robot-assisted prostatectomies. Our method achieves consistently high mean Dice scores on both the labelled and unlabelled domains when labelled data is available for only one of them.

*M. Kalia and T. Aleef contributed equally to the manuscript.
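As a minimal conceptual sketch (not the authors' implementation), the joint strategy can be pictured as a generator and a segmentation network trained together: the generator translates labelled-domain frames toward the unlabelled target domain, the segmenter is supervised on the translated frames using the source-domain masks, and the segmentation loss back-propagates into the generator as a regularizer. All architectures, losses, and tensors below are placeholders chosen only to illustrate the training loop.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Placeholder image-to-image translation network (labelled -> unlabelled domain)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class TinySegmenter(nn.Module):
    """Placeholder binary instrument segmentation network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )
    def forward(self, x):
        return self.net(x)

generator = TinyGenerator()
segmenter = TinySegmenter()
optimizer = torch.optim.Adam(
    list(generator.parameters()) + list(segmenter.parameters()), lr=1e-4
)
seg_loss_fn = nn.BCEWithLogitsLoss()

# Dummy batch standing in for labelled source-domain frames and instrument masks.
src_images = torch.randn(2, 3, 64, 64)
src_masks = torch.randint(0, 2, (2, 1, 64, 64)).float()

for step in range(10):
    translated = generator(src_images)         # source frames rendered in target-domain appearance
    logits = segmenter(translated)             # segment the translated frames
    seg_loss = seg_loss_fn(logits, src_masks)  # supervision comes from the labelled source domain
    # In the full method an adversarial/translation loss on the generator would be added
    # here; the segmentation loss alone already acts as a regularizer on the generator.
    optimizer.zero_grad()
    seg_loss.backward()
    optimizer.step()
```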