Deep learning (DL) has enabled automatic, objective assessment of surgical skills. However, DL models are data-hungry and restricted to their training domain, which prevents them from transferring to new tasks where data are limited. Domain adaptation is therefore crucial for deploying DL in real-world practice. Here, we propose a meta-learning model, A-VBANet, that delivers domain-agnostic surgical skill classification via one-shot learning. We develop A-VBANet on five laparoscopic and robotic surgical simulators and additionally test it on operating room (OR) videos of laparoscopic cholecystectomy. Our model adapts successfully, reaching accuracies of up to 99.5% in one-shot and 99.9% in few-shot settings on simulated tasks, and 89.7% on laparoscopic cholecystectomy. To our knowledge, this is the first domain-agnostic procedure for video-based assessment of surgical skills. A significant implication of this approach is that it allows data from surgical simulators to be used to assess performance in the operating room.