Objective: We propose a formal framework for modeling surgical tasks using a unified set of motion primitives (MPs) as the basic surgical actions, enabling more objective labeling, the aggregation of different datasets, and the training of generalized models for surgical action recognition. Methods: We use our framework to create the COntext and Motion Primitive Aggregate Surgical Set (COMPASS), comprising six dry-lab surgical tasks from three publicly available datasets (JIGSAWS, DESK, and ROSMA) with kinematic and video data as well as context and MP labels. We present methods for labeling surgical context and for automatically translating context to MPs. We also propose the Leave-One-Task-Out (LOTO) cross-validation method to evaluate a model's ability to generalize to an unseen task. Results: Our context labeling method achieves near-perfect agreement between consensus labels from crowd-sourced annotators and expert surgeons. Segmenting tasks into MPs enables the generation of separate left and right transcripts and significantly improves LOTO performance. We find that MP segmentation models perform best when trained on tasks with the same context and/or tasks from the same dataset. Conclusion: The proposed framework enables high-quality labeling of surgical data based on context and fine-grained MPs. Modeling surgical tasks with MPs enables the aggregation of different datasets for training action recognition models that generalize better to unseen tasks than models trained at the gesture level. Significance: Our formal framework and aggregate dataset can support the development of models and algorithms for surgical process analysis, skill assessment, error detection, and autonomy.