Surgical captioning plays an important role in surgical instruction prediction and report generation. However, most captioning models still rely on a computationally heavy object detector or feature extractor to extract regional features. Moreover, the detection model requires extra bounding-box annotations, which are costly and demand skilled annotators. These issues cause inference delays and prevent captioning models from being deployed in real-time robotic surgery. To address this, we design an end-to-end, detector- and feature-extractor-free captioning model that uses a patch-based shifted-window technique. We propose the Shifted Window-Based Multi-Layer Perceptron Transformer Captioning model (SwinMLP-TranCAP), which offers faster inference and lower computational cost. SwinMLP-TranCAP replaces the multi-head attention module with a window-based multi-head MLP. Such attention-free designs have so far been applied mainly to image understanding tasks, and very few works have investigated caption generation. SwinMLP-TranCAP is further extended to a video variant for video captioning by using 3D patches and windows. Compared with previous detector- or feature-extractor-based models, our models greatly simplify the architecture design while maintaining performance on two surgical datasets. The code is publicly available at https://github.com/XuMengyaAmy/SwinMLP_TranCAP.
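The key substitution described above, swapping windowed multi-head self-attention for a window-based multi-head MLP, can be illustrated with a minimal PyTorch sketch. This is our own illustrative version, not the released implementation: the class name `WindowMultiHeadMLP` and its parameters are hypothetical, and the grouped 1x1 convolution is one common way to realize per-head token mixing within local windows.

```python
import torch
import torch.nn as nn

class WindowMultiHeadMLP(nn.Module):
    """Sketch of a window-based multi-head MLP token mixer.

    Instead of computing attention among the N tokens inside each local
    window, every head mixes its window tokens with a fixed N x N linear
    map (shared across feature channels), implemented as a grouped conv.
    """

    def __init__(self, dim, window_size, num_heads):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        n = window_size * window_size  # tokens per window
        # One spatial-mixing weight matrix per head, via grouped 1x1 conv:
        # each group has n input and n output channels.
        self.spatial_mlp = nn.Conv1d(num_heads * n, num_heads * n,
                                     kernel_size=1, groups=num_heads)

    def forward(self, x):
        # x: (num_windows * B, N, C), with N = window_size ** 2
        B_, N, C = x.shape
        h = self.num_heads
        # Split channels into heads and fold heads into the "channel"
        # axis so the grouped conv mixes tokens within each head.
        x = x.view(B_, N, h, C // h).transpose(1, 2).reshape(B_, h * N, C // h)
        x = self.spatial_mlp(x)  # per-head token mixing inside the window
        x = x.reshape(B_, h, N, C // h).transpose(1, 2).reshape(B_, N, C)
        return x

# Toy usage: 4 windows of 7x7 tokens, 96 channels, 3 heads.
if __name__ == "__main__":
    block = WindowMultiHeadMLP(dim=96, window_size=7, num_heads=3)
    out = block(torch.randn(4, 49, 96))
    print(out.shape)  # torch.Size([4, 49, 96])
```

Because the token mixing is a fixed linear map rather than input-dependent attention, it avoids the quadratic query-key products, which is consistent with the faster inference and lower computation claimed above; the video variant would apply the same idea over 3D (spatio-temporal) windows.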