Automatic surgical scene segmentation is fundamental to enabling cognitive intelligence in the modern operating theatre. Previous works rely on conventional aggregation modules (e.g., dilated convolution, convolutional LSTM), which exploit only local context. In this paper, we propose a novel framework, STswinCL, that explores complementary intra- and inter-video relations to boost segmentation performance by progressively capturing global context. We first develop a hierarchical Transformer to capture intra-video relations, incorporating richer spatial and temporal cues from neighboring pixels and previous frames. A joint space-time window shift scheme is proposed to efficiently aggregate these two cues into each pixel embedding. We then explore inter-video relations via pixel-to-pixel contrastive learning, which structures the global embedding space. A multi-source contrastive training objective is developed to group pixel embeddings across videos under ground-truth guidance, which is crucial for learning the global properties of the whole dataset. We extensively validate our approach on two public surgical video benchmarks, the EndoVis18 Challenge and the CaDIS dataset. Experimental results demonstrate the promising performance of our method, which consistently exceeds previous state-of-the-art approaches. Code is available at https://github.com/YuemingJin/STswinCL.
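To make the joint space-time window shift concrete, the following is a minimal PyTorch sketch of a shifted spatiotemporal window partition in the spirit of Swin-style attention extended to video. The tensor layout, window and shift sizes, and the st_window_partition helper are illustrative assumptions, not the authors' implementation.

    import torch

    def st_window_partition(x, window=(2, 7, 7), shift=(1, 3, 3)):
        """Cyclically shift a clip along time/height/width, then split it
        into non-overlapping space-time windows.

        x: (B, T, H, W, C) clip of pixel embeddings; in this sketch T, H, W
        must be divisible by the corresponding window size.
        Returns: (num_windows * B, t*h*w, C) tokens per window.
        """
        B, T, H, W, C = x.shape
        t, h, w = window
        # Joint space-time shift: on alternating blocks, neighboring windows
        # exchange pixels across both spatial and temporal boundaries.
        x = torch.roll(x, shifts=(-shift[0], -shift[1], -shift[2]), dims=(1, 2, 3))
        x = x.view(B, T // t, t, H // h, h, W // w, w, C)
        x = x.permute(0, 1, 3, 5, 2, 4, 6, 7).reshape(-1, t * h * w, C)
        return x  # self-attention is then computed within each window

    # Example: a 2-frame clip of 14x14 feature maps with 96-dim embeddings.
    tokens = st_window_partition(torch.randn(1, 2, 14, 14, 96))
    print(tokens.shape)  # torch.Size([4, 98, 96])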
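Likewise, the pixel-to-pixel contrast across videos can be sketched as a supervised contrastive loss over pixel embeddings pooled from frames of different videos, grouped by their ground-truth class. The sampling strategy, temperature, and the pixel_contrastive_loss name are assumptions for illustration rather than the exact multi-source training objective.

    import torch
    import torch.nn.functional as F

    def pixel_contrastive_loss(emb, labels, temperature=0.1):
        """emb: (N, D) pixel embeddings sampled from a batch that mixes
        frames of different videos; labels: (N,) ground-truth class ids.
        Pixels sharing a class are pulled together; others are pushed apart."""
        emb = F.normalize(emb, dim=1)
        sim = emb @ emb.t() / temperature                  # (N, N) similarities
        # Positive pairs: same ground-truth class, excluding self-pairs.
        pos = (labels[:, None] == labels[None, :]).float()
        self_mask = torch.eye(len(labels), device=emb.device)
        pos = pos * (1 - self_mask)
        # Log-softmax over all non-self pairs, averaged over the positives.
        logits = sim - 1e9 * self_mask                     # mask self-similarity
        log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
        loss = -(pos * log_prob).sum(1) / pos.sum(1).clamp(min=1)
        return loss.mean()

    # Example: 256 sampled pixels, 128-dim embeddings, 8 semantic classes.
    loss = pixel_contrastive_loss(torch.randn(256, 128), torch.randint(0, 8, (256,)))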