Colonoscopy is widely recognised as the gold standard procedure for the early detection of colorectal cancer (CRC). Segmentation is valuable for two significant clinical applications, namely lesion detection and classification, providing a means to improve accuracy and robustness. The manual segmentation of polyps in colonoscopy images is time-consuming, and the use of deep learning (DL) to automate polyp segmentation has therefore become important. However, DL-based solutions can be vulnerable to overfitting and may consequently fail to generalise to images captured by different colonoscopes. Recent transformer-based architectures for semantic segmentation both achieve higher performance and generalise better than alternatives, but typically predict a segmentation map of $\frac{h}{4}\times\frac{w}{4}$ spatial dimensions for an $h\times w$ input image. Motivated by this, we propose a new architecture for full-size segmentation which leverages the strengths of a transformer in extracting the most important features for segmentation in a primary branch, while compensating for its limitations in full-size prediction with a secondary fully convolutional branch. The resulting features from both branches are then fused for the final prediction of an $h\times w$ segmentation map. We demonstrate our method's state-of-the-art performance with respect to the mDice, mIoU, mPrecision, and mRecall metrics on both the Kvasir-SEG and CVC-ClinicDB dataset benchmarks. Additionally, we train the model on each of these datasets and evaluate on the other to demonstrate its superior generalisation performance.
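The dual-branch idea above can be sketched in shape terms: the transformer branch yields coarse $\frac{h}{4}\times\frac{w}{4}$ features, the convolutional branch yields full-size $h\times w$ features, and the two are fused before a final per-pixel prediction. The following minimal numpy sketch illustrates only this shape logic; the feature dimensions, nearest-neighbour upsampling, fusion by addition, and the 1×1 projection are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def upsample4(x):
    """Nearest-neighbour upsampling by a factor of 4 in both spatial dims."""
    return np.repeat(np.repeat(x, 4, axis=0), 4, axis=1)

def fuse_and_predict(primary, secondary, w_out):
    """Fuse coarse primary features (h/4, w/4, c) with full-size secondary
    features (h, w, c); project to one logit per pixel (a 1x1-conv stand-in)."""
    fused = upsample4(primary) + secondary   # (h, w, c), fusion by addition
    return fused @ w_out                     # (h, w) segmentation logits

h, w, c = 64, 64, 8
primary = np.random.rand(h // 4, w // 4, c)   # transformer (primary) branch output
secondary = np.random.rand(h, w, c)           # fully convolutional (secondary) branch output
w_out = np.random.rand(c)                     # hypothetical 1x1 projection weights
seg = fuse_and_predict(primary, secondary, w_out)
print(seg.shape)   # (64, 64): a full-size h x w prediction
```

In a real implementation the fusion would typically use learned upsampling and a convolutional head rather than these stand-ins, but the output retains the full $h\times w$ resolution either way.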