Besides the complex nature of colonoscopy frames, with intrinsic frame-formation artefacts such as light reflections and the diversity of polyp types and shapes, the publicly available polyp segmentation training datasets are limited, small, and imbalanced. Consequently, automated polyp segmentation with a deep neural network remains an open challenge due to overfitting when training on small datasets. We propose a simple yet effective polyp segmentation pipeline that couples the segmentation (FCN) and classification (CNN) tasks. We find that interactive weight transfer between the dense and coarse vision tasks mitigates overfitting during learning, which motivates a new training scheme within our segmentation pipeline. Our method is evaluated on the CVC-EndoSceneStill and Kvasir-SEG datasets, achieving Polyp-IoU improvements of 4.34% and 5.70% over the state-of-the-art methods on EndoSceneStill and Kvasir-SEG, respectively.
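The abstract does not spell out the transfer mechanism, but the core idea of interactive weight transfer between the coarse task (CNN classification) and the dense task (FCN segmentation) can be sketched as copying the parameters of a shared encoder from one model to the other between alternating training phases. The sketch below is an assumption-laden illustration, not the authors' implementation; the parameter names, the `encoder.` prefix, and the dictionary representation of model weights are all hypothetical.

```python
import numpy as np

def transfer_shared_weights(src, dst, shared_prefix="encoder."):
    """Copy parameters whose names start with `shared_prefix` from the
    source model's parameter dict into the destination model's dict,
    skipping any entries that are missing or shape-mismatched.
    Returns the list of parameter names that were transferred."""
    copied = []
    for name, weight in src.items():
        if (name.startswith(shared_prefix)
                and name in dst
                and dst[name].shape == weight.shape):
            dst[name] = weight.copy()
            copied.append(name)
    return copied

# Hypothetical parameter dictionaries: the CNN classifier and the FCN
# segmenter share an encoder; heads/decoders are task-specific.
cnn_params = {"encoder.conv1": np.ones((3, 3)), "fc.weight": np.zeros(10)}
fcn_params = {"encoder.conv1": np.zeros((3, 3)), "decoder.up1": np.zeros((2, 2))}

# After a classification phase, hand the encoder back to the segmenter.
copied = transfer_shared_weights(cnn_params, fcn_params)
```

In an alternating scheme, the same call would run in the opposite direction after each segmentation phase, so the encoder accumulates supervision from both tasks while the small segmentation dataset alone never has to fit it.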