We propose the Uncertainty Augmented Context Attention network (UACANet) for polyp segmentation, which considers the uncertain area of the saliency map. We construct a modified U-Net-shaped network with an additional encoder and decoder, compute a saliency map in each prediction module of the bottom-up stream, and propagate it to the next prediction module. In each prediction module, the previously predicted saliency map is used to compute foreground, background, and uncertain area maps, and we aggregate the feature map with these three area maps to obtain a representation for each area. We then compute the relation between each representation and each pixel in the feature map. We conduct experiments on five popular polyp segmentation benchmarks, Kvasir, CVC-ClinicDB, ETIS, CVC-ColonDB, and CVC-300, and achieve state-of-the-art performance. In particular, we achieve 76.6% mean Dice on the ETIS dataset, a 13.8% improvement over the previous state-of-the-art method. Source code is publicly available at https://github.com/plemeri/UACANet
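The decomposition of a saliency map into foreground, background, and uncertain area maps can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes the saliency map has been passed through a sigmoid (values in [0, 1]) and defines the uncertain area as the region where the prediction is ambiguous, i.e. near 0.5. The function name `area_maps` is hypothetical.

```python
import numpy as np

def area_maps(saliency):
    """Split a sigmoid saliency map (values in [0, 1]) into
    foreground, background, and uncertain area maps.

    Illustrative formulation: the three maps sum to 1 at every
    pixel, and the uncertain map peaks where saliency is 0.5.
    """
    fg = np.clip(2.0 * saliency - 1.0, 0.0, 1.0)   # confident foreground
    bg = np.clip(1.0 - 2.0 * saliency, 0.0, 1.0)   # confident background
    unc = 1.0 - fg - bg                            # ambiguous in-between region
    return fg, bg, unc

# Example: a 1-D "saliency" profile crossing an object boundary
s = np.array([0.05, 0.3, 0.5, 0.7, 0.95])
fg, bg, unc = area_maps(s)
```

Each area map can then act as a soft mask: multiplying the feature map by a mask and pooling yields one representation per area, whose relation to every pixel is computed by the attention module.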