The encoder-decoder network is widely used to learn deep feature representations from pixel-wise annotations in biomedical image analysis. Under this structure, performance relies heavily on the effectiveness of feature extraction in the encoding network. However, few models have considered adapting the attention of the feature extractor across different kinds of tasks. In this paper, we propose a novel training strategy that adapts the attention of the feature extractor to different tasks for effective representation learning. Specifically, the framework, named T-Net, consists of an encoding network supervised by task-specific attention maps and a posterior network that takes the learned features and predicts the corresponding results. The attention map is obtained by transforming the pixel-wise annotations according to the specific task and is used as supervision to regularize the feature extractor, so that it focuses on different locations of the recognition object. To show the effectiveness of our method, we evaluate T-Net on two different tasks, i.e., segmentation and localization. Extensive results on three public datasets (BraTS-17, MoNuSeg and IDRiD) demonstrate the effectiveness and efficiency of our proposed supervision method, especially in comparison with the conventional encoding-decoding network.
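The transformation from pixel-wise annotations to task-specific attention maps can be sketched as follows. This is a minimal illustration under assumptions not detailed in the abstract: the function names (`task_attention_map`, `attention_supervision_loss`), the use of the full mask for segmentation, a Gaussian falloff around the object centroid for localization, and a mean-squared-error supervision term are all hypothetical choices for exposition, not the paper's exact formulation.

```python
import numpy as np

def task_attention_map(mask: np.ndarray, task: str, sigma: float = 8.0) -> np.ndarray:
    """Derive a task-specific attention map from a binary pixel-wise annotation.

    Hypothetical transformations: for segmentation, attend to the full object
    region; for localization, concentrate attention near the object centroid
    with a Gaussian falloff of width `sigma` (in pixels).
    """
    mask = mask.astype(np.float32)
    if task == "segmentation":
        att = mask
    elif task == "localization":
        ys, xs = np.nonzero(mask)
        if len(ys) == 0:
            return np.zeros_like(mask)
        cy, cx = ys.mean(), xs.mean()  # object centroid
        yy, xx = np.mgrid[0:mask.shape[0], 0:mask.shape[1]]
        att = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2.0 * sigma ** 2))
    else:
        raise ValueError(f"unknown task: {task}")
    # Normalize to [0, 1] so the map can serve as a supervision target.
    return att / (att.max() + 1e-8)

def attention_supervision_loss(feat_att: np.ndarray, target_att: np.ndarray) -> float:
    """Mean-squared error between the encoder's attention and the target map
    (one plausible form of the attention-supervision regularizer)."""
    return float(np.mean((feat_att - target_att) ** 2))
```

In this sketch the same annotation yields different supervision signals per task, which is the core idea: the encoder is regularized toward the object interior for segmentation but toward its location for localization.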

