Deep learning has proved particularly useful for semantic segmentation, a fundamental image analysis task. However, the standard deep learning methods need many training images with ground-truth pixel-wise annotations, which are usually laborious to obtain and, in some cases (e.g., medical images), require domain expertise. Therefore, instead of pixel-wise annotations, we focus on image annotations that are significantly easier to acquire but still informative, namely the size of foreground objects. We define the object size as the maximum distance between a foreground pixel and the background. We propose an algorithm for training a deep segmentation network from a dataset of a few pixel-wise annotated images and many images with known object sizes. The algorithm minimizes a discrete (non-differentiable) loss function defined over the object sizes by sampling the gradient and then using the standard back-propagation algorithm. We study the performance of our approach in terms of training time and generalization error.
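As a concrete illustration of the annotation used here, the object size (the maximum distance between a foreground pixel and the background) can be computed from a binary mask by brute force; this sketch is not the paper's implementation, just a minimal NumPy version of the stated definition.

```python
import numpy as np

def object_size(mask):
    """Object size per the definition above: the maximum, over all
    foreground pixels, of the Euclidean distance to the nearest
    background pixel. mask is a boolean array, True = foreground."""
    fg = np.argwhere(mask)       # coordinates of foreground pixels
    bg = np.argwhere(~mask)      # coordinates of background pixels
    # Pairwise distances (brute force; fine for small masks),
    # then the nearest-background distance for each foreground pixel.
    d = np.linalg.norm(fg[:, None, :] - bg[None, :, :], axis=-1).min(axis=1)
    return d.max()

# A 5x5 square of foreground in a 7x7 image: the central pixel is
# 3 pixels from the nearest background, so the object size is 3.
mask = np.zeros((7, 7), dtype=bool)
mask[1:6, 1:6] = True
print(object_size(mask))  # → 3.0
```

For real images a distance transform (e.g. `scipy.ndimage.distance_transform_edt`) computes the same quantity far more efficiently than this pairwise version.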