Deep learning has proved particularly useful for semantic segmentation, a fundamental image-analysis task. However, standard deep learning methods require many training images with ground-truth pixel-wise annotations, which are laborious to obtain and, in some cases (e.g., medical images), require domain expertise. Therefore, instead of pixel-wise annotations, we focus on image-level annotations that are significantly easier to acquire yet still informative, namely the sizes of foreground objects. We define the object size as the maximum, over all foreground pixels, of the Chebyshev distance to the nearest background pixel. We propose an algorithm for training a deep segmentation network from a dataset of a few pixel-wise annotated images and many images with known object sizes. The algorithm minimizes a discrete (non-differentiable) loss function defined over the object sizes by sampling the gradient and then applying the standard back-propagation algorithm. Experiments show that the new approach improves segmentation performance.
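To make the size definition concrete, the following is a minimal sketch (not the authors' code; `object_size` is a hypothetical helper) that computes, for a binary mask, the maximum over foreground pixels of the Chebyshev distance to the nearest background pixel:

```python
import numpy as np

def object_size(mask):
    """Object size per the definition above: the maximum, over all
    foreground pixels, of the Chebyshev distance to the nearest
    background pixel. Brute force; fine for small illustrative masks."""
    fg = np.argwhere(mask)    # coordinates of foreground pixels
    bg = np.argwhere(~mask)   # coordinates of background pixels
    # Chebyshev distance between every fg/bg pair: max(|dy|, |dx|)
    d = np.abs(fg[:, None, :] - bg[None, :, :]).max(axis=2)
    # Nearest background per foreground pixel, then the maximum
    return int(d.min(axis=1).max())

# A 5x5 square of foreground inside a 7x7 image: the centre pixel is
# 3 away (in Chebyshev distance) from the nearest background pixel.
mask = np.zeros((7, 7), dtype=bool)
mask[1:6, 1:6] = True
print(object_size(mask))  # -> 3
```

In practice one would compute this with a chessboard distance transform (e.g. `scipy.ndimage.distance_transform_cdt` with `metric='chessboard'`) rather than the quadratic-time pairwise comparison shown here.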