We consider the task of semi-supervised semantic segmentation, where we aim to produce pixel-wise semantic object masks given only a small number of human-labeled training examples. We focus on iterative self-training methods, exploring the behavior of self-training over multiple refinement stages. We show that iterative self-training leads to performance degradation if done naively with a fixed ratio of human-labeled to pseudo-labeled training examples. We propose Greedy Iterative Self-Training (GIST) and Random Iterative Self-Training (RIST) strategies that alternate between training on either human-labeled data or pseudo-labeled data at each refinement stage, resulting in a performance boost rather than degradation. We further show that GIST and RIST can be combined with existing SOTA methods to boost performance, yielding new SOTA results on the Pascal VOC 2012 and Cityscapes datasets across five out of six subsets.
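To make the alternation idea concrete, here is a minimal sketch of how the per-stage data-source choice in RIST and GIST could look. Everything below is illustrative: the function names, the `"human"`/`"pseudo"` labels, the use of validation mIoU as the greedy criterion, and the choice to start from human-labeled data are assumptions for the sketch, not details taken from the paper.

```python
import random

def rist_schedule(num_stages, seed=0):
    """Sketch of RIST: randomly pick the training source per stage.

    Stage 0 is assumed to use human labels, since an initial model is
    needed before pseudo-labels can be generated at all.
    """
    rng = random.Random(seed)
    schedule = ["human"]
    for _ in range(1, num_stages):
        schedule.append(rng.choice(["human", "pseudo"]))
    return schedule

def gist_pick(score_fn):
    """Sketch of the greedy choice in GIST for one refinement stage.

    `score_fn` is a hypothetical callable mapping a source name to a
    validation score (e.g. mIoU); the higher-scoring source is kept.
    """
    return max(["human", "pseudo"], key=score_fn)
```

The key point both sketches share with the abstract is that each refinement stage commits to a single data source instead of mixing human- and pseudo-labeled examples at a fixed ratio.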

