We consider the task of semi-supervised semantic segmentation, where we aim to produce pixel-wise semantic object masks given only a small number of human-labeled training examples. We focus on iterative self-training methods, exploring the behavior of self-training over multiple refinement stages. We show that iterative self-training leads to performance degradation if done naïvely with a fixed ratio of human-labeled to pseudo-labeled training examples. We propose Greedy Iterative Self-Training (GIST) and Random Iterative Self-Training (RIST), strategies that alternate between training on either human-labeled data or pseudo-labeled data at each refinement stage, resulting in a performance boost rather than degradation. We further show that GIST and RIST can be combined with existing semi-supervised learning methods to boost performance.
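The core idea of alternating data sources across refinement stages can be sketched as follows. This is a minimal illustration, not the paper's implementation: the stage-level training itself is abstracted away, and the `val_score` callback used for the greedy variant is a hypothetical stand-in for evaluating a candidate model on a held-out validation set.

```python
import random

def rist_schedule(num_stages, seed=0):
    """Random Iterative Self-Training (sketch): at each refinement stage,
    randomly choose whether to train on human-labeled or pseudo-labeled data."""
    rng = random.Random(seed)
    return [rng.choice(["human", "pseudo"]) for _ in range(num_stages)]

def gist_schedule(num_stages, val_score):
    """Greedy Iterative Self-Training (sketch): at each refinement stage,
    greedily pick the data source whose candidate model scores higher on
    validation. `val_score(source, stage)` is a hypothetical callback that
    returns the validation metric after training on `source` at `stage`."""
    schedule = []
    for stage in range(num_stages):
        source = max(["human", "pseudo"], key=lambda s: val_score(s, stage))
        schedule.append(source)
    return schedule
```

Both variants avoid the naïve fixed-ratio mixing described above: each stage commits to a single data source, and the sequence of choices is either randomized (RIST) or chosen greedily by validation performance (GIST).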