This paper presents one-bit supervision, a novel setting of learning from incomplete annotations, in the scenario of image classification. Instead of training a model on the accurate label of each sample, our setting requires the model to query with a predicted label for each sample and learn from the answer whether the guess is correct. This provides one bit (yes or no) of information, and more importantly, annotating each sample becomes much easier than finding the accurate label among many candidate classes. There are two keys to training a model under one-bit supervision: improving the guess accuracy and making use of incorrect guesses. For these purposes, we propose a multi-stage training paradigm which incorporates negative label suppression into an off-the-shelf semi-supervised learning algorithm. On three popular image classification benchmarks, our approach demonstrates higher efficiency in utilizing the limited amount of annotations.
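The query-and-answer mechanism described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the function names and the explicit candidate-set bookkeeping are our own assumptions. A "yes" answer fixes the label; a "no" answer still carries information, since the guessed class can be suppressed from the sample's candidate set.

```python
def one_bit_query(predicted_label, true_label):
    """The annotator answers one bit: is the guess correct? (illustrative)"""
    return predicted_label == true_label

def suppress_negative(candidates, wrong_label):
    """Negative label suppression (sketch): a 'no' answer removes the
    guessed class from this sample's set of candidate labels."""
    return [c for c in candidates if c != wrong_label]

# Toy walk-through with 5 classes; values are arbitrary.
num_classes = 5
true_label = 3
candidates = list(range(num_classes))

guess = 1  # the model's predicted label for this sample
if one_bit_query(guess, true_label):
    candidates = [guess]  # a correct guess pins down the label exactly
else:
    candidates = suppress_negative(candidates, guess)

print(candidates)  # [0, 2, 3, 4]
```

Each incorrect guess thus shrinks the label space by one class, which is how the multi-stage paradigm can extract value from "no" answers across rounds of querying.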