A critical challenge of Semi-Supervised Learning (SSL) is how to effectively leverage limited labeled data and massive unlabeled data to improve a model's generalization performance. In this paper, we first revisit popular pseudo-labeling methods through a unified sample-weighting formulation and demonstrate the inherent quantity-quality trade-off of pseudo-labeling with confidence thresholding, which can prohibit learning. To overcome this trade-off, we propose SoftMatch, which maintains both high quantity and high quality of pseudo-labels during training, effectively exploiting the unlabeled data. We derive a truncated Gaussian function to weight samples based on their confidence, which can be viewed as a soft version of the confidence threshold. We further propose a uniform alignment approach to enhance the utilization of weakly-learned classes. In experiments, SoftMatch shows substantial improvements across a wide variety of benchmarks, including image, text, and imbalanced classification.
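As a rough illustration of the weighting scheme described above, the sketch below computes a truncated Gaussian weight over per-sample confidences, with the Gaussian mean and standard deviation tracked as exponential moving averages of the batch confidence statistics. This is a minimal sketch assuming PyTorch; the function names (`truncated_gaussian_weight`, `ema_confidence_stats`) and the EMA momentum value are illustrative assumptions, not the authors' exact implementation.

```python
import torch

def truncated_gaussian_weight(conf: torch.Tensor, mu: float, sigma: float,
                              lambda_max: float = 1.0) -> torch.Tensor:
    """Soft per-sample weight replacing a hard confidence threshold.

    Samples whose max softmax probability `conf` reaches the running mean
    `mu` get the full weight `lambda_max`; less-confident samples are
    smoothly down-weighted by a Gaussian of the confidence gap, so no
    pseudo-label is discarded outright (high quantity, graded quality).
    """
    gap = torch.clamp(conf - mu, max=0.0)  # 0 for conf >= mu (the truncation)
    return lambda_max * torch.exp(-gap.pow(2) / (2 * sigma ** 2))

def ema_confidence_stats(conf: torch.Tensor, mu: float, sigma: float,
                         momentum: float = 0.999) -> tuple[float, float]:
    """Track the unlabeled-batch confidence mean/std with an EMA so the
    Gaussian adapts as the model grows more confident during training."""
    mu = momentum * mu + (1 - momentum) * conf.mean().item()
    sigma = momentum * sigma + (1 - momentum) * conf.std().item()
    return mu, sigma

# Example: weight a batch of pseudo-labels.
probs = torch.softmax(torch.randn(8, 10), dim=-1)      # stand-in predictions
conf, _ = probs.max(dim=-1)                            # per-sample confidence
mu, sigma = ema_confidence_stats(conf, mu=0.5, sigma=0.1)
weights = truncated_gaussian_weight(conf, mu, sigma)   # in [0, lambda_max]
```

In a full training loop, such weights would multiply the per-sample unsupervised loss terms, so every pseudo-label contributes in proportion to its confidence rather than being kept or dropped by a hard cutoff.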