Recent semi-supervised learning methods use pseudo supervision as their core idea, especially self-training methods that generate pseudo labels. However, pseudo labels are unreliable. Self-training methods usually rely on a single model's prediction confidence to filter out low-confidence pseudo labels, thus retaining high-confidence errors while discarding many low-confidence correct labels. In this paper, we point out that it is difficult for a model to counter its own errors. Instead, leveraging the disagreement between different models is key to locating pseudo label errors. With this new viewpoint, we propose mutual training between two different models with a dynamically re-weighted loss function, called Dynamic Mutual Training (DMT). We quantify inter-model disagreement by comparing the predictions of the two models and use it to dynamically re-weight the training loss, where a larger disagreement indicates a possible error and corresponds to a lower loss value. Extensive experiments show that DMT achieves state-of-the-art performance in both image classification and semantic segmentation. Our code is released at https://github.com/voldemortX/DST-CBC .
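The dynamic re-weighting idea can be sketched as follows. This is a minimal numpy illustration, not the paper's exact DMT formulation: the probability the model being trained assigns to the peer model's pseudo label stands in for inter-model agreement, and the sharpening exponent `gamma` is a hypothetical choice for illustration.

```python
import numpy as np

def dmt_weighted_ce(probs, pseudo_labels, gamma=5.0):
    """Per-sample cross-entropy on pseudo labels, re-weighted by agreement.

    `probs` are softmax outputs of the model being trained; `pseudo_labels`
    come from the other (peer) model. The probability the current model
    assigns to the peer's label acts as an agreement score: a large
    disagreement (low probability) yields a near-zero weight, so likely
    pseudo-label errors contribute little to the loss. `gamma` is an
    illustrative sharpening exponent, not a value from the paper.
    """
    idx = np.arange(len(pseudo_labels))
    p = probs[idx, pseudo_labels]            # agreement with the peer's label
    weight = p ** gamma                      # larger disagreement -> smaller weight
    ce = -np.log(np.clip(p, 1e-12, 1.0))     # cross-entropy on the pseudo label
    return weight * ce

# One sample agrees with its pseudo label, one strongly disagrees:
probs = np.array([[0.90, 0.10],
                  [0.05, 0.95]])
losses = dmt_weighted_ce(probs, np.array([0, 0]))
# the disagreeing sample is suppressed despite its large raw cross-entropy
```

In the actual method, both models play both roles in turn (mutual training), so each model's errors are down-weighted by the other's disagreement rather than by its own confidence.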