We propose two generic methods for improving semi-supervised learning (SSL). The first integrates weight perturbation (WP) into existing "consistency regularization" (CR) based methods. We implement WP by leveraging variational Bayesian inference (VBI). The second method proposes a novel consistency loss called "maximum uncertainty regularization" (MUR). While most consistency losses act on perturbations in the vicinity of each data point, MUR actively searches for "virtual" points situated beyond this region that cause the most uncertain class predictions. This allows MUR to impose smoothness on a wider area of the input-output manifold. Our experiments show clear reductions in the classification error of various CR-based methods when they are combined with VBI, MUR, or both.
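The core of MUR as described above is an inner search: starting from a data point, move outward to a "virtual" point whose class prediction is maximally uncertain, then penalize disagreement between the predictions at the two points. The sketch below is a minimal illustration of that search, not the paper's actual implementation: it assumes a toy linear-softmax classifier `z = W @ x` and performs projected gradient ascent on the predictive entropy within an L2 ball (the `radius`, `steps`, and `lr` parameters are illustrative choices, not values from the paper).

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy(p):
    # predictive entropy H(p) = -sum_i p_i log p_i
    return -np.sum(p * np.log(p + 1e-12))

def max_uncertainty_point(x, W, radius=1.0, steps=50, lr=0.1):
    """Search for the 'virtual' point near x whose prediction under the
    toy model z = W @ x is most uncertain (maximum predictive entropy),
    projected back onto an L2 ball of the given radius around x.
    This is an illustrative sketch of MUR's inner maximization."""
    x_star = x.copy()
    for _ in range(steps):
        p = softmax(W @ x_star)
        H = entropy(p)
        # analytic gradient of entropy w.r.t. the logits:
        # dH/dz_k = -p_k (log p_k + H); chain through z = W x
        grad_z = -p * (np.log(p + 1e-12) + H)
        grad_x = W.T @ grad_z
        x_star = x_star + lr * grad_x
        # project back into the search ball around the data point
        delta = x_star - x
        norm = np.linalg.norm(delta)
        if norm > radius:
            x_star = x + delta * (radius / norm)
    return x_star
```

In a full SSL setup, the consistency term would then be a divergence (e.g. KL) between the model's predictions at `x` and at `max_uncertainty_point(x, ...)`, encouraging smooth outputs over this wider region rather than only over small local perturbations.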