Recent advances in one-shot semi-supervised learning have lowered the barrier to applying deep learning in new applications. However, the state of the art for semi-supervised learning is slow to train, and its performance is sensitive to the choice of labeled data and hyper-parameter values. In this paper, we present a one-shot semi-supervised learning method that trains up to an order of magnitude faster and is more robust than state-of-the-art methods. Specifically, we show that by combining semi-supervised learning with a one-stage, single-network version of self-training, our FROST methodology trains faster and is more robust to the choice of labeled samples and to changes in hyper-parameters. Our experiments demonstrate FROST's capability to perform well when the composition of the unlabeled data is unknown; that is, when the unlabeled data contain unequal numbers of each class and can contain out-of-distribution examples that do not belong to any of the training classes. High performance, speed of training, and insensitivity to hyper-parameters make FROST the most practical method for one-shot semi-supervised training. Our code is available at https://github.com/HelenaELiu/FROST.
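The abstract describes combining semi-supervised learning with a single-network version of self-training, in which a model trained on the few labeled examples assigns pseudo-labels to confident unlabeled examples and is then retrained on the enlarged set. The sketch below illustrates that general self-training loop on a toy nearest-centroid classifier; the function name, the confidence threshold, and the centroid-based model are illustrative assumptions, not the actual FROST implementation (see the linked repository for that).

```python
import numpy as np

def self_train(labeled_x, labeled_y, unlabeled_x, n_classes,
               n_iters=5, threshold=0.6):
    """Generic self-training sketch (not the FROST algorithm itself).

    Starts from one (or a few) labeled examples per class, pseudo-labels
    confident unlabeled points, and refits class centroids on the union.
    """
    # Initial class centroids from the labeled examples only.
    centroids = np.array([labeled_x[labeled_y == c].mean(axis=0)
                          for c in range(n_classes)])
    for _ in range(n_iters):
        # Distance of each unlabeled point to each centroid,
        # converted to a softmax-style confidence over classes.
        d = np.linalg.norm(unlabeled_x[:, None, :] - centroids[None, :, :], axis=2)
        p = np.exp(-d)
        p /= p.sum(axis=1, keepdims=True)
        conf = p.max(axis=1)          # confidence of the best class
        pseudo = p.argmax(axis=1)     # pseudo-label = most likely class
        keep = conf > threshold       # accept only confident pseudo-labels
        # Refit centroids on labeled data plus accepted pseudo-labeled data.
        x = np.concatenate([labeled_x, unlabeled_x[keep]])
        y = np.concatenate([labeled_y, pseudo[keep]])
        centroids = np.array([x[y == c].mean(axis=0) for c in range(n_classes)])
    return centroids
```

In FROST this idea is folded into a single training stage of one network rather than alternating fit/relabel rounds, which is where the reported speedup comes from.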