Deep neural networks have been successfully applied to many real-world problems. However, such successes rely heavily on large amounts of labeled data, which is expensive to obtain. Recently, many semi-supervised learning methods have been proposed and have achieved excellent performance. In this study, we propose a new EnAET framework that further improves existing semi-supervised methods with self-supervised information. To the best of our knowledge, all current semi-supervised methods improve performance using ideas based on prediction consistency and confidence. We are the first to explore the role of {\bf self-supervised} representations in {\bf semi-supervised} learning under a rich family of transformations. Consequently, our framework can integrate the self-supervised information as a regularization term to further improve {\it all} current semi-supervised methods. In our experiments, we use MixMatch, the current state-of-the-art semi-supervised method, as the baseline for testing the proposed EnAET framework. We adopt the same hyper-parameters across different datasets, which greatly improves the generalization ability of the EnAET framework. Experimental results on different datasets demonstrate that the proposed EnAET framework greatly improves the performance of current semi-supervised algorithms. Moreover, the framework also improves {\bf supervised learning} by a large margin, including in the extremely challenging scenario with only 10 images per class. The code and experiment records are available at \url{https://github.com/maple-research-lab/EnAET}.
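The idea of integrating the self-supervised signal as a regularization term can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the function names, the choice of a squared-error consistency loss, and the transformation-parameter regression target are all assumptions made here for clarity.

```python
import numpy as np

def consistency_loss(p_student, p_teacher):
    # Squared L2 distance between two class-probability vectors, a common
    # form of the prediction-consistency loss in semi-supervised methods.
    return float(np.sum((p_student - p_teacher) ** 2))

def self_supervised_loss(t_pred, t_true):
    # Illustrative self-supervised term: regress the parameters of the
    # transformation (e.g. affine parameters) applied to the input.
    return float(np.mean((t_pred - t_true) ** 2))

def combined_objective(p_student, p_teacher, t_pred, t_true, lam=1.0):
    # Total objective: the semi-supervised loss plus the self-supervised
    # transformation-prediction loss as a regularizer, weighted by lam.
    return consistency_loss(p_student, p_teacher) + lam * self_supervised_loss(t_pred, t_true)
```

Because the self-supervised term only adds to the objective, it can in principle be attached to any consistency-based semi-supervised loss without changing that method's own training procedure.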