Self-supervised learning, which benefits from automatically constructing labels through pre-designed pretext tasks, has recently been applied to strengthen supervised learning. Since previous self-supervised pretext tasks operate on the input data, they may incur substantial additional training overhead. In this paper we find that features in CNNs can also be used for self-supervision. We therefore design a \emph{feature-based pretext task} that requires only a small amount of additional training overhead. In our task we discard different particular regions of feature maps and then train the model to distinguish these transformed features. To fully exploit our feature-based pretext task in supervised learning, we also propose a novel learning framework containing multiple classifiers for further improvement. Original labels are expanded to joint labels via the self-supervision of feature transformations. With the additional semantic information provided by our self-supervised task, this approach trains CNNs more effectively. Extensive experiments on various supervised learning tasks demonstrate the accuracy improvement and wide applicability of our method.
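To make the idea concrete, the following is a minimal sketch (not taken from the paper) of how feature regions might be discarded and original labels expanded into joint (class, transformation) labels. It assumes a PyTorch-style backbone and joint classifier, and the helper names \texttt{drop\_feature\_region}, \texttt{joint\_label}, and \texttt{pretext\_loss} are hypothetical.

\begin{verbatim}
import torch
import torch.nn.functional as F

# Hypothetical helpers illustrating a feature-based pretext task:
# discard one spatial quadrant of a CNN feature map and expand the
# original label into a joint (class, transformation) label.

def drop_feature_region(feat, region):
    """Zero out one quadrant of an (N, C, H, W) feature map; region in {0,1,2,3}."""
    out = feat.clone()
    H, W = feat.shape[-2], feat.shape[-1]
    h, w = H // 2, W // 2
    rows = slice(0, h) if region < 2 else slice(h, H)
    cols = slice(0, w) if region % 2 == 0 else slice(w, W)
    out[..., rows, cols] = 0.0
    return out

def joint_label(y, region, num_transforms=4):
    """Map an original class label y to a joint label over (class, transformation)."""
    return y * num_transforms + region

def pretext_loss(backbone, joint_classifier, x, y, num_transforms=4):
    """Average cross-entropy over joint labels, one term per discarded region."""
    feat = backbone(x)  # (N, C, H, W) features
    loss = 0.0
    for region in range(num_transforms):
        logits = joint_classifier(drop_feature_region(feat, region))
        loss = loss + F.cross_entropy(logits, joint_label(y, region, num_transforms))
    return loss / num_transforms
\end{verbatim}

Because the transformations act on already-computed features rather than on the input images, each auxiliary view reuses the backbone forward pass, which is consistent with the small additional training overhead claimed above.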