Recently, it has been shown that deploying proper self-supervision is a promising way to enhance the performance of supervised learning. Yet, the benefits of self-supervision are not fully exploited, as previous pretext tasks are specialized for unsupervised representation learning. To this end, we begin by presenting three desirable properties for such auxiliary tasks to assist the supervised objective. First, the tasks need to guide the model to learn rich features. Second, the transformations involved in the self-supervision should not significantly alter the training distribution. Third, the tasks should be lightweight and generic so that they are readily applicable to prior methods. Subsequently, to show how existing pretext tasks can fulfill these properties and be tailored for supervised learning, we propose a simple auxiliary self-supervision task, predicting localizable rotation (LoRot). Our extensive experiments validate the merits of LoRot as a pretext task tailored for supervised learning in terms of robustness and generalization capability. Our code is available at https://github.com/wjun0830/Localizable-Rotation.
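To make the idea concrete, below is a minimal sketch (not the authors' implementation) of how a LoRot-style auxiliary objective could be attached to a standard classifier: a randomly located local patch of each image is rotated by a multiple of 90 degrees, and an extra head predicts that rotation alongside the usual classification loss. The patch size, head layout, and loss weight `lambda_lorot` are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def apply_local_rotation(images: torch.Tensor, patch_size: int = 16):
    """Rotate a randomly located square patch in each image by k*90 degrees.

    Returns the transformed images and the rotation labels k in {0, 1, 2, 3}.
    """
    b, _, h, w = images.shape
    out = images.clone()
    labels = torch.randint(0, 4, (b,), device=images.device)
    for i in range(b):
        y = torch.randint(0, h - patch_size + 1, (1,)).item()
        x = torch.randint(0, w - patch_size + 1, (1,)).item()
        patch = out[i, :, y:y + patch_size, x:x + patch_size]
        out[i, :, y:y + patch_size, x:x + patch_size] = torch.rot90(
            patch, k=int(labels[i]), dims=(1, 2))
    return out, labels


class DualHeadModel(nn.Module):
    """Shared backbone with a class head and an auxiliary rotation head."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone          # assumed to output [B, feat_dim] features
        self.cls_head = nn.Linear(feat_dim, num_classes)
        self.rot_head = nn.Linear(feat_dim, 4)  # 4 rotation classes

    def forward(self, x):
        feats = self.backbone(x)
        return self.cls_head(feats), self.rot_head(feats)


def training_step(model, images, targets, lambda_lorot: float = 0.1):
    # Supervised loss on the transformed images plus the auxiliary rotation loss.
    rotated, rot_labels = apply_local_rotation(images)
    cls_logits, rot_logits = model(rotated)
    loss = F.cross_entropy(cls_logits, targets) \
        + lambda_lorot * F.cross_entropy(rot_logits, rot_labels)
    return loss
```

Because only a local patch is rotated, the global appearance of the image changes little, which is in line with the second property above (the transformation should not significantly alter the training distribution); the auxiliary head and the patch transform add only a small overhead to an existing training pipeline.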