Deep learning perception models require massive amounts of labeled training data to achieve good performance. While unlabeled data is easy to acquire, the cost of labeling is prohibitive and can place a tremendous burden on companies or individuals. Recently, self-supervision has emerged as an alternative way to leverage unlabeled data. In this paper, we propose a new lightweight self-supervised learning framework that boosts supervised learning performance with minimal additional computation cost. We introduce a simple and flexible multi-task co-training framework that integrates a self-supervised task into any supervised task. Our approach exploits pretext tasks to incur minimal compute and parameter overheads and minimal disruption to existing training pipelines. We demonstrate the effectiveness of our framework with two self-supervised tasks on object detection and panoptic segmentation models. Our results show that both self-supervised tasks can improve the accuracy of the supervised task and, at the same time, demonstrate strong domain adaptation capability when used with additional unlabeled data.
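The co-training scheme described above can be sketched as a single joint objective over a shared backbone, where labeled data drives the supervised head and unlabeled data drives the self-supervised (pretext) head in the same update. The toy scalar model, function names, and reconstruction-style pretext task below are illustrative assumptions for exposition, not the paper's implementation:

```python
# Minimal sketch of multi-task co-training (hypothetical toy model, not
# the paper's implementation). A shared backbone parameter `w` feeds two
# heads: a supervised head `v` and a self-supervised head `u`. The joint
# loss L = L_sup + lam * L_ssl lets labeled and unlabeled samples update
# the shared backbone in the same training step.

def backbone(x, w):
    return w * x  # shared feature extractor (toy: one scalar weight)

def supervised_loss(x, y, w, v):
    f = backbone(x, w)
    return (v * f - y) ** 2  # supervised head predicts the label y

def pretext_loss(x, w, u):
    f = backbone(x, w)
    return (u * f - x) ** 2  # toy pretext task: reconstruct the input

def co_training_step(w, v, u, labeled, unlabeled, lam=0.5, lr=0.01):
    """One gradient step on the joint loss L_sup + lam * L_ssl."""
    x, y = labeled
    xu = unlabeled
    f, fu = backbone(x, w), backbone(xu, w)
    # analytic gradients of the joint loss w.r.t. each parameter;
    # only the shared backbone `w` receives both loss terms
    g_w = 2 * (v * f - y) * v * x + lam * 2 * (u * fu - xu) * u * xu
    g_v = 2 * (v * f - y) * f
    g_u = lam * 2 * (u * fu - xu) * fu
    return w - lr * g_w, v - lr * g_v, u - lr * g_u

w, v, u = 0.5, 0.5, 0.5
for _ in range(1000):
    w, v, u = co_training_step(w, v, u, labeled=(1.0, 2.0), unlabeled=1.5)
```

Because the pretext head needs no labels, the second loss term can be computed on arbitrarily large unlabeled batches, which is what gives the framework its domain adaptation behavior.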