While few-shot learning (FSL) aims for rapid generalization to new concepts with little supervision, self-supervised learning (SSL) constructs supervisory signals directly from unlabeled data. Exploiting the complementarity of these two paradigms, few-shot auxiliary learning has recently drawn much attention as a way to cope with scarce labeled data. Previous works benefit from sharing inductive bias between the main task (FSL) and auxiliary tasks (SSL), where the parameters shared across tasks are optimized by minimizing a linear combination of task losses. However, selecting a proper weight to balance the tasks and reduce task conflict is challenging. To handle the problem as a whole, we propose a novel approach named Pareto self-supervised training (PSST) for FSL. PSST explicitly decomposes the few-shot auxiliary problem into multiple constrained multi-objective subproblems with different trade-off preferences, and identifies a preference region in which the main task achieves the best performance. An effective preferred Pareto exploration is then proposed to find a set of optimal solutions within this preference region. Extensive experiments on several public benchmark datasets validate the effectiveness of our approach, which achieves state-of-the-art performance.
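The conventional auxiliary-learning setup the abstract critiques can be sketched as follows: a shared encoder feeds both a main few-shot head and an auxiliary self-supervised head, and their losses are mixed with a single fixed weight. This is an illustrative toy sketch, not the paper's method; all module names, sizes, and the weight value are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy shared encoder with two task heads (illustrative, not from the paper).
encoder = nn.Linear(32, 16)
fsl_head = nn.Linear(16, 5)   # main few-shot classification head
ssl_head = nn.Linear(16, 4)   # auxiliary SSL head (e.g. rotation prediction)

x = torch.randn(8, 32)
fsl_labels = torch.randint(0, 5, (8,))
ssl_labels = torch.randint(0, 4, (8,))

z = encoder(x)
loss_main = F.cross_entropy(fsl_head(z), fsl_labels)
loss_aux = F.cross_entropy(ssl_head(z), ssl_labels)

# The linear combination of task losses: a single fixed trade-off weight.
# Choosing lam well is the difficulty PSST sidesteps; a poor value lets the
# auxiliary task dominate or starves it, causing task conflict.
lam = 0.5  # hypothetical value
total_loss = loss_main + lam * loss_aux
total_loss.backward()  # shared-encoder gradients mix both tasks
```

PSST instead treats the two losses as separate objectives and searches a preference region of the Pareto front, rather than committing to one scalar weight up front.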