Meta-learning enables learning systems to adapt quickly to new tasks, much as humans do. Diverse meta-learning approaches all operate within the mini-batch episodic training framework. This framework naturally provides task-identity information, which can serve as additional supervision during meta-training to improve generalizability. Inspired by the alignment and discrimination abilities intrinsic to human fast learning, we propose to exploit task identity as additional supervision in meta-training. This is achieved by contrasting what meta-learners learn, i.e., model representations. The proposed ConML evaluates and optimizes this contrastive meta-objective under a problem- and learner-agnostic meta-training framework. We demonstrate that ConML integrates seamlessly with existing meta-learners, as well as in-context learning models, and brings significant performance gains at small implementation cost.
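The core idea of contrasting model representations by task identity can be sketched as an InfoNCE-style objective: representations of models adapted to the same task are pulled together (alignment), while those from different tasks are pushed apart (discrimination). The snippet below is a minimal, hypothetical illustration of such an objective in NumPy; the function name, temperature value, and the use of cosine similarity are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def contrastive_meta_objective(reps, task_ids, temperature=0.1):
    """InfoNCE-style contrast over model representations.

    reps: (N, D) array, one representation per adapted model in the mini-batch
    task_ids: (N,) ints; equal ids mark models adapted to the same task
    """
    # L2-normalize so that dot products are cosine similarities
    reps = reps / np.linalg.norm(reps, axis=1, keepdims=True)
    sim = reps @ reps.T / temperature            # (N, N) scaled similarities
    np.fill_diagonal(sim, -np.inf)               # exclude self-pairs
    logits = sim - sim.max(axis=1, keepdims=True)  # numerical stabilization
    exp = np.exp(logits)
    prob = exp / exp.sum(axis=1, keepdims=True)  # softmax over candidates
    n = len(reps)
    # positives: same task id, excluding the model itself
    same = (task_ids[:, None] == task_ids[None, :]) & ~np.eye(n, dtype=bool)
    # average negative log-probability assigned to positive (same-task) pairs
    return -np.log(prob[same] + 1e-12).mean()
```

Under this toy objective, a batch whose same-task representations are mutually similar yields a lower loss than one where task labels are scrambled, which is the signal a meta-learner would be trained to maximize.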