In this paper, we consider the framework of multi-task representation (MTR) learning, where the goal is to use source tasks to learn a representation that reduces the sample complexity of solving a target task. We start by reviewing recent advances in MTR theory and show that they can provide novel insights for popular meta-learning algorithms when analyzed within this framework. In particular, we highlight a fundamental difference between gradient-based and metric-based algorithms in practice and put forward a theoretical analysis to explain it. Finally, we use the derived insights to improve the performance of meta-learning methods via a new spectral-based regularization term and confirm its effectiveness through experimental studies on few-shot classification benchmarks. To the best of our knowledge, this is the first contribution that puts the most recent learning bounds of MTR theory into practice for the task of few-shot classification.
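The abstract names a spectral-based regularization term without giving its form. Below is a minimal illustrative sketch, assuming the regularizer penalizes how far the singular-value spectrum of a learned representation matrix is from being well-conditioned; the function name `spectral_regularizer` and the exact penalty (condition number minus one) are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def spectral_regularizer(W):
    """Hypothetical spectral penalty on a representation matrix W.

    Computes sigma_max / sigma_min - 1, which is zero exactly when all
    singular values of W are equal (perfectly conditioned) and grows as
    the spectrum becomes more anisotropic.
    """
    # Singular values are returned in descending order.
    s = np.linalg.svd(W, compute_uv=False)
    return s[0] / s[-1] - 1.0

# An orthogonal (well-conditioned) matrix incurs no penalty,
# while a skewed spectrum is penalized.
print(spectral_regularizer(np.eye(3)))          # ~0.0
print(spectral_regularizer(np.diag([2.0, 1.0])))  # 1.0
```

In a training loop, such a term would typically be added to the task loss with a trade-off weight, nudging the feature extractor toward representations whose directions are used evenly across tasks.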