Recently, a variety of machine learning methods have been introduced to tackle the challenging few-shot learning scenario, that is, learning from a small labeled dataset related to a specific task. Common approaches have taken the form of meta-learning: learning to learn on the new problem given the old. Following the recognition that meta-learning implements learning in a multi-level model, we present a Bayesian treatment of the meta-learning inner loop through the use of deep kernels. As a result, we can learn a kernel that transfers to new tasks; we call this approach Deep Kernel Transfer (DKT). It has several advantages: it is straightforward to implement as a single optimizer, provides uncertainty quantification, and does not require estimation of task-specific parameters. We empirically demonstrate that DKT outperforms several state-of-the-art algorithms in few-shot classification, and is the state of the art for cross-domain adaptation and regression. We conclude that complex meta-learning routines can be replaced by a simpler Bayesian model without loss of accuracy.
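The core idea behind a "deep kernel" (a base kernel applied to learned network features, whose Gaussian-process marginal likelihood drives training) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature extractor is a fixed, randomly initialized one-layer map, the base kernel is an RBF, and all function and parameter names are chosen here for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a deep feature extractor (fixed weights for illustration).
W = rng.normal(size=(1, 8))
b = rng.normal(size=8)

def features(x):
    """Map (n, 1) raw inputs to an (n, 8) learned representation."""
    return np.tanh(x @ W + b)

def rbf_kernel(z1, z2, lengthscale=1.0):
    """RBF base kernel evaluated on feature vectors."""
    sq_dists = ((z1[:, None, :] - z2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists / lengthscale ** 2)

def gp_marginal_log_likelihood(x, y, noise=0.1):
    """GP marginal log likelihood under the deep kernel k(x, x') = rbf(f(x), f(x'))."""
    K = rbf_kernel(features(x), features(x)) + noise * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return float(-0.5 * y @ alpha
                 - np.log(np.diag(L)).sum()
                 - 0.5 * len(x) * np.log(2 * np.pi))

# Toy regression task: in the full method, this objective would be maximized
# with respect to the network weights and kernel hyperparameters across tasks.
x = np.linspace(-1.0, 1.0, 10)[:, None]
y = np.sin(3.0 * x).ravel()
print(gp_marginal_log_likelihood(x, y))
```

In the full method, a single optimizer would backpropagate through this marginal likelihood to update the feature extractor and kernel hyperparameters jointly, which is what makes the approach a single-loop alternative to nested meta-learning routines.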