Deep neural networks can achieve great success when presented with large data sets and sufficient computational resources. However, their ability to learn new concepts quickly is limited. Meta-learning is one approach to address this issue, by enabling the network to learn how to learn. The exciting field of Deep Meta-Learning advances at great speed, but lacks a unified, insightful overview of current techniques. This work presents just that. After providing the reader with a theoretical foundation, we investigate and summarize key methods, which are categorized into i) metric-, ii) model-, and iii) optimization-based techniques. In addition, we identify the main open challenges, such as performance evaluation on heterogeneous benchmarks, and reducing the computational costs of meta-learning.