Integrating knowledge across different domains is an essential feature of human learning. Learning paradigms such as transfer learning, meta learning, and multi-task learning mirror the human learning process by exploiting prior knowledge for new tasks, enabling faster learning and better generalization. This article gives a detailed view of these learning paradigms along with a comparative analysis. The weakness of one learning algorithm often turns out to be a strength of another, so merging them is a prevalent trend in the literature. Numerous research papers focus on each of these learning paradigms separately and provide comprehensive overviews of them. In contrast, this article reviews research studies that combine (two of) these learning algorithms. The survey describes how these techniques are combined to solve problems in many different fields of study, including computer vision, natural language processing, hyperspectral imaging, and many more, in the supervised setting only. Finally, a global generic learning network, an amalgamation of meta learning, transfer learning, and multi-task learning, is introduced here, along with some open research questions and future research directions in the multi-task setting.