Deep learning has been the mainstream technique in natural language processing (NLP). However, these techniques require large amounts of labeled data and are less generalizable across domains. Meta-learning is an emerging field in machine learning that studies approaches to learning better learning algorithms, aiming to improve algorithms in various aspects, including data efficiency and generalizability. The efficacy of these approaches has been demonstrated in many NLP tasks, but there is no systematic survey of them in NLP, which hinders more researchers from joining the field. Our goal with this survey paper is to offer researchers pointers to relevant meta-learning works in NLP and to attract more attention from the NLP community to drive future innovation. This paper first introduces the general concepts of meta-learning and the common approaches. It then summarizes task-construction settings and applications of meta-learning to various NLP problems, and reviews the development of meta-learning in the NLP community.