In the era of deep learning, modeling for most NLP tasks has converged to several mainstream paradigms. For example, we usually adopt the sequence labeling paradigm to solve a set of tasks such as POS tagging, NER, and chunking, and the classification paradigm to solve tasks like sentiment analysis. With the rapid progress of pre-trained language models, recent years have witnessed a rising trend of paradigm shift: solving one NLP task by reformulating it as another. Paradigm shift has achieved great success on many tasks, becoming a promising way to improve model performance. Moreover, some of these paradigms have shown great potential to unify a large number of NLP tasks, making it possible to build a single model to handle diverse tasks. In this paper, we review this phenomenon of paradigm shift in recent years, highlighting several paradigms that have the potential to solve diverse NLP tasks.