Large pre-trained language models (PLMs) have become the most desirable starting point in the field of NLP, as they have proven remarkably good at solving many individual tasks. Despite this success, in this paper we argue that current paradigms of working with PLMs neglect a critical aspect of modeling human intelligence: functional compositionality. Functional compositionality, the ability to compose learned tasks, has been a long-standing challenge in AI (and many other fields), as it is considered one of the hallmarks of human intelligence. An illustrative example is cross-lingual summarization, where a bilingual (English-French) person can directly summarize an English document into French sentences without having to explicitly translate the English document or its summary into French. We discuss why this is an important open problem that requires further attention from the field. We then show that current PLMs (e.g., GPT-2 and T5) do not yet exhibit functional compositionality and are far from human-level generalizability. Finally, we suggest several research directions that could push the field towards zero-shot functional compositionality of language models.