In this paper, we propose to study language modelling as a multi-task problem, bringing together three strands of research: multi-task learning, linguistics, and interpretability. Based on hypotheses derived from linguistic theory, we investigate whether language models adhere to the learning principles of multi-task learning during training. To showcase the idea, we analyse the generalisation behaviour of language models as they learn the linguistic concept of Negative Polarity Items (NPIs). Our experiments demonstrate that a multi-task setting naturally emerges within the objective of the more general task of language modelling. We argue that this insight is valuable for multi-task learning, linguistics, and interpretability research, and can lead to exciting new findings in all three domains.