Fine-tuning pretrained language models (LMs) without making any architectural changes has become the norm for learning various downstream language tasks. For non-language downstream tasks, however, a common practice is to employ task-specific designs for the input layer, output layer, and loss function. For instance, one can fine-tune an LM into an MNIST classifier by replacing the word embedding layer with an image patch embedding layer, the word token output layer with a 10-way output layer, and the word prediction loss with a 10-way classification loss. A natural question arises: can LM fine-tuning solve non-language downstream tasks without changing the model architecture or loss function? To answer this, we propose Language-Interfaced Fine-Tuning (LIFT) and study its efficacy and limitations through an extensive empirical study on a suite of non-language classification and regression tasks. LIFT makes no changes to the model architecture or loss function; it relies solely on the natural language interface, enabling "no-code machine learning with LMs." We find that LIFT performs relatively well across a wide range of low-dimensional classification and regression tasks, matching the performance of the best baselines in many cases, especially on classification tasks. We report experimental results on the fundamental properties of LIFT, including its inductive bias, sample efficiency, ability to extrapolate, robustness to outliers and label noise, and generalization. We also analyze properties and techniques specific to LIFT, e.g., context-aware learning via appropriate prompting, quantification of predictive uncertainty, and two-stage fine-tuning. Our code is available at https://github.com/UW-Madison-Lee-Lab/LanguageInterfacedFineTuning.
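To make the core idea concrete, below is a minimal sketch of how a tabular example might be serialized into a prompt/completion pair so that a standard LM can be fine-tuned on it with its ordinary next-token prediction loss. The prompt template and the helper name `to_prompt` are illustrative assumptions for exposition, not the paper's verbatim format.

```python
# A minimal sketch of the language-interfacing step in LIFT: turn one
# (features, label) example into plain text, so the LM is fine-tuned
# with its usual word-prediction loss and no architectural changes.
# The template below is a hypothetical example, not the paper's exact one.

def to_prompt(features, label=None):
    """Serialize one tabular example into a prompt/completion text pair."""
    parts = ", ".join(f"x{i + 1} = {v}" for i, v in enumerate(features))
    prompt = f"When we have {parts}, what should be the output?"
    # During fine-tuning the completion carries the label as text;
    # at inference time we omit it and let the model generate it.
    completion = None if label is None else f" {label}"
    return prompt, completion

# Example: one row of a 10-way classification task.
prompt, completion = to_prompt([0.0, 0.5, 0.9, 0.1], label=7)
print(prompt)      # When we have x1 = 0.0, x2 = 0.5, ..., what should be the output?
print(completion)  # " 7"
```

Because both classification labels and regression targets are rendered as text, the same serialization handles both task families; the LM's output is parsed back from the generated tokens.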