Distance Metric Learning (DML) has attracted much attention in image processing in recent years. This paper analyzes its impact on the supervised fine-tuning of language models for Natural Language Processing (NLP) classification tasks under few-shot learning settings. We investigated several DML loss functions for training RoBERTa language models on the well-known SentEval Transfer Tasks datasets. We also analyzed the possibility of using proxy-based DML losses during model inference. Our systematic experiments show that, under few-shot learning settings, proxy-based DML losses in particular can positively affect the fine-tuning and inference of a supervised language model. Models tuned with a combination of CCE (categorical cross-entropy) loss and ProxyAnchor loss achieve, on average, the best performance and outperform models trained with CCE alone by about 3.27 percentage points -- up to 10.38 percentage points depending on the training dataset.
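To make the combined objective concrete, the sketch below shows one plausible way to fine-tune a RoBERTa classifier with CCE plus a ProxyAnchor term. It is not the authors' released code: the use of the pytorch-metric-learning library, the choice of the <s> token embedding as the sentence representation, and the equal weighting of the two losses are all assumptions for illustration.

```python
# Minimal sketch (assumptions noted above): RoBERTa fine-tuned with
# CCE + ProxyAnchor loss, using pytorch-metric-learning's proxy-based loss.
import torch.nn as nn
from transformers import RobertaModel
from pytorch_metric_learning import losses


class RobertaWithProxyAnchor(nn.Module):
    def __init__(self, num_classes: int, embedding_size: int = 768):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained("roberta-base")
        self.classifier = nn.Linear(embedding_size, num_classes)
        # Proxy-based DML loss: one learnable proxy vector per class.
        self.proxy_anchor = losses.ProxyAnchorLoss(
            num_classes=num_classes, embedding_size=embedding_size
        )
        self.cce = nn.CrossEntropyLoss()

    def forward(self, input_ids, attention_mask, labels):
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # Use the first (<s>) token representation as the sentence embedding.
        embeddings = outputs.last_hidden_state[:, 0, :]
        logits = self.classifier(embeddings)
        # Combined objective: CCE on the logits plus ProxyAnchor on the
        # embeddings (equal weighting is a hypothetical choice, not from the paper).
        loss = self.cce(logits, labels) + self.proxy_anchor(embeddings, labels)
        return loss, logits
```

Note that the proxies inside `ProxyAnchorLoss` are learnable parameters, so the whole module (encoder, classifier, and loss) should be passed to the optimizer during fine-tuning.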