We present a new paradigm for fine-tuning large-scale vision-language pre-trained models on downstream tasks, dubbed Prompt Regularization (ProReg). Different from traditional fine-tuning, which easily overfits to the downstream task data, ProReg uses the prediction obtained by prompting the pretrained model to regularize the fine-tuning. The motivation is: by prompting the large model with "a photo of a [CLASS]", the fill-in answer depends only on the pretraining encyclopedic knowledge and is independent of the task data distribution, which is usually biased. Specifically, given a training sample's prediction during fine-tuning, we first calculate its Kullback-Leibler loss with respect to the prompt prediction and its Cross-Entropy loss with respect to the ground-truth label, and then combine them with a proposed sample-wise adaptive trade-off weight, which automatically adjusts the transfer between the pretrained and downstream domains. On various out-of-distribution benchmarks, we show the consistently strong performance of ProReg compared with conventional fine-tuning, zero-shot prompting, prompt tuning, and other state-of-the-art methods.
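To make the described objective concrete, below is a minimal sketch of a ProReg-style loss in PyTorch. It combines a per-sample Cross-Entropy term against the ground-truth label with a per-sample KL term toward the frozen model's zero-shot prompt prediction. The function name `proreg_loss`, the temperature `tau`, and in particular the specific form of the sample-wise weight `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def proreg_loss(student_logits, prompt_logits, labels, tau=1.0):
    """Illustrative ProReg-style objective: Cross-Entropy to the ground-truth
    label combined with KL divergence toward the zero-shot prompt prediction,
    traded off by a sample-wise adaptive weight (sketch only)."""
    # Cross-Entropy against downstream ground-truth labels (per sample).
    ce = F.cross_entropy(student_logits, labels, reduction="none")

    # KL divergence between the fine-tuned prediction and the zero-shot
    # prompt prediction of the frozen pretrained model (per sample).
    log_p = F.log_softmax(student_logits / tau, dim=-1)
    q = F.softmax(prompt_logits.detach() / tau, dim=-1)
    kl = F.kl_div(log_p, q, reduction="none").sum(dim=-1)

    # Hypothetical sample-wise trade-off: rely more on the prompt prediction
    # when the Cross-Entropy loss dominates for this sample (a stand-in for
    # the paper's adaptive weight, whose exact form is not given here).
    alpha = ce.detach() / (ce.detach() + kl.detach() + 1e-8)

    return ((1.0 - alpha) * ce + alpha * kl).mean()
```

In use, `student_logits` would come from the model being fine-tuned and `prompt_logits` from a frozen copy of the pretrained model queried with the "a photo of a [CLASS]" prompts; detaching the prompt branch keeps the regularizer from back-propagating into the frozen teacher.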