In this work, we explore joint energy-based model (EBM) training during the finetuning of pretrained text encoders (e.g., RoBERTa) for natural language understanding (NLU) tasks. Our experiments show that EBM training helps the model achieve calibration that is competitive with strong baselines, with little or no loss in accuracy. We discuss three variants of energy functions (namely scalar, hidden, and sharp-hidden) that can be defined on top of a text encoder, and compare them in experiments. Due to the discreteness of text data, we adopt noise contrastive estimation (NCE) to train the energy-based model. To make NCE training more effective, we train an auto-regressive noise model with the masked language model (MLM) objective.
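To make the three energy variants concrete, below is a minimal PyTorch sketch (not the authors' code) of how each could be defined on top of a pretrained text encoder such as RoBERTa. The module and head names (`EnergyTextEncoder`, `scalar_head`, `classifier`) are hypothetical, and the exact formulations are assumptions: the "hidden" variant is written JEM-style as the negative log-sum-exp of the class logits, and "sharp-hidden" as the negative maximum logit.

```python
import torch
import torch.nn as nn
from transformers import RobertaModel


class EnergyTextEncoder(nn.Module):
    """Sketch of a classifier with an energy head defined on the encoder output."""

    def __init__(self, num_labels: int, variant: str = "scalar"):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained("roberta-base")
        hidden = self.encoder.config.hidden_size
        self.classifier = nn.Linear(hidden, num_labels)  # task head producing class logits
        self.scalar_head = nn.Linear(hidden, 1)          # extra head used only by the "scalar" variant
        self.variant = variant

    def forward(self, input_ids, attention_mask):
        # Use the first-token (<s>/[CLS]) representation as the sentence embedding.
        h = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state[:, 0]
        logits = self.classifier(h)

        if self.variant == "scalar":
            # Energy from a separate scalar head, decoupled from the class logits.
            energy = self.scalar_head(h).squeeze(-1)
        elif self.variant == "hidden":
            # JEM-style energy: negative log-sum-exp over the class logits.
            energy = -torch.logsumexp(logits, dim=-1)
        elif self.variant == "sharp-hidden":
            # "Sharp" version: negative maximum logit.
            energy = -logits.max(dim=-1).values
        else:
            raise ValueError(f"unknown variant: {self.variant}")
        return logits, energy
```

Under this sketch, NCE training would add a binary-classification loss that contrasts the energy assigned to real training sentences against samples drawn from the noise language model, alongside the usual cross-entropy loss on the logits.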