Prompt-based fine-tuning of pre-trained language models has proven effective for many natural language processing tasks under few-shot settings in the general domain. However, prompt tuning in the biomedical domain has not been investigated thoroughly. Biomedical words are often rare in the general domain but ubiquitous in biomedical contexts, which dramatically deteriorates the performance of pre-trained models on downstream biomedical applications even after fine-tuning, especially in low-resource scenarios. We propose a simple yet effective approach that helps models learn rare biomedical words during prompt tuning. Experimental results show that our method achieves up to a 6% improvement on a biomedical natural language inference task, without any extra parameters or training steps, under few-shot vanilla prompt settings.
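To make the setup concrete, the snippet below is a minimal sketch of how prompt-based fine-tuning typically recasts natural language inference as cloze-style masked-token prediction. The template, mask token, and verbalizer words here are illustrative assumptions, not the paper's exact design.

```python
# Hypothetical cloze template and verbalizer for prompt-based NLI tuning.
# The classifier is replaced by the model's masked-token prediction: the
# label whose verbalizer word scores highest at the mask position wins.

MASK = "[MASK]"

# Assumed verbalizer: maps each NLI label to a single label word whose
# logit at the mask position is compared during tuning.
VERBALIZER = {"entailment": "Yes", "contradiction": "No", "neutral": "Maybe"}

def build_prompt(premise: str, hypothesis: str) -> str:
    """Wrap a premise/hypothesis pair in a cloze template; fine-tuning
    trains the model to fill the mask with the correct label word."""
    return f"{premise} ? {MASK} , {hypothesis}"

prompt = build_prompt(
    "Aspirin inhibits platelet aggregation.",
    "Aspirin affects blood clotting.",
)
print(prompt)
```

Under this framing, a rare biomedical term such as "platelet aggregation" must be well represented for the mask prediction to succeed, which is the gap the proposed method targets.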