Stance detection aims to identify whether the author of a text is in favor of, against, or neutral toward a given target. The main challenge of this task is two-fold: few-shot learning caused by the wide variety of targets, and the lack of contextual information about the targets. Existing works mainly focus on solving the second issue by designing attention-based models or introducing noisy external knowledge, while the first issue remains under-explored. In this paper, inspired by the potential of pre-trained language models (PLMs) to serve as knowledge bases and few-shot learners, we propose to introduce prompt-based fine-tuning for stance detection. PLMs can provide essential contextual information for the targets and enable few-shot learning via prompts. Considering the crucial role of the target in the stance detection task, we design target-aware prompts and propose a novel verbalizer. Instead of mapping each label to a concrete word, our verbalizer maps each label to a vector and picks the label that best captures the correlation between the stance and the target. Moreover, to alleviate the possible defect of handling varying targets with a single hand-crafted prompt, we propose to distill the information learned from multiple prompts. Experimental results show the superior performance of our proposed model in both full-data and few-shot scenarios.
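The target-aware prompt and vector verbalizer described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the template wording, the function names (`build_prompt`, `score_labels`), and the use of random vectors in place of real PLM hidden states and learned label embeddings are all assumptions for demonstration.

```python
import numpy as np

def build_prompt(text: str, target: str) -> str:
    """Wrap the input with a hypothetical target-aware template; the
    [MASK] position is where the PLM's hidden state would be read out."""
    return f"{text} The stance on {target} is [MASK]."

def score_labels(mask_hidden: np.ndarray, label_vectors: dict) -> str:
    """Vector verbalizer sketch: instead of mapping each label to a
    concrete word, compare the [MASK] representation against one vector
    per label and return the best-matching label."""
    scores = {lab: float(mask_hidden @ vec) for lab, vec in label_vectors.items()}
    return max(scores, key=scores.get)

# Toy demonstration with random stand-ins for the PLM state and the
# learned label vectors (in the real model these would be trained).
rng = np.random.default_rng(0)
labels = {lab: rng.normal(size=8) for lab in ("favor", "against", "neutral")}
hidden = labels["favor"] + 0.1 * rng.normal(size=8)  # close to the "favor" vector
print(build_prompt("Vaccines save lives.", "vaccination"))
print(score_labels(hidden, labels))
```

The key design choice sketched here is that the label space is continuous: each stance label is a learnable vector scored against the masked position, rather than a fixed verbalizer word, which lets the mapping adapt to the stance-target correlation during fine-tuning.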