Handling and digesting huge amounts of information efficiently has been a long-standing demand in modern society. Solutions that map key points (short textual summaries capturing essential information and filtering redundancies) to a large number of arguments/opinions have been proposed recently (Bar-Haim et al., 2020). To complete the picture of the argument-to-keypoint mapping task, we propose two main approaches in this paper. The first approach incorporates prompt engineering for fine-tuning pre-trained language models (PLMs). The second approach utilizes prompt-based learning in PLMs to generate intermediary texts, which are then combined with the original argument-keypoint pairs and fed as inputs to a classifier that maps them. Furthermore, we extend the experiments to in-domain and cross-domain settings to conduct an in-depth analysis. In our evaluation, we find that i) using prompt engineering in a more direct way (Approach 1) yields promising results and improves performance; and ii) Approach 2 performs considerably worse than Approach 1 due to the negation issue of the PLM.
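To illustrate the prompt-based formulation referred to in Approach 1, the following is a minimal sketch, not the paper's actual implementation: the argument-keypoint pair is verbalized into a cloze-style prompt and a PLM scores candidate label words at the mask position. The model choice (roberta-base), prompt wording, and label words are all assumptions made here for illustration.

```python
# Illustrative sketch of cloze-style prompting for argument-keypoint matching.
# Model, prompt template, and label words are assumptions, not the paper's setup.
from transformers import pipeline

# Masked-language-model pipeline; 'roberta-base' is an assumed backbone.
fill_mask = pipeline("fill-mask", model="roberta-base")

argument = "School uniforms suppress students' individuality."
key_point = "School uniform harms the student's self expression."

# Verbalize the pair so the PLM predicts a label word at the mask position.
prompt = (
    f"Argument: {argument} Key point: {key_point} "
    f"The key point <mask> the argument."
)

# Score candidate label words; multi-token targets are approximated by
# their first subword, which the pipeline warns about but still handles.
for candidate in fill_mask(prompt, targets=["matches", "contradicts"]):
    print(candidate["token_str"], round(candidate["score"], 4))
```

The relative scores of the label words can then be read as a matched/unmatched decision for the pair; fine-tuning the PLM on such verbalized pairs is one way the prompt-engineering setup could be realized.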