This paper introduces our systems for all three subtasks of SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning. To help our model better represent and understand abstract concepts in natural language, we carefully design several simple and effective approaches adapted to the backbone model (RoBERTa). Specifically, we formalize the subtasks into a multiple-choice question answering format and add special tokens around abstract concepts; the final prediction of the question answering model is then taken as the result for each subtask. Additionally, we employ several fine-tuning tricks to improve performance. Experimental results show that our approaches achieve significant improvements over the baseline systems. Our approaches rank eighth on subtask 1 and tenth on subtask 2.