This paper describes our system for Task 4 of SemEval-2021: Reading Comprehension of Abstract Meaning (ReCAM). We participated in all subtasks, where the main goal was to predict an abstract word missing from a statement. We fine-tuned pre-trained masked language models, namely BERT and ALBERT, and submitted an ensemble of these models for Subtask 1 (ReCAM-Imperceptibility) and Subtask 2 (ReCAM-Nonspecificity). For Subtask 3 (ReCAM-Intersection), we submitted the ALBERT model alone, as it gave the best results. We tried multiple approaches and found that the Masked Language Modeling (MLM) based approach works best.
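For concreteness, the following is a minimal sketch of what an MLM-based scorer for this cloze-style task might look like, assuming the HuggingFace Transformers library, a generic BERT checkpoint, and a "@placeholder" blank marker in the statement; the model name, example inputs, and helper function are illustrative assumptions, not the authors' actual code.

```python
# Minimal sketch (illustrative, not the submitted system): score each answer
# option by the probability the masked language model assigns to it at the
# blank position, then pick the highest-scoring option.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-uncased"  # an ALBERT run would swap in e.g. "albert-base-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name).eval()

def score_options(statement_with_blank, options):
    """Return MLM log-probabilities for each (single-token) option at the blank."""
    text = statement_with_blank.replace("@placeholder", tokenizer.mask_token)
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    # Locate the [MASK] token inserted at the blank position.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    log_probs = torch.log_softmax(logits, dim=-1)
    return [log_probs[tokenizer.convert_tokens_to_ids(opt)].item() for opt in options]

# Hypothetical usage: choose the most probable abstract word for the blank.
options = ["possibility", "certainty", "objection", "dream", "reality"]
scores = score_options("He saw no @placeholder of success.", options)
print(options[max(range(len(options)), key=lambda i: scores[i])])
```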