We propose an ensemble model for predicting the lexical complexity of words and multiword expressions (MWEs). The model receives as input a sentence with a target word or MWE and outputs its complexity score. Given that a key challenge of this task is the limited size of annotated data, our model relies on pretrained contextual representations from different state-of-the-art transformer-based language models (i.e., BERT and RoBERTa), and on a variety of training methods for further enhancing model generalization and robustness: multi-step fine-tuning, multi-task learning, and adversarial training. Additionally, we propose to enrich contextual representations with hand-crafted features during training. Our model achieved competitive results and ranked among the top-10 systems in both sub-tasks.
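To make the feature-enrichment idea concrete, the sketch below shows one plausible way to combine a pretrained transformer's contextual representation of the target word or MWE with hand-crafted features for complexity regression. This is a minimal illustration, not the authors' implementation: the encoder name, the mean-pooling over target subwords, the sigmoid regression head, and the specific hand-crafted features (word length, subword count, log frequency) are all assumptions for demonstration; the paper's ensemble, multi-step fine-tuning, multi-task learning, and adversarial training are not shown.

```python
# Minimal sketch (assumptions, not the authors' code): regress a complexity
# score from [target-word contextual embedding ; hand-crafted features].
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class ComplexityRegressor(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased", n_handcrafted=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Regression head over the concatenated representation
        self.head = nn.Linear(hidden + n_handcrafted, 1)

    def forward(self, input_ids, attention_mask, target_mask, handcrafted):
        # Contextual token representations from the encoder's last layer
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        hidden_states = out.last_hidden_state               # (B, T, H)
        # Mean-pool over the subword positions of the target word/MWE
        mask = target_mask.unsqueeze(-1).float()            # (B, T, 1)
        target_repr = (hidden_states * mask).sum(1) / mask.sum(1).clamp(min=1.0)
        # Enrich with hand-crafted features before regression
        feats = torch.cat([target_repr, handcrafted], dim=-1)
        return torch.sigmoid(self.head(feats)).squeeze(-1)  # score in [0, 1]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
sentence, target = "The ubiquitous nature of smartphones.", "ubiquitous"
enc = tokenizer(sentence, return_tensors="pt")
# Mark the subword tokens belonging to the target word
target_ids = tokenizer(target, add_special_tokens=False)["input_ids"]
target_mask = torch.zeros_like(enc["input_ids"])
ids = enc["input_ids"][0].tolist()
for i in range(len(ids) - len(target_ids) + 1):
    if ids[i:i + len(target_ids)] == target_ids:
        target_mask[0, i:i + len(target_ids)] = 1
# Illustrative hand-crafted features: word length, subword count, log frequency
handcrafted = torch.tensor([[float(len(target)), float(len(target_ids)), 4.2]])

model = ComplexityRegressor()
score = model(enc["input_ids"], enc["attention_mask"], target_mask, handcrafted)
print(f"predicted complexity: {score.item():.3f}")
```

In this sketch, swapping `encoder_name` for a RoBERTa checkpoint and averaging the resulting predictions would give a simple two-model ensemble in the spirit the abstract describes.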