This paper presents the team TransQuest's participation in the Sentence-Level Direct Assessment shared task at WMT 2020. We introduce a simple QE framework based on cross-lingual transformers and use it to implement and evaluate two different neural architectures. The proposed methods achieve state-of-the-art results, surpassing those obtained by OpenKiwi, the baseline used in the shared task. We further improve the QE framework through ensembling and data augmentation. According to the official WMT 2020 results, our approach is the winning solution for all language pairs.