Traditional hand-crafted, linguistically informed features have often been used to distinguish between translated and original, non-translated texts. By contrast, neural architectures that require no manual feature engineering have so far been less explored for this task. In this work, we (i) compare the traditional feature-engineering-based approach to the feature-learning-based one and (ii) analyse the neural architectures to investigate how well the hand-crafted features explain the variance in the neural models' predictions. We use pre-trained neural word embeddings, as well as several end-to-end neural architectures in both monolingual and multilingual settings, and compare them to feature-engineering-based SVM classifiers. We show that (i) neural architectures outperform the other approaches by more than 20 accuracy points, with the BERT-based model performing best in both the monolingual and multilingual settings; (ii) while many individual hand-crafted translationese features correlate with the neural models' predictions, feature importance analysis shows that the most important features for the neural and classical architectures differ; and (iii) our multilingual experiments provide empirical evidence for translationese universals across languages.
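To make the two analyses named above concrete, the following is a minimal sketch (not the authors' code) of (a) a feature-engineering baseline, a linear SVM over a hand-crafted feature matrix, and (b) a post-hoc correlation of each hand-crafted feature with a neural classifier's predicted probabilities. All data here is a synthetic placeholder: in the real setup the feature matrix would hold translationese features (e.g. type/token ratio, function-word frequencies), and the probabilities would come from the fine-tuned BERT-based model.

```python
# Sketch under synthetic assumptions; labels: 1 = translated, 0 = original.
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_docs, n_feats = 500, 10
X = rng.normal(size=(n_docs, n_feats))  # placeholder hand-crafted feature matrix
y = (X[:, 0] + 0.1 * rng.normal(size=n_docs) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# (a) Feature-engineering baseline: linear SVM on the engineered features.
svm = LinearSVC().fit(X_tr, y_tr)
print(f"SVM accuracy: {svm.score(X_te, y_te):.3f}")

# (b) Stand-in for the neural model's P(translated) on the test set; in
# practice these scores would come from the fine-tuned BERT classifier.
neural_probs = 1.0 / (1.0 + np.exp(-X_te[:, 0]))

# Correlate each hand-crafted feature with the neural predictions to see
# how well the features explain the variance in the model's outputs.
for j in range(n_feats):
    r, p = pearsonr(X_te[:, j], neural_probs)
    print(f"feature {j}: r = {r:+.2f} (p = {p:.3g})")
```

Note that per-feature correlations of this kind are only one lens; the abstract's feature importance analysis additionally compares which features matter most to the classical versus the neural classifiers.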