Social media companies, as well as authorities, make extensive use of artificial intelligence (AI) tools to monitor postings containing hate speech, celebrations of violence, or profanity. Since AI software requires massive volumes of data to train computers, Machine Translation (MT) of online content is commonly used to process posts written in several languages and hence augment the data needed for training. However, MT mistakes are a regular occurrence when translating sentiment-oriented user-generated content (UGC), especially when a low-resource language is involved. The adequacy of the whole process relies on the assumption that the evaluation metrics used give a reliable indication of translation quality. In this paper, we assess the ability of automatic quality metrics to detect critical machine translation errors which can cause serious misunderstanding of the affect message. We compare the performance of three canonical metrics on meaningless translations, where the semantic content is seriously impaired, against meaningful translations with a critical error that exclusively distorts the sentiment of the source text. We conclude that automatic metrics need fine-tuning to make them more robust in detecting sentiment-critical errors.
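To illustrate the kind of comparison described above, the following is a minimal sketch of scoring two flawed translations of the same source with canonical surface-overlap metrics (BLEU, chrF, and TER as implemented in the sacrebleu library). The reference sentence, the two hypotheses, and the choice of these three metrics are illustrative assumptions for exposition, not the paper's actual data or metric set.

# Hypothetical sketch: comparing a garbled, meaningless MT output against a fluent
# output containing a single sentiment-critical error (a dropped negation).
from sacrebleu.metrics import BLEU, CHRF, TER

reference = ["I am not happy with this product at all"]

# Meaningless translation: semantic content is seriously impaired.
hyp_meaningless = "happy product the with at not all am"
# Meaningful translation with a sentiment-critical error: negation is lost.
hyp_sentiment_error = "I am happy with this product"

metrics = {"BLEU": BLEU(effective_order=True), "chrF": CHRF(), "TER": TER()}

for name, metric in metrics.items():
    s1 = metric.sentence_score(hyp_meaningless, reference).score
    s2 = metric.sentence_score(hyp_sentiment_error, reference).score
    # Note: BLEU and chrF are higher-is-better; TER is an error rate (lower is better).
    # Surface-overlap metrics tend to reward the fluent but sentiment-distorting output,
    # even though it is the more dangerous error for affect-sensitive applications.
    print(f"{name}: meaningless={s1:.1f}  sentiment-critical={s2:.1f}")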