From the points of view of both human translators (HT) and machine translation (MT) researchers, translation quality evaluation (TQE) is an essential task. Translation service providers (TSPs) must deliver large volumes of translations that meet customer specifications, under harsh constraints on required quality levels, tight time frames, and costs. MT researchers strive to improve their models, which also requires reliable quality evaluation. While automatic machine translation evaluation (MTE) metrics and quality estimation (QE) tools are widely available and easy to access, existing automated tools are not good enough, and human assessment from professional translators (HAP) is often chosen as the gold standard \cite{han-etal-2021-TQA}. Human evaluations, however, are often criticised for low reliability and agreement. Is this caused by subjectivity, or is statistics at play? How can we avoid checking the entire text, making TQE more efficient from cost and effort perspectives, and what is the optimal sample size of the translated text for reliably estimating the translation quality of the entire material? Motivated by these questions, this work estimates the confidence intervals \cite{Brown_etal2001Interval} as a function of the sample size of the translated text, e.g. the number of words or sentences, that needs to be processed at the TQE workflow step for a confident and reliable evaluation of overall translation quality. The methodology we apply is Bernoulli Statistical Distribution Modelling (BSDM) and Monte Carlo Sampling Analysis (MCSA).
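As a minimal sketch of the two ingredients named above, the snippet below treats each evaluated sentence as a Bernoulli trial (error / no error), computes a Wilson score confidence interval for the error rate (one of the intervals recommended by \cite{Brown_etal2001Interval}), and checks its coverage with a small Monte Carlo simulation. Function names and parameter values here are illustrative, not taken from the paper's actual experiments.

```python
import math
import random

def wilson_interval(errors: int, n: int, z: float = 1.96):
    """Wilson score 95% CI for a Bernoulli error rate from a sample of size n."""
    p = errors / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

def monte_carlo_coverage(true_p: float, n: int, trials: int = 2000, seed: int = 0):
    """Fraction of simulated size-n samples whose Wilson CI contains true_p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        errors = sum(rng.random() < true_p for _ in range(n))
        lo, hi = wilson_interval(errors, n)
        hits += lo <= true_p <= hi
    return hits / trials

# Example: 10 erroneous sentences out of a sample of 100.
lo, hi = wilson_interval(10, 100)
```

Comparing the interval width at different sample sizes (e.g. n = 100 vs. n = 1000) is exactly the trade-off the paper studies: a larger sample narrows the interval but costs more evaluation effort.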