In this paper, we investigate the use of linguistically motivated and computationally efficient structured language models for reranking N-best hypotheses in a statistical machine translation system. These language models, developed from Constraint Dependency Grammar parses, tightly integrate knowledge of words, morphological and lexical features, and syntactic dependency constraints. Two structured language models are applied for N-best rescoring: one is an almost-parsing language model, and the other exploits additional syntactic features by explicitly modeling syntactic dependencies between words. We also investigate effective and efficient language modeling methods that use N-grams extracted from up to 1 teraword of web documents. We apply all of these language models to N-best reranking on the NIST and DARPA GALE program 2006 and 2007 machine translation evaluation tasks and find that combining them increases the BLEU score by up to 1.6% absolute on blind test sets.