Although perplexity is a widely used performance metric for language models, its value depends strongly on the number of words in the corpus, so it is useful only for comparing models evaluated on the same corpus. In this paper, we propose a new metric for evaluating language-model performance across different vocabulary sizes. The proposed unigram-normalized perplexity measures the performance improvement of a language model over a simple unigram model, and is robust to vocabulary size. Both theoretical analysis and computational experiments are reported.
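One plausible formalization of this description, sketched here under the assumption that the metric normalizes each model probability by the corresponding unigram probability (the notation $\mathrm{PPL}_u$, $P_{\mathrm{uni}}$, and $N$ is introduced for illustration, not taken from this abstract), is

$$
\mathrm{PPL}_u(w_1^N) \;=\; \left( \prod_{i=1}^{N} \frac{P(w_i \mid w_1^{i-1})}{P_{\mathrm{uni}}(w_i)} \right)^{-1/N} \;=\; \frac{\mathrm{PPL}(w_1^N)}{\mathrm{PPL}_{\mathrm{uni}}(w_1^N)},
$$

so a model that predicts no better than unigram statistics scores $\mathrm{PPL}_u = 1$, and the unigram denominator absorbs the dependence on vocabulary size that plain perplexity exhibits.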