While human evaluation is the most reliable way to assess speech generation systems, it is generally costly and time-consuming. Previous studies on automatic speech quality assessment address this problem by predicting human evaluation scores with machine learning models. However, they rely on supervised learning and thus suffer from high annotation costs and domain-shift problems. We propose SpeechLMScore, an unsupervised metric for evaluating generated speech with a speech language model. SpeechLMScore maps a speech signal into discrete tokens and computes the average log-probability of generating that token sequence under the language model. It therefore requires no human annotation and is a highly scalable framework. Evaluation results demonstrate that the proposed metric shows a promising correlation with human evaluation scores on different speech generation tasks, including voice conversion, text-to-speech, and speech enhancement.
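For concreteness, the scoring described above can be written as the mean per-token log-probability of the discrete-unit sequence under the speech language model; the notation here (waveform x, token sequence d_1, ..., d_T produced by a discrete speech tokenizer, model parameters theta) is our own shorthand rather than text quoted from the abstract:

\[
\mathrm{SpeechLMScore}(\mathbf{x}) \;=\; \frac{1}{T}\sum_{t=1}^{T} \log p\!\left(d_t \mid d_{<t}; \theta\right),
\qquad (d_1,\ldots,d_T) = \mathrm{tokenize}(\mathbf{x}).
\]

A higher score indicates that the token sequence, and hence the underlying speech, is more probable under the pretrained speech language model.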