There has been growing demand for automated spoken language assessment systems in recent years. A standard pipeline starts from a speech recognition system and derives features, either hand-crafted or based on deep learning, that exploit the transcription and the audio. Though these approaches can yield high-performance systems, they require speech recognition systems that work well for L2 speakers, preferably tuned to the specific form of test being deployed. Recently, a scheme based on self-supervised speech representations, requiring no speech recognition, was proposed. This work extends the initial analysis of this approach to a large-scale proficiency test, Linguaskill, which comprises multiple parts, each designed to assess a different attribute of a candidate's speaking proficiency. The performance of the self-supervised, wav2vec 2.0, system is compared to a high-performance hand-crafted assessment system and a BERT-based text system, both of which use speech transcriptions. Although the wav2vec 2.0 based system is found to be sensitive to the nature of the response, it can be configured to yield performance comparable to systems requiring a speech transcription, and it yields gains when appropriately combined with standard approaches.
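To make the transcription-free grading idea concrete, the following is a minimal sketch of a wav2vec 2.0 based grader of the general kind the abstract describes: raw audio is encoded by a pretrained self-supervised model, the frame-level representations are pooled, and a small regression head predicts a proficiency score. The checkpoint name, mean pooling, head size, and scoring scale are illustrative assumptions, not the paper's actual configuration.

```python
# Sketch only: a transcription-free grader built on wav2vec 2.0 representations.
# Model choice, pooling, and head are assumptions; the paper's system may differ.
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model, Wav2Vec2FeatureExtractor


class Wav2Vec2Grader(nn.Module):
    def __init__(self, pretrained: str = "facebook/wav2vec2-base"):
        super().__init__()
        # Pretrained self-supervised speech encoder (no ASR involved).
        self.encoder = Wav2Vec2Model.from_pretrained(pretrained)
        hidden = self.encoder.config.hidden_size
        # Small regression head mapping a pooled utterance vector to one score.
        self.head = nn.Sequential(
            nn.Linear(hidden, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, input_values, attention_mask=None):
        out = self.encoder(input_values, attention_mask=attention_mask)
        # Mean-pool the frame-level representations into an utterance embedding.
        pooled = out.last_hidden_state.mean(dim=1)
        return self.head(pooled).squeeze(-1)  # predicted proficiency score


# Usage sketch: 16 kHz mono waveform in, scalar score out.
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
grader = Wav2Vec2Grader().eval()

waveform = torch.randn(16000 * 5)  # placeholder: 5 seconds of audio
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    score = grader(inputs.input_values)
print(float(score))
```

In a setup like this, the same pooled-representation grader could be trained per test part and its scores fused with those of transcription-based graders, which is one plausible way to realise the combination with standard approaches mentioned above.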