Methods for automatically assessing speech quality in real-world environments are critical for developing robust human language technologies and assistive devices. Behavioral ratings provided by human raters (e.g., mean opinion scores; MOS) are considered the gold standard, but they are susceptible to variability between individual raters, cannot easily be generalized across corpora, and are labor-intensive to collect, thus limiting the acoustic challenges they can quantify. Here, we present a new, scalable method for automatically assessing speech quality: the self-supervised speech quality assessment (S3QA) model. First, we manipulated high-quality utterances from multiple speech corpora, using a wide range of acoustic challenges intended to emulate common sources of quality degradation in the real world: frequency filtering, reverberation, background noise, and digital compression. Second, we leveraged an existing, pre-trained speech foundation model, WavLM, to computationally derive a self-supervised training target that quantified speech degradation as the cosine distance between the clean and degraded versions of each utterance in the embedding space. Next, we trained a transformer-based model to predict these cosine distances, given only the degraded versions of the utterances. Finally, the trained model was evaluated on unseen test corpora of synthetic mixtures, NISQA, and VOiCES. We show that the S3QA model trained on this task accurately predicts degradation cosine distances across a wide range of challenging acoustic conditions and is aligned with behavioral ratings (MOS), speech technology performance (automatic speech recognition), and other important features of the held-out data (e.g., microphone distances). This model provides an automated, scalable method for assessing speech quality across a wide range of acoustic challenges.
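To make the self-supervised training target concrete, the following is a minimal sketch of how the degradation score described above could be computed: embed the clean and degraded versions of an utterance with a pre-trained WavLM model and take the cosine distance between the two embeddings. The specific checkpoint name and the mean-pooling over time are assumptions for illustration; the abstract only states that distances are taken in the WavLM embedding space.

```python
import torch
import torch.nn.functional as F
from transformers import AutoFeatureExtractor, WavLMModel

# Pre-trained WavLM (checkpoint choice is an assumption, not specified in the abstract).
extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-base-plus")
wavlm = WavLMModel.from_pretrained("microsoft/wavlm-base-plus")
wavlm.eval()

def degradation_distance(clean_wav, degraded_wav, sr=16000):
    """Cosine distance between WavLM embeddings of a clean utterance and its
    degraded counterpart; larger values indicate stronger degradation."""
    with torch.no_grad():
        inputs = extractor([clean_wav, degraded_wav], sampling_rate=sr,
                           return_tensors="pt", padding=True)
        # Mean-pool the final hidden states over time (pooling strategy is an
        # illustrative assumption).
        hidden = wavlm(**inputs).last_hidden_state      # (2, frames, dim)
        clean_emb, degraded_emb = hidden.mean(dim=1)    # (dim,), (dim,)
    return 1.0 - F.cosine_similarity(clean_emb.unsqueeze(0),
                                     degraded_emb.unsqueeze(0)).item()
```

A downstream transformer regressor would then be trained to predict this scalar from the degraded waveform alone, so that at inference time no clean reference is needed.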