We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results but in a zero-shot transfer setting without the need for any fine-tuning. When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing.