A cornerstone of AI research has been the creation and adoption of standardized training and test datasets to track the progress of state-of-the-art models. A particularly successful example is the GLUE dataset for training and evaluating Natural Language Understanding (NLU) models for English. The large body of research on self-supervised BERT-based language models has revolved around performance improvements on the NLU tasks in GLUE. To evaluate language models in other languages, several language-specific GLUE datasets were created. The area of speech language understanding (SLU) has followed a similar trajectory. The success of large self-supervised models such as wav2vec2 enables the creation of speech models from relatively easy-to-access unlabelled data. These models can then be evaluated on SLU tasks, such as those in the SUPERB benchmark. In this work, we extend this to Indic languages by releasing the IndicSUPERB benchmark. Specifically, we make the following three contributions. (i) We collect Kathbath, containing 1,684 hours of labelled speech data across 12 Indian languages from 1,218 contributors located in 203 districts in India. (ii) Using Kathbath, we create benchmarks across six speech tasks: Automatic Speech Recognition, Speaker Verification, Speaker Identification (mono/multi), Language Identification, Query By Example, and Keyword Spotting for 12 languages. (iii) On the released benchmarks, we train and evaluate different self-supervised models alongside the commonly used FBANK baseline. We show that language-specific fine-tuned models are more accurate than the baseline on most tasks, including a large gap of 76\% on the Language Identification task. However, for speaker identification, self-supervised models trained on large datasets demonstrate an advantage. We hope IndicSUPERB contributes to the progress of developing speech language understanding models for Indian languages.