Personal Digital Assistants (PDAs) such as Siri, Alexa, and Google Assistant, to name a few, play an increasingly important role in helping diverse groups of users access information and complete tasks spanning multiple domains. A text-to-speech (TTS) module allows PDAs to interact in a natural, human-like manner and plays a vital role when the interaction involves people with visual impairments or other disabilities. To cater to the needs of a diverse set of users, an inclusive TTS system must recognize and correctly pronounce text in different languages and dialects. Despite great progress in speech synthesis, the pronunciation accuracy of named entities in a multi-lingual setting still leaves substantial room for improvement. Existing approaches to correcting named entity (NE) mispronunciations, such as retraining Grapheme-to-Phoneme (G2P) models or maintaining a TTS pronunciation dictionary, require expensive and time-consuming annotation of ground-truth pronunciations. In this work, we present a highly precise, PDA-compatible pronunciation learning framework for the task of TTS mispronunciation detection and correction. We also propose a novel mispronunciation detection model called DTW-SiameseNet, which employs metric learning with a Siamese architecture for Dynamic Time Warping (DTW), trained with a triplet loss. We demonstrate that a locale-agnostic, privacy-preserving solution to the problem of TTS mispronunciation detection is feasible. We evaluate our approach on a real-world dataset and on a corpus of NE pronunciations drawn from an anonymized audio dataset of person names recorded by participants from 10 different locales. Human evaluation shows that our proposed approach improves pronunciation accuracy on average by ~6% compared to strong phoneme-based and audio-based baselines.
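The description of DTW-SiameseNet above suggests the following minimal sketch of the idea (not the authors' implementation): a shared Siamese encoder maps each audio feature sequence to frame embeddings, DTW over those embeddings defines a pronunciation distance, and a triplet margin loss pulls an anchor recording closer to a correct pronunciation than to an incorrect one. The GRU encoder, feature dimensions, margin value, and the names `SiameseEncoder`, `dtw_distance`, and `triplet_dtw_loss` are illustrative assumptions.

```python
# Hedged sketch of metric learning with a Siamese encoder, a DTW distance,
# and a triplet loss; hyperparameters and layer choices are assumptions.
import torch
import torch.nn as nn


class SiameseEncoder(nn.Module):
    """Shared encoder applied to each audio feature sequence (e.g. MFCC frames)."""
    def __init__(self, feat_dim=40, hidden_dim=128):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, x):                     # x: (T, feat_dim) for one utterance
        h, _ = self.rnn(x.unsqueeze(0))       # (1, T, 2 * hidden_dim)
        return self.proj(h).squeeze(0)        # (T, hidden_dim) frame embeddings


def dtw_distance(a, b):
    """Classic DTW alignment cost over pairwise Euclidean frame distances,
    normalized by sequence length. a: (Ta, D), b: (Tb, D)."""
    cost = torch.cdist(a, b)                  # (Ta, Tb) frame-to-frame distances
    Ta, Tb = cost.shape
    inf = torch.tensor(float("inf"))
    # acc[i][j] holds the best alignment cost of a[:i] against b[:j]
    acc = [[inf] * (Tb + 1) for _ in range(Ta + 1)]
    acc[0][0] = torch.tensor(0.0)
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            best = torch.min(torch.stack([acc[i - 1][j], acc[i][j - 1], acc[i - 1][j - 1]]))
            acc[i][j] = cost[i - 1, j - 1] + best
    return acc[Ta][Tb] / (Ta + Tb)


def triplet_dtw_loss(encoder, anchor, positive, negative, margin=1.0):
    """Push the DTW distance to the correct pronunciation (positive) below the
    distance to the mispronunciation (negative) by at least the margin."""
    ea, ep, en = encoder(anchor), encoder(positive), encoder(negative)
    d_pos = dtw_distance(ea, ep)
    d_neg = dtw_distance(ea, en)
    return torch.clamp(d_pos - d_neg + margin, min=0.0)
```

In practice, a soft-DTW style relaxation of the hard minimum is often used instead to obtain smoother gradients; the hard DTW above keeps the sketch short and self-contained.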