In this paper, we construct a Japanese audiobook speech corpus called "J-MAC" for speech synthesis research. With the success of reading-style speech synthesis, the research target is shifting to tasks that use more complicated contexts. Audiobook speech synthesis is a good example, requiring cross-sentence context, expressiveness, and so on. Unlike reading-style speech, speaker-specific expressiveness in audiobook speech also becomes part of the context. To advance this research, we propose a method of constructing a corpus from audiobooks read by professional speakers. From many audiobooks and their texts, our method automatically extracts and refines the data without any language dependency. Specifically, we use vocal-instrumental separation to extract clean audio, connectionist temporal classification (CTC) to roughly align text and audio, and voice activity detection to refine the alignment. J-MAC is open-sourced on our project page. We also conduct audiobook speech synthesis evaluations, and the results give insights into audiobook speech synthesis.
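To make the refinement step concrete, the sketch below illustrates one plausible way to implement it: given a rough sentence boundary (assumed to come from a CTC-based aligner, which is not shown) over the separated vocal track, a simple voice activity detector snaps the boundary to the nearest detected pause. All function names, thresholds, and the energy-based VAD are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch, assuming rough CTC boundaries are already available.
import numpy as np


def energy_vad(wav: np.ndarray, sr: int, frame_ms: int = 30,
               thresh_db: float = -40.0) -> np.ndarray:
    """Return one boolean speech/non-speech decision per frame (energy-threshold VAD)."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(wav) // frame_len
    frames = wav[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt(np.mean(frames ** 2, axis=1) + 1e-12)
    db = 20 * np.log10(rms + 1e-12)
    return db > thresh_db


def refine_boundary(rough_sec: float, speech: np.ndarray, frame_ms: int = 30,
                    search_sec: float = 0.5) -> float:
    """Move a rough boundary (in seconds) to the closest pause frame within +-search_sec."""
    frame_len_sec = frame_ms / 1000
    center = int(rough_sec / frame_len_sec)
    radius = int(search_sec / frame_len_sec)
    lo, hi = max(0, center - radius), min(len(speech), center + radius)
    pause_frames = [i for i in range(lo, hi) if not speech[i]]
    if not pause_frames:
        return rough_sec  # no pause nearby; keep the rough CTC estimate
    best = min(pause_frames, key=lambda i: abs(i - center))
    return best * frame_len_sec


if __name__ == "__main__":
    sr = 16000
    # Toy "separated vocal" signal: 1 s of speech, 0.3 s of silence, 1 s of speech.
    t = np.arange(sr) / sr
    speech_seg = 0.1 * np.sin(2 * np.pi * 220 * t)
    wav = np.concatenate([speech_seg, np.zeros(int(0.3 * sr)), speech_seg])
    speech = energy_vad(wav, sr)
    # Rough boundary from the (assumed) CTC aligner, slightly before the true pause at 1.0 s.
    print(refine_boundary(rough_sec=0.95, speech=speech))  # ~1.02 s, snapped into the pause
```

In practice the VAD and aligner would be replaced by stronger models, but the overall structure (separate vocals, align roughly, then refine boundaries at pauses) matches the pipeline summarized above.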