Speaker adaptation in text-to-speech (TTS) synthesis fine-tunes a pre-trained TTS model to new target speakers with limited data. While much effort has been devoted to this task, little work has addressed low-computational-resource scenarios, owing to the challenge of requiring a lightweight model with low computational complexity. In this paper, we propose AdaVITS, a tiny VITS-based TTS model for low-computing-resource speaker adaptation. To effectively reduce the parameters and computational complexity of VITS, we propose an iSTFT-based waveform-construction decoder to replace the resource-consuming upsampling-based decoder of the original VITS. Besides, NanoFlow is introduced to share the density estimator across flow blocks, reducing the parameters of the prior encoder. Furthermore, to reduce the computational complexity of the text encoder, scaled dot-product attention is replaced with linear attention. To handle the instability caused by the simplified model, instead of feeding raw text to the encoder, we use the phonetic posteriorgram (PPG), produced by a text-to-PPG module, as the linguistic feature input to the encoder. Experiments show that AdaVITS generates stable and natural speech in speaker adaptation with 8.97M model parameters and 0.72 GFLOPs of computational complexity.
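To illustrate the linear-attention substitution mentioned in the abstract, the sketch below contrasts standard scaled dot-product attention, whose cost is O(T^2 d) in sequence length T, with a kernelized linear attention in the style of Katharopoulos et al. (feature map phi(x) = elu(x) + 1), which costs O(T d^2). This is a minimal NumPy sketch for intuition, not the paper's actual implementation; all function names here are illustrative.

```python
import numpy as np

def scaled_dot_attention(Q, K, V):
    # Standard attention: materializes a (T, T) score matrix, O(T^2 d).
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def linear_attention(Q, K, V, eps=1e-6):
    # Kernelized attention: approximate softmax(QK^T)V with
    # phi(Q) (phi(K)^T V), so the (T, T) matrix is never formed.
    # phi(x) = elu(x) + 1 keeps the features strictly positive.
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1
    Qp, Kp = phi(Q), phi(K)
    kv = Kp.T @ V                    # (d, d_v): shared across all queries
    z = Qp @ Kp.sum(axis=0)          # per-query normalizer, shape (T,)
    return (Qp @ kv) / (z[:, None] + eps)

# Toy usage: a short "sequence" of T=16 frames with d=8 channels.
rng = np.random.default_rng(0)
T, d = 16, 8
Q, K, V = rng.standard_normal((3, T, d))
out = linear_attention(Q, K, V)
print(out.shape)  # (16, 8)
```

The key design point is associativity: computing phi(K)^T V first yields a fixed-size (d, d_v) summary, so per-query cost no longer grows with sequence length, which is what makes the simplified text encoder cheaper.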