Speech-to-speech translation (S2ST) converts input speech to speech in another language. A key challenge in delivering S2ST in real time is the accumulated delay between the translation and speech synthesis modules. While incremental text-to-speech (iTTS) models have recently shown large quality improvements, they typically require additional future text input to reach optimal performance. In this work, we minimize the initial waiting time of iTTS by adapting the upstream speech translator to generate a high-quality pseudo lookahead for the speech synthesizer. After mitigating the initial delay, we demonstrate that the duration of the synthesized speech also plays a crucial role in latency. We formalize this as a latency metric and then present a simple yet effective duration-scaling approach for latency reduction. Our approaches consistently reduce latency by 0.2-0.5 seconds without sacrificing speech translation quality.
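The duration-scaling idea can be illustrated with a minimal toy sketch: if the translated speech finishes at the initial waiting time plus the total duration of the synthesized audio, then uniformly scaling segment durations by a factor below 1 directly shortens when the output finishes. The function name, the scale factor, and all numbers below are illustrative assumptions, not the paper's actual metric or values.

```python
# Toy sketch (not the paper's formal metric): latency here is modeled as the
# time at which synthesized speech finishes, i.e. the initial waiting time
# before synthesis starts plus the (scaled) total duration of the output audio.

def synthesis_end_time(start_offset, durations, scale=1.0):
    """End time of synthesized speech, in seconds.

    start_offset: waiting time before synthesis begins (e.g. lookahead delay).
    durations:    per-segment durations of the synthesized speech.
    scale:        duration-scaling factor; values < 1.0 speed up the speech.
    """
    return start_offset + scale * sum(durations)

# Illustrative numbers (seconds), chosen only for the example:
start = 0.5                        # initial waiting time
durations = [0.3, 0.4, 0.3, 0.5]  # per-segment synthesized durations

baseline = synthesis_end_time(start, durations)            # 0.5 + 1.5 = 2.0
scaled = synthesis_end_time(start, durations, scale=0.8)   # 0.5 + 1.2 = 1.7

latency_reduction = baseline - scaled  # 0.3 s in this toy setting
```

Under this simplified view, a 20% duration scaling yields a 0.3 s reduction, which happens to fall inside the 0.2-0.5 s range the abstract reports; the actual trade-off against intelligibility is what the paper's duration-scaling approach manages.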