Direct speech-to-speech translation (S2ST) with discrete units leverages recent progress in speech representation learning. Specifically, a sequence of discrete representations derived in a self-supervised manner is predicted by the model and passed to a vocoder for speech reconstruction, while still facing the following challenges: 1) Acoustic multimodality: the discrete units derived from speech with the same content can be non-deterministic due to acoustic properties (e.g., rhythm, pitch, and energy), which degrades translation accuracy; 2) High latency: current S2ST systems use autoregressive models that predict each unit conditioned on the previously generated sequence, failing to take full advantage of parallelism. In this work, we propose TranSpeech, a speech-to-speech translation model with bilateral perturbation. To alleviate the acoustic multimodality problem, we propose bilateral perturbation (BiP), which consists of style normalization and information enhancement stages, to learn only the linguistic information from speech samples and generate more deterministic representations. With reduced multimodality, we take a step forward and are the first to establish a non-autoregressive S2ST technique, which repeatedly masks and predicts unit choices and produces high-accuracy results in just a few cycles. Experimental results on three language pairs demonstrate that BiP yields an improvement of 2.9 BLEU on average over a baseline textless S2ST model. Moreover, our parallel decoding shows a significant reduction in inference latency, enabling a speedup of up to 21.4x over the autoregressive technique. Audio samples are available at \url{https://TranSpeech.github.io/}
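The non-autoregressive decoding described above ("repeatedly masks and predicts unit choices") follows the general mask-predict scheme: start from a fully masked target, predict all units in parallel, then re-mask the least confident positions and predict again for a fixed number of cycles. The sketch below illustrates that loop with a toy stand-in predictor; `toy_model`, the `MASK` sentinel, and the linear masking schedule are illustrative assumptions, not the TranSpeech implementation.

```python
MASK = -1  # sentinel for a masked unit position (illustrative)

def toy_model(tokens):
    """Stand-in for the parallel decoder: returns a (unit, confidence)
    pair per position. A real model would run one network forward pass."""
    return [(pos % 10, 0.5 + 0.05 * (pos % 10)) if t == MASK else (t, 1.0)
            for pos, t in enumerate(tokens)]

def mask_predict(length, iterations=3):
    """Iterative mask-predict decoding over a unit sequence of `length`."""
    tokens = [MASK] * length  # cycle 0 starts fully masked
    for it in range(iterations):
        preds = toy_model(tokens)          # predict every position in parallel
        tokens = [u for u, _ in preds]
        confidences = [c for _, c in preds]
        # Re-mask the least confident positions; the mask count
        # decays linearly so later cycles refine fewer units.
        n_mask = int(length * (iterations - it - 1) / iterations)
        if n_mask == 0:
            break
        worst = sorted(range(length), key=lambda i: confidences[i])[:n_mask]
        for i in worst:
            tokens[i] = MASK
    return tokens
```

Because every position is predicted in one forward pass per cycle, total latency scales with the (small, fixed) number of cycles rather than with sequence length, which is the source of the speedup over left-to-right autoregressive generation.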