This paper introduces a novel Token-and-Duration Transducer (TDT) architecture for sequence-to-sequence tasks. TDT extends conventional RNN-Transducer architectures by jointly predicting both a token and its duration, i.e., the number of input frames covered by the emitted token. This is achieved with a joint network whose two outputs are independently normalized to generate distributions over tokens and durations. During inference, TDT models can skip input frames guided by the predicted duration output, which makes them significantly faster than conventional Transducers, which process the encoder output frame by frame. TDT models achieve both better accuracy and significantly faster inference than conventional Transducers on different sequence transduction tasks. TDT models for Speech Recognition achieve better accuracy and up to 2.82X faster inference than RNN-Transducers. TDT models for Speech Translation achieve an absolute gain of over 1 BLEU on the MuST-C test set compared with conventional Transducers, with 2.27X faster inference. On Speech Intent Classification and Slot Filling tasks, TDT models improve intent accuracy by over 1% (absolute) over conventional Transducers, while running up to 1.28X faster.
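The frame-skipping behavior described above can be sketched as a toy greedy decoder. This is a minimal illustration, not the paper's implementation: the joint network here is a pair of random linear heads (`W_tok`, `W_dur` are placeholders), there is no predictor network, and the duration set is assumed to be {0, 1, 2, 3, 4}. The key point it shows is that each prediction yields two independently normalized distributions, and the predicted duration determines how many encoder frames are skipped.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, BLANK, ENC_DIM = 5, 0, 8      # toy vocabulary; token 0 plays the role of blank
DURATIONS = [0, 1, 2, 3, 4]          # assumed duration set (illustrative)

# Hypothetical joint network: two linear heads with random weights.
W_tok = rng.normal(size=(ENC_DIM, VOCAB))
W_dur = rng.normal(size=(ENC_DIM, len(DURATIONS)))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def joint(enc_frame):
    # Two outputs, normalized independently: a distribution over tokens
    # and a distribution over durations.
    return softmax(enc_frame @ W_tok), softmax(enc_frame @ W_dur)

def tdt_greedy_decode(encoder_out):
    tokens, t, T = [], 0, len(encoder_out)
    while t < T:
        tok_probs, dur_probs = joint(encoder_out[t])
        token = int(tok_probs.argmax())
        duration = DURATIONS[int(dur_probs.argmax())]
        if token != BLANK:
            tokens.append(token)
        # Skip `duration` frames. This stateless toy always advances at
        # least one frame to avoid looping; the real model can allow
        # duration 0 because emitting a token updates the predictor state.
        t += max(duration, 1)
    return tokens

hyp = tdt_greedy_decode(rng.normal(size=(20, ENC_DIM)))
```

A conventional Transducer would call the joint network at every one of the 20 frames; here the decoder calls it only once per predicted duration span, which is the source of the reported speedups.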
Title: Efficient Sequence Transduction by Jointly Predicting Tokens and Durations