A popular approach to streaming speech translation is to employ a single offline model with a \textit{wait-$k$} policy to support different latency requirements, which is simpler than training multiple online models with different latency constraints. However, using a model trained on complete utterances for streaming inference over partial inputs creates a mismatch problem. We demonstrate that speech representations extracted at the end of a streaming input differ significantly from those extracted from a complete utterance. To address this issue, we propose a new approach called Future-Aware Streaming Translation (FAST) that adapts an offline ST model for streaming input. FAST includes a Future-Aware Inference (FAI) strategy that incorporates future context through a trainable masked embedding, and a Future-Aware Distillation (FAD) framework that transfers future context from an approximation of full speech to streaming input. Our experiments on the MuST-C EnDe, EnEs, and EnFr benchmarks show that FAST achieves better trade-offs between translation quality and latency than strong baselines. Extensive analyses suggest that our methods effectively alleviate the aforementioned mismatch between offline training and online inference.
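To make the two mechanisms named in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation: it shows (1) a standard wait-$k$ read/write schedule and (2) the FAI idea of appending a trainable masked embedding as pseudo-future context before encoding a partial input. All names (\texttt{FutureAwareEncoder}, \texttt{wait\_k\_schedule}) and hyperparameters (\texttt{num\_future}, \texttt{k}) are illustrative assumptions, and a toy Transformer encoder stands in for the offline ST encoder.
\begin{verbatim}
import torch
import torch.nn as nn

class FutureAwareEncoder(nn.Module):
    """Sketch of FAI: append copies of a trainable mask embedding to a
    partial (streaming) input so the offline encoder sees pseudo-future
    context instead of an abrupt truncation."""

    def __init__(self, encoder: nn.Module, d_model: int, num_future: int = 8):
        super().__init__()
        self.encoder = encoder
        # In the paper this embedding is learned; zeros are a placeholder.
        self.mask_embedding = nn.Parameter(torch.zeros(d_model))
        self.num_future = num_future

    def forward(self, partial_feats: torch.Tensor) -> torch.Tensor:
        # partial_feats: (batch, frames_so_far, d_model)
        batch = partial_feats.size(0)
        future = self.mask_embedding.expand(batch, self.num_future, -1)
        return self.encoder(torch.cat([partial_feats, future], dim=1))

def wait_k_schedule(num_chunks: int, k: int, num_targets: int):
    """Yield ('read', i) / ('write', t) actions under a wait-k policy:
    wait for k source chunks, then alternate one write per read."""
    read = 0
    for t in range(num_targets):
        while read < min(t + k, num_chunks):
            yield ("read", read)   # consume the next source chunk
            read += 1
        yield ("write", t)         # emit one target token

# Toy usage: a 2-layer Transformer encoder stands in for the ST encoder.
if __name__ == "__main__":
    d_model = 16
    layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
    enc = FutureAwareEncoder(nn.TransformerEncoder(layer, num_layers=2),
                             d_model)
    chunks = [torch.randn(1, 5, d_model) for _ in range(6)]  # 6 speech chunks
    seen = []
    for action, idx in wait_k_schedule(len(chunks), k=3, num_targets=8):
        if action == "read":
            seen.append(chunks[idx])
        else:  # re-encode everything seen so far plus pseudo-future frames
            states = enc(torch.cat(seen, dim=1))
            print(f"write step {idx}: encoded {states.size(1)} frames")
\end{verbatim}
In the paper, the mask embedding would be trained (e.g., under the FAD distillation objective that transfers future context from approximated full speech) rather than left at zeros; here it serves only to illustrate the data flow at inference time.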