In this paper, we describe our submission to the Simultaneous Speech Translation task at IWSLT 2022. We explore strategies for using an offline model in a simultaneous setting without modifying the original model. In our experiments, we show that our onlinization algorithm achieves quality almost on par with the offline setting while reaching $3\times$ lower latency than the offline model on the test set. We also show that the onlinized offline model outperforms the best IWSLT 2021 simultaneous system in the medium and high latency regimes and is almost on par with it in the low latency regime. We make our system publicly available.