Most lip-to-speech (LTS) synthesis models are trained and evaluated under the assumption that the audio-video pairs in the dataset are perfectly synchronized. In this work, we show that commonly used audio-visual datasets, such as GRID, TCD-TIMIT, and Lip2Wav, can have data asynchrony issues. Training lip-to-speech models on such datasets may further cause model asynchrony, that is, the generated speech is out of sync with the input video. To address these asynchrony issues, we propose a synchronized lip-to-speech (SLTS) model with an automatic synchronization mechanism (ASM) that corrects data asynchrony and penalizes model asynchrony. We further demonstrate the limitations of the evaluation metrics commonly adopted for LTS when the test data are asynchronous, and introduce an audio alignment frontend before the metrics sensitive to time alignment for better evaluation. We compare our method with state-of-the-art approaches on both conventional and time-aligned metrics to show the benefits of synchronization training.
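To illustrate the idea of an audio alignment frontend for time-alignment-sensitive metrics, the sketch below estimates a global offset between reference and generated waveforms via cross-correlation and compensates for it before scoring. This is only an assumed, minimal realization (not the paper's implementation): the cross-correlation search, the 200 ms maximum-shift window, and the use of the `pystoi` package as the downstream metric are all illustrative choices.

```python
# Minimal sketch of an audio alignment frontend, assuming a single global
# offset between the reference and generated speech. The pystoi package is
# used here only as a stand-in for any alignment-sensitive metric.
import numpy as np
from pystoi import stoi


def align_to_reference(ref: np.ndarray, gen: np.ndarray, max_shift: int) -> np.ndarray:
    """Shift `gen` so it best aligns with `ref`; the lag is found by cross-correlation."""
    n = min(len(ref), len(gen))
    ref, gen = ref[:n], gen[:n]
    corr = np.correlate(gen, ref, mode="full")   # length 2n-1, lag -(n-1)..(n-1)
    lags = np.arange(-n + 1, n)
    mask = np.abs(lags) <= max_shift             # restrict to plausible asynchrony range
    best_lag = lags[mask][np.argmax(corr[mask])]
    if best_lag > 0:
        # Generated audio lags behind the reference: advance it and zero-pad the tail.
        aligned = np.concatenate([gen[best_lag:], np.zeros(best_lag)])
    else:
        # Generated audio is ahead of the reference: delay it and zero-pad the head.
        aligned = np.concatenate([np.zeros(-best_lag), gen[: n + best_lag]])
    return aligned


def aligned_stoi(ref: np.ndarray, gen: np.ndarray, sr: int = 16000) -> float:
    """Compute STOI after compensating for a global time offset (assumed setup)."""
    gen_aligned = align_to_reference(ref, gen, max_shift=sr // 5)  # search up to 200 ms
    n = min(len(ref), len(gen_aligned))
    return stoi(ref[:n], gen_aligned[:n], sr, extended=False)
```

Under this assumed setup, metrics that are robust to constant offsets are unaffected by the frontend, while time-alignment-sensitive scores are no longer penalized for asynchrony that the test data itself introduces.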