One challenge in spoken language translation is that much spoken content is long-form, yet short units are needed to obtain high-quality translations. To address this mismatch, we adapt large language models (LLMs) to split long ASR transcripts into segments that can be translated independently so as to maximize overall translation quality. To counter the tendency of LLMs to hallucinate, we incorporate finite-state constraints during decoding that eliminate invalid outputs. We find that LLMs can be adapted to transcripts containing ASR errors through prompt-tuning or fine-tuning. Compared with a state-of-the-art automatic punctuation baseline, our best LLM improves average BLEU by 2.9 points across nine test sets for English-German, English-Spanish, and English-Arabic TED talk translation, purely by improving segmentation.
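The finite-state constraint described above can be illustrated with a minimal sketch (an assumption about the general shape of such a constraint, not the paper's exact implementation): the only valid outputs are the original transcript tokens, in order, with an optional break symbol inserted between them, so at each decoding step the model's candidates are filtered down to what the automaton allows. The token name `<brk>` and the `score` function are hypothetical stand-ins for the segment delimiter and the LLM's next-token scores.

```python
BRK = "<brk>"  # hypothetical segment-boundary token

def allowed_next(transcript, emitted):
    """Return the set of tokens a constrained decoder may emit next.

    The automaton state is fully determined by how many transcript
    tokens have been copied so far and whether the previous emission
    was a break, which rules out hallucinated text entirely.
    """
    copied = sum(1 for t in emitted if t != BRK)
    if copied == len(transcript):        # all tokens copied: accept/stop
        return set()
    allowed = {transcript[copied]}       # must copy the next token, or...
    if emitted and emitted[-1] != BRK:
        allowed.add(BRK)                 # ...insert one break between tokens
    return allowed

def greedy_constrained_decode(transcript, score):
    """Greedy decoding under the constraint; `score(prefix, token)` is a
    stand-in for the LLM's next-token score function."""
    out = []
    while True:
        candidates = allowed_next(transcript, out)
        if not candidates:
            return out
        out.append(max(candidates, key=lambda tok: score(out, tok)))
```

Because every reachable path through the automaton reproduces the transcript verbatim, the LLM can only decide *where* segment boundaries go, never *what* the text says.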