Spoken question answering (SQA) requires fine-grained understanding of both spoken documents and questions for optimal answer prediction. In this paper, we propose a novel training scheme for spoken question answering that consists of a self-supervised training stage and a contrastive representation learning stage. In the self-supervised stage, we introduce three auxiliary self-supervised tasks, namely utterance restoration, utterance insertion, and question discrimination, and jointly train the model on them to capture consistency and coherence within spoken documents without any additional data or annotations. We then propose to learn noise-invariant utterance representations with a contrastive objective, adopting multiple augmentation strategies including span deletion and span substitution. In addition, we design a Temporal-Alignment attention to semantically align speech and text clues in the learned common space, which benefits the SQA tasks. In this way, the training scheme more effectively guides the generation model toward more accurate answer prediction. Experimental results show that our model achieves state-of-the-art results on three SQA benchmarks.
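As a minimal sketch of the contrastive stage described above, the snippet below shows one plausible form of the span-deletion and span-substitution augmentations and an in-batch InfoNCE-style contrastive loss over utterance embeddings. The function names, NumPy implementation, and all hyperparameters here are illustrative assumptions for exposition, not the authors' actual implementation.

```python
import numpy as np

def span_deletion(tokens, span_len=2, rng=None):
    """Delete a random contiguous span of `span_len` tokens (assumed augmentation)."""
    rng = rng or np.random.default_rng(0)
    if len(tokens) <= span_len:
        return list(tokens)
    start = int(rng.integers(0, len(tokens) - span_len))
    return tokens[:start] + tokens[start + span_len:]

def span_substitution(tokens, vocab, span_len=2, rng=None):
    """Replace a random contiguous span with tokens drawn from `vocab` (assumed augmentation)."""
    rng = rng or np.random.default_rng(0)
    if len(tokens) < span_len:
        return list(tokens)
    start = int(rng.integers(0, len(tokens) - span_len + 1))
    sub = [vocab[int(rng.integers(0, len(vocab)))] for _ in range(span_len)]
    return tokens[:start] + sub + tokens[start + span_len:]

def info_nce(anchors, positives, temperature=0.1):
    """In-batch contrastive loss: anchors/positives are (N, d) L2-normalized
    utterance embeddings; row i of `positives` is the augmented view of row i
    of `anchors`, and the other N-1 rows act as negatives."""
    logits = anchors @ positives.T / temperature           # (N, N) similarities
    logits = logits - logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))              # -log p(positive | anchor)
```

A correctly trained encoder should make an utterance and its augmented view (after span deletion or substitution) close in the embedding space, so the loss is low when each anchor's true positive is its nearest neighbor in the batch and high otherwise.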