Speech summarization, which generates a text summary from speech, can be achieved by combining automatic speech recognition (ASR) and text summarization (TS). With this cascade approach, we can exploit state-of-the-art models and large training datasets for both subtasks, i.e., Transformer for ASR and Bidirectional Encoder Representations from Transformers (BERT) for TS. However, ASR errors directly degrade the quality of the output summary in the cascade approach. We propose a cascade speech summarization model that is robust to ASR errors and that exploits multiple ASR hypotheses to attenuate the effect of recognition errors on the summary. We investigate several schemes for combining ASR hypotheses. First, we propose using the sum of sub-word embedding vectors, weighted by their posterior values provided by the ASR system, as the input to a BERT-based TS system. Then, we introduce a more general scheme that adds an attention-based fusion module to a pre-trained BERT module to align and combine several ASR hypotheses. Finally, we perform speech summarization experiments on the How2 dataset and a newly assembled TED-based dataset that we will release with this paper. These experiments show that retraining the BERT-based TS system with these schemes improves summarization performance and that the attention-based fusion module is particularly effective.
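As a concrete illustration of the first scheme, the following minimal PyTorch sketch forms the TS input at each position as the posterior-weighted sum of the sub-word embeddings of aligned N-best hypotheses. The embedding table, dimensions, function name, and the assumption that hypotheses are already aligned to a common length (confusion-network style) are ours, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions for illustration.
VOCAB_SIZE = 30522   # e.g., BERT's WordPiece vocabulary size
EMBED_DIM = 768      # BERT-base hidden size

embedding = nn.Embedding(VOCAB_SIZE, EMBED_DIM)

def posterior_weighted_input(token_ids, posteriors):
    """Combine aligned N-best hypotheses into one embedding sequence.

    token_ids:  (n_hyp, seq_len) sub-word ids from an alignment of the
                ASR N-best list (assumed precomputed).
    posteriors: (n_hyp, seq_len) ASR posterior probability of each sub-word.

    Returns (seq_len, embed_dim): at each position, the sum of the
    hypotheses' embedding vectors weighted by their posteriors.
    """
    embs = embedding(token_ids)         # (n_hyp, seq_len, embed_dim)
    weights = posteriors.unsqueeze(-1)  # (n_hyp, seq_len, 1)
    return (weights * embs).sum(dim=0)  # (seq_len, embed_dim)

# Toy usage: 3 aligned hypotheses of length 5.
ids = torch.randint(0, VOCAB_SIZE, (3, 5))
post = torch.softmax(torch.randn(3, 5), dim=0)  # weights sum to 1 per position
inputs = posterior_weighted_input(ids, post)    # (5, 768)
```

Because the result is a standard (seq_len, embed_dim) sequence, it can replace BERT's usual token embeddings, so only the input layer of the TS system changes.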
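The second, attention-based fusion scheme can likewise be sketched as a cross-attention layer in which the 1-best hypothesis attends over the remaining hypotheses to align and combine them before the pre-trained BERT module. This is an illustrative sketch under our own assumptions (the module name, the residual-plus-LayerNorm combination, and the head count are hypothetical), not the paper's exact architecture.

```python
import torch.nn as nn

class HypothesisFusion(nn.Module):
    """Sketch of attention-based fusion of ASR hypotheses.

    The 1-best hypothesis embeddings act as queries; the concatenated
    secondary hypotheses act as keys/values, so each 1-best position
    attends to plausible alternatives before the sequence enters BERT.
    """
    def __init__(self, embed_dim=768, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, best_emb, alt_emb):
        # best_emb: (batch, seq_len, embed_dim) embeddings of the 1-best hypothesis
        # alt_emb:  (batch, alt_len, embed_dim) embeddings of the other hypotheses
        fused, _ = self.attn(query=best_emb, key=alt_emb, value=alt_emb)
        return self.norm(best_emb + fused)  # residual combine, then feed to BERT
```

Unlike the weighted-sum scheme, this fusion does not require the hypotheses to be pre-aligned: the attention weights learn the alignment, which is one way to read the abstract's description of it as the more general scheme.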