Audio-visual speech enhancement is regarded as one of the promising solutions for isolating and enhancing the speech of a desired speaker. Conventional methods focus on predicting the clean speech spectrum via a naive convolutional-neural-network-based encoder-decoder architecture; these methods a) do not exploit the data fully and effectively, and b) cannot process features selectively. This paper proposes an attentional audio-visual multi-layer feature fusion model that addresses these drawbacks by a) fusing audio and visual features layer by layer in the encoding phase and feeding the fused audio-visual features to each corresponding decoder layer, and, more importantly, b) introducing soft threshold attention, applied to the feature maps at every decoder layer, to softly select the informative modality. Experiments demonstrate the superior performance of the proposed model against state-of-the-art models.
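To make the two ideas concrete, the following is a minimal PyTorch sketch of a) layer-by-layer audio-visual fusion in the encoder, with each fused map skip-connected to the matching decoder layer, and b) a soft threshold attention unit applied to every decoder feature map. The shrinkage-style thresholding y = sign(x) · max(|x| − τ, 0) with an attention-learned per-channel threshold τ follows the usual soft-thresholding formulation; all module names, channel sizes, and the assumption that the visual stream is pre-aligned to the spectrogram grid are illustrative, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class SoftThresholdAttention(nn.Module):
    """Channel-wise soft thresholding with an attention-learned threshold
    (a sketch of the paper's soft threshold attention unit)."""
    def __init__(self, channels: int):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels), nn.ReLU(),
            nn.Linear(channels, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, T, F)
        scale = x.abs().mean(dim=(2, 3))                  # global pooling -> (B, C)
        tau = (scale * self.fc(scale)).unsqueeze(-1).unsqueeze(-1)  # per-channel threshold
        # Soft thresholding shrinks small (presumably noisy) activations to zero.
        return torch.sign(x) * torch.relu(x.abs() - tau)

class AVFusionUNet(nn.Module):
    """Encoder fuses audio and visual features at every layer; each fused map is
    fed to the corresponding decoder layer, where soft threshold attention is applied."""
    def __init__(self):
        super().__init__()
        chs = (16, 32, 64)  # illustrative channel sizes
        self.a_enc, self.v_enc, self.fuse = nn.ModuleList(), nn.ModuleList(), nn.ModuleList()
        in_c = 1
        for c in chs:
            self.a_enc.append(nn.Sequential(nn.Conv2d(in_c, c, 3, 2, 1), nn.ReLU()))
            self.v_enc.append(nn.Sequential(nn.Conv2d(in_c, c, 3, 2, 1), nn.ReLU()))
            self.fuse.append(nn.Conv2d(2 * c, c, 1))  # layer-wise A/V fusion
            in_c = c
        # Decoder from deepest to shallowest, skip-connected to the fused maps.
        self.dec = nn.ModuleList([
            nn.ConvTranspose2d(64, 32, 4, 2, 1),
            nn.ConvTranspose2d(64, 16, 4, 2, 1),
            nn.ConvTranspose2d(32, 1, 4, 2, 1),
        ])
        self.sta = nn.ModuleList([SoftThresholdAttention(c) for c in (32, 16, 1)])

    def forward(self, audio: torch.Tensor, video: torch.Tensor) -> torch.Tensor:
        # audio: (B,1,T,F) noisy spectrogram; video: (B,1,T,F) visual features,
        # assumed already resampled to the spectrogram grid for brevity.
        fused, a, v = [], audio, video
        for a_l, v_l, f_l in zip(self.a_enc, self.v_enc, self.fuse):
            a, v = a_l(a), v_l(v)
            fused.append(f_l(torch.cat([a, v], dim=1)))   # fuse at this layer
        x = fused[-1]
        for dec, sta, skip in zip(self.dec, self.sta, [fused[1], fused[0], None]):
            x = sta(dec(x))   # soft threshold attention on every decoder feature map
            if skip is not None:
                x = torch.cat([x, skip], dim=1)
        return x              # enhanced spectrum estimate

net = AVFusionUNet()
spec = torch.randn(2, 1, 64, 64)   # dummy noisy spectrogram (T, F divisible by 8)
lips = torch.randn(2, 1, 64, 64)   # dummy visual stream
print(net(spec, lips).shape)       # torch.Size([2, 1, 64, 64])
```

Feeding every fused encoder map into its matching decoder layer is what lets the decoder weigh the two modalities at each scale, and the soft thresholding then gates out the uninformative portion of each fused map rather than passing it through unchanged.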