State-of-the-art audio captioning methods typically use the encoder-decoder structure with pretrained audio neural networks (PANNs) as encoders for feature extraction. However, the convolution operation used in PANNs is limited in capturing the long-time dependencies within an audio signal, which can degrade audio captioning performance. This letter presents a novel method using graph attention (GraphAC) for encoder-decoder based audio captioning. In the encoder, a graph attention module is introduced after the PANNs to learn contextual association (i.e., the dependency among the audio features over different time frames) through an adjacency graph, and a top-k mask is used to mitigate the interference from noisy nodes. The learned contextual association, combined with feature node aggregation, leads to a more effective feature representation. As a result, the decoder can predict important semantic information about the acoustic scene and events based on the contextual associations learned from the audio signal. Experimental results show that GraphAC outperforms the state-of-the-art methods with PANNs as the encoders, thanks to the incorporation of the graph attention module into the encoder for capturing the long-time dependencies within the audio signal. The source code is available at https://github.com/LittleFlyingSheep/GraphAC.
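For illustration, the sketch below shows one plausible PyTorch realization of the encoder idea the abstract describes: frame-level PANNs features are treated as graph nodes, pairwise attention scores form a learned adjacency, a top-k mask keeps only the strongest edges per node, and node aggregation produces the contextual representation. The class name TopKGraphAttention, the single-head scaled dot-product formulation, the residual connection, and the default k=8 are assumptions made for this sketch, not the authors' implementation (see the linked repository for that).

```python
# Minimal sketch of graph attention with a top-k adjacency mask over
# frame-level audio features. Names and hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKGraphAttention(nn.Module):
    def __init__(self, dim: int, k: int = 8):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)
        self.k = k  # number of edges kept per node (assumed value)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_frames, dim) frame-level features, one node per frame
        q, k_, v = self.query(x), self.key(x), self.value(x)
        # Learned adjacency: pairwise attention scores between time frames
        scores = torch.matmul(q, k_.transpose(-2, -1)) / (x.size(-1) ** 0.5)
        # Top-k mask: keep only the k strongest edges per node to suppress
        # interference from noisy nodes; all other edges get -inf
        topk = torch.topk(scores, k=min(self.k, scores.size(-1)), dim=-1)
        mask = torch.full_like(scores, float("-inf"))
        mask.scatter_(-1, topk.indices, topk.values)
        adj = F.softmax(mask, dim=-1)
        # Feature node aggregation over the masked adjacency, plus a residual
        return x + torch.matmul(adj, v)

# Usage: 100 frames of 512-dim features (512 is an assumed PANNs feature size)
feats = torch.randn(2, 100, 512)
ctx = TopKGraphAttention(dim=512, k=8)(feats)  # contextualized features
```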