This paper proposes a novel multimodal self-supervised architecture for energy-efficient audio-visual (AV) speech enhancement that integrates graph neural networks with canonical correlation analysis (CCA-GNN). The proposed approach builds on a state-of-the-art CCA-GNN that learns representative embeddings by maximizing the correlation between pairs of augmented views of the same input while decorrelating disconnected features. The key idea of the conventional CCA-GNN is to discard augmentation-variant information and preserve augmentation-invariant information while preventing the capture of redundant information. Our proposed AV CCA-GNN model extends this framework to a multimodal representation learning context. Specifically, it improves contextual AV speech processing by maximizing both the canonical correlation between augmented views of the same channel and the canonical correlation between audio and visual embeddings. In addition, we propose a positional node encoding that considers a prior-frame sequence distance instead of a feature-space representation when computing each node's nearest neighbors, injecting temporal information into the embeddings through the neighborhood's connectivity. Experiments conducted on the benchmark ChiME3 dataset show that our proposed prior-frame-based AV CCA-GNN achieves better feature learning in the temporal context, leading to more energy-efficient speech reconstruction than state-of-the-art CCA-GNN, multilayer perceptron (MLP), and long short-term memory (LSTM) models.
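To make the correlation-based objective concrete, the sketch below shows a loss in the spirit of the CCA-style self-supervised family the abstract describes: an invariance term that pulls the embeddings of two augmented views together, plus a decorrelation term that pushes each view's feature covariance toward the identity. The exact formulation, the function name `cca_style_loss`, and the weight `lam` are illustrative assumptions, not the paper's implementation.

```python
import torch

def cca_style_loss(z_a: torch.Tensor, z_b: torch.Tensor, lam: float = 1e-3) -> torch.Tensor:
    """Illustrative CCA-style objective (assumed form, not the paper's code).

    z_a, z_b: (N, D) embeddings of two augmented views of the same input.
    """
    n, d = z_a.shape
    # Standardize each feature dimension to zero mean and unit variance.
    z_a = (z_a - z_a.mean(0)) / z_a.std(0)
    z_b = (z_b - z_b.mean(0)) / z_b.std(0)
    # Invariance term: maximize agreement between paired view embeddings.
    inv = (z_a - z_b).pow(2).sum() / n
    # Decorrelation term: drive each view's feature covariance toward identity,
    # discouraging redundant (correlated) feature dimensions.
    eye = torch.eye(d, device=z_a.device)
    cov_a = (z_a.T @ z_a) / n
    cov_b = (z_b.T @ z_b) / n
    dec = (cov_a - eye).pow(2).sum() + (cov_b - eye).pow(2).sum()
    return inv + lam * dec
```

In the multimodal setting the abstract outlines, the same kind of term would additionally be applied across modalities, i.e., between audio and visual embeddings of the same frame.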
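The prior-frame positional encoding can also be illustrated with a small sketch: rather than running a k-nearest-neighbor search in feature space, each frame node is connected to its k preceding frames by sequence distance, so temporal order is encoded directly in the graph connectivity. The function `prior_frame_edges` below is a hypothetical illustration under that reading, not the paper's code.

```python
import torch

def prior_frame_edges(num_frames: int, k: int) -> torch.Tensor:
    """Connect each frame t to its k preceding frames t-1, ..., t-k.

    Returns a (2, num_edges) edge index (source, target), the format
    commonly used by graph libraries such as PyTorch Geometric.
    """
    src, dst = [], []
    for t in range(num_frames):
        # Early frames have fewer than k predecessors; clamp accordingly.
        for delta in range(1, min(k, t) + 1):
            src.append(t)
            dst.append(t - delta)
    return torch.tensor([src, dst], dtype=torch.long)
```

For example, `prior_frame_edges(num_frames=4, k=2)` links frame 3 to frames 2 and 1, frame 2 to frames 1 and 0, and so on, so a GNN aggregating over this neighborhood sees a fixed window of temporal context.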