We propose a novel decentralized feature extraction approach in federated learning to address privacy-preservation issues in speech recognition. It is built upon a quantum convolutional neural network (QCNN) composed of a quantum circuit encoder for feature extraction and a recurrent neural network (RNN) based end-to-end acoustic model (AM). To enhance model parameter protection in a decentralized architecture, the input speech is first up-streamed to a quantum computing server, which extracts Mel-spectrogram features and encodes the corresponding convolutional features using a quantum circuit algorithm with random parameters. The encoded features are then down-streamed to the local RNN model for the final recognition. The proposed decentralized framework takes advantage of the quantum learning process to secure models and to avoid privacy leakage attacks. Tested on the Google Speech Commands Dataset, the proposed QCNN encoder attains a competitive accuracy of 95.12% in a decentralized model, which is better than previous architectures using centralized RNN models with convolutional features. We also conduct an in-depth study of different quantum circuit encoder architectures to provide insights into designing QCNN-based feature extractors. Neural saliency analyses demonstrate a correlation between the proposed QCNN features, class activation maps, and input spectrograms. We provide an implementation for future studies.
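The encoding step described above can be sketched with a plain NumPy state-vector simulation of a quanvolutional layer: each 2x2 spectrogram patch is angle-encoded into a 4-qubit state, passed through a randomly parameterized entangling circuit, and read out as per-qubit Pauli-Z expectations that form the encoded feature channels. This is a minimal illustration only, not the paper's implementation; the 4-qubit circuit size, RY angle encoding, and the specific random-rotation-plus-CNOT-ring layer are assumptions in the spirit of the described QCNN encoder.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix (real-valued)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_1q(state, gate, qubit, n):
    """Apply a single-qubit gate to `qubit` of an n-qubit state vector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, qubit, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    psi = np.moveaxis(psi, 0, qubit)
    return psi.reshape(-1)

def apply_cnot(state, control, target, n):
    """Apply a CNOT gate (control -> target) to an n-qubit state vector."""
    psi = state.reshape([2] * n).copy()
    psi = np.moveaxis(psi, [control, target], [0, 1])
    psi[1] = psi[1][::-1].copy()  # flip target where control = 1
    psi = np.moveaxis(psi, [0, 1], [control, target])
    return psi.reshape(-1)

def quanvolve_patch(patch, params):
    """Encode a 2x2 patch (values in [0, 1]) as RY angles, run one random
    entangling layer, and return the 4 per-qubit Z expectations."""
    n = 4
    state = np.zeros(2 ** n)
    state[0] = 1.0
    # Angle encoding: one pixel per qubit.
    for q, x in enumerate(np.asarray(patch).flatten()):
        state = apply_1q(state, ry(np.pi * x), q, n)
    # Random layer: parameterized RY rotations followed by a ring of CNOTs.
    for q in range(n):
        state = apply_1q(state, ry(params[q]), q, n)
    for q in range(n):
        state = apply_cnot(state, q, (q + 1) % n, n)
    # <Z_q> = P(qubit q = 0) - P(qubit q = 1); state is real here (RY/CNOT only).
    probs = (state ** 2).reshape([2] * n)
    return np.array([2 * np.moveaxis(probs, q, 0)[0].sum() - 1 for q in range(n)])

def quanv_layer(spec, params, stride=2):
    """Slide the 2x2 quantum kernel over a spectrogram, yielding 4 channels."""
    h, w = spec.shape
    out = np.zeros((h // stride, w // stride, 4))
    for i in range(0, h - 1, stride):
        for j in range(0, w - 1, stride):
            out[i // stride, j // stride] = quanvolve_patch(
                spec[i:i + 2, j:j + 2], params)
    return out

# Usage: random parameters stand in for the server's secret circuit.
rng = np.random.default_rng(0)
params = rng.uniform(0, 2 * np.pi, 4)
spectrogram = rng.uniform(0, 1, (8, 8))  # stand-in for a Mel-spectrogram
features = quanv_layer(spectrogram, params)  # shape (4, 4, 4)
```

Because the server keeps the random circuit parameters, the local RNN sees only the encoded channels, which is the property the decentralized framework relies on.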