Keyword spotting (KWS) on mobile devices generally requires a small memory footprint. However, most current models still maintain a large number of parameters to ensure good performance. In this paper, we propose a temporally pooled attention module that captures global features better than average pooling. In addition, we design a separable temporal convolution network that leverages depthwise separable and temporal convolutions to reduce the number of parameters and computations. Finally, combining separable temporal convolution with temporally pooled attention, we design an efficient neural network (ST-AttNet) for KWS. We evaluate the models on the publicly available Google Speech Commands dataset V1. The proposed model has 48K parameters, roughly 1/6 of the state-of-the-art TC-ResNet14-1.5 model (305K), and achieves 96.6% accuracy, comparable to the TC-ResNet14-1.5 model (96.6%).
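The following is a minimal sketch, not the authors' implementation, of the two building blocks the abstract names, assuming a PyTorch-style API and 1-D speech features of shape (batch, channels, time). The class names SeparableTemporalConv and TemporallyPooledAttention, and all layer sizes, are illustrative assumptions rather than details taken from the paper.

```python
# Hypothetical sketch of a depthwise separable temporal convolution block and a
# temporally pooled attention module; layer sizes and names are assumptions.
import torch
import torch.nn as nn


class SeparableTemporalConv(nn.Module):
    """Depthwise separable 1-D (temporal) convolution: a per-channel depthwise
    conv along time followed by a 1x1 pointwise conv, which uses far fewer
    parameters than a standard Conv1d with the same kernel size."""

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 9, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv1d(
            in_ch, in_ch, kernel_size, stride=stride,
            padding=kernel_size // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv1d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm1d(out_ch)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


class TemporallyPooledAttention(nn.Module):
    """Attention over the time axis: a learned score per frame replaces plain
    average pooling, so informative frames contribute more to the clip-level
    feature vector."""

    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Conv1d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        weights = torch.softmax(self.score(x), dim=-1)   # (batch, 1, time)
        return (x * weights).sum(dim=-1)                 # (batch, channels)


if __name__ == "__main__":
    feats = torch.randn(4, 40, 101)       # 4 clips, 40 MFCC bins, 101 frames
    block = SeparableTemporalConv(40, 64)
    pool = TemporallyPooledAttention(64)
    print(pool(block(feats)).shape)       # torch.Size([4, 64])
```

Stacking such separable temporal blocks and replacing the final average pooling with the attention-weighted pooling is the general pattern the abstract describes; the actual ST-AttNet layer configuration is specified in the body of the paper.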