Deep neural networks, such as Deep-FSMN, have been widely studied for keyword spotting (KWS) applications. However, computational resources for these networks are significantly constrained, since they usually run always-on on edge devices. In this paper, we present BiFSMN, an accurate and extremely efficient binary neural network for KWS. We first construct a High-frequency Enhancement Distillation scheme for binarization-aware training, which emphasizes the high-frequency information in the full-precision network's representation that is more crucial for optimizing the binarized network. Then, to allow instant and adaptive accuracy-efficiency trade-offs at runtime, we propose a Thinnable Binarization Architecture that further unlocks the acceleration potential of the binarized network from the topology perspective. Moreover, we implement a Fast Bitwise Computation Kernel for BiFSMN on ARMv8 devices, which fully utilizes registers and increases instruction throughput to push the limit of deployment efficiency. Extensive experiments show that BiFSMN outperforms existing binarization methods by convincing margins on various datasets and is even comparable with its full-precision counterpart (e.g., less than 3% accuracy drop on Speech Commands V1-12). We highlight that, benefiting from the thinnable architecture and the optimized 1-bit implementation, BiFSMN achieves an impressive 22.3x speedup and 15.5x storage saving on real-world edge hardware.
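The fast bitwise kernel mentioned above builds on the standard identity used by binary networks: when weights and activations are constrained to {-1, +1} and packed as bits, a dot product reduces to an XNOR followed by a population count. The abstract does not give the kernel's code, so the following is only a minimal Python sketch of that identity (function names and the bit-packing scheme are illustrative, not from the paper; a real ARMv8 kernel would use packed registers and vectorized popcount instructions instead):

```python
def pack_signs(x):
    """Pack a real-valued vector into a bit mask: bit i is 1 iff x[i] >= 0,
    i.e. the binarized value sign(x[i]) is +1."""
    bits = 0
    for i, v in enumerate(x):
        if v >= 0:
            bits |= 1 << i
    return bits

def binary_dot(a_bits, b_bits, n):
    """Dot product of two {-1,+1} vectors of length n given as bit masks.
    XNOR marks positions where the signs agree; popcount counts them.
    agreements contribute +1 each, disagreements -1 each:
    dot = matches - (n - matches) = 2*matches - n."""
    matches = bin(~(a_bits ^ b_bits) & ((1 << n) - 1)).count("1")
    return 2 * matches - n

# Example: sign(a) = [+1, -1, +1], sign(b) = [+1, -1, -1]
a_bits = pack_signs([1.0, -2.0, 3.0])
b_bits = pack_signs([0.5, -1.0, -4.0])
result = binary_dot(a_bits, b_bits, 3)  # (+1)(+1) + (-1)(-1) + (+1)(-1) = 1
```

This replacement of multiply-accumulate with XNOR and popcount is what makes 1-bit inference so cheap on hardware: many lanes of the dot product are processed per instruction, which is the source of the speedup the abstract reports.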