Deep neural speech and audio processing systems have a large number of trainable parameters and relatively complex architectures, and they require vast amounts of training data and computational power. These constraints make it challenging to integrate such systems into embedded devices and utilise them for real-time, real-world applications. We tackle these limitations by introducing DeepSpectrumLite, an open-source, lightweight transfer learning framework for on-device speech and audio recognition using pre-trained image convolutional neural networks (CNNs). The framework creates and augments Mel-spectrogram plots on the fly from raw audio signals; these plots are then used to fine-tune specific pre-trained CNNs for the target classification task. Subsequently, the whole pipeline can be run in real time with a mean inference lag of 242.0 ms when a DenseNet121 model is used on a consumer-grade Motorola moto e7 plus smartphone. DeepSpectrumLite operates in a decentralised fashion, eliminating the need to upload data for further processing. By obtaining state-of-the-art results on a set of paralinguistics tasks, we demonstrate the suitability of the proposed transfer learning approach for embedded audio signal processing, even when data is scarce. We provide an extensive command-line interface for users and developers, which is comprehensively documented and publicly available at https://github.com/DeepSpectrum/DeepSpectrumLite.
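For concreteness, the following is a minimal Python sketch of the pipeline the abstract describes: raw audio is converted into a log-Mel-spectrogram image, which is then used to fine-tune a pre-trained image CNN such as DenseNet121. It assumes TensorFlow and librosa are available; all function names, parameters, and the class count are illustrative placeholders, not DeepSpectrumLite's actual API.

```python
# Sketch only: audio -> Mel-spectrogram image -> fine-tuning a pre-trained
# image CNN. Names and hyperparameters are illustrative assumptions, not
# the DeepSpectrumLite API.
import librosa
import numpy as np
import tensorflow as tf

def audio_to_mel_image(path, sr=16000, n_mels=128, img_size=224):
    """Compute a log-Mel spectrogram and resize it to a 3-channel image."""
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel, ref=np.max)
    # Normalise to [0, 1] and replicate to RGB channels for the image CNN.
    norm = (log_mel - log_mel.min()) / (log_mel.max() - log_mel.min() + 1e-8)
    img = tf.image.resize(norm[..., np.newaxis], (img_size, img_size))
    return tf.repeat(img, 3, axis=-1)

# Fine-tune a pre-trained DenseNet121 on the spectrogram images.
base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(224, 224, 3))
num_classes = 4  # placeholder: number of classes in the target task
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

For the on-device deployment step mentioned in the abstract, a fine-tuned Keras model of this kind can be converted with the standard TensorFlow Lite converter, e.g. `tf.lite.TFLiteConverter.from_keras_model(model)`, before being run on a smartphone.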