As more speech processing applications execute locally on edge devices, a new set of resource constraints must be considered. In this work we address one such constraint, namely the over-the-network data budget for transferring models from server to device. We present neural update approaches for releasing subsequent speech model generations within a fixed data budget. We detail two architecture-agnostic methods that learn compact representations for transmission to devices. We experimentally validate our techniques on two tasks (automatic speech recognition and spoken language understanding) using open-source data sets, demonstrating that, when applied in succession, our budgeted updates outperform comparable model compression baselines by significant margins.