In this paper, we propose and experiment with techniques for extreme compression of neural natural language understanding (NLU) models, making them suitable for execution on resource-constrained devices. We propose a task-aware, end-to-end compression approach that performs word-embedding compression jointly with NLU task learning. We show our results on a large-scale, commercial NLU system trained on a varied set of intents with a very large vocabulary. Our approach outperforms a range of baselines and achieves a compression rate of 97.4% with less than 3.7% degradation in predictive performance. Our analysis indicates that the signal from the downstream task is important for effective compression with minimal degradation in performance.
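The abstract does not specify the compression mechanism, so as a minimal illustration only, the sketch below shows one plausible instance of task-aware, end-to-end embedding compression: a low-rank factorization of the word-embedding table trained jointly with an intent-classification loss, so that gradients from the downstream task shape the compressed representation. All names and hyperparameters (`CompressedEmbedding`, `rank`, `num_intents`) are assumptions for this sketch, not the paper's actual method.

```python
# Hypothetical sketch: low-rank embedding factorization trained jointly
# with an NLU (intent classification) objective. NOT the paper's exact
# method; one common way to realize task-aware embedding compression.
import torch
import torch.nn as nn

class CompressedEmbedding(nn.Module):
    """Factorizes a |V| x d embedding table into |V| x r and r x d (r << d),
    cutting parameters from |V|*d down to |V|*r + r*d."""
    def __init__(self, vocab_size: int, embed_dim: int, rank: int):
        super().__init__()
        self.low = nn.Embedding(vocab_size, rank)           # |V| x r lookup
        self.proj = nn.Linear(rank, embed_dim, bias=False)  # r x d projection

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.proj(self.low(token_ids))

class IntentClassifier(nn.Module):
    """Tiny NLU model: compressed embeddings -> mean pool -> intent logits.
    Training end-to-end lets the task loss drive the compressed factors."""
    def __init__(self, vocab_size: int, embed_dim: int, rank: int, num_intents: int):
        super().__init__()
        self.embed = CompressedEmbedding(vocab_size, embed_dim, rank)
        self.out = nn.Linear(embed_dim, num_intents)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        pooled = self.embed(token_ids).mean(dim=1)  # (batch, embed_dim)
        return self.out(pooled)

# Joint training step: gradients from the task loss flow into the
# compressed embedding factors, i.e. compression is learned with the
# task rather than applied post hoc to a trained embedding table.
model = IntentClassifier(vocab_size=100_000, embed_dim=300, rank=16, num_intents=50)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

token_ids = torch.randint(0, 100_000, (32, 20))  # dummy batch of utterances
labels = torch.randint(0, 50, (32,))             # dummy intent labels
loss = loss_fn(model(token_ids), labels)
loss.backward()
optimizer.step()
```

Under these illustrative numbers, the factorization shrinks the embedding table from 100,000 × 300 = 30M parameters to 100,000 × 16 + 16 × 300 ≈ 1.6M, a roughly 95% reduction, which is the same order of magnitude as the 97.4% compression rate reported above (the paper's actual technique and vocabulary differ).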