In recent years, the fields of natural language processing (NLP) and information retrieval (IR) have made tremendous progress thanks to deep learning models such as Recurrent Neural Networks (RNNs), Gated Recurrent Units (GRUs), and Long Short-Term Memory (LSTM) networks, as well as Transformer-based models like Bidirectional Encoder Representations from Transformers (BERT). However, these models are humongous in size. Real-world applications, on the other hand, demand small model sizes, low response times, and low computational power consumption. In this survey, we discuss six different types of methods (Pruning, Quantization, Knowledge Distillation, Parameter Sharing, Tensor Decomposition, and Linear-Transformer-based methods) for compressing such models to enable their deployment in real industry NLP projects. Given the critical need for building applications with efficient and small models, and the large amount of work recently published in this area, we believe this survey organizes the plethora of work done by the 'deep learning for NLP' community over the past few years and presents it as a coherent story.