Large pre-trained language models drastically changed the natural language processing (NLP) landscape. Nowadays, they represent the go-to framework to tackle diverse NLP tasks, even with a limited number of annotations. However, using these models in production, either in the cloud or at the edge, remains a challenge due to their memory footprint and/or inference costs. As an alternative, recent work on efficient NLP has shown that small weight-efficient models can reach competitive performance at a fraction of the cost. Here, we introduce pNLP-Mixer, an embedding-free model based on the MLP-Mixer architecture that achieves high weight efficiency thanks to a novel linguistically informed projection layer. We evaluate our model on two multi-lingual semantic parsing datasets, MTOP and multiATIS. On MTOP our pNLP-Mixer almost matches the performance of mBERT, which has 38 times more parameters, and outperforms the state of the art in tiny models (pQRNN) with 3 times fewer parameters. On a long-sequence classification task (Hyperpartisan) our pNLP-Mixer without pretraining outperforms RoBERTa, which has 100 times more parameters, demonstrating the potential of this architecture.
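To make the overall architecture more concrete, the following is a minimal, hypothetical PyTorch sketch of an embedding-free pipeline in the spirit described above: a parameter-free hashing projection (a simple stand-in for the paper's linguistically informed projection layer, which is not reproduced here) feeding a small MLP-Mixer classifier. All names and hyperparameters (hash_projection, MixerBlock, TinyMixerClassifier, feature_size=256, and so on) are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: a hashed character-n-gram projection (no embedding
# table) feeding a tiny MLP-Mixer classifier. The hashing scheme and all
# hyperparameters are assumptions for exposition, not the pNLP-Mixer projection.
import zlib
import torch
import torch.nn as nn


def hash_projection(tokens, feature_size=256, ngram=3):
    """Map each token to a fixed-size float vector via hashed character n-grams.

    This stands in for a trainable-parameter-free projection; the actual model
    uses a more elaborate, linguistically informed scheme not reproduced here.
    """
    features = torch.zeros(len(tokens), feature_size)
    for i, tok in enumerate(tokens):
        padded = f"#{tok}#"
        for j in range(len(padded) - ngram + 1):
            bucket = zlib.crc32(padded[j:j + ngram].encode()) % feature_size
            features[i, bucket] += 1.0
    return features


class MixerBlock(nn.Module):
    """Standard MLP-Mixer block: token mixing followed by channel mixing."""

    def __init__(self, seq_len, hidden_dim, token_mlp_dim=64, channel_mlp_dim=256):
        super().__init__()
        self.norm1 = nn.LayerNorm(hidden_dim)
        self.token_mlp = nn.Sequential(
            nn.Linear(seq_len, token_mlp_dim), nn.GELU(), nn.Linear(token_mlp_dim, seq_len)
        )
        self.norm2 = nn.LayerNorm(hidden_dim)
        self.channel_mlp = nn.Sequential(
            nn.Linear(hidden_dim, channel_mlp_dim), nn.GELU(), nn.Linear(channel_mlp_dim, hidden_dim)
        )

    def forward(self, x):                         # x: (batch, seq_len, hidden_dim)
        y = self.norm1(x).transpose(1, 2)         # mix information across token positions
        x = x + self.token_mlp(y).transpose(1, 2)
        x = x + self.channel_mlp(self.norm2(x))   # mix information across channels
        return x


class TinyMixerClassifier(nn.Module):
    """Bottleneck projection + a few mixer blocks + mean-pooled classification head."""

    def __init__(self, seq_len=64, feature_size=256, hidden_dim=128, num_blocks=2, num_classes=5):
        super().__init__()
        self.bottleneck = nn.Linear(feature_size, hidden_dim)
        self.blocks = nn.Sequential(*[MixerBlock(seq_len, hidden_dim) for _ in range(num_blocks)])
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):                         # x: (batch, seq_len, feature_size)
        x = self.blocks(self.bottleneck(x))
        return self.head(x.mean(dim=1))           # pool over tokens, then classify


# Usage: project a padded token sequence and run it through the tiny classifier.
tokens = ["book", "a", "flight", "to", "zurich"] + ["<pad>"] * 59
features = hash_projection(tokens).unsqueeze(0)   # (1, 64, 256)
logits = TinyMixerClassifier()(features)          # (1, 5)
```

The key design point this sketch tries to convey is that the model carries no embedding matrix at all: token features come from a fixed, non-trainable projection, so the trainable parameters are confined to the small bottleneck, mixer blocks, and head, which is what keeps the weight count low.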