Natural Language Processing (NLP) has recently achieved success through the use of huge pre-trained Transformer networks. However, these models often contain hundreds of millions or even billions of parameters, making online deployment difficult due to latency constraints. Recently, hardware manufacturers have introduced dedicated hardware for NxM sparsity, which provides the flexibility of unstructured pruning with the runtime efficiency of structured approaches. NxM sparsity permits arbitrarily selecting M parameters to retain from a contiguous group of N in the dense representation. However, due to the extremely high complexity of pre-trained models, standard sparse fine-tuning techniques often fail to generalize well on downstream tasks, which typically have limited data. To address this issue in a principled manner, we introduce a new learning framework, called NxMTransformer, that induces NxM semi-structured sparsity on pre-trained language models for natural language understanding. In particular, we formulate NxM sparsity as a constrained optimization problem and use the Alternating Direction Method of Multipliers (ADMM) to optimize the downstream tasks while taking the underlying hardware constraints into consideration. ADMM decomposes the NxM sparsification problem into two sub-problems that can be solved sequentially, generating sparsified Transformer networks that achieve high accuracy while executing efficiently on newly released hardware. We apply our approach to a wide range of NLP tasks, and our proposed method achieves 1.7 points higher accuracy in GLUE score than current practices. Moreover, we perform a detailed analysis of our approach and shed light on how ADMM affects fine-tuning accuracy for downstream tasks. Finally, we illustrate how NxMTransformer achieves additional performance improvements with knowledge distillation.
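To make the two ideas in the abstract concrete, the following is a minimal, self-contained sketch (not the paper's implementation): it shows (1) the NxM constraint as a projection that keeps the M largest-magnitude weights out of every contiguous group of N, and (2) an ADMM-style alternation between a gradient step on the task loss plus a quadratic penalty and a projection onto the NxM-sparse set. All names and hyperparameters (N=4, M=2, rho, lr, num_steps, the toy quadratic loss) are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch of NxM projection and an ADMM-style split (not the paper's code).
import numpy as np

def project_nxm(W, N=4, M=2):
    """Euclidean projection onto the NxM-sparse set: in each contiguous
    group of N weights, zero all but the M largest-magnitude entries."""
    flat = W.reshape(-1, N)
    # indices of the (N - M) smallest-magnitude entries in each group
    drop = np.argsort(np.abs(flat), axis=1)[:, : N - M]
    mask = np.ones_like(flat)
    np.put_along_axis(mask, drop, 0.0, axis=1)
    return (flat * mask).reshape(W.shape)

def admm_sparsify(W, grad_fn, lr=1e-2, rho=1e-2, num_steps=200, N=4, M=2):
    """ADMM-style alternation (hypothetical schedule): W follows the task
    gradient plus a quadratic penalty pulling it toward Z; Z is the NxM
    projection of W + U; U is the scaled dual accumulating the violation."""
    Z = project_nxm(W, N, M)
    U = np.zeros_like(W)
    for _ in range(num_steps):
        # Sub-problem 1: gradient step on task loss + (rho/2)||W - Z + U||^2
        W = W - lr * (grad_fn(W) + rho * (W - Z + U))
        # Sub-problem 2: projection onto the NxM constraint set
        Z = project_nxm(W + U, N, M)
        # Dual update
        U = U + W - Z
    return project_nxm(W, N, M)  # hard projection so the result satisfies NxM

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W_true = project_nxm(rng.normal(size=(8, 16)))  # a 4:2-sparse toy target
    W0 = rng.normal(size=(8, 16))
    grad_fn = lambda W: W - W_true                  # toy quadratic loss gradient
    W_sparse = admm_sparsify(W0, grad_fn)
    # Each contiguous group of 4 weights keeps at most 2 nonzeros.
    print((np.abs(W_sparse).reshape(-1, 4) > 0).sum(axis=1))
```

In the toy run, the penalty term gradually pulls the dense weights toward a configuration that is close to NxM-sparse, so the final hard projection loses little accuracy; this is the intuition behind solving the two sub-problems sequentially rather than pruning once after fine-tuning.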