Transformer-based pre-trained language models (PLMs) mostly suffer from excessive overhead despite their advanced capacity. For resource-constrained devices, there is an urgent need for a spatially and temporally efficient model that retains the major capacity of PLMs. However, existing statically compressed models are unaware of the diverse complexity across input instances, potentially resulting in redundancy for simple inputs and inadequacy for complex ones. Moreover, miniature models with early exiting face a trade-off between making predictions and serving the deeper layers. Motivated by these considerations, we propose a collaborative optimization for PLMs that integrates static model compression and dynamic inference acceleration. Specifically, the PLM is slenderized in width while the depth remains intact, complementing layer-wise early exiting to accelerate inference dynamically. To resolve the trade-off of early exiting, we propose a joint training approach that calibrates slenderization and preserves the structures contributive to each exit rather than only the final layer. Experiments are conducted on the GLUE benchmark, and the results verify the Pareto optimality of our approach at high compression and acceleration rates, with 1/8 the parameters and 1/19 the FLOPs of BERT.
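The following is a minimal sketch, not the authors' implementation, of the two mechanisms the abstract combines: layer-wise early exiting gated by prediction entropy at inference time, and joint training that supervises every exit rather than only the final classifier. The class and parameter names (MultiExitEncoder, entropy_threshold, joint_exit_loss) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiExitEncoder(nn.Module):
    """Encoder with a lightweight classifier ("exit") attached to every layer."""

    def __init__(self, encoder_layers, hidden_size, num_labels):
        super().__init__()
        self.layers = nn.ModuleList(encoder_layers)
        self.exits = nn.ModuleList(
            nn.Linear(hidden_size, num_labels) for _ in self.layers
        )

    def forward(self, hidden):
        # Training: collect logits from every exit for joint supervision.
        all_logits = []
        for layer, exit_head in zip(self.layers, self.exits):
            hidden = layer(hidden)
            all_logits.append(exit_head(hidden[:, 0]))  # [CLS]-style pooling
        return all_logits

    @torch.no_grad()
    def infer(self, hidden, entropy_threshold=0.3):
        # Inference: stop at the first exit whose prediction entropy is low enough,
        # skipping the remaining (deeper) layers.
        for layer, exit_head in zip(self.layers, self.exits):
            hidden = layer(hidden)
            logits = exit_head(hidden[:, 0])
            probs = F.softmax(logits, dim=-1)
            entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)
            if entropy.max() < entropy_threshold:
                return logits  # early exit
        return logits  # fell through to the final layer


def joint_exit_loss(all_logits, labels):
    # Sum the cross-entropy over all exits so shallow layers also receive
    # supervision, instead of training only the final layer's classifier.
    return sum(F.cross_entropy(logits, labels) for logits in all_logits)
```

In this sketch, the per-exit losses are summed with equal weights; the paper's joint training additionally calibrates the width slenderization against these exits, which is omitted here.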