In this paper, we proposed a novel adjustable fine-tuning method that reduces the training and inference time of the BERT model on downstream tasks. In the proposed method, we first identify the more important word vectors in each layer using our proposed redundancy metric, and then eliminate the less important ones with our proposed elimination strategy. The word-vector elimination rate in each layer is controlled by the Tilt-Rate hyper-parameter, and the model learns to work with considerably fewer Floating Point Operations (FLOPs) than the original BERT\textsubscript{base} model. Our method requires no extra training steps and can be generalized to other transformer-based models. Our extensive experiments show that the word vectors in higher layers carry a substantial amount of redundancy that can be eliminated to reduce training and inference time. Experimental results on a wide range of sentiment analysis, classification, and regression datasets and benchmarks, including IMDB and GLUE, demonstrate that our method is effective across diverse datasets. By applying our method to the BERT\textsubscript{base} model, we reduce the inference time by up to 5.3 times with less than 0.85\% accuracy degradation on average. After the fine-tuning stage, the inference time of our model can be adjusted through the offline-tuning property of our method over a wide range of Tilt-Rate values. We also propose a mathematical speedup analysis that accurately estimates the speedup of our method; with its help, the Tilt-Rate hyper-parameter can be selected before fine-tuning or during the offline-tuning stage.
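To make the mechanism concrete, the following is a minimal PyTorch sketch of Tilt-Rate-controlled word-vector elimination together with a FLOPs-based speedup estimate. The geometric keep-ratio schedule, the L2-norm importance score (a stand-in for the paper's redundancy metric), and the per-layer cost constants are illustrative assumptions, not the exact formulation used in the paper.

\begin{verbatim}
import torch

def keep_ratio(layer_idx: int, tilt_rate: float) -> float:
    # Hypothetical schedule: deeper layers keep a smaller fraction of
    # word vectors; the paper's exact Tilt-Rate schedule may differ.
    return tilt_rate ** layer_idx

def eliminate_word_vectors(hidden: torch.Tensor, layer_idx: int,
                           tilt_rate: float) -> torch.Tensor:
    # hidden: (batch, seq_len, dim). Scores word vectors by L2 norm
    # (a stand-in for the redundancy metric), keeps the top-k, and
    # preserves their original order.
    batch, seq_len, dim = hidden.shape
    k = max(1, int(seq_len * keep_ratio(layer_idx, tilt_rate)))
    scores = hidden.norm(dim=-1)                       # (batch, seq_len)
    idx = scores.topk(k, dim=-1).indices.sort(dim=-1).values
    return hidden.gather(1, idx.unsqueeze(-1).expand(-1, -1, dim))

def estimate_speedup(seq_len: int, num_layers: int,
                     tilt_rate: float, dim: int = 768) -> float:
    # FLOPs-ratio estimate against running every layer at full length,
    # using a rough per-layer cost: attention ~ n^2 * d plus
    # feed-forward ~ n * d^2 (constants are illustrative).
    layer_flops = lambda n: 2 * n * n * dim + 8 * n * dim * dim
    full = num_layers * layer_flops(seq_len)
    pruned = sum(layer_flops(max(1, int(seq_len * keep_ratio(l, tilt_rate))))
                 for l in range(num_layers))
    return full / pruned

h = torch.randn(2, 128, 768)
print(eliminate_word_vectors(h, layer_idx=6, tilt_rate=0.9).shape)
print(estimate_speedup(seq_len=128, num_layers=12, tilt_rate=0.9))
\end{verbatim}

In this sketch, tilt_rate is applied only at inference time, so it can be changed after fine-tuning without retraining, mirroring the offline-tuning property described above.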