The existence of a plethora of language models makes the problem of selecting the best one for a custom task challenging. Most state-of-the-art methods leverage transformer-based models (e.g., BERT) or their variants. Training such models and exploring their hyperparameter space, however, is computationally expensive. Prior work proposes several neural architecture search (NAS) methods that employ performance predictors (e.g., surrogate models) to address this issue; however, their analysis has been limited to homogeneous models that use fixed dimensionality throughout the network, which leads to sub-optimal architectures. To address this limitation, we propose a suite of heterogeneous and flexible models, namely FlexiBERT, whose encoder layers vary in both the set of possible operations and the hidden dimensions they use. For better-posed surrogate modeling in this expanded design space, we propose a new graph-similarity-based embedding scheme. We also propose a novel NAS policy, called BOSHNAS, that leverages this new scheme, Bayesian modeling, and second-order optimization to quickly train and use a neural surrogate model that converges to the optimal architecture. A comprehensive set of experiments shows that the proposed policy, when applied to the FlexiBERT design space, pushes the performance frontier upwards compared to traditional models. FlexiBERT-Mini, one of our proposed models, has 3% fewer parameters than BERT-Mini and achieves an 8.9% higher GLUE score. A FlexiBERT model that matches the performance of the best homogeneous model is 2.6x smaller. FlexiBERT-Large, another proposed model, achieves state-of-the-art results, outperforming the baseline models by at least 5.7% on the GLUE benchmark.
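To make the approach concrete, the sketch below illustrates the general idea of surrogate-guided search over a heterogeneous, FlexiBERT-style design space. It is a minimal, hypothetical simplification, not the BOSHNAS implementation: the per-layer operation choices, the feature embedding (a stand-in for the graph-similarity-based scheme), the k-NN surrogate with distance-based uncertainty, and the synthetic objective are all illustrative assumptions.

```python
"""Minimal sketch of surrogate-guided NAS over a heterogeneous design space.

Illustrative only: the embedding, surrogate, acquisition function, and
objective below are hypothetical stand-ins, not the paper's BOSHNAS method.
"""
import math
import random

# Hypothetical per-layer choices: operation type and hidden dimension.
OPS = ["self_attn", "linear_attn", "conv", "dynamic_conv"]
HIDDEN_DIMS = [128, 256]
NUM_LAYERS = 4


def sample_architecture(rng):
    """Draw a heterogeneous architecture: each layer picks its own op and width."""
    return [(rng.choice(OPS), rng.choice(HIDDEN_DIMS)) for _ in range(NUM_LAYERS)]


def embed(arch):
    """Toy embedding: one-hot op plus normalized width per layer
    (a stand-in for the graph-similarity-based embedding)."""
    vec = []
    for op, dim in arch:
        vec.extend(1.0 if op == o else 0.0 for o in OPS)
        vec.append(dim / max(HIDDEN_DIMS))
    return vec


def evaluate(arch):
    """Placeholder for the expensive step: fine-tune the model and score it
    (e.g., on GLUE). Here, a synthetic objective rewards width and op diversity."""
    return sum(d for _, d in arch) / 1000.0 + 0.1 * len({op for op, _ in arch})


def surrogate_predict(query, history, k=3):
    """k-NN surrogate: mean score of nearest evaluated points; mean distance
    serves as a crude uncertainty estimate."""
    nearest = sorted((math.dist(query, embed(a)), s) for a, s in history)[:k]
    mean = sum(s for _, s in nearest) / len(nearest)
    uncertainty = sum(d for d, _ in nearest) / len(nearest)
    return mean, uncertainty


def search(rounds=20, candidates=50, seed=0):
    """Upper-confidence-bound loop: evaluate only the candidate whose predicted
    score plus uncertainty is highest, then add it to the surrogate's history."""
    rng = random.Random(seed)
    history = [(a, evaluate(a)) for a in (sample_architecture(rng) for _ in range(5))]
    for _ in range(rounds):
        pool = [sample_architecture(rng) for _ in range(candidates)]
        best = max(pool, key=lambda a: sum(surrogate_predict(embed(a), history)))
        history.append((best, evaluate(best)))
    return max(history, key=lambda x: x[1])


if __name__ == "__main__":
    arch, score = search()
    print("best architecture:", arch, "score:", round(score, 3))
```

In the actual method, the placeholder `evaluate` corresponds to training and benchmarking a candidate transformer, and the k-NN surrogate is replaced by a learned neural surrogate with Bayesian uncertainty and second-order optimization; the loop structure, however, conveys the core query-few-evaluate-cheaply idea.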