With the wide and deep adoption of deep learning models in real applications, there is an increasing need to model and learn the representations of the neural networks themselves. These models can be used to estimate attributes of different neural network architectures, such as accuracy and latency, without running the actual training or inference tasks. In this paper, we propose a neural architecture representation model that can be used to estimate these attributes holistically. Specifically, we first propose a simple and effective tokenizer that encodes both the operation and topology information of a neural network into a single sequence. We then design a multi-stage fusion transformer to build a compact vector representation from the converted sequence. For efficient model training, we further propose an information flow consistency augmentation and a corresponding architecture consistency loss, which yield larger gains with fewer augmented samples than previous random augmentation strategies. Experimental results on NAS-Bench-101, NAS-Bench-201, the DARTS search space, and NNLQP show that our proposed framework can predict the aforementioned accuracy and latency attributes of both cell architectures and whole deep neural networks, achieving promising performance. Code is available at https://github.com/yuny220/NAR-Former.
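To make the tokenization idea concrete, the sketch below shows one way a network's operations and connectivity could be flattened into a single token sequence: each node contributes a token for its operation type and tokens for its predecessor indices. This is only a hedged illustration of the general concept; the actual NAR-Former tokenizer uses its own encoding scheme, and names such as OP_VOCAB and encode_architecture are hypothetical.

```python
# Illustrative sketch: flatten a small architecture DAG into one token sequence
# that carries both operation and topology information. Not the paper's method.

OP_VOCAB = {"input": 0, "conv3x3": 1, "conv1x1": 2, "maxpool3x3": 3, "output": 4}


def encode_architecture(ops, edges):
    """Encode a DAG as a sequence of (kind, value) tokens.

    ops   : list of operation names, indexed by node id (topological order)
    edges : list of (src, dst) pairs describing connectivity
    """
    preds = {i: [] for i in range(len(ops))}
    for src, dst in edges:
        preds[dst].append(src)

    tokens = []
    for node_id, op in enumerate(ops):
        tokens.append(("NODE", node_id))        # which node this token block describes
        tokens.append(("OP", OP_VOCAB[op]))     # operation information
        for src in sorted(preds[node_id]):
            tokens.append(("PRED", src))        # topology information
    return tokens


if __name__ == "__main__":
    # A tiny cell: input -> conv3x3 -> output, plus a skip connection input -> output.
    ops = ["input", "conv3x3", "output"]
    edges = [(0, 1), (1, 2), (0, 2)]
    print(encode_architecture(ops, edges))
```

The resulting sequence could then be fed to a transformer-style predictor, which is the role the multi-stage fusion transformer plays in the proposed framework.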