Deep neural network (DNN) latency characterization is a time-consuming process and adds significant cost to Neural Architecture Search (NAS) when searching for efficient convolutional neural networks for embedded vision applications. DNN latency is a hardware-dependent metric and requires direct measurement or inference on the target hardware. A recently introduced latency estimation technique known as MAPLE predicts DNN execution time on previously unseen hardware devices by using hardware performance counters. Leveraging these hardware counters in the form of an implicit prior, MAPLE achieves state-of-the-art performance in latency prediction. Here, we propose MAPLE-X, which extends MAPLE by incorporating explicit prior knowledge of hardware devices and DNN architecture latency to better account for model stability and robustness. First, by identifying DNN architectures that exhibit similar latencies to each other, we generate multiple virtual examples that significantly improve accuracy over MAPLE. Second, hardware specifications are used to determine the similarity between training and test hardware, emphasizing training samples captured from comparable devices (domains) and encouraging improved domain alignment. Experimental results on a convolutional neural network NAS benchmark across different types of devices, including an Intel processor now used for embedded vision applications, demonstrate a 5% improvement over MAPLE and 9% over HELP. Furthermore, we include ablation studies to independently assess the benefits of virtual examples and hardware-based sample importance.
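The two mechanisms described above can be summarized in a minimal Python/NumPy sketch. This is illustrative only: the abstract does not specify how virtual examples are synthesized or how device similarity is computed, so the interpolation-based mixing of similar-latency architectures, the relative tolerance `tol`, and the cosine-similarity weighting over hardware spec vectors are assumptions made here for exposition, not the exact MAPLE-X procedure.

```python
import numpy as np

def make_virtual_examples(latencies, features, tol=0.05, n_virtual=3, rng=None):
    """Synthesize virtual training examples by interpolating between pairs of
    DNN architectures whose measured latencies are within a relative tolerance
    `tol` of each other. (Illustrative mixing strategy, not from the paper.)"""
    rng = rng or np.random.default_rng(0)
    virtual_x, virtual_y = [], []
    n = len(latencies)
    for i in range(n):
        for j in range(i + 1, n):
            # Architectures with similar latency are treated as interchangeable
            # anchors for generating additional (feature, latency) pairs.
            if abs(latencies[i] - latencies[j]) <= tol * latencies[i]:
                for _ in range(n_virtual):
                    lam = rng.uniform()
                    virtual_x.append(lam * features[i] + (1 - lam) * features[j])
                    virtual_y.append(lam * latencies[i] + (1 - lam) * latencies[j])
    return np.array(virtual_x), np.array(virtual_y)

def hardware_sample_weights(train_specs, test_spec, eps=1e-12):
    """Weight training samples by the cosine similarity between the hardware
    spec vector of their source device (e.g. clock speed, core count, memory
    bandwidth) and that of the test device, so samples from comparable
    devices dominate and domain alignment is encouraged."""
    sims = train_specs @ test_spec / (
        np.linalg.norm(train_specs, axis=1) * np.linalg.norm(test_spec) + eps)
    return sims / sims.sum()
```

In a regression setting, the virtual pairs would simply be appended to the measured training set, and the per-sample weights passed to the predictor's loss; both choices here are stand-ins for whatever MAPLE-X actually uses.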