In this paper, we present a fine-grained machine learning-based method, PerfNetV2, which improves the accuracy of our previous work in modeling neural network performance on a variety of GPU accelerators. Given an application, the proposed method predicts the inference time and training time of the convolutional neural networks used in the application, enabling the system developer to optimize performance by choosing suitable neural networks and/or incorporating hardware accelerators that deliver satisfactory results in time. Furthermore, the proposed method can predict the performance of an unseen or non-existent device, e.g., a new GPU with a higher operating frequency, fewer processor cores, but more memory capacity. This allows a system developer to quickly explore the hardware design space and/or fine-tune the system configuration. Compared to previous works, PerfNetV2 delivers more accurate results by modeling the detailed host-accelerator interactions involved in executing full neural networks and by improving the architecture of the machine learning model used in the predictor. Our case studies show that PerfNetV2 yields a mean absolute percentage error within 13.1% for LeNet, AlexNet, and VGG16 on the NVIDIA GTX-1080Ti, whereas the error rate of a previous work published at ICBD 2018 can be as large as 200%.
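The accuracy figures above use the standard mean absolute percentage error (MAPE); the abstract does not give the exact formulation, so the following is a minimal sketch of the conventional definition, with hypothetical measured and predicted execution times for illustration.

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error between measured and predicted times.

    Standard definition: mean of |actual - predicted| / actual, in percent.
    """
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.mean(np.abs((actual - predicted) / actual)) * 100.0

# Hypothetical measured vs. predicted inference times in milliseconds
# (illustrative values only, not from the paper's experiments).
measured = [12.0, 45.0, 230.0]
predicted = [11.0, 50.0, 210.0]
print(round(mape(measured, predicted), 2))  # prints 9.38
```

A MAPE within 13.1% thus means that, averaged over the test cases, the predicted times deviate from the measured times by at most 13.1% of the measured values.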